In this paper we report the results of a four-year study of how automated tools are used in application development (AD). Drawing on data collected from over 100 projects at 22 sites in 15 Fortune 500 companies, we focus on understanding the relationship between the use of automated AD tools and various measures of AD performance, including user satisfaction, labor cost per function point, schedule slippage, and stakeholder-rated effectiveness. Using extensive data from surveys, on-site observations, and field interviews, we found that the direct effects of automated tool use on AD performance were mixed, and that the use of such tools by itself makes little difference in the results. Further analysis of key intervening factors shows that accounting for training, structured methods use, project size, design quality, and the combined use of AD tools adds a great deal of insight into what contributes to the successful use of automated tools in AD. Despite the many grand predictions of the trade press over the past decade, computer-assisted software engineering (CASE) tools failed to emerge as the promised “silver bullet.” The mixed effects of CASE tool use on AD performance that we found, coupled with the complex impact of other key factors such as training, methods, and group interaction, suggest that a cautious approach is appropriate when predicting the impact of similar AD tools (e.g., object-oriented and visual environments) in the future, and highlight the importance of carefully managing the introduction and use of such tools if they are to succeed in the modern enterprise.
Note: The Institute of Electrical and Electronics Engineers, Incorporated is distributing this Article with permission of the International Business Machines Corporation (IBM) who is the exclusive owner. The recipient of this Article may not assign, sublicense, lease, rent or otherwise transfer, reproduce, prepare derivative works, publicly display or perform, or distribute the Article.