
Empirical Software Engineering, 2003. ISESE 2003. Proceedings. 2003 International Symposium on

Date: 30 Sept.-1 Oct. 2003


Displaying Results 1 - 25 of 35
  • Composable process elements for developing COTS-based applications

    Publication Year: 2003 , Page(s): 8 - 17
    Cited by:  Papers (4)

    Data collected from five years of developing e-service applications at USC-CSE reveals that an increasing fraction have been commercial-off-the-shelf (COTS)-based application (CBA) projects: from 28% in 1997 to 60% in 2001. Data from both small and large CBA projects show that CBA effort is primarily distributed among the three activities of COTS assessment, COTS tailoring, and glue code development and integration, with wide variations in their distribution across projects. We have developed a set of data-motivated composable process elements, in terms of these three activities, for developing CBAs, as well as an overall decision framework for applying the process elements. We present data regarding the movement towards CBAs and the effort distribution among them; we then describe the decision framework and present a real-world example showing how it operates within the WinWin Spiral process model generator to orchestrate, execute, and adapt the process elements to changing project circumstances.

  • Comprehensibility and efficiency of multiview framework for measurement plan design

    Publication Year: 2003 , Page(s): 89 - 98

    Understanding the results of measurements is a primary issue for continuous software process improvement, and models support a better understanding of measures. One of the problems often encountered in defining a measurement plan is its dimension in terms of goals and metrics. This inevitably affects the usability of a measurement plan, in terms of both the effort needed to interpret the measurement results and the accuracy of the interpretation itself. The authors validate an approach (the multiview framework) for designing a measurement plan according to the GQM model, structured so as to improve usability. An experiment was executed to validate the approach and to provide evidence that a GQM designed according to the multiview framework is more usable, and that interpretation depends on the collected measures and is independent of who interprets them. In the experiment the authors verify that a measurement plan designed according to the proposed model does not negatively affect the efficiency of interpretation. The experimental results are positive and encourage further replications and studies.

  • Guidelines for managing bias in project risk management

    Publication Year: 2003 , Page(s): 272 - 280

    Risk management is often seen as a project manager's job. However, the information and knowledge required to make a realistic assessment of project risks are often dispersed among people in and around the project. People also tend to focus their attention on different aspects, and as a consequence on different risks, because of their different roles in the project. Our assumption is that it is wise to have a team of relevant people make a joint risk assessment, based on knowledge and information dispersed in, but not necessarily shared by, the team. The team corrects the filters and biases of individuals in their specialized roles and positions and creates both a richer "knowledge base" and increased variety in interpretations. To test these assumptions, we formulated design requirements for a risk management method on the basis of the theory of human group and individual decision-making and information processing. Based on these requirements, a risk management method was developed and used in eight IT projects. The results confirmed the assumption that lack of information and bias are relevant issues in risk assessment. The proposed guidelines resulted in a method capable of handling these issues.

  • An experimental evaluation of inspection and testing for detection of design faults

    Publication Year: 2003 , Page(s): 174 - 184
    Cited by:  Papers (5)

    The two most common verification and validation strategies, inspection and testing, are evaluated in a controlled experiment in terms of their fault detection capabilities. Previous work has compared these two techniques as applied to code. In order to compare their efficiency and effectiveness at a higher abstraction level than code, this experiment investigates inspection of design documents and testing of the corresponding program, to detect faults originating from the design document. Usage-based reading (UBR) and usage-based testing (UBT) were chosen for inspection and testing, respectively. These techniques provide similar aid to reviewers as to testers: the purpose of both fault detection techniques is to focus the inspection and testing from a user's viewpoint. The experiment was conducted with 51 Master's students in a two-factor blocked design; each student applied each technique once, each application on a different version of the same program. The two versions contained different sets of faults, 13 and 14 faults respectively. The general results from this study show that when the two groups of subjects are combined, efficiency and effectiveness are significantly higher for usage-based reading, and that testing tends to require more learning. Rework is not taken into account; thus the experiment indicates strong support for design inspection over testing.

  • A replicated assessment of the use of adaptation rules to improve Web cost estimation

    Publication Year: 2003 , Page(s): 100 - 109
    Cited by:  Papers (5)

    Analogy-based estimation has, over the last 15 years, and particularly over the last 7 years, emerged as a promising approach with accuracy comparable to, or better than, algorithmic methods. In addition, it is potentially easier both to understand and to apply; these two important factors can contribute to the successful adoption of estimation methods within Web development companies. We believe, therefore, that analogy-based estimation should be examined further. This paper replicates previous work that investigated the use of two types of adaptation rules as a contributing factor to better estimation accuracy. It also investigates the use of feature subset selection in combination with adaptation rules. Two datasets are used in the analysis; results show that adaptation rules improved estimation accuracy for the less "messy" dataset. Feature subset selection also seems to help improve the adaptation results.
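
    As a rough illustration of the kind of estimation the paper studies, the sketch below applies analogy-based effort estimation with one simple size-based adaptation rule. The project data, feature names, and the rule itself are invented for illustration and do not come from the paper or its datasets.

    ```python
    # Illustrative sketch (not the paper's implementation): analogy-based
    # effort estimation with a simple size-based adaptation rule.
    import math

    # Hypothetical historical Web projects: (features, actual effort in person-hours)
    history = [
        ({"pages": 20, "images": 30}, 120.0),
        ({"pages": 50, "images": 80}, 310.0),
        ({"pages": 10, "images": 5}, 60.0),
    ]

    def distance(a, b):
        """Unweighted Euclidean distance over shared numeric features."""
        return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

    def estimate(new_project, k=2):
        """Average the k nearest analogues, adapting each by relative size."""
        ranked = sorted(history, key=lambda h: distance(new_project, h[0]))
        adapted = []
        for feats, effort in ranked[:k]:
            # Adaptation rule: scale the analogue's effort by the ratio of
            # the new project's size ("pages") to the analogue's size.
            adapted.append(effort * new_project["pages"] / feats["pages"])
        return sum(adapted) / len(adapted)

    print(estimate({"pages": 25, "images": 40}))
    ```

    Feature subset selection, the paper's second factor, would correspond here to choosing which keys enter `distance`.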

  • Applying the software evaluation framework "SEF" to the software development life cycle

    Publication Year: 2003 , Page(s): 281 - 290
    Cited by:  Papers (2)

    The primary objective of this paper is to present an exploratory study of the different measurements used at different milestones throughout a development project. The paper presents the results of a study that uses qualitative techniques to investigate the cognitive structures appropriate to the requirements phase and the implementation phase of a software development cycle. The study involved an e-commerce project and two stakeholder groups, the users and the developers. The results show that the measurements used in the different phases are not the same, though the motivation behind the choice of these measurements is the same for a given stakeholder group. The study also finds that the two groups of stakeholders are very similar in the measurements they choose for evaluating requirements documents; however, the motivation behind their choice of these measurements differs between the stakeholder groups. These results contrast with those of the implementation phase. These results, whilst still exploratory, are valuable as they highlight the differences and similarities not just of the stakeholder groups but, more importantly, of the choice of measurements at the different milestones. As a result of this study, the software evaluation framework (SEF) was developed to guide practitioners as they evaluate, test, review, walk through, or inspect the different artifacts that the many milestones deliver.

  • Management of interdependencies in collaborative software development

    Publication Year: 2003 , Page(s): 294 - 303
    Cited by:  Papers (8)

    In this paper we report the results of an informal field study of a software development team, conducted during an eight-week internship at the NASA/Ames Research Center. The team develops a suite of tools called MVP and is composed of 31 co-located software engineers, who design, test, document, and maintain the different MVP tools. We describe the formal and informal approaches used by this group to manage the interdependencies that occur during the software development process. Formal approaches emerge due to the needs of the developers. We also describe how the software development tools used by this team support these approaches and explore where explicit support is needed. Finally, based on our findings, we discuss implications for software engineering research.

  • The anatomy of an experience repository

    Publication Year: 2003 , Page(s): 162 - 171
    Cited by:  Papers (4)

    This paper presents empirical data on the use of a software engineering experience repository in a small software organisation. The data contains information about the use, usefulness, and structure of the repository. Analysis of the data provides insights into how experience management can support software development in a small software organisation. The data shows that the organisation used the experience repository extensively, found it useful, and realized tangible benefits from its use. The most frequently entered and retrieved types of experience were code examples and document templates and examples, suggesting that the experience repository supported the organisation by providing a vehicle for reuse of concrete development artifacts.

  • Quantitative studies in software release planning under risk and resource constraints

    Publication Year: 2003 , Page(s): 262 - 270
    Cited by:  Papers (7)

    Delivering software in an incremental fashion implicitly reduces many of the risks associated with delivering large software projects. However, adopting a process where requirements are delivered in releases means that decisions have to be made about which requirements should be delivered in which release. This paper describes a method called EVOLVE+, based on a genetic algorithm and aimed at the evolutionary planning of incremental software development. The method is initially evaluated using a sample project. The evaluation involves an investigation of the trade-off between risk and overall benefit. The link to empirical research is twofold. Firstly, our model is based on interaction with industry and on randomly generated data for the effort and risk of requirements; the results achieved this way are the first step towards a more comprehensive evaluation using real-world data. Secondly, we try to address the uncertainty of the data with additional computational effort that provides more insight into the problem solutions: (i) effort estimates are considered to be stochastic variables following a given probability function; (ii) instead of offering just one solution, the L best (L > 1) solutions are determined. This provides support in finding the most appropriate solution, reflecting implicit preferences and constraints of the actual decision-maker. Stability intervals are given to indicate the validity of solutions and to allow the problem parameters to be changed without adversely affecting the optimality of the solution.
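
    The release-planning problem that EVOLVE+ addresses can be illustrated with a much simpler heuristic. The sketch below assigns requirements to releases greedily by benefit per unit effort under a per-release effort budget; EVOLVE+ itself uses a genetic algorithm and also models risk, and all names and numbers here are invented.

    ```python
    # Toy release-planning sketch: greedy assignment by benefit density.
    # Hypothetical requirements: (name, benefit, effort).
    reqs = [("login", 9, 5), ("search", 7, 8), ("export", 4, 2), ("theming", 3, 6)]
    budget_per_release = 10  # effort capacity of each release
    n_releases = 2

    def plan_releases(requirements, capacity, releases):
        """Assign requirements to releases greedily by benefit per unit effort."""
        ranked = sorted(requirements, key=lambda r: r[1] / r[2], reverse=True)
        assigned = [[] for _ in range(releases)]
        room = [capacity] * releases
        for name, benefit, effort in ranked:
            for i in range(releases):  # put it in the earliest release with room
                if effort <= room[i]:
                    assigned[i].append(name)
                    room[i] -= effort
                    break  # requirements that fit nowhere are deferred
        return assigned

    print(plan_releases(reqs, budget_per_release, n_releases))
    ```

    A genetic algorithm searches over such assignments globally instead of committing greedily, which is what allows EVOLVE+ to return the L best plans rather than a single one.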

  • A review of software surveys on software effort estimation

    Publication Year: 2003 , Page(s): 223 - 230
    Cited by:  Papers (38)

    This paper summarizes estimation knowledge through a review of surveys on software effort estimation. The main findings were that: (1) most projects (60-80%) encounter effort and/or schedule overruns; the overruns, however, seem to be lower than those reported by some consultancy companies. For example, Standish Group's "Chaos Report" describes an average cost overrun of 89%, which is much higher than the average overruns found in other surveys, i.e., 30-40%. (2) The estimation method in most frequent use is expert judgment, and there is no evidence that formal estimation models lead to more accurate estimates. (3) There is a lack of surveys including extensive analyses of the reasons for effort and schedule overruns.

  • Experience-based model-driven improvement management with combined data sources from industry and academia

    Publication Year: 2003 , Page(s): 154 - 161
    Cited by:  Papers (2)

    Experience-based improvement using various modeling techniques is an important issue in software engineering. Many approaches have been proposed and applied in both industry and academia, e.g., case studies, pilot projects, controlled experiments, assessments, expert opinion polls, experience bases, goal-oriented measurement, process modeling, statistical modeling, data mining, and simulation. Although these approaches can be combined and organized according to the principles of the quality improvement paradigm (QIP) and the associated experience factory (EF) concepts, there are serious problems with: a) effective and efficient integration of the various approaches; and, b) the exchange of experience and data between industry and academia. In particular, the second problem strongly limits opportunities for joint research efforts and cross-organizational synergy. Based upon lessons learned from large-scale European joint research initiatives involving both industry and academia, this paper proposes the vision of an integrated software process improvement framework that facilitates solutions to the problems mentioned above.

  • Analogy based prediction of work item flow in software projects: a case study

    Publication Year: 2003 , Page(s): 110 - 119
    Cited by:  Papers (2)

    A software development project coordinates work by using work items that represent customer-, tester-, and developer-found defects, enhancements, and new features. We set out to facilitate software project planning by modeling the flow of such work items and using information on historic projects to predict the work flow of an ongoing project. The history of the work items is extracted from problem tracking or configuration management databases. The Web-based prediction tool allows project managers to select relevant past projects and adjust the prediction based on the staffing, type, and schedule of the ongoing project. We present the workflow model and briefly describe project prediction for a large customer relationship management (CRM) software project.

  • Applying use cases to design versus validate class diagrams - a controlled experiment using a professional modeling tool

    Publication Year: 2003 , Page(s): 50 - 60
    Cited by:  Papers (1)

    Several processes have been proposed for the transition from functional requirements to an object-oriented design, but these processes have been subject to little empirical validation. A use case driven development process is often recommended when applying UML. Nevertheless, it has been reported that this process leads to problems, such as the developers missing some requirements and mistaking requirements for design. This paper describes a controlled experiment, with 53 students as subjects, conducted to investigate two alternative processes for applying a use case model in an object-oriented design process. One process was use case driven, while the other was a responsibility-driven process in which the use case model was applied as a means of validating the resulting class diagram. Half of the subjects used the modeling tool Tau UML Suite from Telelogic; the other half used pen and paper. The results show that the validation process led to class diagrams implementing more of the requirements. The use case driven process did, however, result in class diagrams with a better structure. The results also show that those who used the modeling tool spent more time on constructing class diagrams than did those who used pen and paper. We found that organizing an experiment with a professional modeling tool requires much more effort than one with only pen and paper.

  • An initial framework for research on pair programming

    Publication Year: 2003 , Page(s): 132 - 142
    Cited by:  Papers (9)

    In recent years, several claims have been put forward in favour of pair programming, as opposed to individual programming. However, results from existing studies on pair programming contain apparent contradictions. The differences in the context in which the studies were conducted may be one explanation for such results. The paper presents an initial framework for research on pair programming. The aim is to support empirical studies and meta-analysis for developing theories about pair programming. The framework is based on: (1) existing studies on pair programming, (2) ongoing studies by the authors, and (3) theories from group dynamics.

  • The application of capture-recapture log-linear models to software inspections data

    Publication Year: 2003 , Page(s): 213 - 222
    Cited by:  Papers (1)

    Re-inspection has been deployed in industry to improve the quality of software inspections. The number of defects remaining after inspection is an important factor in deciding whether or not to re-inspect a document. Models based on capture-recapture (CR) sampling techniques have been proposed to estimate the number of defects remaining in a document after inspection. Several publications have studied the robustness of some of these models using software engineering data. Unfortunately, most of the existing studies did not examine the log-linear models with respect to software inspection data. In order to explore the performance of the log-linear models, we evaluated them for three-person inspection teams. Furthermore, we evaluated the models using an inspection data set that was previously used to assess different CR models. Generally speaking, the study provided very promising results. According to our results, the log-linear models proved to be more robust than all CR-based models previously assessed for three-person inspections.
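
    For readers unfamiliar with capture-recapture, the sketch below shows the simplest two-inspector estimator (Lincoln-Petersen), a much simpler relative of the log-linear models the paper evaluates; the inspector findings and defect identifiers are invented.

    ```python
    # Two-inspector capture-recapture sketch (Lincoln-Petersen estimator).
    # Estimates the total number of defects from the overlap between two
    # inspectors' independently found defect sets.

    def lincoln_petersen(found_a, found_b):
        """Estimate total defects: N = n1 * n2 / m, with m the overlap."""
        n1, n2 = len(found_a), len(found_b)
        m = len(found_a & found_b)  # defects found by both inspectors
        if m == 0:
            raise ValueError("no overlap: estimator undefined")
        return n1 * n2 / m

    inspector_a = {"d1", "d2", "d3", "d4"}
    inspector_b = {"d3", "d4", "d5", "d6", "d7", "d8"}

    total_est = lincoln_petersen(inspector_a, inspector_b)
    found = len(inspector_a | inspector_b)  # unique defects actually found
    print(total_est - found)  # estimated defects remaining after inspection
    ```

    Log-linear models generalize this idea to more inspectors and to dependence between them, which is why the paper evaluates them on three-person teams.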

  • A study of collaboration in software design

    Publication Year: 2003 , Page(s): 304 - 313
    Cited by:  Papers (2)

    This paper presents a study of collaboration in software design at a large software company. Ethnographic studies of development teams in the field are relatively rare, so this paper contributes to a small, but growing, body of knowledge about the collaborative activities involved in such design work. Five separate development groups were studied over a six-week period. The methodology included shadowing, interviews and communication event logging. A novel PDA-based application was used for real-time data collection. The results of the study indicate that designers communicate frequently, using a wide variety of communication and collaboration modalities. Designers prefer general-purpose tools to domain-specific applications. In support of communication, designers frequently change their physical location throughout the day. Finally, designers frequently change the ways in which they communicate, changing their communication modalities and styles.

  • Using empirical knowledge from replicated experiments for software process simulation: a practical example

    Publication Year: 2003 , Page(s): 18 - 27

    Empirical knowledge from software engineering studies is an important source for the creation of accurate simulation models. This article describes the development of a simulation model using empirical knowledge gained from an experiment at the NASA/GSFC Software Engineering Laboratory and from two replications at the University of Kaiserslautern. Data and analysis results are used to identify influence dependencies between parameters and to calibrate models. The goal of the model is the determination of the effects (i.e., defect detection efficacy) of a requirements inspection process under varying contexts. The purpose is to provide decision support for project managers and process engineers when planning or changing a development process. This article describes the systematic model development with a focus on the use of empirical knowledge. Additionally, limitations of the model, lessons learned, and research questions for future work are sketched. The model performed well in an initial validation run, deviating only slightly from the experimental values.

  • An experiment on software project size and effort estimation

    Publication Year: 2003 , Page(s): 120 - 129
    Cited by:  Papers (6)

    Expert judgment is still the dominant estimation technique in practice today for software project size and effort. In this paper, we evaluate two techniques that are frequently suggested as effective support for human estimators: checklists and group discussions. A student experiment was conducted to investigate how checklists and group discussions help estimators to improve their estimates. The results suggest that both checklists and group discussions significantly contribute to improved estimation, but in distinct and complementary ways.

  • A study on agreement between participants in an architecture assessment

    Publication Year: 2003 , Page(s): 61 - 70

    When conducting an architecture evaluation it is important that the right people participate, so that as many views and aspects as possible of the architecture candidates are examined before development begins. At the same time, it is not cost-efficient to include every stakeholder who might possibly have an opinion about the system. In this paper we investigate the amount of agreement between participants in an architecture assessment. The purpose of this is to identify which participants will provide unique views during a discussion and which participants share a similar view.

  • An experience in combining flexibility and control in a small company's software product development process

    Publication Year: 2003 , Page(s): 28 - 37
    Cited by:  Papers (2)

    This paper presents a longitudinal case study at Smartner Information Systems, a small software product company operating in a dynamic and uncertain environment. Smartner successfully combines flexibility and control in their product development process. Flexibility is gained with monthly sprints, after which new decisions about project scope can be made in planning the following sprint. Control is achieved through mapping the sprints to management decision points, where the management team makes decisions concerning the whole project portfolio. The development team and other stakeholders of the product participate in sprint planning, facilitating communication of business/customer needs to development. Product roadmapping and sprint demonstrations give visibility of development plans and progress to the whole organization. Freezing the development scope for a month at a time helps in giving the development team a chance to work on their assigned tasks and creates a more relaxed atmosphere.

  • Conducting on-line surveys in software engineering

    Publication Year: 2003 , Page(s): 80 - 88
    Cited by:  Papers (10)

    One purpose of empirical software engineering is to enable an understanding of the factors that influence software development. Surveys are an appropriate empirical strategy for gathering data from a large population (e.g., about methods, tools, developers, companies) and achieving an understanding of that population. Although surveys are quite often performed in, for example, the social sciences and marketing research, they are underrepresented in empirical software engineering research, which most often uses controlled experiments and case studies. Consequently, the methodological support for performing such studies in software engineering is also rather limited. However, with the increasing pervasiveness of the Internet it is possible to perform surveys easily and cost-effectively over Internet pages (i.e., on-line), while at the same time the interest in performing surveys is growing. The purpose of this paper is twofold. First, we want to raise awareness of on-line surveys and discuss how to perform them in the context of software engineering. Second, we report our experience in performing on-line surveys in the form of lessons learned and guidelines.

  • Building pair programming knowledge through a family of experiments

    Publication Year: 2003 , Page(s): 143 - 152
    Cited by:  Papers (19)

    Pair programming is a practice in which two programmers work collaboratively at one computer on the same design, algorithm, code, or test. Pair programming is becoming increasingly popular in industry and in university curricula. A family of experiments was run with over 1200 students at two US universities, North Carolina State University and the University of California Santa Cruz, to assess the efficacy of pair programming as an alternative learning technique in introductory programming courses. Students who used the pair programming technique were at least as likely to complete the introductory course with a grade of C or better when compared with students who used the solo programming technique. Paired students earned exam and project scores equal to or better than solo students. Paired students had a positive attitude toward collaboration and were significantly more likely to be registered as computer science-related majors one year later. Our findings also suggest that students in paired classes continue to be successful in subsequent programming classes that require solo programming.

  • An empirical study of Web-based inspection meetings

    Publication Year: 2003 , Page(s): 244 - 251
    Cited by:  Papers (4)

    Software inspections are a software engineering "best practice" for defect detection and rework reduction. In this paper, we describe an empirical evaluation of a tool that aims to provide Internet groupware support for distributed software inspections. The tool is based on a restructured inspection process in which inspection meetings have the sole goal of removing false positives rather than finding additional defects. In place of face-to-face meetings, the tool provides Web-based discussion forums and support for voting. We present an empirical study of nine remote inspections held as part of a university course. We investigated whether all collected defects are worth discussing as a group. Results show that discussions for filtering out false positives (reported issues that are not true defects) might be restricted to defects that were discovered by only one inspector.

  • An empirical analysis of fault persistence through software releases

    Publication Year: 2003 , Page(s): 206 - 212
    Cited by:  Papers (7)  |  Patents (1)

    This work is based on the idea of analyzing, over the whole life-cycle, the behavior of source files that have a high number of faults at their first release. In terms of predictability, our study helps to understand whether files that are faulty in their first release tend to remain faulty in later releases, and investigates ways to assure higher reliability for the faultiest programs by testing them carefully or lowering the complexity of their structure. The purpose of this paper is to verify our hypothesis empirically, through an experimental analysis of two different projects, and to find causes by observing the structure of the faulty files. In conclusion, we can say that the number of faults at the first release of a source file is an early and significant index of its expected defect rate and reliability.
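
    The paper's core question, whether first-release faultiness persists, can be sketched as a simple comparison of per-file fault counts across releases; the file names, counts, and thresholds below are all hypothetical.

    ```python
    # Sketch: do files that are faulty at release 1 stay faulty later?
    # Hypothetical per-file fault counts for releases 1..3.
    faults = {
        "parser.c": [9, 7, 5],
        "ui.c":     [1, 0, 1],
        "net.c":    [8, 6, 4],
    }

    FIRST_THRESHOLD = 5  # "high number of faults" at release 1 (arbitrary)
    LAST_THRESHOLD = 3   # still notably faulty at the last release (arbitrary)

    faulty_first = {f for f, counts in faults.items() if counts[0] >= FIRST_THRESHOLD}
    faulty_last = {f for f, counts in faults.items() if counts[-1] >= LAST_THRESHOLD}
    persist = faulty_first & faulty_last  # files faulty at both ends

    print(sorted(persist))
    ```

    A persistence analysis like the paper's would compute such overlaps on real fault databases and test their statistical significance rather than eyeballing thresholds.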

  • Identification of key factors in software process management - a case study

    Publication Year: 2003 , Page(s): 316 - 325
    Cited by:  Papers (1)

    When conducting process-related work within an organization, it is important to be aware of which factors are most important to consider. This paper presents an empirical study that was performed in order to find the key success factors in process management. One factor, namely synchronization of processes, was considered much more important within the studied organization than within the studied literature, which shows that more research might be needed in this area. The study further shows that it is important to relate process improvement work to the properties of the affected organization, and that the key factors identified are highly interrelated.
