2009 IEEE 31st International Conference on Software Engineering (ICSE 2009)

Date: 16-24 May 2009

Displaying Results 1 - 25 of 80
  • [Front and back covers]

    Publication Year: 2009, Page(s): c1 - c4
    Freely Available from IEEE
  • [Title page]

    Publication Year: 2009, Page(s): i
    Freely Available from IEEE
  • [Copyright notice]

    Publication Year: 2009, Page(s): ii
    Freely Available from IEEE
  • Foreword

    Publication Year: 2009, Page(s): iii
    Freely Available from IEEE
  • ICSE 2009 conference organization

    Publication Year: 2009, Page(s): iv - ix
    Freely Available from IEEE
  • ICSE 2009 sponsors and supporters

    Publication Year: 2009, Page(s): x - xi
    Freely Available from IEEE
  • Notes

    Publication Year: 2009, Page(s): xii
    Freely Available from IEEE
  • Table of contents

    Publication Year: 2009, Page(s): xiii - xvii
    Freely Available from IEEE
  • Notes

    Publication Year: 2009, Page(s): xviii
    Freely Available from IEEE
  • Predicting build failures using social network analysis on developer communication

    Publication Year: 2009, Page(s): 1 - 11
    Cited by: Papers (25)

    A critical factor in work group coordination, communication has been studied extensively. Yet we lack objective evidence of the relationship between communication structures and successful coordination outcomes. Using data from IBM's Jazz project, we study the communication structures of development teams with high coordination needs. We conceptualize coordination outcome as the result of the teams' code integration builds (successful or failed) and characterize team communication structures with social network measures. Our results indicate that developer communication plays an important role in the quality of software integrations. Although we found that no individual measure could indicate whether a build will fail or succeed, we combined the communication structure measures into a predictive model that indicates whether an integration will fail. Applied to five project teams, our predictive model yielded recall values between 55% and 75% and precision values between 50% and 76%.

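    The abstract does not reproduce the model itself, so the following is only a rough sketch of the general idea: compute social-network measures over a team's communication graph and feed them to a simple classifier. The two measures, the weights, and the sample data are invented for illustration.

        # Toy predictor: sparse, centralized communication -> failed build.
        # Measures, weights, and data are illustrative, not the paper's.
        def density(nodes, edges):
            """Fraction of possible developer pairs that communicated."""
            possible = len(nodes) * (len(nodes) - 1) / 2
            return len(edges) / possible if possible else 0.0

        def degree_centralization(nodes, edges):
            """How strongly communication concentrates on a few developers."""
            deg = {n: 0 for n in nodes}
            for a, b in edges:
                deg[a] += 1
                deg[b] += 1
            mx, n = max(deg.values()), len(nodes)
            denom = (n - 1) * (n - 2)
            return sum(mx - d for d in deg.values()) / denom if denom else 0.0

        def predict_build_fails(nodes, edges, w_d=-2.0, w_c=1.5, bias=0.2):
            score = bias + w_d * density(nodes, edges) \
                         + w_c * degree_centralization(nodes, edges)
            return score > 0  # True = predict a failed integration

        devs = {"ann", "bo", "cai", "dee"}
        msgs = {("ann", "bo"), ("ann", "cai"), ("ann", "dee")}  # star around ann
        print(predict_build_fails(devs, msgs))  # True: highly centralized team
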
  • How tagging helps bridge the gap between social and technical aspects in software development

    Publication Year: 2009, Page(s): 12 - 22
    Cited by: Papers (14)

    Empirical research on collaborative software development practices indicates that technical and social aspects of software development are often intertwined. The processes followed are tacit and constantly evolving, and thus not all of them are amenable to formal tool support. In this paper, we explore how "tagging", a lightweight social computing mechanism, is used to bridge the gap between technical and social aspects of managing work items. We present the results of an empirical study of how tagging has been adopted and adapted over the past two years by a large project with 175 developers. Our research shows that the tagging mechanism was eagerly adopted by the team and has become a significant part of many informal processes. Our findings indicate that lightweight informal tool support, prevalent in the social computing domain, may play an important role in improving team-based software development practices.

  • Tesseract: Interactive visual exploration of socio-technical relationships in software development

    Publication Year: 2009, Page(s): 23 - 33
    Cited by: Papers (21)

    Software developers have long known that project success requires a robust understanding of both technical and social linkages. However, research has largely considered these independently. Research on networks of technical artifacts focuses on techniques like code analysis or mining project archives, while social network analysis has been used to capture information about relations among people. Yet each type of information is often far more useful when combined with the other, as when the "goodness" of social networks is judged against the patterns of dependencies in the technical artifacts. To bring such information together, we have developed Tesseract, an interactive exploratory environment that uses cross-linked displays to visualize the myriad relationships between artifacts, developers, bugs, and communications. We evaluated Tesseract by (1) demonstrating its feasibility with GNOME project data, (2) assessing its usability via informal user evaluations, and (3) verifying its suitability for the open source community via semi-structured interviews.

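    Tesseract itself is a visualization environment; as a hypothetical sketch of the cross-linked data such a tool displays, the snippet below derives developer-file, file-file, and developer-developer relations from a commit log. The log format and records are made up.

        from collections import defaultdict
        from itertools import combinations

        commits = [  # invented records: author, files touched, linked bug
            {"author": "ann", "files": ["ui.c", "net.c"], "bug": 101},
            {"author": "bo",  "files": ["net.c"],         "bug": 101},
            {"author": "cai", "files": ["ui.c"],          "bug": None},
        ]

        dev_file = defaultdict(set)       # technical: who touches what
        file_cochange = defaultdict(int)  # technical: files changed together
        dev_dev = defaultdict(int)        # social proxy: devs sharing artifacts

        for c in commits:
            for f in c["files"]:
                dev_file[c["author"]].add(f)
            for f, g in combinations(sorted(c["files"]), 2):
                file_cochange[(f, g)] += 1

        for a, b in combinations(sorted(dev_file), 2):
            shared = dev_file[a] & dev_file[b]
            if shared:
                dev_dev[(a, b)] = len(shared)

        print(dict(dev_dev))  # {('ann', 'bo'): 1, ('ann', 'cai'): 1}
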
  • HOLMES: Effective statistical debugging via efficient path profiling

    Publication Year: 2009, Page(s): 34 - 44
    Cited by: Papers (37) | Patents (1)

    Statistical debugging aims to automate the process of isolating bugs by profiling several runs of the program and using statistical analysis to pinpoint the likely causes of failure. In this paper, we investigate the impact of using richer program profiles such as path profiles on the effectiveness of bug isolation. We describe a statistical debugging tool called HOLMES that isolates bugs by finding paths that correlate with failure. We also present an adaptive version of HOLMES that uses iterative, bug-directed profiling to lower execution time and space overheads. We evaluate HOLMES using programs from the SIR benchmark suite and some large, real-world applications. Our results indicate that path profiles can help isolate bugs more precisely by providing more information about the context in which bugs occur. Moreover, bug-directed profiling can efficiently isolate bugs with low overheads, providing a scalable and accurate alternative to sparse random sampling.

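    As a much-simplified illustration of the core step (ranking profiled paths by how strongly they correlate with failing runs), the sketch below uses Ochiai, a standard suspiciousness metric, in place of HOLMES's actual statistical model. The run data is fabricated.

        from collections import Counter
        from math import sqrt

        # Each run: (set of acyclic paths exercised, did the run fail?)
        runs = [
            ({"p1", "p2"}, True),
            ({"p1"},       True),
            ({"p2"},       False),
            ({"p1", "p2"}, False),
        ]

        def rank_paths(runs):
            total_fail = sum(1 for _, failed in runs if failed)
            cov_fail, cov_pass = Counter(), Counter()
            for paths, failed in runs:
                (cov_fail if failed else cov_pass).update(paths)
            def score(p):  # Ochiai suspiciousness
                denom = sqrt(total_fail * (cov_fail[p] + cov_pass[p]))
                return cov_fail[p] / denom if denom else 0.0
            every = set().union(*(paths for paths, _ in runs))
            return sorted(every, key=score, reverse=True)

        print(rank_paths(runs))  # ['p1', 'p2']: p1 tracks the failures
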
  • Taming coincidental correctness: Coverage refinement with context patterns to improve fault localization

    Publication Year: 2009, Page(s): 45 - 55
    Cited by: Papers (16)

    Recent techniques for fault localization leverage code coverage to address the high cost of debugging. These techniques exploit the correlations between program failures and the coverage of program entities as clues for locating faults. Experimental evidence shows that the effectiveness of these techniques can be adversely affected by coincidental correctness, which occurs when a fault is executed but no failure is detected. In this paper, we propose an approach to address this problem. We refine the code coverage of test runs using control- and data-flow patterns prescribed by different fault types. We conjecture that this extra information, which we call context patterns, can strengthen the correlations between program failures and the coverage of faulty program entities, making it easier for fault localization techniques to locate the faults. To evaluate the proposed approach, we conducted a mutation analysis on three real-world programs and cross-validated the results with real faults. The experimental results consistently show that coverage refinement is effective in easing the coincidental correctness problem in fault localization techniques.

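    A small numeric illustration of the problem and of the proposed fix, using the standard Tarantula metric; the counts and the context split are fabricated.

        # Tarantula: a standard coverage-based suspiciousness metric.
        def suspiciousness(cov_fail, cov_pass, total_fail, total_pass):
            f = cov_fail / total_fail if total_fail else 0.0
            p = cov_pass / total_pass if total_pass else 0.0
            return f / (f + p) if f + p else 0.0

        # Raw coverage: the faulty statement runs in every test, including
        # 8 passing ones (coincidental correctness), so it looks bland.
        print(suspiciousness(cov_fail=2, cov_pass=8, total_fail=2, total_pass=8))
        # -> 0.5

        # Refined entity: the statement under a context pattern that only
        # matches when the fault can propagate (invented counts).
        print(suspiciousness(cov_fail=2, cov_pass=1, total_fail=2, total_pass=8))
        # -> ~0.89: the failure correlation is restored
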
  • Lightweight fault-localization using multiple coverage types

    Publication Year: 2009, Page(s): 56 - 66
    Cited by: Papers (44) | Patents (1)

    Lightweight fault-localization techniques use program coverage to isolate the parts of the code that are most suspicious of being faulty. In this paper, we present the results of a study of three types of program coverage (statements, branches, and data dependencies) that compares their effectiveness in localizing faults. The study shows that no single coverage type performs best for all faults; different kinds of faults are best localized by different coverage types. Based on these results, we present a new coverage-based approach to fault localization that leverages the unique qualities of each coverage type by combining them. Because data dependencies are noticeably more expensive to monitor than branches, we also investigate the effects of replacing data-dependence coverage with an approximation inferred from branch coverage. Our empirical results show that (1) the cost of fault localization using combinations of coverage is less than using any individual coverage type and closer to the best case (without knowing in advance which kinds of faults are present), and (2) using inferred data-dependence coverage retains most of the benefits of combinations.

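    The paper combines the coverage types inside its localization technique; one simple way to see why combining helps is to merge per-type suspiciousness rankings, for instance by summed rank. The rankings below are fabricated.

        from collections import defaultdict

        rankings = {  # most-suspicious-first, per coverage type (invented)
            "statements": ["s1", "s3", "s7"],
            "branches":   ["s3", "s1", "s7"],
            "data-deps":  ["s3", "s7", "s1"],
        }

        def combined(rankings):
            total = defaultdict(int)
            for order in rankings.values():
                for pos, entity in enumerate(order):
                    total[entity] += pos
            return sorted(total, key=total.get)  # lower = more suspicious

        print(combined(rankings))  # ['s3', 's1', 's7']
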
  • Succession: Measuring transfer of code and developer productivity

    Publication Year: 2009, Page(s): 67 - 77
    Cited by: Papers (4)

    Code ownership transfer, or succession, is a crucial ingredient in open source code reuse and in offshoring projects. Measuring succession can help us understand factors that affect the success of such transfers and suggest ways to make them more efficient. We propose and evaluate several methods to measure succession based on the chronology and traces of developer activities. Using ten instances of offshoring succession identified through interviews, we find that the best succession measure can accurately pinpoint the most likely mentors. We model the productivity ratio of more than 1000 developer pairs involved in succession to test conjectures formulated using organizational socialization theory. The ratio decreases for instances of offshoring and for mentors who have worked primarily on a single project or have transferred ownership of their non-primary project code, thus supporting the theory-based conjectures and suggesting practical ways to improve succession.

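    As a hypothetical sketch of a chronology-based succession measure: the likely mentor is the developer whose commits, before the handoff, best overlap the files the successor works on afterwards. The log format and data are invented.

        from collections import Counter

        log = [  # (time, developer, file) -- invented history
            (1, "meg", "core.c"), (2, "meg", "io.c"), (3, "raj", "ui.c"),
            (5, "nils", "core.c"), (6, "nils", "io.c"),
        ]

        def likely_mentor(log, successor, handoff_time):
            after = {f for t, d, f in log
                     if d == successor and t >= handoff_time}
            overlap = Counter(d for t, d, f in log
                              if t < handoff_time and d != successor
                              and f in after)
            return overlap.most_common(1)[0][0] if overlap else None

        print(likely_mentor(log, successor="nils", handoff_time=4))  # 'meg'
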
  • Predicting faults using the complexity of code changes

    Publication Year: 2009, Page(s): 78 - 88
    Cited by: Papers (48)

    Predicting the incidence of faults in code has been commonly associated with measuring complexity. In this paper, we propose complexity metrics that are based on the code change process instead of on the code itself. We conjecture that a complex code change process negatively affects its product, i.e., the software system. We validate our hypothesis empirically through a case study using data derived from the change history of six large open source projects. Our case study shows that our change complexity metrics are better predictors of fault potential than other well-known historical predictors of faults, i.e., prior modifications and prior faults.

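    The metrics in this paper build on the entropy of how a period's changes scatter across files; a minimal version of that computation (the sample counts are invented):

        from math import log2

        def change_entropy(changes_per_file):
            """Shannon entropy of the change distribution in a period."""
            total = sum(changes_per_file.values())
            probs = [c / total for c in changes_per_file.values() if c]
            return -sum(p * log2(p) for p in probs)

        # Focused change activity: low entropy, conjectured lower risk.
        print(change_entropy({"a.c": 8, "b.c": 1, "c.c": 1}))  # ~0.92
        # Scattered change activity: high entropy, conjectured higher risk.
        print(change_entropy({"a.c": 4, "b.c": 3, "c.c": 3}))  # ~1.57
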
  • A case-study on using an Automated In-process Software Engineering Measurement and Analysis system in an industrial environment

    Publication Year: 2009, Page(s): 89 - 99
    Cited by: Papers (4)

    Automated systems for measurement and analysis are not adopted on a large scale in companies, despite the opportunities they offer. Fear of "Big Brother" and the lack of reports giving insight into the real adoption process and concrete usage in industry are barriers to this adoption. We report on a case study of the adoption and long-term usage (two years of a running system) of such a system in a company, focusing on the adoption process and the challenges we encountered.

  • Using quantitative analysis to implement autonomic IT systems

    Publication Year: 2009, Page(s): 100 - 110
    Cited by: Papers (8)

    The software underpinning today's IT systems needs to adapt dynamically and predictably to rapid changes in system workload, environment and objectives. We describe a software framework that achieves such adaptiveness for IT systems whose components can be modelled as Markov chains. The framework comprises (i) an autonomic architecture that uses Markov-chain quantitative analysis to dynamically adjust the parameters of an IT system in line with its state, environment and objectives; and (ii) a method for developing instances of this architecture for real-world systems. Two case studies are presented that use the framework successfully for the dynamic power management of disk drives, and for the adaptive management of cluster availability within data centres, respectively.

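    As a rough sketch of the kind of analysis such a framework might run on each adaptation step: compute the steady state of a candidate Markov chain and pick the parameter value whose chain best meets the objective. The disk-drive model and all numbers are invented.

        def steady_state(P, iters=1000):
            """Approximate the stationary distribution by power iteration."""
            v = [1.0 / len(P)] * len(P)
            for _ in range(iters):
                v = [sum(v[i] * P[i][j] for i in range(len(P)))
                     for j in range(len(P))]
            return v

        # States: 0 = busy, 1 = idle, 2 = spun down. A shorter spin-down
        # timeout moves probability mass from idle to spun down.
        def chain_for_timeout(t):
            spin = max(0.05, 0.5 - 0.1 * t)  # invented timeout -> rate map
            return [[0.6, 0.4, 0.0],
                    [0.3, 0.7 - spin, spin],
                    [0.2, 0.0, 0.8]]

        POWER = [2.5, 1.0, 0.1]  # invented watts per state
        best = min(range(5), key=lambda t: sum(
            p * w for p, w in zip(steady_state(chain_for_timeout(t)), POWER)))
        print("chosen timeout:", best)
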
  • Model evolution by run-time parameter adaptation

    Publication Year: 2009, Page(s): 111 - 121
    Cited by: Papers (24)

    Models can help software engineers reason about design-time decisions before implementing a system. This paper focuses on models that deal with non-functional properties, such as reliability and performance. To build such models, one must rely on numerical estimates of various parameters provided by domain experts or extracted from other, similar systems. Unfortunately, estimates are seldom correct. In addition, in dynamic environments the values of parameters may change over time. We discuss an approach that addresses these issues by keeping models alive at run time and feeding a Bayesian estimator with data collected from the running system, which produces updated parameters. The updated model provides an increasingly better representation of the system. By analyzing the updated model at run time, it is possible to detect or predict whether a desired property is, or will be, violated by the running implementation. Requirement violations may trigger automatic reconfigurations or recovery actions aimed at guaranteeing the desired goals. We illustrate a working framework supporting our methodology and apply it to an example in which an orchestrated Web service composition is modeled as a discrete-time Markov chain. Numerical simulations show the effectiveness of the approach.

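    The abstract names a Bayesian estimator fed with run-time data; a minimal sketch of that loop, assuming a Beta prior over a single DTMC failure probability (the prior, observations, and requirement are invented):

        class BetaEstimator:
            """Conjugate Beta-Bernoulli update of a failure probability."""
            def __init__(self, alpha=1.0, beta=1.0):  # uninformative prior
                self.alpha, self.beta = alpha, beta
            def observe(self, failed):
                if failed:
                    self.alpha += 1
                else:
                    self.beta += 1
            @property
            def mean(self):
                return self.alpha / (self.alpha + self.beta)

        p_fail = BetaEstimator()
        for outcome in [False] * 40 + [True] * 3:  # monitored invocations
            p_fail.observe(outcome)

        REQUIRED = 0.05  # invented reliability requirement
        print(f"estimated failure probability: {p_fail.mean:.3f}")
        if p_fail.mean > REQUIRED:
            print("requirement violated -> trigger reconfiguration")
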
  • Taming Dynamically Adaptive Systems using models and aspects

    Publication Year: 2009, Page(s): 122 - 132
    Cited by: Papers (23)

    Since software systems need to be continuously available under varying conditions, their ability to evolve at runtime is increasingly seen as a key issue. Modern programming frameworks already provide support for dynamic adaptation. However, the high variability of features in Dynamically Adaptive Systems (DAS) introduces an explosion of possible runtime system configurations (often called modes) and mode transitions. Designing these configurations and their transitions is tedious and error-prone, making feature evolution difficult. Aspect-Oriented Modeling (AOM) was introduced to improve the modularity of software; this paper presents how an AOM approach can be used to tame the combinatorial explosion of DAS modes. Using AOM techniques, we derive a wide range of modes by weaving aspects into an explicit model reflecting the runtime system, and we use these generated modes to adapt the system automatically. We validate our approach on an adaptive middleware for home automation currently deployed in the Rennes metropolitan area.

  • Accurate Interprocedural Null-Dereference Analysis for Java

    Publication Year: 2009, Page(s): 133 - 143
    Cited by: Papers (9)

    Null dereference is a commonly occurring defect in Java programs, and many static-analysis tools identify such defects. However, most of the existing tools perform a limited interprocedural analysis. In this paper, we present an interprocedural, path-sensitive, and context-sensitive analysis for identifying null dereferences. Starting at a dereference statement, our approach performs a backward, demand-driven analysis to precisely identify the paths along which null values may flow to the dereference. The demand-driven analysis avoids an exhaustive program exploration, which lets it scale to large programs. We present the results of empirical studies conducted using large open-source and commercial products. Our results show that (1) our approach detects fewer false positives, and significantly more interprocedural true positives, than other commonly used tools; (2) the analysis scales to large subjects; and (3) the identified defects are often deleted in subsequent releases, which indicates that the reported defects are important.

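    A toy illustration of the backward, demand-driven flavor of the analysis, on a straight-line three-address code fragment: starting from a dereference, walk backwards asking whether the value can still be null; definitions either kill or propagate the query. Real tools do this interprocedurally, path- and context-sensitively; the IR here is invented.

        def may_deref_null(instrs, deref_index, var):
            """Backward query: can `var` be null at instruction deref_index?"""
            tracked = {var}
            for op, dst, src in reversed(instrs[:deref_index]):
                if dst not in tracked:
                    continue
                tracked.discard(dst)
                if op == "const_null":
                    return True       # a null reaches the dereference
                if op == "new":
                    continue          # allocation kills the query
                if op == "copy":
                    tracked.add(src)  # keep chasing the source value
            return bool(tracked)      # unresolved, e.g. a parameter

        prog = [
            ("const_null", "a", None),
            ("copy",       "b", "a"),
            ("new",        "a", None),
        ]
        print(may_deref_null(prog, 3, "b"))  # True: b copied the null 'a'
        print(may_deref_null(prog, 3, "a"))  # False: 'a' was reallocated
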
  • The road not taken: Estimating path execution frequency statically

    Publication Year: 2009, Page(s): 144 - 154
    Cited by: Papers (8)

    A variety of compilers, static analyses, and testing frameworks rely heavily on path frequency information. Uses for such information range from optimizing transformations to bug finding. Path frequencies are typically obtained through profiling, but that approach is severely restricted: it requires running programs in an indicative environment, and on indicative test inputs. We present a descriptive statistical model of path frequency based on features that can be readily obtained from a program's source code. Our model is over 90% accurate with respect to several benchmarks, and is sufficient for selecting the 5% of paths that account for over half of a program's total runtime. We demonstrate our technique's robustness by measuring its performance as a static branch predictor, finding it to be more accurate than previous approaches on average. Finally, our qualitative analysis of the model provides insight into which source-level features indicate "hot paths".

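    In the spirit of a feature-based statistical model of path frequency, a hypothetical sketch: score each path by weighted source-level features. The features and weights below are invented; the paper learns its model empirically.

        FEATURE_WEIGHTS = {         # invented; higher -> path likelier "hot"
            "inside_loop":    2.0,
            "throws":        -3.0,  # error-handling paths run rarely
            "returns_const": -1.0,  # early-exit guard paths
            "calls":          0.5,
        }

        def path_score(features):
            return sum(FEATURE_WEIGHTS.get(f, 0.0) * n
                       for f, n in features.items())

        paths = {  # feature counts extracted per path (invented)
            "loop body":    {"inside_loop": 3, "calls": 2},
            "error branch": {"throws": 1, "returns_const": 1},
        }
        ranked = sorted(paths, key=lambda p: path_score(paths[p]),
                        reverse=True)
        print(ranked)  # hot candidates first: ['loop body', 'error branch']
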
  • Automatic dimension inference and checking for object-oriented programs

    Publication Year: 2009, Page(s): 155 - 165
    Cited by: Papers (2)

    This paper introduces UniFi, a tool that attempts to automatically detect dimension errors in Java programs. UniFi infers dimensional relationships across primitive-type and string variables in a program, using an interprocedural, context-sensitive analysis. It then monitors these dimensional relationships as the program evolves, flagging inconsistencies that may be errors. UniFi requires no programmer annotations and supports arbitrary program-specific dimensions, thus providing fine-grained dimensional consistency checking. UniFi exploits features of object-oriented languages, but can be used for other languages as well. We have run UniFi on real-life Java code and found that it is useful in exposing dimension errors. We present a case study of using UniFi on nightly builds of a 19,000-line code base as it evolved over 10 months.

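    The unification idea behind dimension inference can be sketched with a union-find: variables related by assignment or arithmetic must share a dimension, and merging two distinct known dimensions flags a likely error. The variables and constraints below are invented; UniFi's actual analysis is interprocedural and context-sensitive.

        DIMS = {"seconds", "bytes"}  # known dimension constants
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path compression
                x = parent[x]
            return x

        def unify(a, b):
            ra, rb = find(a), find(b)
            if ra == rb:
                return None
            if ra in DIMS and rb in DIMS:  # two concrete dimensions collide
                return f"dimension error: {ra} vs {rb}"
            if ra in DIMS:                 # keep the concrete dimension
                ra, rb = rb, ra            # as the class representative
            parent[ra] = rb
            return None

        unify("timeout", "seconds")         # int timeout = 30;   (seconds)
        unify("bufsize", "bytes")           # int bufsize = 4096; (bytes)
        print(unify("elapsed", "timeout"))  # elapsed = timeout -> None (ok)
        print(unify("elapsed", "bufsize"))  # elapsed + bufsize -> error
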
  • In-field healing of integration problems with COTS components

    Publication Year: 2009, Page(s): 166 - 176
    Cited by: Papers (5)

    Developers frequently integrate complex COTS frameworks and components into software applications. COTS products are often only partially documented, and developers may misuse the technologies and introduce integration faults, as witnessed by the many entries in fault repositories. Once identified, common integration problems and their fixes are usually documented in forums and fault repositories on the Web, but this does not prevent them from recurring in the field when COTS products are reused. In this paper, we propose a methodology and a self-healing technology that can reduce the occurrence of in-field failures caused by common integration problems identified and documented by COTS developers. Our methodology supports COTS developers in producing healing connectors for common misuses of COTS products. Our technology produces information that facilitates debugging and patching of applications that use COTS products. Application developers inject healing connectors into their systems to automatically repair problems caused by misuses of COTS products. Healing takes place at run time, on the fly, and in the field. The activity of healing connectors is traced in log files to facilitate debugging and patching of integration problems. Empirical experience with several applications and COTS products shows the feasibility of the approach and the efficiency of the technology.

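    As a hypothetical sketch of what a healing connector amounts to at run time: a wrapper around a COTS call that recognizes a documented integration problem, applies the documented fix, logs the event, and retries. The component, fault, and fix are all invented.

        import logging

        logging.basicConfig(level=logging.INFO)
        log = logging.getLogger("healing")

        def healing_connector(known_fault, fix):
            """Wrap a COTS call; on a known fault, fix the arguments and retry."""
            def wrap(cots_call):
                def healed(*args, **kwargs):
                    try:
                        return cots_call(*args, **kwargs)
                    except known_fault as e:
                        log.info("healed %r in %s", e, cots_call.__name__)
                        return cots_call(*fix(args), **kwargs)  # retry, fixed
                return healed
            return wrap

        # Invented misuse: the component rejects relative paths.
        @healing_connector(ValueError, fix=lambda args: ("/tmp/" + args[0],))
        def cots_open(path):
            if not path.startswith("/"):
                raise ValueError("relative path not supported")
            return f"opened {path}"

        print(cots_open("data.bin"))  # healed on the fly, in the field
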