
Proceedings of the 26th International Conference on Software Engineering (ICSE 2004)

23-28 May 2004


Displaying Results 1 - 25 of 137
  • A weakly constrained approach to software change coordination

    Page(s): 66 - 68

    The development of a software system - of any reasonable size - from initial conception through ongoing maintenance and evolution accrues significant coordination overheads. Often the mechanisms used to manage change and coordination detract from the time developers have to pursue the principal goal of constructing the desired system. This is one of the motivators behind the emerging 'agile' methodologies. By permitting people to work as independently as possible and yet be aware of each other's dependencies and constraints, it is believed that these secondary costs can be minimised. The position taken in the research summarised here is that better support can be provided for this type of weakly constrained coordination by enhancing the awareness, automated traceability, and constraint checking capabilities of software configuration management systems. Current progress in the research and plans for future work are described.

  • χ-SCTL/MUS: a formal methodology to evolve multi-perspective software requirements specifications

    Page(s): 72 - 74

    The objective of this thesis is to extend SCTL/MUS, a formal methodology for the refinement of requirements specifications, to a multi-perspective environment in which the requirements specifications belonging to the different stakeholders involved in the development of the system coexist. To reach this goal, the new methodology (referred to as χ-SCTL/MUS) adopts a viewpoint-based approach that makes it possible to gather and maintain (possibly inconsistent and incomplete) information from multiple sources. It explicitly separates the descriptions provided by different stakeholders, and concentrates on identifying and resolving conflicts between them.

  • Improving UML design tools by formal games

    Page(s): 75 - 77

    The Unified Modeling Language (UML) is a standard language for modelling the design of object oriented software systems. The currently available UML design tools mainly provide support for drawing the UML diagrams, i.e. for recording a chosen design, but not for choosing a design. The design of a system is a non-trivial, iterative process, and errors introduced at this level are usually very expensive to fix. Hence we argue that UML design tools should provide more support for the design activity as such. Ideally, a UML design tool should allow the modeller to explore different design options, provide feedback about the design in its current state, and even make suggestions for improvements where this is possible. The usage of such a tool would be highly interactive and very much like a game, played repeatedly between modeller and tool. We claim that this similarity makes formal games a natural and intuitive choice for the definition of tool concepts. Since formal games can be used for verification, a game-based tool can provide formally founded feedback about flaws in the design. Games as used in verification normally require a complete formal model of the software system and a formal specification of the property that is to be verified. Instead, we would like to let the designer play a game directly on the basis of the UML model, even though a UML model is often incomplete and informally defined. We also want to allow the modeller to explore variations of the design while the game is being played. The research hypothesis for this work is that formal games are a suitable technique for more advanced UML design tools which point the modeller to flaws in the design, help to improve the design, and provide support for making design decisions.
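
    For background on the game view advocated here, a verification game can be stated abstractly as follows; this is the generic textbook formulation, not the specific game defined in the paper:

        \[
        G = (\mathit{Pos},\ \mathit{Pos}_V \subseteq \mathit{Pos},\ \mathord{\rightarrow} \subseteq \mathit{Pos} \times \mathit{Pos},\ W)
        \]

    The Verifier chooses the next move in positions from $\mathit{Pos}_V$, the Refuter chooses elsewhere, and the winning condition $W$ decides who wins a play; the checked property holds exactly when the Verifier has a winning strategy from the initial position. A design tool built on this idea could, in principle, present winning moves as suggestions to the modeller.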

  • Towards safe distributed application development

    Page(s): 347 - 356

    Distributed application development is overly tedious, as the dynamic composition of distributed components is hard to combine with static safety with respect to types (type safety) and data (encapsulation). Achieving such safety usually requires specific compilation to generate the glue between components, or the use of a single programming language for all components with a hardwired abstraction for distributed interaction. In this paper, we investigate general-purpose programming language features for supporting third-party implementations of programming abstractions for distributed interaction among components. We report on our experience developing a stock market application based on type-based publish/subscribe (TPS), implemented (1) as a library in standard Java and (2) with a homegrown extension of the Java language augmented with specific primitives for TPS, motivated by the shortcomings of the former implementation. We then revisit the library approach, investigating the impact of genericity, reflective features, and the type system on the implementation of a satisfactory TPS library. We then discuss the impact of these features on other distributed programming abstractions, and hence on the engineering of distributed applications in general, pointing out shortcomings of mainstream programming environments such as Java and .NET.
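
    To make the TPS idea concrete, here is a minimal sketch of type-based publish/subscribe as a plain Java library, using generics and Class tokens for type-based matching; the names (TpsBus, subscribe, publish) are illustrative, not the API studied in the paper.

        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.CopyOnWriteArrayList;
        import java.util.function.Consumer;

        // Type-based publish/subscribe: subscribers register against an event
        // type, and a published event is delivered to every subscriber whose
        // registered type it is an instance of.
        final class TpsBus {
            private final Map<Class<?>, List<Consumer<Object>>> subs = new ConcurrentHashMap<>();

            <T> void subscribe(Class<T> type, Consumer<? super T> handler) {
                subs.computeIfAbsent(type, k -> new CopyOnWriteArrayList<>())
                    .add(e -> handler.accept(type.cast(e)));
            }

            void publish(Object event) {
                subs.forEach((type, handlers) -> {
                    if (type.isInstance(event)) {            // type-based matching
                        handlers.forEach(h -> h.accept(event));
                    }
                });
            }
        }

    A client subscribing with bus.subscribe(StockQuote.class, q -> ...) then receives exactly the published events of that (sub)type, which is the kind of safety and convenience trade-off the paper's library-versus-language comparison revolves around.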

  • Using simulation to empirically investigate test coverage criteria based on statechart

    Page(s): 86 - 95

    A number of testing strategies have been proposed using state machines and statecharts as test models in order to derive test sequences and validate classes or class clusters. Though such criteria have the advantage of being systematic, little is known on how cost effective they are and how they compare to each other. This article presents a precise simulation and analysis procedure to analyze the cost-effectiveness of statechart-based testing techniques. We then investigate, using this procedure, the cost and fault detection effectiveness of adequate test sets for the most referenced coverage criteria for statecharts on three different representative case studies. Through the analysis of common results and differences across studies, we attempt to draw more general conclusions regarding the costs and benefits of using the criteria under investigation.
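
    For reference, the simplest of the commonly studied statechart criteria, all-transitions coverage, can be computed as in the following minimal Java sketch; the Transition record and the test-set encoding are placeholders invented for illustration, not the paper's simulation procedure.

        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        // A transition is identified by (source state, triggering event, target state).
        record Transition(String source, String event, String target) {}

        final class CoverageCheck {
            // A test case is encoded as the sequence of model transitions it exercises.
            static double allTransitionsCoverage(Set<Transition> model,
                                                 List<List<Transition>> testSet) {
                if (model.isEmpty()) return 1.0;
                Set<Transition> covered = new HashSet<>();
                testSet.forEach(covered::addAll);
                covered.retainAll(model);              // ignore anything outside the model
                return (double) covered.size() / model.size();   // 1.0 = adequate test set
            }
        }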

  • Precise service level agreements

    Page(s): 179 - 188

    SLAng is an XML language for defining service level agreements, the part of a contract between the client and provider of an Internet service that describes the quality attributes that the service is required to possess. We define the semantics of SLAng precisely by modelling the syntax of the language in UML, then relating the language model to a model that describes the structure and behaviour of services. The presence of SLAng elements imposes behavioural constraints on service elements, and the precise definition of these constraints using OCL constitutes the semantic description of the language. We use the semantics to define a notion of SLA compatibility, and an extension to UML that enables the modelling of service situations as a precursor to analysis, implementation and provisioning activities.
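
    The paper expresses such constraints in OCL over a UML model of services; purely as a hedged analogue, the same kind of latency condition might be checked like this in Java (ServiceUsage and the 95% bound are invented for illustration, not drawn from SLAng):

        import java.util.List;

        // One recorded service usage: when the request was made and answered.
        record ServiceUsage(long requestMillis, long responseMillis) {
            long latency() { return responseMillis - requestMillis; }
        }

        final class SlaCheck {
            // SLA condition: at least 95% of usages are answered within the bound.
            static boolean latencyConditionHolds(List<ServiceUsage> usages, long boundMillis) {
                if (usages.isEmpty()) return true;     // vacuously satisfied
                long within = usages.stream()
                                    .filter(u -> u.latency() <= boundMillis)
                                    .count();
                return within >= Math.ceil(0.95 * usages.size());
            }
        }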

  • The Dublo architecture pattern for smooth migration of business information systems: an experience report

    Page(s): 117 - 126

    While the importance of multi-tier architectures for enterprise information systems is widely accepted and their benefits are well published, the systematic migration from monolithic legacy systems toward multi-tier architectures is known to a much lesser extent. In this paper we present a pattern on how to re-use elements of legacy systems within multi-tier architectures, which also allows for a smooth migration path. We report on our experience migrating existing municipal information systems towards a multi-tier architecture. The experience is generalized by describing the underlying pattern such that it can be re-used for similar architectural migration tasks. The resulting Dublo pattern is based on the partial duplication of business logic between the legacy system and the newly deployed application server. While this somewhat contradicts the separation-of-concerns principle, it offers a high degree of flexibility in the migration process and allows for a smooth transition. Experience with the combination of outdated database technology with modern server-side component and Web services technologies is discussed. In this context, we also report on the technology and architecture selection processes.
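
    The heart of the pattern, business logic partially duplicated between the legacy tier and a new application server, might be sketched as follows in Java; LegacyGateway, BillingService and the method names are illustrative, not taken from the report.

        // New application-server tier in the Dublo style: unchanged operations are
        // delegated to the legacy tier, while new or reworked logic lives here.
        interface LegacyGateway {                      // wraps the legacy system's API
            Invoice fetchInvoice(String id);
        }

        record Invoice(String id, long amountCents) {}

        final class BillingService {
            private final LegacyGateway legacy;

            BillingService(LegacyGateway legacy) { this.legacy = legacy; }

            Invoice invoice(String id) {
                return legacy.fetchInvoice(id);        // delegated legacy logic
            }

            long discountedAmountCents(String id, int percent) {
                Invoice inv = invoice(id);             // new business logic in the new tier
                return inv.amountCents() * (100 - percent) / 100;
            }
        }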

  • Finding latent code errors via machine learning over program executions

    Page(s): 480 - 490

    This paper proposes a technique for identifying program properties that indicate errors. The technique generates machine learning models of program properties known to result from errors, and applies these models to program properties of user-written code to classify and rank properties that may lead the user to errors. Given a set of properties produced by the program analysis, the technique selects a subset of properties that are most likely to reveal an error. An implementation, the fault invariant classifier, demonstrates the efficacy of the technique. The implementation uses dynamic invariant detection to generate program properties. It uses support vector machine and decision tree learning tools to classify those properties. In our experimental evaluation, the technique increases the relevance (the concentration of fault-revealing properties) by a factor of 50 on average for the C programs, and 4.8 for the Java programs. Preliminary experience suggests that most of the fault-revealing properties do lead a programmer to an error.
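
    In outline, the classification step ranks candidate program properties by a learned score, as in the Java sketch below; the PropertyModel interface stands in for the paper's support vector machine and decision tree models, and the record fields are invented for illustration.

        import java.util.Comparator;
        import java.util.List;

        // A candidate property: its textual form plus features extracted from it.
        record ProgramProperty(String text, double[] features) {}

        interface PropertyModel {                       // trained offline on faulty programs
            double faultScore(double[] features);       // higher = more likely fault-revealing
        }

        final class FaultRanker {
            // Order properties so the most suspicious ones are presented first.
            static List<ProgramProperty> rank(List<ProgramProperty> props, PropertyModel model) {
                return props.stream()
                            .sorted(Comparator.comparingDouble(
                                    (ProgramProperty p) -> -model.faultScore(p.features())))
                            .toList();
            }
        }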

  • Assume-guarantee verification of source code with design-level assumptions

    Page(s): 211 - 220

    Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the "state explosion" problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have previously developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning about the source code. In particular, we propose to use assumptions generated for the design to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design models were used to identify and correct a safety property violation, and the generated assumptions allowed us to check successfully that the property was preserved by the implementation.
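
    The proof rule underlying this decomposition is the standard assume-guarantee rule, stated here in its textbook form rather than as formulated in the paper:

        \[
        \frac{\langle A \rangle\, M_1\, \langle P \rangle \qquad
              \langle \mathit{true} \rangle\, M_2\, \langle A \rangle}
             {\langle \mathit{true} \rangle\, M_1 \parallel M_2\, \langle P \rangle}
        \]

    If component $M_1$ satisfies property $P$ under assumption $A$, and its environment $M_2$ discharges $A$, then the composition satisfies $P$. The approach described above generates $A$ at design level and reuses it to decompose verification of the implementation.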

  • Feature-based decomposition of inductive proofs applied to real-time avionics software: an experience report

    Page(s): 304 - 313

    The hardware and software in modern aircraft control systems are good candidates for verification using formal methods: they are complex, safety-critical, and challenge the capabilities of test-based verification strategies. We have previously reported on our use of model checking to verify the time partitioning property of the Deos™ real-time operating system for embedded avionics. The size and complexity of this system have limited us to analyzing only one configuration at a time. To overcome this limit and generalize our analysis to arbitrary configurations we have turned to theorem proving. This paper describes our use of the PVS theorem prover to analyze the Deos scheduler. In addition to our inductive proof of the time partitioning invariant, we present a feature-based technique for modeling state-transition systems and formulating inductive invariants. This technique facilitates an incremental approach to theorem proving that scales well to models of increasing complexity, and has the potential to be applicable to a wide range of problems.
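
    The proof obligations behind such an inductive proof take the usual form for a transition system with initial predicate $\mathit{Init}$ and transition relation $\rightarrow$ (a generic statement, not the paper's PVS formalization):

        \[
        \mathit{Init}(s) \implies \mathit{Inv}(s)
        \qquad\text{and}\qquad
        \mathit{Inv}(s) \wedge (s \rightarrow s') \implies \mathit{Inv}(s')
        \]

    From these two obligations, $\mathit{Inv}$, here the time partitioning invariant, holds in every reachable state for any configuration; the feature-based technique organizes how $\mathit{Inv}$ is formulated and incrementally strengthened as features are added to the model.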

  • Imposing a memory management discipline on software deployment

    Page(s): 583 - 592

    The deployment of software components frequently fails because dependencies on other components are not declared explicitly or are declared imprecisely. This results in an incomplete reproduction of the environment necessary for proper operation, or in interference between incompatible variants. In this paper, we show that these deployment hazards are similar to pointer hazards in memory models of programming languages and can be countered by imposing a memory management discipline on software deployment. Based on this analysis, we have developed a generic, platform and language independent, discipline for deployment that allows precise dependency verification; exact identification of component variants; computation of complete closures containing all components on which a component depends; maximal sharing of components between such closures; and concurrent installation of revisions and variants of components. We have implemented the approach in the Nix deployment system, and used it for the deployment of a large number of existing Linux packages. We compare its effectiveness to other deployment systems.
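
    The closure computation mentioned above is, at bottom, a transitive closure over the declared dependency graph, as in this minimal Java sketch (the string-keyed dependsOn map is an illustrative stand-in for the component store):

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.LinkedHashSet;
        import java.util.Map;
        import java.util.Set;

        final class Closures {
            // Complete closure: the component plus everything it transitively depends on.
            static Set<String> closure(String root, Map<String, Set<String>> dependsOn) {
                Set<String> result = new LinkedHashSet<>();
                Deque<String> work = new ArrayDeque<>();
                work.push(root);
                while (!work.isEmpty()) {
                    String c = work.pop();
                    if (result.add(c)) {                // first visit only
                        work.addAll(dependsOn.getOrDefault(c, Set.of()));
                    }
                }
                return result;                          // deploy exactly these components
            }
        }

    In the Nix system itself, components are identified by cryptographic hashes of their inputs, which is what makes the analogy with pointers and reachability precise.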

  • Tutorial: an overview of UML 2.0

    Page(s): 741 - 742

    This paper covers the salient aspects of UML 2.0, the first major revision of the Unified Modeling Language. This summary briefly reviews some of the main points covered in the paper.

  • Parametric analysis of real-time embedded systems with abstract approximation interpretation

    Page(s): 39 - 41

    My research area is the formal analysis of real-time embedded systems. The main objective of this research is the theoretical and practical development of a verification algorithm for the formal analysis of real-time embedded systems, based on the combination of real-time model checking and abstract interpretation of real-time models. The objective of the proposed combination is improved time and space behavior of the resulting algorithm. One of the drawbacks of all current real-time model-checking tools is the limited size of the systems that can be analyzed. By combining state-space exploration with abstract interpretation, we expect to scale up the size of the applications that can be handled.

  • Skoll: distributed continuous quality assurance

    Page(s): 459 - 468

    Quality assurance (QA) tasks, such as testing, profiling, and performance evaluation, have historically been done in-house on developer-generated workloads and regression suites. Since this approach is inadequate for many systems, tools and processes are being developed to improve software quality by increasing user participation in the QA process. A limitation of these approaches is that they focus on isolated mechanisms, not on the coordination and control policies and tools needed to make the global QA process efficient, effective, and scalable. To address these issues, we have initiated the Skoll project, which is developing and validating novel software QA processes and tools that leverage the extensive computing resources of worldwide user communities in a distributed, continuous manner to significantly and rapidly improve software quality. This paper provides several contributions to the study of distributed continuous QA. First, it illustrates the structure and functionality of a generic around-the-world, around-the-clock QA process and describes several sophisticated tools that support this process. Second, it describes several QA scenarios built using these tools and this process. Finally, it presents a feasibility study applying these scenarios to a 1MLOC+ software package called ACE+TAO. While much work remains to be done, the study suggests that the Skoll process and tools effectively manage and control distributed, continuous QA processes. Using Skoll, we rapidly identified problems that had taken the ACE+TAO developers substantially longer to find, several of which had not been found before. Moreover, automatic analysis of QA task results often provided developers with information that quickly led them to the root cause of the problems.

  • Supporting reflective practitioners

    Page(s): 688 - 690

    The theme and title for this panel is inspired by Donald Schon's writings about the reflective practitioner in which he describes professional practice as being a process of reflection in action. Ill-defined problems including design decisions lead to breakdowns, which become opportunities for reflection and modification of practice. This panel seeks to provide ICSE attendees with a broad cross section of the history, state of the art, and open issues with some of the methods and tools directed at supporting reflective software practitioners.

  • An experimental, pluggable infrastructure for modular configuration management policy composition

    Page(s): 573 - 582

    Building a configuration management (CM) system is a difficult endeavor that regularly requires tens of thousands of lines of code to be written. To reduce this effort, several experimental infrastructures have been developed that provide reusable repositories upon which to build a CM system. In this paper, we push the idea of reusability even further. Whereas existing infrastructures only reuse a generic CM model (i.e., the data structures used to capture the evolution of artifacts), we have developed an experimental infrastructure, called MCCM, that additionally allows reuse of CM policies (i.e., the rules by which a user evolves artifacts stored in a CM system). The key contribution underlying MCCM is that a CM policy is not a monolithic entity; instead, it can be composed from small modules that each addresses a unique dimension of concern. Using the pluggable architecture and base set of modules of MCCM, then, the core of a desired new CM system can be rapidly composed by choosing appropriate existing modules and implementing any remaining modules only as needed. We demonstrate our approach by showing how the use of MCCM significantly reduces the effort involved in creating several representative CM systems.
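
    The composition idea can be pictured with a small Java sketch in which a CM policy is assembled from independent modules that each permit or veto an operation on an artifact; the interfaces below are invented for illustration and are not MCCM's actual plug-in API.

        import java.util.List;

        interface PolicyModule {                        // one dimension of concern
            boolean permits(String user, String operation, String artifact);
        }

        final class ComposedPolicy implements PolicyModule {
            private final List<PolicyModule> modules;   // e.g. locking, versioning, workspaces

            ComposedPolicy(List<PolicyModule> modules) { this.modules = modules; }

            @Override
            public boolean permits(String user, String operation, String artifact) {
                // The composed policy allows an operation only if every module does.
                return modules.stream().allMatch(m -> m.permits(user, operation, artifact));
            }
        }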

  • Elaborating security requirements by construction of intentional anti-models

    Page(s): 148 - 157

    Caring for security at requirements engineering time is a message that has finally received some attention recently. However, it is not yet very clear how to achieve this systematically through the various stages of the requirements engineering process. The paper presents a constructive approach to the modeling, specification and analysis of application-specific security requirements. The method is based on a goal-oriented framework for generating and resolving obstacles to goal satisfaction. The extended framework addresses malicious obstacles (called anti-goals) set up by attackers to threaten security goals. Threat trees are built systematically through anti-goal refinement until leaf nodes are derived that are either software vulnerabilities observable by the attacker or anti-requirements implementable by this attacker. New security requirements are then obtained as countermeasures by application of threat resolution operators to the specification of the anti-requirements and vulnerabilities revealed by the analysis. The paper also introduces formal epistemic specification constructs and patterns that may be used to support a formal derivation and analysis process. The method is illustrated on a Web-based banking system for which subtle attacks have been reported recently.
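
    Schematically, and in generic notation rather than the paper's, an anti-goal is the negation of a security goal and is refined until it bottoms out in things the attacker can actually observe or implement:

        \[
        \mathit{AntiGoal} = \neg\, \mathit{SecurityGoal}
        \qquad\qquad
        \mathit{AntiGoal} \leadsto \mathit{AG}_1 \wedge \dots \wedge \mathit{AG}_n
        \]

    Each refinement alternative ends in leaves $\mathit{AG}_i$ that are either software vulnerabilities observable by the attacker or anti-requirements the attacker can implement; countermeasures are then new requirements chosen to falsify such leaves.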

  • A hybrid architectural style for distributed parallel processing of generic data streams

    Page(s): 367 - 376

    Immersive, interactive applications grouped under the concept of Immersipresence require on-line processing and mixing of multimedia data streams and structures. One critical issue seldom addressed is the integration of different solutions to technical challenges, developed independently in separate fields, into working systems that operate under hard performance constraints. In order to realize the Immersipresence vision, a consistent, generic approach to system integration is needed that is adapted to the constraints of research development. This paper introduces SAI, a new software architecture model for designing, analyzing and implementing applications performing distributed, asynchronous parallel processing of generic data streams. SAI provides a universal framework for the distributed implementation of algorithms and their easy integration into complex systems that exhibit desirable software engineering qualities such as efficiency, scalability, extensibility, reusability and interoperability. The SAI architectural style and its properties are described. The use of SAI and of its supporting open source middleware (MFSM) is illustrated with integrated, distributed interactive systems.
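
    The basic unit of such a style, an asynchronous cell that consumes and produces generic stream items, might look like the following in Java; SAI's actual cell and pulse vocabulary is richer, and the names here are illustrative only.

        import java.util.concurrent.BlockingQueue;
        import java.util.function.Function;

        // One asynchronous processing cell: repeatedly pull an item from its input
        // stream, transform it, and push the result downstream.
        final class Cell<I, O> implements Runnable {
            private final BlockingQueue<I> in;
            private final BlockingQueue<O> out;
            private final Function<I, O> process;

            Cell(BlockingQueue<I> in, BlockingQueue<O> out, Function<I, O> process) {
                this.in = in; this.out = out; this.process = process;
            }

            @Override
            public void run() {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        out.put(process.apply(in.take()));
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();   // shut down cleanly
                }
            }
        }

    Cells are wired into a graph by sharing queues, and each runs on its own thread, which is where the asynchronous parallelism of the style comes from.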

  • Responsibilities and rewards: specifying design patterns

    Page(s): 666 - 675

    Design patterns provide guidance to system designers on how to structure individual classes or groups of classes, as well as constraints on the interactions among these classes, to enable them to implement flexible and reliable systems. Patterns are usually described informally. While such informal descriptions are useful and even essential, if we want to be sure that designers precisely and unambiguously understand the requirements that must be met when applying a given pattern, and be able to reliably predict the behaviors the resulting system exhibits, we also need formal characterizations of the patterns. In this paper, we develop an approach to formalizing design patterns. The requirements that a designer must meet with respect to the structures of the classes, as well as with respect to the behaviors exhibited by the relevant methods, are captured in the responsibilities component of the pattern's specification; the benefits that result from applying the pattern, in terms of specific behaviors that the resulting system is guaranteed to exhibit, are captured in the rewards component. One important aspect of many design patterns is their flexibility; our approach is designed to ensure that this flexibility is retained in the formalization of the pattern. We illustrate the approach by applying it to a standard design pattern.
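
    The two-part structure can be illustrated on the familiar Observer pattern, chosen here only as an example; the responsibility and reward in the comments are informal paraphrases, not the paper's formal specification.

        import java.util.ArrayList;
        import java.util.List;

        interface Observer { void stateChanged(Subject s); }

        class Subject {
            private final List<Observer> observers = new ArrayList<>();
            private int state;

            void attach(Observer o) { observers.add(o); }
            int state() { return state; }

            // Responsibility (obligation on the designer): every state change
            // must notify all attached observers before returning.
            void setState(int newState) {
                state = newState;
                observers.forEach(o -> o.stateChanged(this));
            }
            // Reward (guaranteed behavior): after setState returns, every
            // attached observer has seen the current state.
        }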

  • DMS®: program transformations for practical scalable software evolution

    Page(s): 625 - 634

    While a number of research systems have demonstrated the potential value of program transformations, very few of these systems have made it into practice. The core technology for such systems is well understood; what remains is integration and more importantly, the problem of handling the scale of the applications to be processed. This paper describes DMS, a practical, commercial program analysis and transformation system, and sketches a variety of tasks to which it has been applied, from redocumenting to large-scale system migration. Its success derives partly from a vision of design maintenance and the construction of infrastructure that appears necessary to support that vision. DMS handles program scale by careful space management, computational scale via parallelism and knowledge acquisition scale via domains.

  • An empirical study of software reuse vs. defect-density and stability

    Page(s): 282 - 291

    The paper describes results of an empirical study in which some hypotheses about the impact of reuse on defect-density and stability, and about the impact of component size on defects and defect-density in the context of reuse, are assessed using historical data (data mining) on defects, modification rate, and software size of a large-scale telecom system developed by Ericsson. The analysis showed that reused components have lower defect-density than non-reused ones. Reused components have more defects of the highest severity than the total distribution, but fewer defects after delivery, which shows that these are given higher priority to fix. The number of defects increases with component size for non-reused components, but not for reused ones. Reused components were less modified (more stable) than non-reused ones between successive releases, even though reused components must incorporate evolving requirements from several application products. The study furthermore revealed inconsistencies and weaknesses in the existing defect reporting system, by analyzing data that had hardly been treated systematically before.

  • Breaking the ice for agile development of embedded software: an industry experience report

    Page(s): 378 - 386

    A software engineering department in a Daimler-Chrysler business unit was highly professional at developing embedded software for buses and coaches. However, customer-specific add-ons were a regular source of hassle. Simple as they are, those individual requirements have to be implemented in hours or days rather than weeks or months. Poor quality or late upload into the bus hardware would cause serious cost and overhead. Established software engineering methods were considered inadequate and needed to be cut short. Agile methods offer guidance when quality, flexibility and high speed need to be reconciled. However, we did not adopt any full agile method, but added individual agile practices to our process improvement toolbox. We suggested a number of classical process improvement activities (such as more systematic documentation and measurement) and combined them with agile elements (e.g. Test First Process). This combination seemed to foster acceptance of agile ideas and may help us to break the ice for a cautious extension of agile process improvement.

  • Design of large-scale polylingual systems

    Page(s): 357 - 366

    Building systems from existing applications written in two or more languages is common practice. Such systems are polylingual. Polylingual systems are relatively easy to build when the number of APIs needed to achieve language interoperability is small. However, when the number of distinct APIs becomes large, maintaining and evolving polylingual systems becomes a notoriously difficult task. In this paper, we present a simple, practical, and effective way to develop, maintain, and evolve large-scale polylingual systems. Our approach relies on recursive type systems whose instances can be manipulated by reflection. Foreign objects (i.e. objects that are not defined in a host programming language) are abstracted as graphs, and path expressions are used for accessing and manipulating data. Path expressions are implemented by type reification - turning foreign type instances into first-class objects and enabling access to and manipulation of them in a host programming language. Doing this results in multiple benefits, including coding simplicity and uniformity, which we demonstrate in a complex commercial project.
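
    A stripped-down Java rendering of the path-expression idea, with an invented Path helper: a foreign object is treated as a graph of fields and navigated by name via reflection.

        import java.lang.reflect.Field;

        // Navigate a foreign object graph by a dotted path, e.g. "customer.address.city".
        final class Path {
            static Object get(Object root, String path) throws ReflectiveOperationException {
                Object current = root;
                for (String step : path.split("\\.")) {
                    Field f = current.getClass().getDeclaredField(step);
                    f.setAccessible(true);             // reify the foreign field
                    current = f.get(current);
                }
                return current;
            }
        }

    A call such as Path.get(order, "customer.name") then retrieves a nested value without any per-API glue code, which suggests how the growth in distinct interoperability APIs can be contained.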

  • GlueQoS: middleware to sweeten quality-of-service policy interactions

    Page(s): 189 - 199

    A holy grail of component-based software engineering is write-once, reuse everywhere. However, in modern distributed, component-based systems supporting emerging application areas such as service-oriented e-business (where Web services are viewed as components) and peer-to-peer computing, this is difficult. Non-functional requirements (related to quality-of-service (QoS) issues such as security, reliability, and performance) vary with deployment context, and sometimes even at run-time, complicating the task of re-using components. In this paper, we present a middleware-based approach to managing dynamically changing QoS requirements of components. Policies are used to advertise non-functional capabilities and vary at run-time with operating conditions. We also provide middleware enhancements to match, interpret, and mediate QoS requirements of clients and servers at deployment time and/or runtime.
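
    Matching client and server QoS policies, the middleware's core job, can be pictured as a compatibility check over advertised levels; the following toy Java sketch uses invented names and semantics, not GlueQoS's policy language.

        import java.util.Map;

        // A policy maps a QoS dimension (e.g. "encryptionBits", "replicas") to the
        // level required (client) or advertised (server); higher means stronger.
        record Policy(Map<String, Integer> levels) {}

        final class PolicyMatcher {
            // Client and server are compatible if, on every dimension the client
            // requires, the server advertises at least that level.
            static boolean compatible(Policy client, Policy server) {
                return client.levels().entrySet().stream()
                    .allMatch(e -> server.levels().getOrDefault(e.getKey(), 0) >= e.getValue());
            }
        }

    Because operating conditions change, such a check would be re-evaluated at deployment time and at run-time, which is the mediation role the abstract assigns to the middleware.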

  • An introduction to computing system dependability

    Page(s): 730 - 731

    It is important that computer engineers, software engineers, project managers, and users understand the major elements of current technology in the field of dependability, yet this material tends to be unfamiliar to researchers and practitioners alike. Researchers are often concerned in one way or another with some aspect of what is mistakenly called software "reliability". All practitioners are concerned with the "reliability" of the software that they produce but researchers and practitioners tend not to understand fully the broader impact of their work. A lot of research, such as that on testing, is concerned directly with software dependability. Understanding dependability more fully allows researchers to be more effective. Similarly, practitioners can direct their efforts during development more effectively if they have a better understanding of dependability.
