
Software, IET

Issue 1 • February 2011

  • Using autonomous components to improve runtime qualities of software

    Page(s): 1 - 20

    In the development of software systems, quality properties should be considered throughout the development process, so that the qualities of software systems can be inferred and predicted at the specification and design stages and evaluated and verified at the deployment and execution stages. However, distributed autonomous software entities are developed and maintained independently by third parties, and their execution and qualities are beyond the control of system developers. In this study, the notion of an autonomous component is used to model an independent autonomous software entity. An autonomous component encapsulates data types, associated operations and quality properties into a uniform syntactical unit, which provides a way to reason about the functional and non-functional properties of software systems and at the same time offers a means of evaluating and assuring the qualities of software systems at runtime. This study also describes the implementation and runtime support of autonomous components and presents a case study of an application system to demonstrate how autonomous components can be used to improve its qualities.
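
    The encapsulation idea lends itself to a small illustration. The following Python sketch is hypothetical (the paper's actual component model, names and quality notation are not reproduced here): a component bundles an operation with a declared quality property and checks it on every invocation.

        import time

        class AutonomousComponent:
            """Bundles an operation with a declared quality property
            (here a latency bound) and checks it at runtime."""
            def __init__(self, name, operation, max_latency_s):
                self.name = name
                self.operation = operation
                self.max_latency_s = max_latency_s  # declared quality property

            def invoke(self, *args):
                start = time.perf_counter()
                result = self.operation(*args)
                elapsed = time.perf_counter() - start
                # Runtime quality evaluation: verify the declared bound.
                if elapsed > self.max_latency_s:
                    print(f"{self.name}: quality violation "
                          f"({elapsed:.4f}s > {self.max_latency_s}s)")
                return result

        sorter = AutonomousComponent("sorter", sorted, max_latency_s=0.01)
        print(sorter.invoke([3, 1, 2]))  # [1, 2, 3]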

  • Weather data sharing system: an agent-based distributed data management

    Page(s): 21 - 31

    Severe weather causes human disasters. The most effective way to mitigate such national disasters is to deploy more atmospheric sensing equipment to monitor climate change. The data produced by this sensing equipment are huge in volume and play an important role in weather prediction, and new sensing equipment continually enriches the weather data: terabyte- and petabyte-scale data are collected every day. Retrieval of such information requires access to large volumes of data, so an efficient organisation is necessary both to reduce access time and to allow efficient knowledge extraction. A new class of 'data grid' infrastructure can support the management, transportation, distributed access and analysis of these data sets by thousands of potential users, and intelligent agents can play an important role in achieving the 'data grid' vision. In this study, the authors present a multi-agent-based framework, named the weather data sharing system (WDSS), to manage, share and query weather data in a geographically distributed environment. In each node, services are designed for querying and accessing data sets within an agent environment. Information retrieval can be conducted locally, by considering portions of the weather data, or in a distributed scenario, by exploiting global metadata. The agents' local and remote search is evaluated, as are the transfer speeds for different file types, and the extensibility of the presented platform is analysed. The authors believe this will be a useful platform for WDSS research at the national scale.
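
    As a rough illustration of the node-level design, the Python sketch below is hypothetical (the paper's agent platform, service interfaces and metadata format are not given here): each node agent answers queries locally, and a global metadata catalogue routes distributed queries to the nodes holding the relevant data.

        class NodeAgent:
            def __init__(self, name, datasets):
                self.name = name
                self.datasets = datasets  # local weather data, keyed by region

            def local_search(self, region):
                return self.datasets.get(region, [])

        class MetadataCatalogue:
            """Global metadata: maps each region to the nodes that hold it."""
            def __init__(self, nodes):
                self.index = {}
                for node in nodes:
                    for region in node.datasets:
                        self.index.setdefault(region, []).append(node)

            def distributed_search(self, region):
                # Route the query to every node the metadata lists for the region.
                return [hit for node in self.index.get(region, [])
                        for hit in node.local_search(region)]

        taipei = NodeAgent("taipei", {"TW-N": ["radar-2011-01.dat"]})
        kaohsiung = NodeAgent("kaohsiung", {"TW-S": ["radar-2011-02.dat"]})
        catalogue = MetadataCatalogue([taipei, kaohsiung])
        print(catalogue.distributed_search("TW-S"))  # ['radar-2011-02.dat']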

  • Formalisation and verification of programmable logic controller timers in Coq

    Page(s): 32 - 42

    Programmable logic controllers (PLCs) are widely used in embedded systems, and timers play a pivotal role in PLC real-time applications, so their formalisation is of great importance. This study presents a formalisation of PLC timers in the Coq theorem-proving system, in which the behaviours of timers are characterised by a set of axioms at an abstract level. The authors discuss how to model timers at a proper and sound level of abstraction, and PLC programs with timers are modelled. As a case study, a quiz machine problem with a timer is investigated. This work demonstrates the complexity of formal timer modelling.
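
    To make the kind of timer behaviour being axiomatised concrete, here is a hypothetical Python model of a standard on-delay (TON) timer (the paper's Coq axioms are not reproduced): the output asserts only after the input has stayed on for the preset time, and dropping the input resets the timer.

        class OnDelayTimer:
            def __init__(self, preset):
                self.preset = preset  # delay before the output asserts
                self.elapsed = 0
                self.output = False

            def scan(self, input_on, dt):
                """One PLC scan cycle: advance or reset the timer."""
                if input_on:
                    self.elapsed = min(self.elapsed + dt, self.preset)
                else:
                    self.elapsed = 0  # dropping the input resets the timer
                self.output = input_on and self.elapsed >= self.preset
                return self.output

        t = OnDelayTimer(preset=3)
        for tick in range(5):
            # Output becomes True once the elapsed time reaches the preset.
            print(tick, t.scan(input_on=True, dt=1))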

  • Systems engineering and safety - a framework

    Page(s): 43 - 53

    This study provides a definition of safety and assesses currently available systems engineering approaches to managing safety in systems development. While most work in relation to safety is of a 'safety critical' nature, the authors concentrate on wider issues associated with safety. The outcomes of the assessment lead to a proposal for a framework that provides the opportunity to develop a model incorporating the safety requirements of a system. The framework concept facilitates an approach that combines a number of disparate methods while utilising only the beneficial features of each. Such a safety framework, when combined with an approach that addresses the management of safety, will enhance system effectiveness, thus ensuring that the non-functional requirements of stakeholders are met.

  • Nine-areas-tree-bit-patterns-based method for continuous range queries over moving objects

    Page(s): 54 - 69

    A continuous range query is re-evaluated periodically to locate the moving objects currently inside the query's boundary, and it is widely used to support location-based services. However, query processing is complicated by the frequent location updates of moving objects. Query indexing relies on incremental evaluation, building the index on the range queries instead of the moving objects and exploiting the relation between the locations of objects and queries. The cell-based query indexing method has been shown to outperform the R*-tree-based query indexing method, which suffers from overlap in its internal nodes; however, the cell-based method requires considerable space and time to maintain its index structure as the number of range queries increases. The nine-areas (NA) tree has been shown to solve the R*-tree's overlapping problem, minimising the number of disk accesses during a tree search for range queries. In this study, the authors propose the NA-tree-bit-patterns-based (NABP) query indexing method, built on the NA-tree. Bit patterns are used to denote regions and to preserve the locality of range queries and moving objects, so the NABP method can incrementally and locally update the affected range queries through bit-pattern operations, especially as the number of range queries grows. Through a simulation study, the authors show that the NABP method requires less CPU time and storage than the cell-based method for large numbers of range-query updates, and less CPU time than the R*-tree-based method for large numbers of moving-object updates.
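
    The locality argument behind the bit patterns can be illustrated with a small hypothetical Python sketch (the NABP method's actual encoding and index structure are richer than this): each quadtree-style split of the space appends two bits, so objects and queries in the same region share a bit-pattern prefix, and an object's location update only touches queries whose prefix it matches.

        def bit_pattern(x, y, depth, xmin=0, ymin=0, xmax=256, ymax=256):
            """Encode the cell containing (x, y): two bits per split level."""
            bits = ""
            for _ in range(depth):
                xm, ym = (xmin + xmax) / 2, (ymin + ymax) / 2
                bx, by = int(x >= xm), int(y >= ym)
                bits += f"{by}{bx}"
                xmin, xmax = (xm, xmax) if bx else (xmin, xm)
                ymin, ymax = (ym, ymax) if by else (ymin, ym)
            return bits

        # Range queries indexed by the pattern of the cell they cover.
        queries = {"q1": bit_pattern(40, 40, 2), "q2": bit_pattern(200, 200, 2)}

        def affected(obj_x, obj_y, depth=2):
            p = bit_pattern(obj_x, obj_y, depth)
            # Only queries sharing the object's prefix are re-evaluated.
            return [q for q, qp in queries.items() if p.startswith(qp)]

        print(affected(45, 50))  # ['q1']: the update stays local to q1's cell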

  • Functional testing of feature model analysis tools: a test suite

    Page(s): 70 - 82

    A feature model is a compact representation of all the products of a software product line. Automated analysis of feature models is rapidly gaining importance: new analysis operations have been proposed, new tools have been developed to support those operations, and different logical paradigms and algorithms have been proposed to perform them. Implementing these operations is a complex task that easily leads to errors in analysis solutions. In this context, the lack of specific testing mechanisms is becoming a major obstacle, hindering the development of tools and affecting their quality and reliability. In this article, the authors present the FaMa test suite, a set of implementation-independent test cases for validating the functionality of feature model analysis tools. This is an efficient and handy mechanism to assist in the development of tools, detecting faults and improving their quality. To show the effectiveness of their proposal, the authors evaluated the suite using mutation testing as well as real faults and tools. Their results are promising and directly applicable to the testing of analysis solutions. The authors intend this work to be a first step towards a widely accepted test suite supporting functional testing in the automated feature model analysis community.
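
    The idea of an implementation-independent test case can be sketched as follows. This Python example is hypothetical and far simpler than the actual FaMa suite: the expected result of a 'number of products' analysis is fixed by a reference oracle, so any candidate tool can be run against it.

        from itertools import product

        def all_products(optional_features):
            """Naive reference oracle: enumerate every combination
            of optional features under a single mandatory root."""
            combos = []
            for picks in product([False, True], repeat=len(optional_features)):
                combos.append({f for f, on in zip(optional_features, picks) if on})
            return combos

        def test_number_of_products(analysis_tool):
            # A model with one mandatory root and two optional
            # features has exactly 4 products.
            expected = len(all_products(["GPS", "Camera"]))
            assert analysis_tool(["GPS", "Camera"]) == expected, \
                "tool disagrees with the oracle"

        # A stand-in 'tool' that counts products for independent optionals.
        test_number_of_products(lambda feats: 2 ** len(feats))
        print("test passed")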

  • Nomenclature unification of software product measures

    Page(s): 83 - 102

    A large number of software quality prediction models are based on software product measures (SPdMs). There are different interpretations and representations of these measures, which generate inconsistencies in their naming conventions, and these inconsistencies hamper efforts to develop a generic approach to predicting software quality. This study identifies two types of such inconsistencies, categorised as Type I and Type II. A Type I inconsistency emerges when different labels are suggested for the same software product measure; a Type II inconsistency appears when the same label is used for different measures. The study proposes a unification and categorisation framework to remove both types of inconsistency. The framework categorises SPdMs along three dimensions: usage frequency, software development paradigm and software lifecycle phase. It is applied to 140 SPdMs, and a searchable unified measures database (UMD) is developed. Overall, 48.5% of the measures are found to be inconsistent. Of the measures studied, 34.28% are frequently used; 30.71% are used in the object-oriented paradigm and 31.43% in the conventional paradigm, with an overlap of 37.86% between the two. The UMD reveals that the percentages of measures used in the design and implementation phases are 52.86% and 35%, respectively.
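
    The two inconsistency types suggest a simple lookup structure. The following Python sketch is hypothetical (the UMD's real schema and measure names are not given here): a Type I inconsistency is resolved by mapping alias labels to one canonical measure, and a Type II inconsistency by disambiguating a shared label through its paradigm.

        ALIASES = {   # Type I: different labels for the same measure
            "LOC": "lines_of_code",
            "SLOC": "lines_of_code",
            "KLOC": "lines_of_code",
        }
        AMBIGUOUS = {  # Type II: the same label used for different measures
            "CC": {"object-oriented": "class_coupling",
                   "conventional": "cyclomatic_complexity"},
        }

        def resolve(label, paradigm="conventional"):
            """Return the canonical measure name for a label."""
            if label in AMBIGUOUS:
                return AMBIGUOUS[label][paradigm]
            return ALIASES.get(label, label)

        print(resolve("SLOC"))                   # lines_of_code
        print(resolve("CC", "object-oriented"))  # class_coupling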

  • Software cost estimation for component-based fourth-generation-language software applications

    Page(s): 103 - 110

    Software cost estimation is important for budgeting, risk analysis, project planning and software improvement analysis, and numerous estimation techniques exist. During the past three decades there have been significant developments in effort estimation, software sizing and cost estimation methodology. Nevertheless, current software cost estimation models are experiencing increasing difficulty in estimating the costs of software, as new software development methodologies and technologies emerge very rapidly. Most software cost models rely on inputs such as estimates of lines of source code, delivered source instructions, function points, processing complexity or experience levels to produce cost estimates, and they generally produce inaccurate results when used to estimate the cost of software development in current environments, such as component-based development environments that use visual languages. In this study, the authors present a new technique for estimating the cost of software projects developed in a component-based fourth-generation-language environment. The model was calibrated using empirical data collected from 19 software systems, and its accuracy was compared with that of an existing model for such environments. The proposed model achieved better predictive accuracy.
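
    The calibration step can be illustrated generically. The Python sketch below is hypothetical: it fits a classic effort = a * size**b relationship by least squares in log space, on made-up data, since the paper's actual model form and its 19-system data set are not reproduced here.

        import math

        # Made-up historical projects: (size in components, effort in person-months).
        history = [(120, 4.1), (300, 9.8), (500, 15.5), (900, 27.0)]

        xs = [math.log(s) for s, _ in history]
        ys = [math.log(e) for _, e in history]
        n = len(history)
        # Ordinary least squares in log space: log(effort) = log(a) + b*log(size).
        b = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
            (n * sum(x * x for x in xs) - sum(xs) ** 2)
        a = math.exp((sum(ys) - b * sum(xs)) / n)

        def estimate(size):
            return a * size ** b

        print(f"effort = {a:.3f} * size**{b:.3f}; "
              f"estimate(700) = {estimate(700):.1f} person-months")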


Aims & Scope

IET Software publishes papers on all aspects of the software lifecycle, including design, development, implementation and maintenance.

Publisher
IET Research Journals
iet_sen@theiet.org