
2012 CSI Sixth International Conference on Software Engineering (CONSEG)

Date: 5-7 Sept. 2012


Displaying Results 1 - 25 of 61
  • Software effort estimation using Neuro-fuzzy approach

    Page(s): 1 - 6

    A successful project is one that is delivered on time, within budget, and with the required quality. Accurate software estimation, covering cost estimation, quality estimation, and risk analysis, is a major issue in software project management. A number of estimation models exist for effort prediction; however, novel models are needed to obtain more accurate estimates. Since Artificial Neural Networks (ANNs) are universal approximators, a Neuro-fuzzy system can approximate a non-linear function with greater precision by learning the underlying relationship from its training data. In this paper we explore Neuro-fuzzy techniques to design a model that yields improved estimates of software effort for NASA software projects. A comparative analysis between the Neuro-fuzzy model and traditional models such as the Halstead, Walston-Felix, Bailey-Basili, and Doty models is provided. The evaluation criteria are MMRE (Mean Magnitude of Relative Error) and RMSE (Root Mean Square Error). Integrating neural networks, fuzzy logic, and algorithmic models into one scheme provides robustness to imprecise and uncertain inputs.
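The two evaluation criteria named in the abstract can be sketched in a few lines; the effort values below are hypothetical, not the paper's NASA data:

```python
import math

def mmre(actual, predicted):
    # Mean Magnitude of Relative Error: mean of |actual - predicted| / actual
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    # Root Mean Square Error over the same pairs
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Hypothetical effort values in person-months
actual = [10.0, 24.0, 46.0]
predicted = [12.0, 20.0, 50.0]
print(round(mmre(actual, predicted), 3))
print(round(rmse(actual, predicted), 3))
```

A lower value is better on both criteria; MMRE normalizes each error by the actual effort, so it is the usual choice when project sizes vary widely.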

  • The performance enhancement approach for parameterized queries

    Page(s): 1 - 5

    All database systems must be able to respond to requests for information from the user, i.e., execute the user's queries and obtain the required information from the database in a predictable and reliable fashion. Database queries are usually defined declaratively, and the database system has to choose an appropriate execution plan for each query. There are two basic ways to write a query: the first is to explicitly specify the values for each parameter in the WHERE clause, and the second is to replace the values in the WHERE clause with variable placeholders, which is known as a parametric query. The query optimizer in a database system is responsible for transforming an SQL query into an execution plan. All query optimizers are cost-based: they decide between alternative execution plans by comparing their estimated execution costs. The cost of a query plan depends on many parameters, such as predicate selectivity, available memory, and the presence of access paths, whose values may not be known at optimization time. Parametric Query Optimization (PQO) optimizes a query into a number of candidate plans; when the actual parameter values become known, the candidate plan corresponding to those values is selected and used. Parametric query optimization thus attempts to identify at compile time several execution plans, each optimal for a subset of all possible values of the run-time parameters. The objective of this paper is to define a technique in which, once the actual parameter values are known, the appropriate plan can be selected with essentially no overhead.
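The two query styles contrasted above can be shown with Python's built-in sqlite3 module; the table and values are hypothetical, for illustration only:

```python
import sqlite3

# In-memory database with a small hypothetical table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 50.0), (2, 120.0), (3, 300.0)])

# Style 1: literal values in the WHERE clause; the plan is tied to these constants
literal = conn.execute("SELECT id FROM orders WHERE amount > 100.0").fetchall()

# Style 2: parametric query; '?' is a placeholder, so one prepared statement
# serves many parameter values supplied at run time
parametric = conn.execute("SELECT id FROM orders WHERE amount > ?", (100.0,)).fetchall()

print(literal == parametric)
```

Both forms return the same rows; PQO concerns which execution plan the optimizer should attach to the placeholder form before the value 100.0 is known.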

  • Analytical Hierarchy Process issues and mitigation strategy for large number of requirements

    Page(s): 1 - 8

    Nowadays most software projects have many candidate requirements, so it is vital for software companies to use prioritization techniques to select the most valuable ones. However, companies usually face challenges in using the Analytic Hierarchy Process (AHP), such as the growth in time and complexity as the number of pairwise comparisons increases. In this paper we review previous work in this research area and present an industrial study identifying the challenges software companies face while prioritizing a large number of requirements using AHP. Different prioritization techniques have been developed to address these challenges. This paper focuses on the numeral assignment technique, which groups requirements into three categories (critical, standard, and optional), and on AHP, which prioritizes requirements based on pairwise comparisons. We propose a model, NAcAHP, in which the pairwise comparison of AHP is applied only to the critical group produced by the numeral assignment technique. The results show that the proposed model reduces the time and complexity of pairwise comparison.
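AHP's cost comes from the n(n-1)/2 pairwise judgments; once the comparison matrix exists, priorities can be derived with the standard row geometric mean method. The 3x3 matrix below is hypothetical, not from the paper's industrial study:

```python
import math

# Hypothetical pairwise comparison matrix for three critical requirements:
# entry [i][j] says how strongly requirement i is preferred over j
matrix = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

def ahp_weights(m):
    # Row geometric mean method: a standard approximation of the
    # principal eigenvector of the comparison matrix
    n = len(m)
    gmeans = [math.prod(row) ** (1.0 / n) for row in m]
    total = sum(gmeans)
    return [g / total for g in gmeans]

print([round(w, 3) for w in ahp_weights(matrix)])
```

With 100 candidate requirements a full AHP needs 4950 comparisons; restricting AHP to, say, 20 critical requirements (as NAcAHP does) needs only 190.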

  • Implementing the concept of refactoring in software development

    Page(s): 1 - 8

    As changes are made to existing software, the quality of the code is known to deteriorate, making it harder to edit or to add new features in the future. In practice, the changes applied to existing code may not be optimal. Refactoring is a recognized remedy for deteriorating code; by applying it, future changes to the code base become easier. Refactoring is widely practiced by experienced developers in industry. Yet there is still some reluctance among management to support it, because no behavior is added or altered and it therefore appears unproductive: the code is merely reshaped, so why not spend that effort on something productive instead? In this paper we conduct an experiment intended to make even the most skeptical manager an advocate for refactoring.
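A one-step illustration of what "merely reshaped" means, using the classic extract-function refactoring (this is a generic example, not one from the paper's experiment):

```python
# Before: the summing logic is buried inside a reporting function
def report_before(prices):
    total = 0
    for p in prices:
        total += p
    return f"total: {total}"

# After an "extract function" refactoring: behavior is unchanged,
# but the intent is named and the logic is reusable and testable
def order_total(prices):
    return sum(prices)

def report_after(prices):
    return f"total: {order_total(prices)}"

print(report_before([1, 2, 3]) == report_after([1, 2, 3]))
```

The observable output is identical before and after, which is exactly why refactoring looks unproductive to managers and why its payoff only appears in later changes.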

  • Multistage content-based image retrieval

    Page(s): 1 - 4

    Content-based image retrieval (CBIR) has found application in many areas, including government, academia, and hospitals. This paper presents a new image retrieval technique that retrieves similar images in stages. Images are first retrieved based on colour feature similarity; the relevance of the retrieved images is then refined by matching texture and shape features in turn. A conventional CBIR system compares the query image's feature vector with those of all images in the database, which decreases accuracy because the search spans the whole database and its wide variety of images. Moreover, the success of shape-based CBIR depends on the accuracy of the segmentation technique employed, and accurate segmentation is still an open problem. The present approach reduces the dependency on precise segmentation by narrowing the search range at each stage. The proposed system has a three-layer feed-forward architecture in which the output of each stage is the input to the next. The approach also mitigates the high dimensionality of the feature vector, because at each stage only the part of the vector representing the relevant feature needs to be compared, reducing the computational overhead of the overall system. Retrieval in stages also narrows the semantic gap, and the advantages of global and region features are combined for better retrieval accuracy. Experimental results show that the proposed system improves retrieval accuracy while consuming less computation time.
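The staged pipeline can be sketched abstractly: each stage ranks the surviving candidates on one feature only and forwards the best matches. The scalar "features" below are hypothetical stand-ins for real colour, texture, and shape descriptors:

```python
# Hypothetical single-number features per image; a real system would use
# colour histograms, texture descriptors, and shape features
database = {
    "img_a": {"colour": 0.9, "texture": 0.2},
    "img_b": {"colour": 0.8, "texture": 0.7},
    "img_c": {"colour": 0.1, "texture": 0.9},
}
query = {"colour": 0.85, "texture": 0.75}

def stage(candidates, feature, keep):
    # Rank candidates by closeness to the query on a single feature
    ranked = sorted(candidates,
                    key=lambda n: abs(database[n][feature] - query[feature]))
    return ranked[:keep]

survivors = stage(list(database), "colour", keep=2)  # stage 1: colour only
result = stage(survivors, "texture", keep=1)         # stage 2: texture, on survivors
print(result)
```

Because stage 2 never touches img_c, later (more expensive and segmentation-dependent) stages run on an ever-smaller candidate set, which is the source of the claimed speed-up.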

  • Software Reliability Growth Model with testing effort using learning function

    Page(s): 1 - 5

    Software Reliability Growth Models (SRGMs) have been proposed in the literature to measure the quality of software and to release it at minimum cost. Testing is an important part of finding faults during the Software Development Life Cycle of integrated software; it can be defined as the execution of a program to find faults that may have been introduced during development, under different assumptions. The testing team may not remove a fault perfectly on detection of a failure: the original fault may remain, or it may be replaced by another fault. The former phenomenon is known as imperfect fault removal, the latter as error generation. In this paper we propose a new SRGM with these two types of imperfect debugging and a testing-effort function incorporating a learning function, reflecting the expertise gained by the testing team and depending on the software's complexity, the skills of the debugging team, the available manpower, and the development environment. The model is estimated and compared with other existing models on real-time data sets; the estimation results show the comparative performance and applicability of different SRGMs with testing effort.
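For readers new to SRGMs, the classic exponential mean value function shows the basic shape such models share; this is the textbook Goel-Okumoto form, not the paper's model, which additionally builds in imperfect debugging and a learning function:

```python
import math

def mean_faults(t, a, b):
    # Goel-Okumoto style mean value function m(t) = a * (1 - exp(-b*t)):
    # a = expected total faults, b = fault detection rate per unit time
    return a * (1.0 - math.exp(-b * t))

# With a = 100 total faults and b = 0.1 per week, expected faults found by week 10:
print(round(mean_faults(10, 100, 0.1), 1))
```

Fitting a and b to observed failure data, then judging fit with criteria such as mean squared error, is how competing SRGMs are typically compared on real data sets.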

  • A technique to search log records using system of linear equations

    Page(s): 1 - 4

    The growth of network technology and easy Internet access are key drivers for the development and use of the Database-as-a-Service (DaaS) model. Under DaaS it is necessary to ensure the safe and correct operation of any organization. In many cases the data records used to conduct security audits are themselves considered sensitive, as they could reveal information about internal network structures, the types of software running, or private customer and employee information. In this paper we propose a technique based on a system of linear equations that enables a trusted party to give the service provider's server the ability to test whether a given keyword appears in log records, while the server learns nothing about the keyword or the log content. We compare our scheme with existing encryption schemes and argue that it is efficient and secure.

  • Object Oriented versus Ontology Oriented software reliability development

    Page(s): 1 - 4

    Presently, achieving reliability in software systems has become essential to promote software maturity. Object-Oriented software development practices are being rapidly adopted to address reliability issues; in practice, though, current Object-Oriented design techniques still focus predominantly on regularity and efficiency, and an Ontology-Oriented software engineering methodology remains underdeveloped. This paper examines Ontology-Oriented software reliability (OnO-Reliability) development alongside Object-Oriented software reliability (OO-Reliability) development. In addition, attributes related to process, product, and resources are identified, and the corresponding procedural concerns for achieving reliability are examined. The analysis indicates a reliability advantage for Ontology-Oriented software systems over Object-Oriented ones.

  • Achievements and challenges of Model Based Testing in industry

    Page(s): 1 - 5

    Model Based Testing (MBT) has gained wide popularity in a very short span of time on account of its accuracy and inherent advantages, yet industry is still unable to reap its full benefits. This paper highlights the salient features of MBT in industry and lists the factors preventing MBT from realizing its full potential. It also offers solutions to mitigate these factors and indicates new directions for further research in this field.

  • A model for quality assurance of OSS architecture

    Page(s): 1 - 6

    The software industry is at its peak all over the world. Different software products follow diverse development models, and based on the development model any software can be divided into two categories: proprietary and open source (OSS). OSS development is now crucial for the economic growth of any country; an estimated 2 billion dollars could be saved in India alone by adopting an open source software development methodology [26]. OSS development also has challenges associated with it. This study aims at improving the security and quality of such systems: we qualitatively address the major challenges in OSS development and propose a new model for quality assurance of open source software.

  • Developmental approaches for Agent Oriented system — A critical review

    Page(s): 1 - 5

    The Agent-Oriented Paradigm is a successor of the Object-Oriented Paradigm; the concept of an agent evolved from artificial intelligence. Agents are social, proactive, and reactive in nature. Agent-Oriented methodologies are used to develop agent-oriented software and follow the phases of the software development life cycle, although not every methodology covers every phase. In this paper, prominent methodologies such as Gaia, MaSE, and Tropos are discussed on the basis of their working patterns, and a comparative study is carried out across different parameters. The conclusion is that each methodology is helpful in different scenarios.

  • Treebank based deep grammar acquisition and Part-Of-Speech Tagging for Sanskrit sentences

    Page(s): 1 - 4

    Sanskrit has for many thousands of years been the classical language of India and is the base for most Indian languages. Ambiguity is inherent in natural language sentences: one word can be used in multiple senses, and morphological analysis, which takes each word in isolation, fails to disambiguate the correct sense. Part-Of-Speech Tagging (POST) takes word sequences into consideration to resolve the correct sense of a word in a given sentence. Efficient POS taggers have been developed for English, Japanese, and Chinese, but they are lacking for Indian languages. This paper presents a simple rule-based POS tagger for Sanskrit. It uses a rule-based approach, with the rules stored in a database, to parse a given Sanskrit sentence and assign a suitable tag to each word automatically. We tested this approach with 15 tags and 100 words of the language; the rule-based tagger gives correct tags for all the inflected words in the given sentences.
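The rule-based mechanism can be sketched as suffix matching against a rule table; the two rules and the sample sentence below are simplified illustrations, not the paper's actual Sanskrit rule database or tag set:

```python
# Hypothetical suffix rules: (inflectional ending, tag). The paper stores
# real Sanskrit rules in a database and applies them the same way.
rules = [
    ("ti", "VERB"),   # e.g. a third-person singular verb ending
    ("am", "NOUN"),   # e.g. an accusative noun ending
]

def tag(sentence):
    tags = []
    for word in sentence.split():
        for suffix, t in rules:
            if word.endswith(suffix):
                tags.append((word, t))
                break
        else:
            tags.append((word, "UNK"))  # no rule matched this word
    return tags

print(tag("ramam gacchati"))
```

Because Sanskrit is highly inflected, endings carry much of the grammatical information, which is why a suffix-driven rule table can tag inflected words reliably.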

  • Identifying context of text documents using Naïve Bayes classification and Apriori association rule mining

    Page(s): 1 - 4

    A huge amount of unstructured data is available in the form of text documents, and ranking these documents by context is very useful in information retrieval. We propose classifying abstracts by context using a Naïve Bayes classifier together with the Apriori association rule algorithm, a combination we call Context Based Naive Bayesian and Apriori (CBNBA). In the proposed approach we first classify the documents using Naïve Bayes, then find the context of an abstract by looking for associated terms, which helps us understand the focus of the abstract and interpret information beyond simple keywords. The results indicate that context-based classification increases classification accuracy to a great extent and in turn discovers the different contexts of the documents. The approach can also be useful beyond abstract classification, in applications where individual words are ambiguous but context can lead to the right classification.
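The first step of the pipeline, Naïve Bayes classification of documents, can be sketched with word counts and Laplace smoothing; the four training "documents" and their labels are hypothetical:

```python
import math
from collections import Counter

# Tiny hypothetical training corpus: (text, class label)
train = [("query plan index", "db"), ("table join index", "db"),
         ("neuron layer training", "ml"), ("training gradient layer", "ml")]

labels = {lbl for _, lbl in train}
counts = {lbl: Counter() for lbl in labels}
for text, lbl in train:
    counts[lbl].update(text.split())
vocab = {w for text, _ in train for w in text.split()}

def classify(text):
    def log_score(lbl):
        total = sum(counts[lbl].values())
        # log prior + sum of log likelihoods with add-one smoothing
        s = math.log(sum(1 for _, l in train if l == lbl) / len(train))
        for w in text.split():
            s += math.log((counts[lbl][w] + 1) / (total + len(vocab)))
        return s
    return max(labels, key=log_score)

print(classify("index join plan"))
```

CBNBA then runs Apriori over the terms of documents assigned to a class to surface associated term sets, which serve as the "context" beyond single keywords.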

  • A significant approach for cloud database using shared-disk architecture

    Page(s): 1 - 4

    A cloud database is a database that relies on cloud technology: both the database and most of its DBMS reside remotely, "in the cloud," while its applications are developed by programmers and later maintained and used by end users through a Web browser and open APIs. More and more such database products are emerging, from new vendors and from virtually all established database vendors. Several database architectures exist, including shared-nothing, shared-cache, and NoSQL designs, proposed for maintaining data in storage systems such as Oracle, IBM DB2, Microsoft SQL Server, Microsoft Access, PostgreSQL, and MySQL. This paper discusses the effective use of database sharing and lays emphasis on the correct handling of data that resides in various remote places; data stored in cloud databases raises security and time-consumption problems. Among these architectures, the shared-disk architecture is well suited to the cloud environment, since the data is stored remotely. The shared-disk database architecture is ideally suited to cloud computing: it requires fewer and lower-cost servers, provides high availability, reduces maintenance costs by eliminating partitioning, and delivers dynamic scalability in the cloud.

  • An approach of attribute selection for reducing false alarms

    Page(s): 1 - 7

    Defect prediction is one of the methods in Software Quality Assurance (SQA) that attracts developers because it can reduce testing effort as well as development time. One problem in defect prediction is the `curse of dimensionality', as a dataset in a software repository can contain hundreds of attributes. In this paper we analyze whether more attributes can be removed after attribute selection, and what effect this further reduction has on defect prediction performance. We find that the false positive rate (false alarms) is reduced by our method of attribute selection, which in turn can reduce the resources allocated to inspecting modules wrongly flagged as defective.
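The "false alarm" metric the paper optimizes is the standard false positive rate from a confusion matrix; the counts below are hypothetical:

```python
def false_positive_rate(tp, fp, tn, fn):
    # False alarms: non-defective modules wrongly flagged as defective,
    # as a fraction of all truly non-defective modules
    return fp / (fp + tn)

# Hypothetical confusion-matrix counts for a defect predictor
print(false_positive_rate(tp=40, fp=15, tn=135, fn=10))
```

Each false positive costs review effort on a clean module, which is why lowering this rate directly reduces the resources spent chasing non-defects.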

  • An efficient framework for high-quality web service discovery

    Page(s): 1 - 7

    Support for dynamic attributes is increasingly important for facilitating high-quality web service discovery in service-oriented architectures. We previously proposed the Static Discovery Dynamic Selection (SDDS) architecture to overcome limitations of existing web service discovery methods; its discovery algorithms use dynamic attributes to reduce the number of viable and acceptable services and thereby improve the consumer experience. In this paper, we propose a semantic model based on finite state automata to validate interactions among the SDDS components that deal with dynamic attributes. Our analysis using the automata model revealed a flaw in the SDDS communication synchronization, which was corrected by introducing a web-based synchronization component that enforces valid communication patterns.
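Validating component interactions with a finite state automaton amounts to checking each message against a transition table; the states and messages below are hypothetical, not the SDDS protocol itself:

```python
# Hypothetical protocol: (current state, message) -> next state
transitions = {
    ("idle", "query"): "discovering",
    ("discovering", "candidates"): "selecting",
    ("selecting", "bind"): "idle",
}

def valid(messages, start="idle"):
    state = start
    for msg in messages:
        if (state, msg) not in transitions:
            return False  # invalid communication pattern detected
        state = transitions[(state, msg)]
    return state == start  # a complete exchange returns to the initial state

print(valid(["query", "candidates", "bind"]))  # well-formed exchange
print(valid(["query", "bind"]))                # out-of-order message
```

A synchronization flaw of the kind the paper reports would show up here as a message sequence the components actually produce but the automaton rejects.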

  • Techno-management view of Secured Software Development

    Page(s): 1 - 6

    The Secured Software Development Process (SSDP) is analyzed here in two aspects: technical and process management. The technical aspect focuses on the security features to be covered during the Software Development Process (SDP), whereas the process management aspect focuses on security features from the viewpoint of an organization's security managers. This paper describes both aspects of security and attempts to link them by providing a techno-management view of security. The view was further verified by IT professionals from industry, who concluded that it can bridge the gap between developers and managers and help in the secure development of a software product.

  • Enforcing the security within mobile devices using clouds and its infrastructure

    Page(s): 1 - 4

    Mobile devices communicate to fulfil various types of requirements, and the data they transmit is sometimes very confidential, requiring a high level of security. Executing a function within a mobile device can be subject to various risks due to the vulnerabilities exposed by its data paths. Applications that run on a desktop system can be secured through software and hardware, since desktops have all the resources required to build the security systems that fully protect the applications running on them. Mobile devices, however, are constrained by resources far inferior to those of desktop computer systems. Using desktops in the neighbourhood of mobile devices can make up for the inadequate resources required to implement security. In this paper, we present an architecture that helps enforce security within mobile devices using clouds built on desktops.

  • Distributed intrusion detection scheme for wireless Ad-Hoc Networks: A review

    Page(s): 1 - 6

    A wireless ad hoc network has no fixed infrastructure and operates over an open medium, so it is more prone to attacks; security is a key element for a network to perform well. Intrusion detection protects a network from known and unknown attacks and acts as a second line of defence for ad hoc networks. In this paper we study ad hoc networks, their characteristics, their attacks, and their routing protocols, and show their insecurity in the face of attacks. Most importantly, we present distributed intrusion detection systems for ad hoc networks and give a comparative study with existing intrusion detection systems (IDS).

  • Investigating object-oriented design metrics to predict fault-proneness of software modules

    Page(s): 1 - 10

    This paper empirically investigates the relationship between class-level object-oriented design metrics and the fault proneness of object-oriented software systems. The aim is to evaluate how well design attributes related to coupling, cohesion, complexity, inheritance, and size, through their corresponding metrics, predict fault proneness, both individually and in combination. We conducted two sets of systematic investigations using publicly available project datasets over multiple subsequent releases, and used four machine learning techniques to validate our results. The first set applied univariate logistic regression (ULR), Spearman's correlation, and AUC (Area under ROC curve) analysis to four PROMISE datasets, evaluating each metric's ability to predict fault proneness in isolation. The second set applied the four machine learning techniques to the next two subsequent versions of the same project datasets to validate the effectiveness of the metrics. Based on the individual performance of the metrics, we used only those found significant to build multivariate prediction models, and then evaluated the significant metrics both in isolation and in combination. Our results suggest that models built on coupling and complexity metrics are more accurate than those built on the remaining metrics.
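The AUC analysis used in the first investigation has a simple rank-based reading: the probability that a randomly chosen faulty module scores higher on the metric than a randomly chosen fault-free one. The metric values below are hypothetical; the PROMISE datasets supply real ones:

```python
def auc(scores, labels):
    # AUC as the fraction of (faulty, fault-free) pairs ranked correctly,
    # counting ties as half a win
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical coupling (CBO) values for six modules and their fault labels
cbo = [6, 9, 4, 12, 3, 5]
faulty = [0, 1, 0, 1, 0, 1]
print(auc(cbo, faulty))
```

An AUC near 0.5 means the metric ranks modules no better than chance; values approaching 1.0 are what make coupling and complexity metrics attractive predictors.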

  • Agility Evaluation Factor: Identification of flexibility level

    Page(s): 1 - 6

    Since the evolution of software development processes, researchers have been developing new methods to reduce the software failure rate. The Agile Software Development Process (ASDP) has addressed various issues related to software failure, such as low customer satisfaction and delays in software delivery. However, many new methods claim to follow the ASDP on the strength of its explicit practices: just-enough documentation, short releases, high customer interaction, self-organized teams, and so on. There is therefore a strong need to identify the agility level of any software development method. In this paper we present the Agility Evaluation Factor (AEF), which determines the agility level of a software development method by quantifying the agile practices used and the flexibility available in it. Identifying the AEF provides scope for improvement in existing agile methods (AMs) and defines the entry criteria for the set of agile methods.

  • Selecting requirement elicitation techniques for software projects

    Page(s): 1 - 10

    The software development process consists of many knowledge-intensive activities, among which requirement elicitation is perhaps the most critical for the success of the software system. Requirement elicitation is intended to gain knowledge about users' requirements and needs. Usually, the selection of elicitation techniques is based on company practice or personal experience, and little guidance is available on how to select techniques for a new software project. In this paper, we first provide a brief overview of the techniques available to support requirement elicitation and identify their contextual applications. We then develop a framework that selects elicitation techniques for a given software project by aligning the project's contextual information with the techniques. We demonstrate the applicability of the proposed framework with illustrative examples and show how it uses contextual knowledge of the software being developed to select useful elicitation techniques.

  • Software requirements selection using Quantum-inspired Multi-objective Differential Evolution Algorithm

    Page(s): 1 - 8

    This paper presents a Quantum-inspired Multi-objective Differential Evolution Algorithm (QMDEA) for software requirements selection, an issue in the requirements engineering phase of the software development life cycle. Software development is generally iterative or incremental, as requests for new requirements keep arriving from customers for inclusion in the next release. For feasibility reasons it is not possible for a company to incorporate all requirements in the product, so it becomes a challenging task to select a subset to include while keeping the business goals in view. The problem is to identify a set of requirements for the next release that minimizes cost and maximizes customer satisfaction. Since these are conflicting objectives, the problem is multi-objective and also NP-hard, and it cannot be solved efficiently by traditional optimization techniques, especially for large problem instances. QMDEA combines the best features of differential evolution and quantum computing, which help it achieve high-quality Pareto-optimal fronts with faster convergence. Its performance is tested on six benchmark problems derived from the literature, and a comparison of the obtained results indicates superior performance over other methods reported in the literature.
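To make the two objectives concrete, a tiny next-release instance can be solved by brute-force enumeration of its Pareto front; the costs and satisfaction values are hypothetical, and QMDEA targets instances far too large for such enumeration:

```python
from itertools import chain, combinations

# Hypothetical requirements with cost (minimize) and satisfaction (maximize)
cost = {"r1": 4, "r2": 6, "r3": 5}
value = {"r1": 7, "r2": 8, "r3": 2}

subsets = chain.from_iterable(combinations(cost, k) for k in range(len(cost) + 1))
points = {s: (sum(cost[r] for r in s), sum(value[r] for r in s)) for s in subsets}

def dominated(s):
    c, v = points[s]
    # s is dominated if some subset costs no more, satisfies no less,
    # and is strictly better in at least one objective
    return any(c2 <= c and v2 >= v and (c2 < c or v2 > v)
               for s2, (c2, v2) in points.items() if s2 != s)

front = sorted(s for s in points if not dominated(s))
print(front)
```

Here {r3} alone is dominated (r1 is cheaper and more satisfying), so it never belongs to the front; metaheuristics like QMDEA approximate this front when the 2^n subsets cannot be enumerated.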

  • A quantitative model for the evaluation of reengineering risk in infrastructure perspective of legacy system

    Page(s): 1 - 8

    Competitive business environments demand that existing legacy systems be transformed into self-adaptive ones, and legacy system reengineering has emerged as a well-known system renovation technique, rapidly replacing greenfield redevelopment for keeping up with modern business and user requirements. However, renovating a legacy system through reengineering is a risky and error-prone undertaking, owing to the widespread changes it requires in the majority of cases. Quantifiable risk measures are necessary to decide when modernization of a legacy system through reengineering will be successful. We present a quantitative measurement model for the comprehensive impact of the different reengineering risks arising from the infrastructure perspective of a legacy system. The model consists of five reengineering risk components: Deployment Risk, Organizational Risk, Resource Risk, Development Process Risk, and Personal Risk. The results of the proposed measurement model provide guidance for deciding on the evolution of a legacy system through reengineering.
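One common way such component models are aggregated is a weighted sum of normalized component scores; the weights and scores below are hypothetical placeholders, since the paper defines its own quantification for each component:

```python
# Hypothetical weights for the five risk components (summing to 1)
weights = {"deployment": 0.25, "organizational": 0.20, "resource": 0.20,
           "process": 0.20, "personnel": 0.15}
# Hypothetical assessed scores, each on a 0-1 scale
scores = {"deployment": 0.6, "organizational": 0.3, "resource": 0.5,
          "process": 0.4, "personnel": 0.7}

overall = sum(weights[c] * scores[c] for c in weights)
print(round(overall, 3))
```

A single aggregate index like this supports the go/no-go decision the paper describes: compare it against a threshold before committing to reengineering.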

  • Application of Model Oriented Security Requirements Engineering Framework for secure E-Voting

    Page(s): 1 - 6

    Election systems need secure electronic web applications that voters can rely on and trust. E-Voting is among the most security-sensitive processes handled electronically, and the highest achievable security is never too much for an E-Voting application. When such a web application is being built, tasks such as security requirements elicitation, specification, and validation are essential to assure the security of the resulting application. By treating security requirements as functional requirements in the requirements phase, a complete specification of the security requirements for an E-Voting application can be developed and flaws can be reduced. In this paper we propose using the Model Oriented Security Requirements Engineering (MOSRE) framework in the early phases of E-Voting application development to identify assets, threats, and vulnerabilities, helping developers analyze and elicit security requirements at an early stage of secure E-Voting application development.
