2013 Third International Conference on Advanced Computing and Communication Technologies (ACCT)

Date: 6-7 April 2013

Results 1 - 25 of 80
  • [Front and back cover]

    Page(s): C4
    PDF (699 KB)
    Freely Available from IEEE
  • [Title page i]

    Page(s): i
    PDF (106 KB)
    Freely Available from IEEE
  • [Title page iii]

    Page(s): iii
    PDF (281 KB)
    Freely Available from IEEE
  • [Copyright notice]

    Page(s): iv
    PDF (255 KB)
    Freely Available from IEEE
  • Table of contents

    Page(s): v - x
    PDF (141 KB)
    Freely Available from IEEE
  • Message from General Chair

    Page(s): xi
    PDF (136 KB)
    Freely Available from IEEE
  • Message from Program Chair

    Page(s): xii
    PDF (135 KB)
    Freely Available from IEEE
  • Organizing Committee

    Page(s): xiii
    PDF (120 KB)
    Freely Available from IEEE
  • Program Committee

    Page(s): xiv
    PDF (121 KB)
    Freely Available from IEEE
  • Reviewers

    Page(s): xv
    PDF (121 KB)
    Freely Available from IEEE
  • Component Criticality Approach towards Minimizing the Risks of System Failure

    Page(s): 1 - 7
    PDF (338 KB) | HTML

    A component-based approach to mitigating the risk of system failure is proposed, based on detecting the most critical components, those whose malfunction drives the system toward failure. Each component has its own probability of developing a fault; these faults are most often silent as well as risky, and when they occur the probability of system failure rises sharply, with costly consequences. If criticality is measured in advance, components can be protected against faults at the early phase of design or modeling. Most reliability assessments aimed at risk minimization have overlooked criticality. Here, criticality is determined by measuring each component's complexity and producing a meaningful ranking of components based on random error occurrence. This approach improves on previously studied cases of system failure risk minimization based on criticality ranking.
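
    The abstract does not give the scoring rule, so the following is a minimal sketch of one plausible reading: components are ranked by an assumed score of complexity times fault probability. The component names and numbers are invented for illustration, not the paper's method.

```python
# Hypothetical criticality ranking: score = complexity * fault probability.
# Component names and attribute values are invented for illustration.
components = {
    "scheduler": {"complexity": 8.5, "fault_prob": 0.04},
    "io_driver": {"complexity": 6.0, "fault_prob": 0.09},
    "ui_layer":  {"complexity": 3.2, "fault_prob": 0.02},
}

def criticality(attrs):
    """Assumed score: more complex and more fault-prone => more critical."""
    return attrs["complexity"] * attrs["fault_prob"]

ranking = sorted(components, key=lambda c: criticality(components[c]), reverse=True)
for name in ranking:
    print(f"{name}: criticality = {criticality(components[name]):.3f}")
```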

  • Extended Decision Support Matrix for Selection of SDLC-Models on Traditional and Agile Software Development Projects

    Page(s): 8 - 15
    PDF (451 KB) | HTML

    Software development projects vary considerably in difficulty, size, and type, which has led to the evolution of many associated project management methodologies and standard SDLC models. This paper acknowledges the risks of selecting the wrong SDLC model for business-critical software projects and offers a pragmatic solution: a handy selection matrix for choosing the best-fit SDLC model for different types of software development projects, covering both traditional and agile methodologies. The paper is the result of a study evaluating the project life cycle model selection methods and practices actually used on the projects selected for this study (from businesses and the IT industry in India), with the overall objective of proposing better methods and prescriptive guidance for the decision-making process of selecting the right SDLC model on business-critical software development projects. Selecting the right SDLC methodology with a decision support tool can help business-critical software development projects complete successfully and realize the business objectives for which they were undertaken.
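
    The paper's matrix itself is not reproduced in the abstract; the sketch below only shows the general shape of such a decision support matrix, scoring candidate SDLC models against weighted project criteria and picking the highest total. All criteria, weights, and scores here are invented placeholders, not the paper's values.

```python
# Hypothetical weighted decision matrix for SDLC model selection.
# Criterion weights and per-model scores (1-5) are illustrative only.
weights = {"requirements_stability": 0.4, "team_size": 0.2,
           "time_to_market": 0.3, "criticality": 0.1}

scores = {  # model -> criterion -> score
    "Waterfall": {"requirements_stability": 5, "team_size": 4,
                  "time_to_market": 2, "criticality": 4},
    "Scrum":     {"requirements_stability": 2, "team_size": 3,
                  "time_to_market": 5, "criticality": 3},
    "Spiral":    {"requirements_stability": 3, "team_size": 2,
                  "time_to_market": 3, "criticality": 5},
}

def total(model):
    return sum(weights[c] * s for c, s in scores[model].items())

for model in scores:
    print(f"{model}: {total(model):.2f}")
print("Best-fit model:", max(scores, key=total))
```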

  • Analysis of the Techniques for Software Cost Estimation

    Page(s): 16 - 19
    PDF (227 KB) | HTML

    One of the most valuable assets in any software organization is a correct estimate of effort, and hence of the cost, of the software to be developed. Because of the highly dynamic nature of software development, accurate effort and cost estimation is increasingly difficult, yet it is one of the most important factors in keeping software competitive and is essential for controlling development cost. Software cost estimation is a challenging managerial activity because the values of many variables are unknown and hard to predict at an early stage of development. An ideal cost estimation model should provide ample confidence, precision, and accuracy in its predictions. In this paper, we analyze most of the algorithmic techniques developed to date for software cost estimation and examine the advantages and shortcomings of each.
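
    As a concrete instance of the algorithmic techniques surveyed, the basic COCOMO model estimates effort as E = a * KLOC^b person-months, with (a, b) depending on project class; the sketch below uses the standard published coefficients and is illustrative, not drawn from the paper.

```python
# Basic COCOMO: effort (person-months) = a * KLOC ** b.
# Coefficients are the standard published values for the basic model.
COCOMO = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def effort(kloc, mode="organic"):
    a, b = COCOMO[mode]
    return a * kloc ** b

print(f"32 KLOC organic project: {effort(32):.1f} person-months")
```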

  • Estimating of Software Quality with Clustering Techniques

    Page(s): 20 - 27
    PDF (339 KB) | HTML

    Software faults are one of the major criteria used to estimate software quality and software reliability, and a number of metrics have been defined that use software faults to estimate quality. When a large software system contains thousands of class modules, applying software metrics to every module individually is impractical; the present work addresses that problem. This paper compares models based on three clustering techniques for building a software quality estimation system: k-means (KM), fuzzy c-means (FCM), and hierarchical agglomerative clustering (HAC). We propose a quality measure for the partitional clustering techniques (KM, FCM) in order to evaluate the results, and we comparatively analyze the results obtained on two case studies. The paper focuses on clustering very large datasets with many attributes of different types.
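
    A minimal sketch of such a comparison, assuming numeric fault-related module metrics as input. It runs two of the three named techniques (k-means and hierarchical agglomerative clustering) via scikit-learn; fuzzy c-means has no scikit-learn implementation and is omitted. The feature matrix is random placeholder data.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

# Placeholder metrics: rows = modules, columns = fault-related measures
# (e.g. fault count, LOC, complexity). Real data would come from the system.
rng = np.random.default_rng(0)
X = rng.random((200, 3))

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
hac = AgglomerativeClustering(n_clusters=3).fit(X)

print("k-means labels:", km.labels_[:10])
print("HAC labels:    ", hac.labels_[:10])
```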

  • Methodology of the Heuristic Based Hybrid Clustering Technique for Pattern Classification and Recognition

    Page(s): 28 - 35
    PDF (344 KB) | HTML

    In this paper we investigate the problem of forming similar objects from different data sets into identical groups. The technique is an unsupervised algorithm: no inputs are required from the user, and the threshold is applied automatically, selected in a heuristic manner. It can also resolve singleton sets, which can be identified under certain special conditions. Clustering is the grouping of similar objects into identical groups or, more precisely, the partitioning of a data set into subsets (clusters) so that the data in each subset (ideally) share some common feature, often proximity according to some defined distance measure. The capability of recognizing and classifying patterns is one of the most fundamental characteristics of human intelligence, and the primary goal of pattern recognition is supervised or unsupervised classification.
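
    The abstract names the key idea, a heuristically chosen threshold that decides cluster membership without user input, but not the exact rule. The sketch below is one guess at that shape: a leader-style pass whose threshold is set heuristically from the mean pairwise distance. It is an illustration, not the paper's algorithm.

```python
import numpy as np

def leader_cluster(points, threshold):
    """Assign each point to the first cluster whose leader is within
    `threshold`; otherwise start a new (possibly singleton) cluster."""
    leaders, labels = [], []
    for p in points:
        for i, q in enumerate(leaders):
            if np.linalg.norm(p - q) <= threshold:
                labels.append(i)
                break
        else:
            leaders.append(p)
            labels.append(len(leaders) - 1)
    return labels

rng = np.random.default_rng(1)
pts = rng.random((50, 2))
# Heuristic threshold (an assumption): half the mean pairwise distance.
dists = [np.linalg.norm(a - b) for i, a in enumerate(pts) for b in pts[i + 1:]]
labels = leader_cluster(pts, threshold=0.5 * np.mean(dists))
print("clusters found:", len(set(labels)))
```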

  • Intellectual Climate System for Monitoring Industrial Environment

    Page(s): 36 - 39
    PDF (520 KB) | HTML

    Monitoring temperature and humidity in indoor industrial locations is very important: temperature and humidity sensors are the main components of an intellectual climate control system, and in manufacturing these parameters directly or indirectly affect the product. Both must therefore be monitored to protect the product and the company from accidents, and such systems are typically implemented as embedded systems. The main aim is to obtain the desired temperature in various rooms. The paper describes a system for monitoring temperature as well as humidity in indoor locations, with the nodes communicating wirelessly over an RF link; in industry, temperature and humidity should remain within a specific range.
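
    A minimal sketch of the range-checking logic such a monitor might run on the receiving side; the acceptable ranges and sample readings are invented stand-ins for values arriving over the RF link.

```python
# Assumed acceptable ranges; sample readings stand in for RF sensor data.
TEMP_RANGE = (18.0, 27.0)      # degrees Celsius
HUMIDITY_RANGE = (30.0, 60.0)  # percent relative humidity

readings = [("room_a", 24.5, 48.0), ("room_b", 29.1, 55.0)]

for room, temp, hum in readings:
    alerts = []
    if not TEMP_RANGE[0] <= temp <= TEMP_RANGE[1]:
        alerts.append(f"temperature {temp} C out of range")
    if not HUMIDITY_RANGE[0] <= hum <= HUMIDITY_RANGE[1]:
        alerts.append(f"humidity {hum}% out of range")
    print(room, "->", "; ".join(alerts) if alerts else "OK")
```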

  • A Modified Approach to Text Steganography Using HyperText Markup Language

    Page(s): 40 - 44
    PDF (392 KB) | HTML

    Steganography is the tactic of silent communication, in which one person communicates with another through a cover medium so that an intermediary does not suspect the hidden information. In this paper we propose a modified approach to text steganography based on HTML tags and their attributes. Because HTML is rich in tags and attributes, is easily communicated over the internet, and has source code that is rarely inspected, it can be used intelligently for text steganography: hiding the secret data inside HTML source code achieves text steganography easily. In this approach, the relation between two consecutive attributes carries the hidden data: to hide a '1', two consecutive attributes of the same tag are used, and to hide a '0', two consecutive attributes of different tags are used. The hiding and revealing techniques are presented, and the proposed approach has been implemented in the Java programming language.
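
    The encoding rule is spelled out in the abstract. The sketch below (Python rather than the paper's Java, with invented cover tags and attributes) shows one way that rule could be realized and decoded; it is a plausible reading, not the paper's implementation.

```python
import re

# Hypothetical realization of the rule in the abstract: each hidden bit
# consumes two consecutive attributes; '1' puts both on the same tag,
# '0' splits them across two tags. Cover tags/attributes are invented.
ATTRS = ["class", "id", "title", "lang"]

def encode(bits):
    html, i = [], 0
    for b in bits:
        a1, a2 = ATTRS[i % 4], ATTRS[(i + 1) % 4]
        i += 2
        if b == "1":   # two consecutive attributes on the same tag
            html.append(f'<span {a1}="x" {a2}="y">text</span>')
        else:          # two consecutive attributes on different tags
            html.append(f'<span {a1}="x">text</span><b {a2}="y">text</b>')
    return "".join(html)

def decode(html):
    elems = re.findall(r'<(\w+)((?:\s+\w+="[^"]*")*)>', html)
    # Flatten to a stream of (tag, attribute) pairs in document order.
    stream = [(tag, a) for tag, attrs in elems
              for a in re.findall(r'(\w+)=', attrs)]
    return "".join("1" if stream[i][0] == stream[i + 1][0] else "0"
                   for i in range(0, len(stream) - 1, 2))

secret = "1011"
assert decode(encode(secret)) == secret
print(encode(secret))
```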

  • Securing E-Healthcare Applications with PPS and PDS

    Page(s): 45 - 49
    PDF (403 KB) | HTML

    Rather than measuring patients face to face, e-healthcare is a promising new technology that monitors patients' health-related parameters continuously and in real time with the help of a wireless sensor network. A Wireless Body Sensor Network (WBSN) reduces heavy dependence on specialized healthcare staff and is therefore a desirable technique for countries that lack sufficient medical infrastructure and trained staff. However, a WBSN laid out and designed without security can compromise the system. In this paper we review existing WBSNs and introduce a new approach, a Proxy Patient Server (PPS) and a Proxy Doctor Server (PDS), to improve e-healthcare services.

  • Automatic Speech Reading by Oral Motion Tracking for User Authentication System

    Page(s): 50 - 54
    PDF (317 KB) | HTML

    Automatic speech recognition (ASR) systems recognize speech with high accuracy rates, and visual information is important for the human-machine interface: it not only increases the accuracy of an ASR system but also improves its robustness. This paper presents an overview of different approaches to speech recognition and concentrates on visual-only lip reading systems. Lip reading can be utilized in many applications, such as aids for the hearing impaired, noisy environments where speech is highly unrecognizable, and password entry systems. The visual feature extraction methods are pixel based, such as the discrete cosine transform (DCT) and the discrete wavelet transform (DWT); other feature extraction methods utilize motion analysis of image sequences representing lip movement. This survey compares the pros and cons of various techniques and methods for speech recognition by lip motion tracking.
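
    As an illustration of the pixel-based feature extraction the survey mentions, the sketch below takes a 2-D DCT of a stand-in lip region and keeps the low-frequency coefficients as a feature vector; the ROI here is random data, not a real mouth crop.

```python
import numpy as np
from scipy.fft import dctn

# Stand-in for a grayscale lip ROI; a real system would crop this
# from a tracked mouth region in each video frame.
rng = np.random.default_rng(0)
roi = rng.random((32, 32))

coeffs = dctn(roi, norm="ortho")   # 2-D discrete cosine transform
features = coeffs[:4, :4].ravel()  # keep 16 low-frequency coefficients
print("feature vector length:", features.size)
```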

  • Formal Modeling of a Tele-surgery Domain as a Multi-agent Planning Problem

    Page(s): 55 - 58
    PDF (277 KB) | HTML

    Technological advancements have led to the development of a few commercially available telesurgery systems to date; however, such systems are very expensive. In telesurgery, the task of a surgeon (the activities related to a surgery) is partially executed by a robot: typically the robot is under the control of a surgeon and executes the controlling surgeon's instructions. In this paper we attempt to formally model a telesurgery domain (heart surgery) as a multiagent planning problem. The actions related to the surgery are represented as planning operators, the model consists of two interacting agents, and the state space of each agent is modeled as a transition system. We have also developed a simple prototype implementation incorporating these features.
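
    The abstract says surgical actions are represented as planning operators over a transition system; below is a minimal STRIPS-style sketch of that kind of representation, with an invented incision operator, since the paper's actual operator set is not given.

```python
# Minimal STRIPS-style planning operator; the operator and fluents are
# invented examples, not the paper's surgical action model.
class Operator:
    def __init__(self, name, preconds, add, delete):
        self.name, self.preconds, self.add, self.delete = name, preconds, add, delete

    def applicable(self, state):
        return self.preconds <= state  # all preconditions hold

    def apply(self, state):
        return (state - self.delete) | self.add

make_incision = Operator(
    "make_incision",
    preconds={"patient_anesthetized", "scalpel_held"},
    add={"incision_made"},
    delete=set(),
)

state = {"patient_anesthetized", "scalpel_held"}
if make_incision.applicable(state):
    state = make_incision.apply(state)
print(state)
```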

  • Analysis of Multispectral Image Using Discrete Wavelet Transform

    Page(s): 59 - 62
    PDF (621 KB) | HTML

    In this paper we analyze the discrete wavelet transform of a multispectral image of the Bareilly region using the MATLAB tool. The wavelet transform is one of the most useful computational tools for a variety of signal and image processing applications. It is used for digital image compression, because smaller representations let images be stored in less memory and transmitted faster and more reliably, and it is useful for reducing unwanted noise and blurring in images. A discrete wavelet transform (DWT) is any wavelet transform in which the wavelets are discretely sampled; a key advantage over the Fourier transform is temporal resolution, since it captures both frequency and time-domain information.
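
    A minimal sketch of a single-level 2-D DWT like the one the paper applies in MATLAB, here using the PyWavelets library on random data standing in for one band of the multispectral image.

```python
import numpy as np
import pywt  # PyWavelets

# Random stand-in for one band of a multispectral image.
band = np.random.default_rng(0).random((256, 256))

# Single-level 2-D DWT: approximation plus horizontal/vertical/diagonal detail.
cA, (cH, cV, cD) = pywt.dwt2(band, "haar")
print("approximation:", cA.shape, "details:", cH.shape, cV.shape, cD.shape)
```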

  • Review on Mosaicing Techniques in Image Processing

    Page(s): 63 - 68
    PDF (117 KB) | HTML

    In image processing, mosaic images are made by cementing together small tiles that "tessellate" a source image, reproducing the original visual information in a new mosaic-like style. Creating a mosaic from a sequence of partial views is a powerful means of obtaining a larger view of a scene than is available within a single view, and it has been used in a wide range of applications. This paper proposes a general framework for retinal and document images and reviews the different applications of image mosaicing, mainly in the areas of retinal image mosaicing and document image mosaicing.
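
    The paper's own framework is not detailed in the abstract; the sketch below shows the generic feature-based mosaicing pipeline (ORB keypoints, matching, RANSAC homography, warping) that such systems commonly build on, using OpenCV. It is a generic illustration, not the paper's method.

```python
import cv2
import numpy as np

def mosaic(img1, img2):
    """Warp img2 into img1's frame via an ORB + RANSAC homography.
    A generic pipeline sketch, not the paper's specific framework."""
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img1.shape[:2]
    out = cv2.warpPerspective(img2, H, (2 * w, h))
    out[:h, :w] = img1  # overlay the reference view
    return out
```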

  • Image Texture Analysis - Survey

    Page(s): 69 - 76
    PDF (307 KB) | HTML

    This paper discusses the various methods used to analyze the texture property of an image. Texture analysis is broadly classified into three categories: pixel based, local-feature based, and region based. Pixel-based methods use grey-level co-occurrence matrices, difference histograms, energy measures, and local binary patterns (LBP). Local-feature-based methods use the edges of local features and generalizations of co-occurrence matrices. Region-based methods use region growing and topographic models.
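
    A minimal sketch of two of the pixel-based measures named, grey-level co-occurrence matrix properties and local binary patterns, using scikit-image on a random placeholder patch.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

# Random 8-bit placeholder for a grayscale texture patch.
img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)

# GLCM at distance 1, horizontal direction, then two Haralick-style properties.
glcm = graycomatrix(img, distances=[1], angles=[0], levels=256)
print("contrast:", graycoprops(glcm, "contrast")[0, 0])
print("energy:  ", graycoprops(glcm, "energy")[0, 0])

# Uniform LBP with 8 neighbors at radius 1.
lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
print("LBP codes in use:", np.unique(lbp).size)
```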

  • Data Preparation by CFS: An Essential Approach for Decision Making Using C 4.5 for Medical Data Mining

    Page(s): 77 - 85
    PDF (528 KB) | HTML

    Trauma has become a leading cause of death in everyday life: every year millions of people die in accidents caused by trauma, and many more are left handicapped for the rest of their lives. It is necessary to develop a tool for predicting and preventing trauma, with the aim of reducing the mortality rate and increasing health awareness. We use the data mining process to extract useful information from large datasets. Feature subset selection is of immense importance in data mining: the increased dimensionality of data makes testing and training of general classification methods difficult, while mining a reduced set of attributes cuts computation time and makes the patterns easier to understand. We propose the CFS (correlation-based feature selection) approach for feature selection. As part of the feature selection step we use a filter algorithm with a random search technique for subset generation, wrapped with different classifiers/induction algorithms, namely the C4.5 decision tree and Naïve Bayes, as the subset evaluation mechanism on standard datasets. It is mandatory to obtain ethical and legal clearance from the regional as well as the Institutional Ethics Review Board (IERB) before using data mining tools in healthcare research; we obtained ethical clearance from BGS Hospital, Bangalore, and the datasets were gathered from patient files recorded in the hospital's medical record section. The relevant attributes identified by the proposed filter are then validated using the classifiers. Experimental results illustrate that feature subset selection with the proposed filter approach enhances classification accuracy, and applying data mining techniques to the data brings about very interesting and valuable results. Comparing the results of evaluating the models on the test set, the decision tree works better than Naïve Bayes in this case. We also used the WEKA tool for creating the models.
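
    The core of CFS is its merit heuristic, which favors subsets whose features correlate strongly with the class but weakly with each other: Merit(S) = k * r_cf / sqrt(k + k*(k-1) * r_ff), where r_cf and r_ff are the mean feature-class and feature-feature correlations. The sketch below computes it on placeholder data; an exhaustive search stands in for the paper's random search, and the data is invented, not the hospital's.

```python
from itertools import combinations

import numpy as np

def merit(X, y, subset):
    """CFS merit: k * mean|corr(f, y)| / sqrt(k + k(k-1) * mean|corr(f, f')|)."""
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, f], y)[0, 1]) for f in subset])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                    for a, b in combinations(subset, 2)])
    return (k * r_cf) / np.sqrt(k + k * (k - 1) * r_ff)

# Placeholder data: feature 0 carries the class signal, 1 is a noisier copy,
# 2 is pure noise; CFS should prefer a small, informative subset.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 300)
X = np.column_stack([y + 0.3 * rng.standard_normal(300),
                     y + 0.8 * rng.standard_normal(300),
                     rng.standard_normal(300)])

best = max((s for k in range(1, 4) for s in combinations(range(3), k)),
           key=lambda s: merit(X, y, s))
print("best subset by CFS merit:", best)
```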

  • Clustering Technique on Search Engine Dataset Using Data Mining Tool

    Page(s): 86 - 89
    PDF (558 KB) | HTML

    Unlabeled document collections are becoming increasingly common, and mining such databases is a major challenge; retrieving good websites from large collections of websites is a major issue. As the number of available web pages grows, it becomes more difficult for users to find documents relevant to their interests. Clustering is the classification of a data set into subsets (clusters) so that the data in each subset share some common trait, often proximity according to some defined distance measure; by clustering, we improve quality by grouping similar websites together. This paper addresses the application of the data mining tool Weka, applying k-means clustering to find clusters in huge datasets and the attributes that govern search engine optimization.
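
    The paper works in Weka; the sketch below reproduces the same k-means step with scikit-learn as a stand-in, on a few invented page snippets, since the actual search engine dataset is not given.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented page snippets standing in for the paper's search engine dataset.
pages = [
    "buy cheap laptops online store deals",
    "laptop store discounts shopping cart",
    "python tutorial programming course",
    "learn programming python examples",
]

X = TfidfVectorizer().fit_transform(pages)  # term-weight features
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # similar pages should share a cluster id
```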
