2010 International Conference on Innovative Computing Technologies (ICICT)

Date: 12-13 Feb. 2010

Displaying Results 1 - 25 of 27
  • [Front and back cover]

    Page(s): c1 - c4
    PDF (264 KB)
    Freely Available from IEEE
  • [Title page i]

    Page(s): 1
    PDF (15 KB)
    Freely Available from IEEE
  • [Title page ii]

    Page(s): 1
    PDF (74 KB)
    Freely Available from IEEE
  • [Copyright notice]

    Page(s): 1 - 2
    PDF (70 KB)
    Freely Available from IEEE
  • Table of contents

    Page(s): 1 - 3
    PDF (111 KB)
    Freely Available from IEEE
  • Organizing Committee

    Page(s): 1
    PDF (205 KB)
    Freely Available from IEEE
  • A modified particle swarm optimization for lower order model formulation of linear time invariant continuous systems

    Page(s): 1 - 5
    PDF (205 KB) | HTML

    This paper proposes a modified version of the classical particle swarm optimization (PSO) algorithm, MPSO, used to formulate lower order models of linear time invariant continuous systems. In the modified PSO, the movement of a particle is governed by three behaviors: inertia, cognitive, and social. The cognitive behavior helps the particle remember its previously visited best position. This paper proposes to split the cognitive behavior into two sections, a modification that helps the particle search for the target very effectively. MPSO is used to minimize the integral squared error of the lower order model; results are shown as unit step response curves and compared with the response of the original higher order model and with other model formulation methods. The proposed method is illustrated through a numerical example from the literature.

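    A rough sketch of the velocity update this abstract hints at. The paper does not spell out how the cognitive term is split, so the reading below (one pull toward a recently visited personal best, one toward the overall personal best) and all coefficient names are illustrative assumptions, not the authors' notation.

        import numpy as np

        rng = np.random.default_rng(0)

        def mpso_velocity(v, x, pbest_recent, pbest_overall, gbest,
                          w=0.7, c1a=1.0, c1b=1.0, c2=1.5):
            # classical PSO has one cognitive pull; here it is split in two,
            # per one plausible reading of the abstract
            r1, r2, r3 = (rng.random(x.shape) for _ in range(3))
            inertia   = w * v
            cognitive = c1a * r1 * (pbest_recent - x) + c1b * r2 * (pbest_overall - x)
            social    = c2 * r3 * (gbest - x)
            return inertia + cognitive + social

    The fitness being minimized would be the integral squared error between the step responses of the full and reduced models.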
  • A novel Load Balancing algorithm for computational Grid

    Page(s): 1 - 6
    PDF (356 KB) | HTML

    The Grid computing environment is a cooperation of distributed computer systems in which user jobs can be executed on either local or remote computers. Many problems exist in a grid environment: not only are the computational nodes heterogeneous, but so are the underlying networks connecting them. Network bandwidth varies, and the network topology among resources is not fixed. With this multitude of heterogeneous resources, proper scheduling and efficient load balancing across the Grid are required to improve system performance. Load balancing is done by migrating jobs to buddy processors, the set of processors to which a processor is directly connected. An algorithm, Load Balancing on Arrival (LBA), is proposed for small-scale (intraGrid) systems and is efficient in minimizing response time in such environments. When a job arrives, LBA computes system parameters and the expected finish time on the buddy processors, and the job is migrated immediately. The algorithm estimates system parameters such as job arrival rate, CPU processing rate, and load on each processor to make the migration decision, and it also considers job transfer cost, resource heterogeneity, and network heterogeneity.

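    A minimal sketch of the migrate-on-arrival decision described above: estimate the expected finish time on the local processor and on each buddy, then send the job wherever it finishes earliest. The Proc fields and the linear transfer/execution model are assumptions for illustration.

        from dataclasses import dataclass

        @dataclass
        class Proc:
            load: float      # queued work, in instructions
            cpu_rate: float  # instructions per second
            link_bw: float   # bytes/s on the link to this buddy

        def finish_time(p, job_work, job_bytes, migrate):
            transfer = job_bytes / p.link_bw if migrate else 0.0   # job transfer cost
            return transfer + (p.load + job_work) / p.cpu_rate

        def pick_target(local, buddies, job_work, job_bytes):
            options = [(finish_time(local, job_work, job_bytes, False), local)]
            options += [(finish_time(b, job_work, job_bytes, True), b) for b in buddies]
            return min(options, key=lambda o: o[0])[1]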
  • Classification of cancer gene expressions from micro-array analysis

    Page(s): 1 - 5
    PDF (179 KB) | HTML

    The role of microarray expression data in cancer diagnosis is very significant. Mining useful information from such microarray data, which consist of thousands of genes and a small number of samples, is often a tough task. Colon cancer is the second most common cause of cancer mortality in Western countries; according to the WHO 2006 report, colorectal cancer causes 655,000 deaths worldwide per year. Not all the genes used in an expression profile are informative, and many of them are redundant. Reducing the number of genes by feature selection while still retaining the best class prediction accuracy of the classifier is vital in tumor classification. The emphasis in cancer classification is both on methods of gene selection and on the choice of classifier. It is proposed to study various classification algorithms.

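    As a concrete starting point for the kind of study the abstract proposes, a hedged scikit-learn sketch: univariate gene selection followed by two candidate classifiers, cross-validated on a hypothetical expression matrix X (samples x genes) with labels y.

        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC

        def compare_classifiers(X, y, n_genes=50):
            for clf in (SVC(kernel="linear"), KNeighborsClassifier(5)):
                # selection sits inside the pipeline so genes are chosen
                # per training fold, avoiding selection bias
                pipe = make_pipeline(SelectKBest(f_classif, k=n_genes), clf)
                acc = cross_val_score(pipe, X, y, cv=5).mean()
                print(type(clf).__name__, round(acc, 3))

    Keeping the gene selection inside the cross-validation loop matters on microarray data, where thousands of genes and few samples make accuracy estimates easy to inflate.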
  • SQP: An experimental stateless queuing discipline for protecting TCP flows

    Page(s): 1 - 6
    PDF (212 KB) | HTML

    QoS (Quality of Service) is an important factor in computer networks. We propose an effective queuing discipline for obtaining QoS and thus increasing throughput. SQP is a stateless queue management algorithm: it allocates variable bandwidth to TCP (Transmission Control Protocol) flows and performs matched drops in the queue. Queuing is done with SQP based on the priority of each TCP flow. The queuing is therefore simple and the implementation less complex, and it also restricts unresponsive TCP flows.

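    The abstract's "stateless ... matched drops" wording resembles CHOKe-style queue management, so the sketch below substitutes that well-known technique: an arriving packet is compared with a randomly chosen queued packet, and a flow match triggers a double drop, which penalizes unresponsive flows without keeping per-flow state. The priority-based bandwidth split is not shown, and all sizes are illustrative.

        import random
        from collections import deque

        queue, LIMIT, WARN = deque(), 100, 50   # illustrative queue sizes

        def enqueue(pkt):                       # pkt = (flow_id, payload)
            if len(queue) > WARN:
                victim = random.choice(queue)
                if victim[0] == pkt[0]:
                    queue.remove(victim)        # matched drop: both packets go
                    return
            if len(queue) < LIMIT:
                queue.append(pkt)               # otherwise tail-drop when full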
  • Design rationale to source graph [DRG] approach for developing Legacy Code Conversion Kit (LCCK)

    Page(s): 1 - 5
    PDF (167 KB) | HTML

    Due to prominent improvements in present-day software engineering, conversion from one programming language to another has become an important task for the industry. Industry surveys show that a major component of the development cost of all software goes to maintaining and updating system software or application packages to meet the current scenario and/or user satisfaction. This paper proposes a Software Development Tool (SDT) called the Legacy Code Conversion Kit (LCCK) for converting one object oriented programming language (the source language) to another (the destination language). Design Rationale to Source Graph (DR-SG) is our proposed model, which forms a graph from design pattern documentation and links it to a source code base. The DR-SG allows developers to trace design concepts through design documentation and to satisfy high-level design goals completely and confidently when performing software change tasks. Results of our implementation, a conversion from C++ to Java, are presented.

  • Noise suppression and improved edge texture analysis in kidney ultrasound images

    Page(s): 1 - 6
    PDF (300 KB) | HTML

    Due to the characteristic speckle noise of ultrasound kidney images, a noise reducing filter must be applied before image processing stages such as segmentation and registration. Speckle suppression methods are also highly desirable for improving the quality of the ultrasound image while retaining the edge features of the kidney. This stage increases the dynamic range of gray levels, which in turn increases image contrast. The proposed system develops a multiscale wavelet based Bayesian speckle suppression method for ultrasound kidney images: the logarithmic transform of the original image is analyzed in the multiscale wavelet domain. The subband decompositions of ultrasound images have significantly non-Gaussian statistics that are best described by families of heavy-tailed distributions, and Bayesian estimators are designed to exploit these statistics. Ultrasound (US) is increasingly considered a viable alternative imaging modality in computer-assisted kidney segmentation and disease diagnosis applications; automatic kidney segmentation from US images, however, remains a challenge due to speckle noise and the various other artifacts inherent to US. This paper designs intensity-invariant local image phase features, obtained using improved Gabor filter banks, for extracting edge texture features that occur at core and intermediate layer interfaces. The proposed model extends phase symmetry features to a modified Gabor model and uses them for automatic extraction of kidney edge texture features from US images of normal and diseased patients. The system's functionality is demonstrated qualitatively and quantitatively through experiments on synthetic and real data sets. The localization feature value threshold is evaluated with the training samples of US images, and the speckle noise error ratio with respect to the standard US image is compared experimentally.

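    A hedged sketch of the multiscale pipeline the abstract outlines, using PyWavelets: speckle is approximately multiplicative, so the image is log-transformed, its wavelet detail coefficients are shrunk, and the result is exponentiated back. Simple soft thresholding stands in here for the paper's Bayesian heavy-tailed-prior estimator, and the Gabor phase-feature stage is not shown.

        import numpy as np
        import pywt

        def despeckle(img, wavelet="db4", levels=3, k=1.0):
            logimg = np.log1p(img.astype(float))        # multiplicative -> additive noise
            coeffs = pywt.wavedec2(logimg, wavelet, level=levels)
            shrunk = [coeffs[0]]                        # keep the approximation band
            for bands in coeffs[1:]:
                # robust noise estimate per subband, then soft shrinkage
                shrink = lambda d: pywt.threshold(
                    d, k * np.median(np.abs(d)) / 0.6745, mode="soft")
                shrunk.append(tuple(shrink(d) for d in bands))
            return np.expm1(pywt.waverec2(shrunk, wavelet))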
  • Video conferencing streaming for the rural upliftment

    Page(s): 1 - 5
    PDF (196 KB) | HTML

    The 21st century has already witnessed remarkable achievements in every area of science and technology. By taking full advantage of new information technologies, the scientific community has an unprecedented opportunity to close the vast "knowledge gap" among all people. While radio played a major role in the welfare of the rural community in earlier years, followed by television, it is the internet cafe that has overtaken other communication systems in the recent past, even in villages. Internet technology has now developed to such an extent that it offers widespread availability and relative ease of use at an economical and affordable cost. Video conferencing is one such technology, offering effective two-way communication that can provide information on agriculture, education, employment, legal assistance, and more for rural welfare. The ultimate goal of "reaching the unreached" is thus made possible by video conferencing.

  • ACO based spatial data mining for traffic risk analysis

    Page(s): 1 - 6
    PDF (204 KB) | HTML

    Many organizations have collected large amounts of spatially referenced data in application areas such as geographic information systems (GIS), banking, and retailing. These are valuable mines of knowledge vital for strategic decision making, and they motivate the highly demanding field of spatial data mining: the discovery of interesting, implicit knowledge from large amounts of spatial data. Most local government administrations collect and/or use geographical databases on road accidents and the road network, and sometimes on vehicle flow and the mobility of inhabitants. Other databases provide additional information on geographical environment trend layers such as administrative boundaries, buildings, and census data. These data contain a mine of useful information for traffic risk analysis. A first study aimed at identifying and predicting the accident risk of roads used a decision tree that learns from inventoried accident data and the description of the corresponding road sections; however, that method is based only on tabular data and does not exploit geographical location. Using the accident data combined with trend data on the road network, traffic flow, population, buildings, and so on, this project aims at deducing relevant risk models to help in the traffic safety task. The existing work provided a pragmatic approach to multilayer geo-data mining: input data were prepared by joining each layer table using a given spatial criterion, and a standard method was then applied to build a decision tree, allowing the end user to evaluate the results without assistance from an analyst or statistician. The existing work did not consider the multi-relational data mining domain, and efficient risk factor evaluation requires automatic filtering of spatial relationships. The quality of a decision tree depends largely on the quality of the initial data; incomplete, incorrect, or irrelevant data inevitably lead to erroneous results. The proposed model develops an ant colony algorithm for discovering spatial trend patterns in a GIS traffic risk analysis database. The ant colony based spatial data mining algorithm applies the emergent intelligent behavior of ant colonies to handle the huge search space encountered in discovering this knowledge, and a genetic algorithm is deployed to evaluate the spatial risk pattern rule sets and optimize the search phase. Experimental results on a geographical traffic (trend layer) spatial database show that our method achieves higher efficiency in the discovery process and higher quality of discovered trend patterns than existing approaches using non-intelligent decision tree heuristics.

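    A toy pheromone loop in the spirit of Ant-Miner style rule discovery, which is one plausible shape for the abstract's ant colony component; the encoding of spatial predicates and the genetic-algorithm refinement are left out. Here terms are candidate rule conditions and quality() is a caller-supplied scorer assumed to return values in [0, 1], e.g. coverage times confidence on the accident layers.

        import random

        def ant_mine(terms, quality, ants=20, iters=50, rho=0.1):
            tau = {t: 1.0 for t in terms}               # pheromone per condition
            best, best_q = None, float("-inf")
            for _ in range(iters):
                for _ in range(ants):
                    # each ant samples a conjunctive rule, biased by pheromone
                    rule = tuple(t for t in terms
                                 if random.random() < tau[t] / (1.0 + tau[t]))
                    q = quality(rule)                   # assumed in [0, 1]
                    if q > best_q:
                        best, best_q = rule, q
                    for t in rule:
                        tau[t] += q                     # reinforce useful terms
                for t in terms:
                    tau[t] *= (1.0 - rho)               # evaporation
            return best, best_q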
  • Self-organising map for document categorization using latent semantic analysis

    Page(s): 1 - 6
    PDF (233 KB) | HTML

    With the increasing amount of unstructured content available electronically on the web, content categorization becomes very important for efficient information retrieval. The basic approaches to information retrieval in text documents are keyword search, categorization of documents, and filtering of the stream. To extract information from raw data, its complexity must first be reduced: clustering methods reduce the amount of data, while projection methods reduce its dimensionality. The SOM is a special case in that it can be used for both clustering and projection at the same time, projecting onto a 2D grid. Various methods have been developed for automatic clustering of web documents according to user requirements. The objective of this paper is to reduce the time and effort the user needs to find the sought-after information. The method, termed topological organization of content (TOC), can generate classified topics from a set of unstructured documents. TOC is a set of hierarchically organized 1D growing SOMs; in TOC, a vector space model is used for indexing the 1D SOM. In the proposed approach, latent semantic indexing of the 1D SOM is used to enhance the association between terms. Latent semantic analysis is a technique that projects the original high dimensional document vectors into a space with latent semantic dimensions, and a term-by-document matrix is constructed for information retrieval. A brief review of existing methods for document clustering and organization is given. The proposed method using LSI is expected to be efficient in terms of computational cost, accuracy, and visualization; it can easily be adapted to large data sets and provides a means of retrieving meaningfully related topics.

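    The LSA step the abstract relies on, as a minimal sketch: a truncated SVD of the term-by-document matrix projects documents into k latent dimensions, and those projected vectors (rather than raw term counts) would then index the 1D growing SOMs.

        import numpy as np

        def lsa(A, k=2):
            # A: term-by-document matrix (terms x docs)
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            terms_k = U[:, :k]                  # terms in the latent space
            docs_k = np.diag(s[:k]) @ Vt[:k]    # documents in the latent space (k x docs)
            return terms_k, docs_k

    Because the projection mixes co-occurring terms, documents that share no literal keywords can still land close together, which is exactly the term-association boost the abstract is after.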
  • Searching video blogs with integration of context and content analysis

    Page(s): 1 - 5
    PDF (307 KB) | HTML

    Video blogs (vlogs) now play an important role on the web, and effectively managing them and making them more conveniently accessible to users is a challenging problem. In this paper we propose a novel vlog management model that consists of two stages: annotation and search. In vlog annotation, we extract informative keywords not only from the textual content of the target vlog itself but also from external resources that are semantically and visually relevant to it; sentiment is evaluated from user comments. In vlog search, we adopt saliency based matching to produce the search results, and different ranking strategies are adapted to the user.

  • Distributed heuristic algorithm to optimize the throughput of data driven streaming in peer to peer networks

    Page(s): 1 - 9
    PDF (378 KB) | HTML

    During recent years, the Internet has witnessed rapid growth in the deployment of data-driven (or swarming based) peer-to-peer (P2P) media streaming. In these applications, each node independently selects some other nodes as its neighbors (gossip style overlay construction) and exchanges streaming data with those neighbors (data scheduling). To improve the performance of such protocols, many existing works focus on the gossip-style overlay construction issue; few concentrate on optimizing the streaming data scheduling to maximize the throughput of a constructed overlay. In this paper, we analytically study the scheduling problem in data-driven streaming systems and model it as a classical min-cost network flow problem. We then propose both a global optimal scheduling scheme and a distributed heuristic algorithm to optimize system throughput. Furthermore, we introduce layered video coding into the data-driven protocol and extend our algorithm to deal with end-host heterogeneity. Simulation results with real-world traces indicate that our distributed algorithm significantly outperforms conventional ad hoc scheduling strategies, especially under stringent buffer and bandwidth constraints.

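    A small, hedged illustration of the min-cost flow formulation the abstract names, using networkx: wanted segments are fetched from neighbors that hold them, each neighbor's upload capacity is limited, and edge weights model fetch cost. The topology and costs below are made up for the example.

        import networkx as nx

        holders = {"seg1": ["A", "B"], "seg2": ["B"], "seg3": ["A", "C"]}
        upload = {"A": 2, "B": 1, "C": 1}          # concurrent uploads per neighbor
        cost = {"A": 3, "B": 1, "C": 2}            # e.g. measured delay

        G = nx.DiGraph()
        for seg, owners in holders.items():
            G.add_edge("src", seg, capacity=1, weight=0)   # each segment fetched once
            for n in owners:
                G.add_edge(seg, n, capacity=1, weight=cost[n])
        for n, cap in upload.items():
            G.add_edge(n, "sink", capacity=cap, weight=0)

        flow = nx.max_flow_min_cost(G, "src", "sink")
        schedule = {s: n for s in holders for n in holders[s] if flow[s][n]}
        print(schedule)   # {'seg1': 'A', 'seg2': 'B', 'seg3': 'C'}

    Note how seg1 is routed to the more expensive neighbor A so that B's single upload slot stays free for seg2, which no one else holds; that global trade-off is what the flow formulation buys over greedy per-segment choices.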
  • Design of optimal fuzzy classifier system using Particle Swarm Optimization

    Page(s): 1 - 6
    PDF (214 KB) | HTML

    One of the important issues in the design of a fuzzy classifier is the formation of the fuzzy if-then rules and the membership functions. This paper presents a Particle Swarm Optimization (PSO) approach to obtain the optimal rule set and membership functions. To develop the fuzzy system, the membership functions and rule set are encoded as particles and evolved simultaneously using PSO; the membership functions are represented as real numbers and the rule set as discrete numbers. In the classification problem under consideration, the objective is to maximize the number of correctly classified data and minimize the number of rules. This objective is formulated as a fitness function that guides the search toward a fuzzy classification system in which the number of fuzzy rules and the number of incorrectly classified patterns are simultaneously minimized. The performance of the proposed approach is demonstrated by developing a fuzzy classifier for the Iris data set from the UCI machine learning repository, and simulation results show the suitability of the proposed approach for developing fuzzy systems.

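    The two-goal fitness described above, sketched directly. decode(), which would turn a particle's position vector into membership functions plus a discrete rule list, is hypothetical, since the abstract does not give the exact encoding.

        def fitness(particle, X, y, alpha=1.0, beta=0.1):
            clf = decode(particle)            # hypothetical: particle -> fuzzy classifier
            correct = sum(clf.predict(x) == t for x, t in zip(X, y))
            return alpha * correct - beta * len(clf.rules)   # accuracy up, rule count down

    PSO then maximizes this value, so the swarm trades classification accuracy against rule-base size in a single scalar objective.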
  • Mining rare event classes in noisy EEG by over sampling techniques

    Page(s): 1 - 6
    PDF (177 KB) | HTML

    Mining is the processing of data to obtain interesting patterns or knowledge. Noisy EEG can be recorded during some abnormal states of brain activity; these signals can be logged in data sheets, and samples are taken to identify rare events. The sampling technique used here is SMOTE (Synthetic Minority Over-sampling Technique). An approach to the construction of classifiers from imbalanced datasets is described; a dataset is imbalanced if the classification categories are not approximately equally represented. Real-world data sets are often predominately composed of "normal" patterns with only a small percentage of "abnormal" or "interesting" patterns. It is also the case that the cost of misclassifying an abnormal (interesting) pattern as normal is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class.

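    The core SMOTE step mentioned in the abstract (Chawla et al.), sketched in NumPy: each synthetic minority sample is an interpolation between a real minority sample and one of its k nearest minority neighbours.

        import numpy as np

        def smote(X_min, n_new, k=5, seed=0):
            rng = np.random.default_rng(seed)
            synth = []
            for _ in range(n_new):
                i = rng.integers(len(X_min))
                dists = np.linalg.norm(X_min - X_min[i], axis=1)
                nn = np.argsort(dists)[1:k + 1]        # skip the point itself
                j = rng.choice(nn)
                gap = rng.random()                     # fraction along the segment
                synth.append(X_min[i] + gap * (X_min[j] - X_min[i]))
            return np.array(synth)

    Unlike plain duplication, the interpolated points widen the minority class's decision region, which is what makes over-sampling effective for rare-event EEG patterns.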
  • A framework approach using CMMI for SPI to Indian SME'S

    Page(s): 1 - 5
    PDF (192 KB) | HTML

    Software development is carried out with different methods and several standards, and Software Process Improvement (SPI) is the key to developing quality software. Software development companies are not only big organizations; there are many small settings, usually called SMEs (Small and Medium Enterprises) or SMBs (Small and Medium Businesses), which play a major role in the software industry. The CMMI model is applicable to small organizations as well for process improvement. The primary focus is to develop a framework to support SPI in small organizations so that they successfully meet their business goals while adhering to CMMI standards.

  • Resilient group key agreement protocol with authentication security

    Page(s): 1 - 6
    PDF (234 KB) | HTML

    Many applications in dynamic peer groups are becoming increasingly popular nowadays, and there is a need for security services to provide group-oriented communication privacy and data integrity. To provide this form of privacy, members of the group must be able to establish a common secret key for encrypting group communication data, and a secure distributed group key agreement and authentication protocol is required to handle this. Instead of performing individual re-keying operations, the proposed scheme adopts an interval-based approach to re-keying. The interval-based algorithms considered in this paper are the Batch algorithm and the Queue-batch algorithm; the interval-based approach provides re-keying efficiency for dynamic peer groups while preserving both the distributed and contributory properties. The performance of these interval-based algorithms is analyzed under different settings, such as different join and leave probabilities. The Queue-batch algorithm performs best among the interval-based algorithms.

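    A purely structural sketch of interval-based re-keying: membership events are queued as they occur and a single group re-key runs once per interval instead of once per event. The contributory key-tree update itself, which is where Batch and Queue-batch actually differ (Queue-batch pre-processes queued joins during the idle interval), is abstracted behind the caller-supplied rekey().

        import time

        pending = []                        # (op, member) events since the last re-key

        def on_event(op, member):           # op is "join" or "leave"
            pending.append((op, member))

        def rekey_loop(interval, rekey):
            while True:
                time.sleep(interval)
                if pending:
                    batch, pending[:] = list(pending), []
                    rekey(batch)            # one key-agreement run for the whole batch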
  • GSM based ECG tele-alert system

    Page(s): 1 - 5
    PDF (429 KB) | HTML

    Cardiac arrest is cited as the major contributor to sudden and unexpected deaths in the modern, stress-filled lifestyle around the globe. A system that automatically warns the person about the onset of disease in advance would be a boon to society, and this is achievable by applying advances in wireless technology to existing patient monitoring systems. This paper proposes the development of a module that provides mobility to the doctor and the patient by adopting a simple and popular technique: detecting abnormalities in the patient's biosignal in advance and sending an alert SMS to the doctor through the Global System for Mobile Communications (GSM), so that suitable precautionary measures can be taken before the patient reaches a critical level.

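    A hedged sketch of the alert path: a threshold check on the extracted heart rate, then an SMS through a serial-attached GSM modem using the standard AT+CMGF/AT+CMGS text-mode commands. estimate_heart_rate(), the port name, the phone number, and the bpm thresholds are all illustrative assumptions.

        import serial                       # pyserial

        def send_sms(port, number, text):
            gsm = serial.Serial(port, 9600, timeout=5)
            gsm.write(b"AT+CMGF=1\r")                   # SMS text mode
            gsm.read(64)
            gsm.write(f'AT+CMGS="{number}"\r'.encode())
            gsm.read(64)
            gsm.write(text.encode() + b"\x1a")          # Ctrl-Z submits the message
            gsm.close()

        def monitor(samples, lo=50, hi=120):            # bpm limits are assumptions
            bpm = estimate_heart_rate(samples)          # hypothetical ECG detector
            if not lo <= bpm <= hi:
                send_sms("/dev/ttyUSB0", "+910000000000", f"ECG alert: {bpm} bpm")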
  • Overview analysis of reusability metrics in software development for risk reduction

    Page(s): 1 - 5
    PDF (188 KB) | HTML

    Software engineering is a strategy for producing quality software in which the software development life cycle involves a sequence of different activities. Risk analysis helps avoid the adaptive reusability problem in the transformation of code, and its main feature is that it encourages reusability, which paves the way for the use of functions and packages. Code development is generally based on design patterns, so reusable components are introduced to reduce the risk factors in projects: risk is directly proportional to the complexity of a system and inversely proportional to the number of reusable components used in a project. Software organizations always focus on improving technology so as to reduce overheads in people management, increase customer satisfaction, and cut production time and cost; reusability is motivated by the need to reduce the time and cost of software development.

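    Taking the abstract's two proportionalities at face value gives a toy risk index; the abstract defines no concrete formula, so the constant k and the whole expression are illustrative.

        def risk_index(complexity, reusable_components, k=1.0):
            # risk ~ complexity, risk ~ 1 / reusable components (per the abstract)
            return k * complexity / max(reusable_components, 1)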
  • An unsupervised learning approach to pixel based image retrieval

    Page(s): 1 - 5
    PDF (1338 KB) | HTML

    In this paper, we study image indexing through pixel variations. Grouping images into meaningful categories to retrieve useful information is a challenging and important problem; content based image retrieval addresses the problem of retrieving images relevant to the user's needs from image databases. Here we measure the similarity between a query image and the images already stored in the database: an image is given as input and compared with the training concepts in the database. To perform exact matching of images, certain options are provided for the user's choice along with the query, including the color, location, shape, and size of the input image.

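    A minimal pixel-level matcher in the spirit of the abstract: colour-histogram intersection between the query and each stored image, with the best scores returned. A real CBIR system would add the shape, size, and location cues the abstract mentions.

        import numpy as np

        def histogram(img, bins=8):                    # img: H x W x 3, uint8
            h, _ = np.histogramdd(img.reshape(-1, 3), bins=bins,
                                  range=[(0, 256)] * 3)
            return h.ravel() / h.sum()

        def retrieve(query, database, top=5):          # database: {key: image array}
            q = histogram(query)
            scores = {k: np.minimum(q, histogram(v)).sum()
                      for k, v in database.items()}
            return sorted(scores, key=scores.get, reverse=True)[:top]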
  • A study of Object Oriented testing techniques: Survey and challenges

    Page(s): 1 - 5
    PDF (168 KB) | HTML

    Object-orientation has rapidly become accepted as the preferred paradigm for large-scale system design. The product created during a software development effort has to be tested, since bugs may be introduced during its development. This paper presents a comprehensive survey of the various software testing methodologies available for testing object-oriented software. Software based on object oriented technology poses challenges to conventional testing techniques, since it involves concepts such as inheritance and polymorphism, and it is essential to monitor the behavior of every object during its lifetime by keeping track of where the object is defined and where that definition is referenced. In this paper we discuss how unit testing, integration testing, and system testing are carried out in the object oriented environment. To accommodate these strategies, several new techniques have been proposed, such as fault-based testing, scenario-based testing, and surface structure testing, and a brief discussion of these techniques is also presented.
