
2012 International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME)

Date: 21-23 March 2012


Displaying Results 1 - 25 of 84
  • Combining local and global feature for object recognition using SVM-KNN

    Publication Year: 2012 , Page(s): 1 - 7
    Cited by:  Papers (1)

    In this paper, a framework for recognizing an object in a given image using both local and global features is discussed. The proposed method combines two methods from the literature, K-Nearest Neighbor (KNN) and Support Vector Machine (SVM). For the feature vector, Hu's moment invariants, which are invariant to translation, rotation and scaling, serve as the global feature, while the Hessian-Laplace detector with the PCA-SIFT descriptor provides the local feature. The method runs as a two-stage process: in the first stage, KNN computes the distances from the query image to all training images and picks the K nearest neighbors; in the second stage, a local SVM trained on those neighbors recognizes the object. The proposed method is implemented in MATLAB and tested on the COIL-100 database, and the results are reported. To demonstrate its efficiency, a back-propagation neural network (BPN) model is also evaluated and comparative results are given.
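
    As a rough illustration of the two-stage SVM-KNN idea described above, the sketch below assumes scikit-learn and precomputed feature vectors (the Hu-moment and PCA-SIFT extraction is omitted); the function name svm_knn_predict and the choice k=10 are illustrative, not taken from the paper.

        import numpy as np
        from sklearn.neighbors import NearestNeighbors
        from sklearn.svm import SVC

        def svm_knn_predict(X_train, y_train, x_query, k=10):
            """Stage 1: pick the k nearest training samples; Stage 2: train a local SVM on them."""
            nn = NearestNeighbors(n_neighbors=k).fit(X_train)
            _, idx = nn.kneighbors(x_query.reshape(1, -1))
            X_local, y_local = X_train[idx[0]], y_train[idx[0]]
            if len(np.unique(y_local)) == 1:                  # all neighbors agree, no SVM needed
                return y_local[0]
            clf = SVC(kernel="linear").fit(X_local, y_local)  # local SVM on the neighborhood only
            return clf.predict(x_query.reshape(1, -1))[0]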

  • An augmented prerequisite concept relation map design to improve adaptivity in e-learning

    Publication Year: 2012 , Page(s): 8 - 13
    Cited by:  Papers (1)

    Owing to advances in information and communication technology and an increasingly varied learner population, e-learning has become popular. To achieve adaptivity in learning, a predefined concept map is used to provide proper guidance, but most existing research does not properly consider the weight of the concepts in each learning item. In this study, a three-phase prerequisite concept map formulation for adaptivity is proposed. The first phase discards all unrelated items that might distract the subsequent analysis, using norm-referenced item analysis to compute item discrimination and eliminate irrelevant items. The second phase computes the grade association rules; the weight of each concept in every learning item is considered, and prerequisite concept sets are derived that are free of redundancy and cycles, facilitating the next step of the procedure. The final phase constructs the concept map with maximum confidence. The resulting prerequisite concept map can be used in a tutoring system, thereby enhancing adaptivity in e-learning.
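
    The first phase relies on a norm-referenced item-discrimination index; a minimal sketch is given below, assuming the conventional upper/lower-group form D = (U - L) / n with 27% groups. The 0.2 cut-off mentioned in the comment is an illustrative choice, not a value from the paper.

        import numpy as np

        def item_discrimination(scores, item_correct, group_frac=0.27):
            """scores: total test score per learner; item_correct: 0/1 numpy array for one item."""
            order = np.argsort(scores)
            n = max(1, int(len(scores) * group_frac))
            lower, upper = order[:n], order[-n:]
            return (item_correct[upper].sum() - item_correct[lower].sum()) / n

        # Items with low discrimination (e.g. D < 0.2) could be discarded before rule mining.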

  • A new approach to the design of knowledge base using XCLS clustering

    Publication Year: 2012 , Page(s): 14 - 19

    A knowledge base is a special kind of database used for the storage and retrieval of knowledge. From the perspective of knowledge creators, creating and maintaining a knowledge base is a crucial activity in the knowledge-management life cycle. This paper presents a novel approach to knowledge-base creation whose main focus is extracting knowledge from unstructured web documents. Preprocessing techniques such as tokenizing and stemming are applied to the unstructured input documents, and similarity and redundancy computation removes duplicate knowledge. The extracted knowledge is organized and converted to XML documents, which are clustered with XCLS, and a knowledge base is designed for storing the resulting XML documents. A query interface has been developed to retrieve the stored knowledge. To assess the usefulness and ease of use of the prototype, the system was evaluated with the Technology Acceptance Model (TAM); the results are promising.
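
    A minimal sketch of the preprocessing and redundancy-removal step is given below, assuming simple regex tokenization and Jaccard similarity with an illustrative 0.8 threshold; the paper's actual stemmer and similarity measure are not specified here.

        import re

        def tokens(text):
            return set(re.findall(r"[a-z]+", text.lower()))

        def jaccard(a, b):
            return len(a & b) / max(1, len(a | b))

        def deduplicate(docs, threshold=0.8):
            """Keep a document only if it is not a near-duplicate of one already kept."""
            kept, kept_tokens = [], []
            for doc in docs:
                t = tokens(doc)
                if all(jaccard(t, k) < threshold for k in kept_tokens):
                    kept.append(doc)
                    kept_tokens.append(t)
            return kept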

  • TOPCRAWL: Community mining in web search engines with emphasize on topical crawling

    Publication Year: 2012 , Page(s): 20 - 24

    Web mining systems exploit the redundancy of data published on the Web to automatically extract information from existing web documents. The crawler is an important module of a web search engine, and its quality directly affects search quality. A crawler may interact with millions of hosts over weeks or months, so robustness, flexibility and manageability are of major importance. Given some seed URLs, the crawler retrieves the corresponding web pages, parses the HTML files, adds newly discovered URLs to its queue, and returns to the first phase of this cycle; it can also extract other information from the HTML as it parses it. This paper proposes TOPCRAWL, a framework and crawling algorithm that emphasizes topic relevancy and outperforms state-of-the-art approaches with respect to the recall achievable within a given period of time. The method also presents results in community format and uses a new combination of ideas and techniques to identify and exploit navigational structures of websites, such as hierarchies, lists or maps. The algorithm is simulated with the web mining tool Deixto, the basic idea is implemented in Java, and results are given. Comparisons with existing focused-crawling techniques show that the new method yields a significant increase in recall whilst maintaining precision.
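
    The fetch-parse-enqueue cycle described above can be sketched with the Python standard library alone; the keyword-counting relevance test below is an illustrative stand-in for TOPCRAWL's actual topic model and community detection, and the function names are invented for the example.

        import re
        import urllib.request
        from collections import deque
        from urllib.parse import urljoin

        def relevance(html, keywords):
            text = html.lower()
            return sum(text.count(k) for k in keywords)

        def crawl(seeds, keywords, max_pages=50):
            queue, seen, relevant = deque(seeds), set(seeds), []
            while queue and len(relevant) < max_pages:
                url = queue.popleft()
                try:
                    html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
                except Exception:
                    continue                                   # skip unreachable pages
                if relevance(html, keywords) > 0:
                    relevant.append(url)                       # topically relevant page
                for link in re.findall(r'href="(http[^"]+)"', html):
                    link = urljoin(url, link)
                    if link not in seen:
                        seen.add(link)
                        queue.append(link)
            return relevant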

  • Increasing cluster uniqueness in Fuzzy C-Means through affinity measure

    Publication Year: 2012 , Page(s): 25 - 29

    Clustering is a widely used technique in data mining for discovering patterns in large datasets. In this paper the Fuzzy C-Means algorithm is analyzed, and it is found that the quality of the resulting clusters depends on the initial seeds, whether they are selected sequentially or randomly. Fuzzy C-Means uses a K-Means-style step for the initial clustering and then computes the degrees of membership. Because Fuzzy C-Means is very similar to K-Means, K-Means is outlined and it is shown how its drawbacks are rectified by the UCAM (Unique Clustering with Affinity Measure) algorithm; UCAM is then refined into a new variant, Fuzzy-UCAM. Fuzzy C-Means must be initialized with the number of clusters C and the initial seeds, which are difficult to predict accurately for large real-time databases. To overcome this drawback, the paper focuses on developing the Fuzzy-UCAM algorithm, which clusters without requiring initial seeds or a preset number of clusters. Unique clustering is obtained with the help of affinity measures.
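
    For reference, a compact sketch of the standard Fuzzy C-Means membership and center updates that the paper builds on is shown below; the UCAM affinity measure itself is not reproduced, m = 2 is the usual fuzzifier choice, and the random initialization is illustrative.

        import numpy as np

        def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c))
            U /= U.sum(axis=1, keepdims=True)                      # fuzzy memberships
            for _ in range(iters):
                W = U ** m
                centers = W.T @ X / W.sum(axis=0)[:, None]         # weighted cluster centers
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                U = 1.0 / d ** (2.0 / (m - 1.0))
                U /= U.sum(axis=1, keepdims=True)                  # normalize rows
            return centers, U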

  • Segregating unique service object from multi-web sources for effective visualization

    Publication Year: 2012 , Page(s): 30 - 35

    Web services describe a standardized way of integrating Web-based applications using the XML (Extensible Markup Language), SOAP (Simple Object Access Protocol), WSDL and UDDI (Universal Description, Discovery and Integration) open standards over an Internet protocol backbone. WSDL (Web Service Definition Language) is used for describing the available services. The dynamic approach starts by crawling the Web for web services, gathering the WSDL service descriptions and related documents along the way. Web APIs provide the methodology for building unique service objects from multiple web resources. In this semantic search engine, if web users are satisfied with a description they can follow it to the web page; otherwise they can move to another link. This query-enhancement process is exploited to learn useful information that helps generate related queries. In this work, an add-on is generated automatically, in contrast to the existing system; add-ons are programs integrated into the browser application that usually provide additional functionality. Finally, this work gives an overview of how to segregate the unique service object (USO) from web resources using a Bookshelf data structure and use it to semantically annotate the resulting services in visual mode.

  • Hybrid spamicity score approach to web spam detection

    Publication Year: 2012 , Page(s): 36 - 40
    Cited by:  Papers (1)

    Web spamming refers to actions intended to mislead search engines and give some pages a higher ranking than they deserve. Fundamentally, web spam is designed to pollute search engines and corrupt the user experience by driving traffic to particular spammed web pages, regardless of the merits of those pages. Recently there has been a dramatic increase in the amount of web spam, leading to degraded search results. Most existing web spam detection methods are supervised and require a large set of training web pages. The proposed system studies the problem of unsupervised web spam detection. It introduces the notion of spamicity to measure how likely a page is to be spam; spamicity is a more flexible measure than traditional supervised classification. In the proposed system, link-spam and content-spam techniques are used to determine the spamicity score of a web page, and a threshold set by empirical analysis classifies the page as spam or non-spam.
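
    The thresholding step can be sketched as below: a link-based and a content-based spamicity score are combined and compared against an empirically chosen threshold. The individual score functions, weights and threshold shown are placeholders for illustration, not the paper's formulas.

        def content_spamicity(page):
            """Fraction of words drawn from a small list of spam keywords (illustrative)."""
            words = page["text"].lower().split()
            spam_words = {"cheap", "casino", "winner", "free", "loan"}
            return sum(w in spam_words for w in words) / max(1, len(words))

        def link_spamicity(page):
            """Share of outlinks pointing to already-flagged hosts (illustrative)."""
            out = page["outlinks"]
            return sum(host in page["flagged_hosts"] for host in out) / max(1, len(out))

        def is_spam(page, w_link=0.5, w_content=0.5, threshold=0.3):
            score = w_link * link_spamicity(page) + w_content * content_spamicity(page)
            return score >= threshold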

  • Electronic voting machine — A review

    Publication Year: 2012 , Page(s): 41 - 48
    Cited by:  Papers (1)

    An Electronic Voting Machine (EVM) is a simple electronic device used to record votes in place of the ballot papers and boxes used in conventional voting systems. The right to vote, or simply voting in elections, forms the basis of democracy. In all earlier elections, whether state or central, a voter cast a vote for a favoured candidate by stamping against that candidate's name and then folding the ballot paper as prescribed before placing it in the ballot box. This is a long, time-consuming process that is very prone to error, and it continued until the election scene was completely changed by the electronic voting machine: no more ballot paper, ballot boxes or stamping, all of it condensed into a simple box called the ballot unit of the EVM. Because biometric identifiers cannot easily be misplaced, forged or shared, they are considered more reliable for person recognition than traditional token- or knowledge-based methods, so electronic voting systems should be improved using current technologies such as biometrics. This article presents a complete review of voting devices and issues, and a comparison of voting methods and biometric EVMs.

  • Computational time factor analysis of K-means algorithm on actual and transformed data clustering

    Publication Year: 2012 , Page(s): 49 - 54
    Cited by:  Papers (1)

    Clustering is the process of partitioning a set of objects into a number of distinct groups, or clusters, such that objects in the same group are more similar to each other than to objects in different groups. Clusters are a simple and compact representation of a dataset and are useful in applications where we have no prior knowledge about the data. Because of its wide range of applications, there are many approaches to data clustering that vary in complexity and effectiveness. K-means is a standard, landmark clustering algorithm, but as a multi-pass algorithm it has higher time complexity, whereas real-time applications demand time efficiency. Hence, a new approach using the Wiener transformation is presented here: the data are Wiener-transformed before K-means clustering. The computational results show that the proposed approach is highly time-efficient and also finds very fine clusters.
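
    The pipeline can be sketched as below: cluster both the raw and the Wiener-filtered data and time the two runs. scipy.signal.wiener stands in for the Wiener transformation and the dataset is synthetic, so the timings are only indicative.

        import time
        import numpy as np
        from scipy.signal import wiener
        from sklearn.cluster import KMeans

        X = np.random.default_rng(0).normal(size=(5000, 8))       # synthetic data
        Xw = wiener(X)                                            # Wiener-transformed copy

        for name, data in (("raw", X), ("wiener", Xw)):
            start = time.perf_counter()
            KMeans(n_clusters=4, n_init=10, random_state=0).fit(data)
            print(name, round(time.perf_counter() - start, 3), "s")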

  • Binary data clustering based on Wiener transformation

    Publication Year: 2012 , Page(s): 55 - 60

    Clustering is the process of grouping similar items, and it becomes very tedious as data dimensionality and sparsity increase. Binary data is the simplest form of data used in information systems for very large databases, and it is efficient in terms of computation and memory for representing categorical data. Usually binary data is clustered by treating 0 and 1 as numerical values. In this paper, binary data clustering is performed by first transforming the binary data to real values with the Wiener transformation. The Wiener transformation is a linear transformation based on statistics that is optimal in terms of mean square error. Computational results show that clustering based on the Wiener transformation is very efficient in terms of both objective and subjective measures.

  • Modified backpropagation algorithm with adaptive learning rate based on differential errors and differential functional constraints

    Publication Year: 2012 , Page(s): 61 - 67

    In this paper, a new adaptive-learning-rate algorithm for training a single-hidden-layer neural network is proposed. The adaptive learning rate is derived by differentiating the linear and nonlinear errors and the functional constraints: a weight-decay term at the hidden layer and a penalty term at the output layer. Since the learning-rate calculation involves the first-order derivatives of the linear and nonlinear errors and the second-order derivatives of the functional constraints, the proposed algorithm converges quickly. Simulation results show the advantages of the proposed algorithm.
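
    As a baseline, the sketch below trains a single-hidden-layer network with a simple bold-driver-style learning-rate adaptation (grow the rate when the loss falls, shrink it when the loss rises); the paper's own derivative-based learning-rate formula is not reproduced here.

        import numpy as np

        def train(X, y, hidden=8, epochs=200, lr=0.1, seed=0):
            """X: (n, d) inputs; y: (n, 1) targets."""
            rng = np.random.default_rng(seed)
            W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
            W2 = rng.normal(0, 0.5, (hidden, 1));          b2 = np.zeros(1)
            prev = np.inf
            for _ in range(epochs):
                h = np.tanh(X @ W1 + b1)                          # hidden layer
                out = h @ W2 + b2                                 # linear output layer
                err = out - y
                loss = float(np.mean(err ** 2))
                lr = lr * 1.05 if loss < prev else lr * 0.5       # adaptive learning rate
                prev = loss
                g_out = 2 * err / len(X)
                gW2, gb2 = h.T @ g_out, g_out.sum(axis=0)
                g_h = (g_out @ W2.T) * (1 - h ** 2)               # backprop through tanh
                gW1, gb1 = X.T @ g_h, g_h.sum(axis=0)
                W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
            return W1, b1, W2, b2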

  • Implementation of VLSI-oriented FELICS algorithm using Pseudo Dual-Port RAM

    Publication Year: 2012 , Page(s): 68 - 73

    This paper presents a fast, efficient, lossless image compression algorithm, FELICS. It consists of two coding techniques, simplified adjusted binary coding and Golomb-Rice coding, which provide lossless compression for high-throughput applications. Two-level parallelism with four-stage pipelining is adopted, and a pseudo dual-port RAM is used, which improves processing speed and reduces area and power consumption. The proposed architecture can be used for high-definition display applications.
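
    A software sketch of the Golomb-Rice coding used in FELICS is shown below (the simplified adjusted binary code and the hardware pipeline are not modelled): each non-negative value is split into a unary-coded quotient and a k-bit binary remainder.

        def golomb_rice_encode(values, k):
            """Return a bit string; k is the Rice parameter (divisor is 2**k)."""
            bits = []
            for v in values:
                q, r = v >> k, v & ((1 << k) - 1)
                bits.append("1" * q + "0")                        # unary-coded quotient
                bits.append(format(r, "0{}b".format(k)))          # k-bit binary remainder
            return "".join(bits)

        # e.g. golomb_rice_encode([0, 3, 9], k=2) == "000" + "011" + "11001"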

  • A new enhanced technique for link farm detection

    Publication Year: 2012 , Page(s): 74 - 81

    Search engine spam is a web page designed to artificially inflate its search engine ranking. Such spam has recently increased dramatically and creates problems for both search engines and web surfers: it degrades search results, consumes more memory and time when building indexes, and frustrates users with irrelevant results. Search engines have tried many techniques to filter out these spam pages before they can appear on the results page. Spammers try to increase the PageRank of certain spam pages by creating a large number of links pointing to them. We have designed and developed a system based on a spamicity score that detects spam hosts or pages on the Web; the public Web Spam UK 2007 dataset, annotated at the host level, is used for all results reported here. The system uses key features of popular link-based algorithms to detect spam more effectively. This paper surveys the various ways of creating spam pages and the current methods used to detect them, and presents a new approach to improving link-spam detection using a spamicity score for term spam. The approach uses the SVMLight tool to detect link spam, considering both the link structure of the Web and page contents; these statistical features are used to build a classifier that is tested over a large collection of web link spam. Link farms can be identified using the web graph, classification with SVMLight, degree-based measures, PageRank, TrustRank and Truncated PageRank. The spam classifier makes use of the WordNet word database and SVMLight to classify web links as spam or not spam, relying not only on quantitative data extracted from the pages but also on qualitative properties, mainly of the page links.
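
    The classification step might look roughly like the sketch below, which uses scikit-learn's LinearSVC as a stand-in for SVMLight and a few illustrative link features; the feature files named here are hypothetical, and the paper's actual feature set is richer.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import LinearSVC

        # Hypothetical files: one row per host with [in_degree, out_degree, PageRank, TrustRank].
        X = np.loadtxt("host_features.csv", delimiter=",")
        y = np.loadtxt("host_labels.csv", delimiter=",")          # 1 = spam host, 0 = normal

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = LinearSVC(C=1.0, max_iter=5000).fit(X_tr, y_tr)
        print("accuracy:", clf.score(X_te, y_te))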

  • Genetic clustering with Bee Colony Optimization for flexible protein-ligand docking

    Publication Year: 2012 , Page(s): 82 - 87

    In this paper, flexible protein-ligand docking is carried out using genetic clustering with Bee Colony Optimization. The molecular docking problem is to find a good position and orientation for docking a small ligand molecule to a large receptor molecule; it is formulated as an optimization problem consisting of an optimization method and a clustering technique. Clustering is a data mining task that groups data on the basis of similarity. The genetic clustering algorithm combines a Genetic Algorithm (GA), an evolutionary algorithm inspired by biological evolution that has been applied to clustering, with the K-medians clustering algorithm; K-medians is a variation of K-means in which each cluster centroid is determined by the median rather than the mean. Genetic clustering is combined with the Bee Colony Optimization (BCO) algorithm, a swarm-intelligence algorithm first introduced by Karaboga, to solve the molecular docking problem; the approach builds on the fuzzy clustering with Artificial Bee Colony Optimization algorithm proposed by Dervis Karaboga and Celal Ozturk. In this work we propose a new algorithm called Genetic Clustering Bee Colony Optimization (GCBCO). Its performance is tested on 10 docking instances from the PDB bind core set and compared with PSO and ACO algorithms; the results show that GCBCO finds ligand poses with better energy levels than the existing search algorithms.
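
    For reference, the K-medians update used inside the genetic clustering component can be sketched as follows: assign points to their nearest center, then move each center to the component-wise median of its members. The GA and BCO layers, and the docking energy function, are omitted.

        import numpy as np

        def k_medians(X, k, iters=20, seed=0):
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), k, replace=False)].astype(float)
            for _ in range(iters):
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
                labels = d.argmin(axis=1)
                for j in range(k):
                    members = X[labels == j]
                    if len(members):
                        centers[j] = np.median(members, axis=0)   # median, not mean
            return centers, labels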

  • Mining coregulated biclusters from gene expression data

    Publication Year: 2012 , Page(s): 88 - 93

    The objective of this paper is mining coregulated biclusters from gene expression data. Gene expression is the process that produces a functional product from the information in a gene, and data mining is used to find relevant and useful information in databases. Clustering groups genes according to given conditions; biclustering algorithms belong to a distinct class of clustering algorithms that cluster the rows and columns of the gene expression matrix simultaneously. In this paper a new algorithm, the Enhanced Bimax algorithm, is proposed based on the Bimax algorithm [7]. A normalization technique is included that is used to display coregulated biclusters from gene expression data and to group the genes in a particular order. In this work, a synthetic dataset is used to display the coregulated genes.

  • Performance evaluation of employees of an organization using formal concept analysis

    Publication Year: 2012 , Page(s): 94 - 98
    Cited by:  Papers (2)

    Formal Concept Analysis (FCA) is a mathematical framework that depicts knowledge derived from data represented as a formal context. The objective of this paper is to apply FCA to analyze the key performance areas (KPAs) of the faculty of an institute. In constructing the formal context, the faculty members are treated as objects and their KPAs as attributes. This context is processed with FCA, and the derived knowledge is analyzed to measure faculty performance.
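
    A tiny illustration of the FCA derivation operators on a made-up faculty/KPA context is given below (the objects, attributes and incidence values are invented): a formal concept is a pair (extent, intent) in which each part is the derivation of the other.

        context = {                          # object -> attributes it has (hypothetical data)
            "facultyA": {"teaching", "research"},
            "facultyB": {"teaching", "admin"},
            "facultyC": {"teaching", "research", "admin"},
        }
        attributes = {"teaching", "research", "admin"}

        def intent(objects):                 # attributes shared by all given objects
            return set.intersection(*(context[o] for o in objects)) if objects else set(attributes)

        def extent(attrs):                   # objects having all given attributes
            return {o for o, has in context.items() if attrs <= has}

        ext = extent({"research"})
        print(ext, intent(ext))              # extent of {research} and its closed intent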

  • Vegetable price prediction using data mining classification technique

    Publication Year: 2012 , Page(s): 99 - 102
    Cited by:  Papers (1)

    Every sector in this digital world is undergoing dramatic change under the influence of information technology, and the agricultural sector needs more support for its development in developing countries such as India. Price prediction helps farmers, and also the government, make effective decisions. Given the complexity of vegetable price prediction, the characteristics of neural networks, such as self-adaptation, self-learning and high fault tolerance, are exploited to build a back-propagation neural network model for predicting vegetable prices. A prediction model was set up by applying the neural network, and, taking the tomato as an example, the parameters of the model are analyzed through experiment. Finally, the results of the back-propagation network show the absolute percentage error of monthly and weekly vegetable price predictions, and the accuracy of the price prediction is analyzed.
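
    A minimal sketch of such a back-propagation price-prediction setup with scikit-learn's MLPRegressor is shown below; the lagged-price features and the file name weekly_tomato_prices.csv are assumptions made for illustration, not the paper's actual data pipeline.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        prices = np.loadtxt("weekly_tomato_prices.csv")           # hypothetical price series

        lags = 4                                                  # predict next week from last 4
        X = np.array([prices[i:i + lags] for i in range(len(prices) - lags)])
        y = prices[lags:]

        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)
        print("next-week forecast:", float(model.predict(X[-1:])[0]))
        print("MAPE: %.1f%%" % float(np.mean(np.abs((model.predict(X) - y) / y)) * 100))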

  • Unsupervised hybrid PSO — Relative reduct approach for feature reduction

    Publication Year: 2012 , Page(s): 103 - 108

    Feature reduction selects the more informative features and reduces the dimensionality of a database by removing irrelevant features. Selecting features in an unsupervised setting is harder than supervised feature selection because there are no class labels to guide the search for relevant features. Rough set theory has proved to be an efficient tool for feature reduction and needs no additional information, while PSO (Particle Swarm Optimization) is an evolutionary computation technique that finds globally optimal solutions in many applications. This work combines the benefits of PSO and rough sets for better data reduction. The paper describes a novel Unsupervised PSO-based Relative Reduct (US-PSO-RR) method for feature selection, which employs a population of particles in a multi-dimensional space together with a dependency measure. The performance of the proposed algorithm is compared with the existing unsupervised feature selection methods USQR (UnSupervised Quick Reduct) and USSR (UnSupervised Relative Reduct), and the effectiveness of the approach is measured using clustering evaluation indices.

  • Location-aware service discovery in next generation wireless networks

    Publication Year: 2012 , Page(s): 109 - 114

    The service discovery mechanism in next-generation wireless networks should adapt to changes in both the user's location and environment, which can be achieved by appropriately predicting user mobility. As a result, an effective user mobility prediction technique needs to be designed to offer services regardless of the user's location. In this paper, we propose a location-aware service discovery protocol for next-generation wireless networks. The technique consists of three phases: handoff triggering based on the received signal strength of the base station (BS), client mobility prediction from the client's velocity and direction, and selection of the BS with the maximum available bandwidth and residual power. Simulation results show that the proposed approach minimizes query latency.

  • An optimized cluster based approach for multi-source multicast routing protocol in mobile ad hoc networks with differential evolution

    Publication Year: 2012 , Page(s): 115 - 120
    Cited by:  Papers (1)

    This paper presents a new cluster-based multicast routing protocol for mobile ad hoc networks with several improvements, including support for multiple sources, less redundancy in packet delivery and better forwarding efficiency. In the proposed work, the differential evolution algorithm is used to optimize the clusters and hence the cluster heads. The proposed routing protocol has been simulated in NS2, and the results show better delivery ratio, control overhead and forwarding efficiency as the number of multicast sources and destinations increases.

  • Wavelet-based multiple access technique for mobile communications

    Publication Year: 2012 , Page(s): 121 - 124

    Wavelet theory has emerged as a mathematical tool that can be applied in many fields, such as image processing, biomedical engineering, radar, physics, control systems and communication systems. An important communications application of wavelets is multiple access, and among multiple-access applications one of the most notable is wavelet packet-based multiple-access communication. The two new multiple-access systems are Scale-Time-Code Division Multiple Access (STCDMA) and Scale-Code Division Multiple Access (SCDMA). In an STCDMA system, Direct-Sequence (DS) Code-Division Multiple Access (CDMA) is used in each time slot to identify multiple users; if time-division multiplexing is excluded in each scale, SCDMA, a multimedia system, is obtained. These systems are analyzed over a synchronous Additive White Gaussian Noise (AWGN) channel using a conventional detector and a multiuser detector based on the decorrelating detector, for real- and complex-valued PN sequences, and they perform better with complex-valued sequences than with real-valued ones. SCDMA can also be analyzed over an asynchronous AWGN channel using a conventional detector with real-valued sequences. SCDMA is attractive compared to DS-CDMA because it can transmit information messages at different rates; more specifically, STCDMA is user-advantageous and SCDMA is information-advantageous. STCDMA and SCDMA require good PN sequences such as Kasami sequences because of their reuse capability, whereas DS-CDMA has only a limited number of them. Kasami sequences are optimal since their maximum cross-correlation value achieves the Welch lower bound; their main purpose here is to decrease multiple-access interference, and such PN sequences are very useful in multipath and jamming environments and for synchronization.

  • An improved security mechanism for high-throughput multicast routing in wireless mesh networks against Sybil attack

    Publication Year: 2012 , Page(s): 125 - 130

    Wireless Mesh Networks (WMNs) have become an important domain in wireless communications. They comprise a number of static wireless routers that form an access network connecting end users to IP-based services. Unlike conventional WLAN deployments, wireless mesh networks offer multihop routing, facilitating easy and cost-effective deployment. This paper concentrates on efficient and secure multicast routing in such wireless mesh networks and identifies novel attacks against high-throughput multicast protocols through the S-ODMRP protocol. The Sybil attack, in which a node illegitimately claims multiple identities, has recently been observed to be among the most harmful attacks in WMNs; this paper systematically analyzes the threat it poses. The Sybil attack is countered by a defense mechanism based on the Random Key Predistribution (RKP) technique. The performance of the proposed approach, which integrates S-ODMRP and RKP, is evaluated using throughput as the performance metric, and the experimental results show that it provides good security against the Sybil attack with very high throughput.
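
    A toy sketch of the Random Key Predistribution idea is shown below: every node is preloaded with a random subset of a global key pool, and two neighbors accept each other only if their key rings intersect. The pool and ring sizes are arbitrary small values chosen for illustration.

        import random

        POOL_SIZE, RING_SIZE = 1000, 50
        key_pool = list(range(POOL_SIZE))

        def key_ring(node_id):
            """Keys preloaded on a node (seeded so the ring is reproducible per node)."""
            return set(random.Random(node_id).sample(key_pool, RING_SIZE))

        def can_establish_link(node_a, node_b):
            """Neighbors can set up a pairwise key only if their rings overlap."""
            return len(key_ring(node_a) & key_ring(node_b)) > 0

        print(can_establish_link("node-1", "node-2"))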

  • Area compactness architecture for elliptic curve cryptography

    Publication Year: 2012 , Page(s): 131 - 134

    Elliptic curve cryptography (ECC) is an alternative to traditional public-key cryptographic systems. Although RSA (Rivest-Shamir-Adleman) has been the most prominent cryptographic scheme, it is being replaced by ECC in many systems because ECC provides higher security at shorter bit lengths than RSA. In elliptic-curve-based algorithms, elliptic curve point multiplication is the most computationally intensive operation, so implementing point multiplication in hardware makes ECC more attractive for high-performance servers and small devices. This paper examines the computational scope of the Montgomery ladder, which is effective for computing Elliptic Curve Point Multiplication (ECPM) when compared to the Elliptic Curve Digital Signature Algorithm (ECDSA). Compactness is achieved by shortening the data paths using multipliers and carry-chain logic; a multiplier performs effectively in terms of area/time when its word size is large. A countermeasure to Simple Power Analysis (SPA) attacks is also provided. In the Montgomery modular inversion, a 33% saving in Montgomery multiplications is achieved, and a saving of 50% in the number of gates required for the implementation can be achieved.
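
    A minimal software sketch of the Montgomery ladder on a toy short Weierstrass curve over a small prime field is given below (not a cryptographic curve): the scalar bits are scanned from most to least significant and both working points are updated on every bit, so the operation sequence does not depend on the bit values, which is what gives resistance to SPA.

        P_MOD, A = 97, 2                               # toy curve y^2 = x^3 + 2x + 3 mod 97
        INF = None                                     # point at infinity

        def add(P, Q):
            if P is None: return Q
            if Q is None: return P
            (x1, y1), (x2, y2) = P, Q
            if x1 == x2 and (y1 + y2) % P_MOD == 0:
                return INF
            if P == Q:
                lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
            else:
                lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
            x3 = (lam * lam - x1 - x2) % P_MOD
            return (x3, (lam * (x1 - x3) - y1) % P_MOD)

        def montgomery_ladder(k, P):
            R0, R1 = INF, P
            for bit in bin(k)[2:]:                     # scan scalar bits MSB -> LSB
                if bit == "1":
                    R0, R1 = add(R0, R1), add(R1, R1)
                else:
                    R1, R0 = add(R0, R1), add(R0, R0)
            return R0                                  # R0 = k * P

        print(montgomery_ladder(5, (0, 10)))           # (0, 10) lies on the toy curve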

  • Channel estimation techniques for OFDM systems

    Publication Year: 2012 , Page(s): 135 - 139
    Cited by:  Papers (1)

    In this work we compare different channel estimation algorithms for Orthogonal Frequency Division Multiplexing (OFDM) systems. The results of the Minimum Mean Square Error (MMSE) algorithm are compared with those of the Least Squares (LS) algorithm.
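
    The two estimators can be contrasted on known pilot subcarriers as in the sketch below: the LS estimate divides the received pilots by the transmitted ones, while the MMSE estimate uses the simplified per-tone form H_mmse = H_ls * SNR / (SNR + 1), a scalar shrinkage rather than the full correlation-matrix MMSE estimator.

        import numpy as np

        rng = np.random.default_rng(0)
        N, snr_db = 64, 20                                        # 64 pilot subcarriers, 20 dB SNR
        snr = 10 ** (snr_db / 10)

        H = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)    # true channel (unit power)
        X = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, N))               # QPSK pilot symbols
        noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2 * snr)
        Y = H * X + noise

        H_ls = Y / X                                              # least-squares estimate
        H_mmse = H_ls * snr / (snr + 1)                           # simplified per-tone MMSE shrinkage
        for name, est in (("LS", H_ls), ("MMSE", H_mmse)):
            print(name, "MSE:", float(np.mean(np.abs(est - H) ** 2)))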

  • An efficient heuristic algorithm for fast clock mesh realization

    Publication Year: 2012 , Page(s): 140 - 144

    The application of multiple clock domains with dedicated clock buffers is considered. In this paper, an algorithm is proposed for determining the minimum number of clock domains to be used in multi-domain clock skew scheduling. Non-tree-based distributions provide high tolerance towards process variations. The clock mesh constraints are addressed by two processes: first, simultaneous buffer placement and sizing by a heuristic algorithm satisfies the signal-slew constraints while minimizing the total buffer size; second, a post-processing technique reduces the mesh by deleting certain edges, trading skew tolerance for lower power dissipation. Finally, wire length, power dissipation, nominal skew and skew variation are compared using H-SPICE for benchmark circuits of various sizes.
