
2014 Fifth International Conference on the Applications of Digital Information and Web Technologies (ICADIWT)

Date: 17-19 Feb. 2014


Displaying Results 1-25 of 49
  • Bwasw-Cloud: Efficient sequence alignment algorithm for two big data with MapReduce

    Publication Year: 2014, Page(s): 213-218
    PDF (1113 KB) | HTML

    Next-generation sequencing machines generate sequences at an unprecedented rate, and the sequences they produce, called reads, are no longer short. The reference sequences against which reads are aligned are also increasingly large. Efficiently mapping large numbers of long reads against big reference sequences poses a new challenge to sequence alignment: the alignment algorithm must match two big data sets against each other. To address this problem, we propose a new parallel sequence alignment algorithm called Bwasw-Cloud, optimized for aligning long reads against a large reference (e.g., the human genome). It is modeled after the widely used BWA-SW algorithm and uses the open-source Hadoop implementation of MapReduce. The results show that Bwasw-Cloud can match two big data sets quickly and effectively on a commodity cluster.

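    The abstract names the MapReduce decomposition but not its shape. Below is a minimal plain-Python sketch of that decomposition, assuming a map phase that aligns every read against each reference chunk and a reduce phase that keeps the best hit per read; naive_score is a hypothetical stand-in for the real BWA-SW alignment kernel.

        # Sketch of the MapReduce decomposition for read alignment.
        # naive_score is a hypothetical stand-in for the BWA-SW kernel.
        from collections import defaultdict

        def naive_score(read, chunk):
            """Best ungapped match score of read against chunk (toy kernel)."""
            best_score, best_pos = 0, -1
            for pos in range(len(chunk) - len(read) + 1):
                score = sum(a == b for a, b in zip(read, chunk[pos:pos + len(read)]))
                if score > best_score:
                    best_score, best_pos = score, pos
            return best_score, best_pos

        def map_phase(reads, reference_chunks):
            # Each mapper aligns every read against one reference chunk.
            for chunk_id, chunk in reference_chunks.items():
                for read_id, read in reads.items():
                    score, pos = naive_score(read, chunk)
                    yield read_id, (score, chunk_id, pos)

        def reduce_phase(mapped):
            # The reducer keeps the best-scoring hit per read.
            hits = defaultdict(list)
            for read_id, hit in mapped:
                hits[read_id].append(hit)
            return {read_id: max(hs) for read_id, hs in hits.items()}

        reads = {"r1": "ACGT", "r2": "TTAC"}
        chunks = {"c0": "GGACGTTT", "c1": "TTACGGGA"}
        print(reduce_phase(map_phase(reads, chunks)))
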
  • A novel approach for predicting the length of hospital stay with DBSCAN and supervised classification algorithms

    Publication Year: 2014, Page(s): 207-212
    PDF (141 KB) | HTML

    Patient length of stay is the most commonly employed outcome measure of hospital resource consumption and a standard metric for monitoring hospital performance. Predicting a patient's length of stay is important for effective planning at various levels and helps in the efficient utilization of resources and facilities, so there is a strong demand for accurate and robust prediction models. This paper analyzes various methods for length-of-stay prediction, along with their advantages and disadvantages, and proposes a novel approach for predicting whether a patient's length of stay will exceed one week. The approach uses DBSCAN clustering to create the training set for classification. The prediction models are compared using accuracy, precision and recall, and the results show that using DBSCAN as a precursor to classification gives better results.

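    A minimal scikit-learn sketch of the pipeline the abstract describes: DBSCAN builds the training set by discarding noise points, a supervised classifier is then trained, and the result is scored by precision and recall. The feature matrix, DBSCAN parameters and choice of random forest are illustrative assumptions.

        # Sketch: DBSCAN as a precursor to classification (scikit-learn).
        # Noise points (label -1) are dropped before training; eps/min_samples
        # and the random-forest classifier are illustrative assumptions.
        import numpy as np
        from sklearn.cluster import DBSCAN
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import precision_score, recall_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 5))            # patient features (toy data)
        y = (X[:, 0] + X[:, 1] > 0).astype(int)  # 1 = stay longer than a week

        clusters = DBSCAN(eps=1.5, min_samples=5).fit_predict(X)
        keep = clusters != -1                    # discard noise points
        X_train, X_test, y_train, y_test = train_test_split(
            X[keep], y[keep], test_size=0.3, random_state=0)

        clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
        pred = clf.predict(X_test)
        print("precision", precision_score(y_test, pred),
              "recall", recall_score(y_test, pred))
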
  • A fast narrow band level set formulation for shape extraction

    Publication Year: 2014, Page(s): 137-142
    PDF (409 KB) | HTML

    Shape modeling is an active area of research in computer graphics and computer vision. Shape models aid in the representation and recognition of arbitrarily complex shapes. This paper proposes a fast and computationally efficient narrow band level set algorithm for recovering arbitrary object shapes from various types of image data. The overall computational cost is reduced by applying a five-grid-point-wide narrow band to a variational level set formulation that can be implemented with a simple finite difference scheme. The proposed method is more efficient and has many advantages compared to traditional level set formulations: the periodic reinitialization of the level set function to a signed distance function is completely avoided, and implementation by a simple finite difference scheme reduces computational complexity and ensures faster curve evolution. The level set function is initialized to an arbitrary region in the image domain; this region-based initialization is computationally efficient and flexible. The formulation can form the basis of a shape modeling scheme for implementing solid modeling techniques on free-form shapes in a level set framework. The proposed method has been applied to extract shapes from both synthetic and real images, including some low-contrast medical images, with promising results.

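    A minimal NumPy sketch of the narrow-band idea, assuming a generic curvature-flow update in place of the paper's variational term: the level set function is advanced with simple finite differences only at grid points inside a five-point-wide band around the zero level set.

        # Sketch: one narrow-band update step for a level set function phi.
        # The curvature-flow term is a generic stand-in for the paper's
        # variational formulation; the half-width of 2.5 grid points
        # approximates the five-point-wide band described above.
        import numpy as np

        def narrow_band_step(phi, dt=0.1, half_width=2.5, eps=1e-8):
            gy, gx = np.gradient(phi)              # first derivatives
            gxy, gxx = np.gradient(gx)             # second derivatives
            gyy, gyx = np.gradient(gy)
            # Mean-curvature motion: kappa * |grad phi| by finite differences.
            num = gxx * gy**2 - 2 * gx * gy * gxy + gyy * gx**2
            force = num / (gx**2 + gy**2 + eps)
            band = np.abs(phi) <= half_width       # narrow band mask
            phi_new = phi.copy()
            phi_new[band] += dt * force[band]      # update the band only
            return phi_new

        # Initialize phi to the signed distance of an arbitrary region (a box).
        y, x = np.mgrid[0:64, 0:64]
        phi = np.maximum(np.abs(x - 32), np.abs(y - 32)) - 10.0
        for _ in range(20):
            phi = narrow_band_step(phi)
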
  • Collision probability based Available Bandwidth estimation in Mobile Ad Hoc Networks

    Publication Year: 2014, Page(s): 244-249
    PDF (127 KB) | HTML

    Streaming applications increasingly transfer data in real time over Mobile Ad hoc NETworks (MANETs). Because the channel is shared, such transmissions require an estimate of the Available Bandwidth (ABW) at each node before data transfer. The ABW depends mainly on channel utilization, which can be improved by minimizing packet losses. Researchers have proposed many techniques to minimize packet losses, including models for synchronizing idle periods at the sender-receiver pair before data transfer, for collision probability, and for random waiting. In this paper, we propose a scheme for estimating ABW in MANETs using similar but modified models. The idle-period synchronization model of previous work uses the channel utilization and collision rate at an assumed optimal workload rather than the actual one; we use the actual channel utilization and collision rate. The collision probability model of previous work uses the same Lagrange interpolation polynomial irrespective of node behavior; we compute a separate Lagrange interpolation polynomial at each node dynamically, according to its behavior. The random waiting time calculated in previous work does not consider all waiting-time statistics, which our model of random waiting time adds. The ABW estimated by our scheme is approximately 19.99% more accurate compared to recent work by other researchers. Computing the Lagrange interpolation polynomial at each node dynamically evidently adds computation overhead; we ignore this overhead here, as our goal is the accuracy of ABW estimation, and leave speed optimization to future work.

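    The node-specific Lagrange interpolation polynomial the scheme relies on can be sketched in a few lines; the (channel utilization, collision probability) samples below are invented for illustration.

        # Sketch: building a node-specific Lagrange interpolation polynomial
        # from that node's own (channel utilization, collision probability)
        # samples; the sample values are invented for illustration.
        def lagrange(xs, ys):
            def poly(x):
                total = 0.0
                for i, (xi, yi) in enumerate(zip(xs, ys)):
                    term = yi
                    for j, xj in enumerate(xs):
                        if j != i:
                            term *= (x - xj) / (xi - xj)
                    total += term
                return total
            return poly

        # Observed at this node: utilization -> collision probability.
        util = [0.2, 0.4, 0.6, 0.8]
        p_coll = [0.01, 0.05, 0.14, 0.30]
        node_poly = lagrange(util, p_coll)
        print(node_poly(0.5))   # estimated collision probability at 50% load
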
  • The performance evaluation of proactive fault tolerant scheme over cloud using CloudSim simulator

    Publication Year: 2014, Page(s): 171-176
    PDF (633 KB) | HTML

    The main issues in a cloud-based environment are security, process failure rate and performance. Fault tolerance plays a key role in ensuring high serviceability and reliability in the cloud. Demands for high fault tolerance, serviceability and reliability are becoming unprecedentedly strong, so building a highly fault-tolerant, serviceable and reliable cloud is a critical, challenging and urgently required task. A lot of research is currently underway to analyze how clouds can provide fault tolerance for an application. When there are too many processes and a virtual machine is overloaded, processes fail, causing a lot of rework and annoyance for users. The major causes of process failure at the virtual machine level are overloading of virtual machines and extra resource requirements of existing processes. This paper introduces dynamic load-balancing techniques for the cloud environment in which a RAM (Resource Awareness Module)/broker proactively decides whether a process can be placed on an existing virtual machine or should be assigned to a freshly created or another existing virtual machine, thereby averting faults before they occur. The paper also proposes a mechanism that proactively assesses the load on the virtual machines and, according to the requirement, either creates a new virtual machine or uses an existing one for assigning the process. Once a process is complete, the virtual machine's status is updated at the broker service so that other processes can be assigned to it.

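    A toy sketch of the proactive decision described above, assuming a simple load model: the broker reuses an existing virtual machine only if the process fits under a load threshold, otherwise it creates a fresh one; completed processes report back so capacity is released.

        # Sketch: proactive VM selection by a broker; capacities and the
        # 90% load threshold are illustrative assumptions.
        def place_process(vms, demand, threshold=0.9):
            """vms: dict vm_id -> (used, capacity). Returns chosen vm_id."""
            for vm_id, (used, cap) in vms.items():
                if (used + demand) / cap <= threshold:
                    vms[vm_id] = (used + demand, cap)
                    return vm_id                      # reuse an existing VM
            vm_id = f"vm{len(vms)}"                   # otherwise create afresh
            vms[vm_id] = (demand, 100.0)
            return vm_id

        def complete_process(vms, vm_id, demand):
            used, cap = vms[vm_id]
            vms[vm_id] = (used - demand, cap)         # update broker status

        vms = {"vm0": (80.0, 100.0), "vm1": (30.0, 100.0)}
        print(place_process(vms, 25.0))               # -> vm1
        complete_process(vms, "vm1", 25.0)
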
  • Game theoretic resource allocation in cloud computing

    Publication Year: 2014, Page(s): 36-42
    PDF (363 KB) | HTML

    Considering the everyday proliferation in the number of cloud users, provisioning resources to support all these users becomes a challenging problem. When resource allocation is non-optimal, users may face high costs or performance issues. So, to maximize profit and resource utilization while satisfying all client requests, it is essential for Cloud Service Providers to find ways to allocate resources adaptively under diverse conditions. This is a constrained optimization problem. Each client that submits a request to the cloud has its own best interests in mind, but each competes with other clients in the quest to obtain its required quantum of resources; hence every client is a participant in a competition. A preliminary analysis therefore reveals that the problem can be modelled as a game between clients, and a game-theoretic modelling lets us find an optimal resource allocation by employing game-theoretic concepts. Resource allocation problems are NP-hard, involving VM allocation and migration within and possibly among data centres, and owing to the dynamic nature and number of requests, static methods fail to surmount race conditions. Using a min-max game approach, we propose an algorithm that overcomes these problems, employing utility maximization to solve the resource provisioning and allocation problem. We introduce a new factor into the game, called the utility factor, which considers the time and budget constraints of every user. Resources are provisioned for the tasks having the highest utility for the corresponding resource.

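    The abstract introduces a utility factor built from each user's time and budget constraints without fixing its form; one plausible sketch follows, with resources granted greedily to the highest-utility tasks.

        # Sketch: a utility factor combining each user's time and budget
        # constraints, with resources granted to the highest-utility tasks.
        # The exact utility form is an assumption; the paper does not fix it.
        def utility(task):
            # Higher utility when the deadline is tighter and the budget
            # per requested unit is higher.
            return task["budget"] / (task["units"] * task["deadline"])

        def allocate(tasks, available_units):
            granted = []
            for task in sorted(tasks, key=utility, reverse=True):
                if task["units"] <= available_units:
                    available_units -= task["units"]
                    granted.append(task["name"])
            return granted

        tasks = [
            {"name": "t1", "units": 4, "deadline": 2.0, "budget": 40.0},
            {"name": "t2", "units": 6, "deadline": 8.0, "budget": 30.0},
            {"name": "t3", "units": 3, "deadline": 1.0, "budget": 12.0},
        ]
        print(allocate(tasks, 8))   # tasks served in utility order
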
  • Experimental analysis of CUBIC TCP in error prone MANETs

    Publication Year: 2014, Page(s): 256-261
    PDF (145 KB) | HTML

    CUBIC TCP, a variant of traditional TCP, is the default congestion control algorithm deployed in Linux kernels above version 2.6. CUBIC is designed mainly for high-speed, long-distance wired networks, and several studies have shown that it indeed improves the performance of such networks. Recently, hand-held devices such as smartphones have grown very popular, and there has been much interest in the research community in designing efficient operating systems for such devices. Android is one of the latest open-source mobile operating systems and is based on a reduced version of the Linux kernel; since it is Linux-based, CUBIC TCP remains the default TCP in Android as well. These hand-held devices, however, are connected to low-speed wireless networks, so CUBIC TCP's deployment in Android ends up being a mismatch. The main goal of this work is to analyze the behavior of CUBIC TCP in low-speed, error-prone wireless networks and to bring out the related challenges and issues.

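    For reference, the well-known CUBIC window growth function (later standardized in RFC 8312) that the paper evaluates over error-prone links: after a loss at window W_max, the window follows W(t) = C(t - K)^3 + W_max with K = (W_max(1 - β)/C)^(1/3). A short sketch:

        # CUBIC window growth after a loss event (per RFC 8312):
        # W(t) = C*(t - K)**3 + W_max,  K = (W_max*(1 - beta) / C)**(1/3),
        # with C = 0.4 and beta = 0.7 as the standard constants.
        C, BETA = 0.4, 0.7

        def cubic_window(t, w_max):
            k = (w_max * (1.0 - BETA) / C) ** (1.0 / 3.0)
            return C * (t - k) ** 3 + w_max

        # Window (in segments) over the seconds following a loss at W_max = 100:
        # starts at 70 (= beta * W_max), plateaus near 100, then probes beyond.
        for t in [0.0, 2.0, 4.0, 6.0]:
            print(t, round(cubic_window(t, 100.0), 1))
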
  • Analysis of sfqCoDel for Active Queue Management

    Publication Year: 2014, Page(s): 262-267
    PDF (117 KB) | HTML

    The availability of cheaper, higher-capacity Random Access Memory (RAM) has led to growing buffer sizes in all computing devices. This aberrant increase of buffer capacity in network devices has resulted in high latency and reduced throughput, decreasing the network's ability to absorb spontaneous bursts of traffic. The need for Active Queue Management (AQM) has been evident for decades, but existing solutions require the configuration of various parameters and depend on particular network conditions to work efficiently. Hence an algorithm is needed that is simple and efficient, requires no parameter settings and works seamlessly irrespective of network conditions. Even though Controlled Delay (CoDel) is parameterless and adapts to dynamically changing link rates with no negative impact on utilization, it deviates from its primary purpose of reducing congestion when RTT increases and when the congestion level varies abruptly. Consequently, a variant of CoDel called Stochastic Fair Queue CoDel (sfqCoDel) is simulated and compared: sfqCoDel proactively drops packets that occupy a disproportionately large share of bandwidth, whereas CoDel proactively drops packets irrespective of their bandwidth consumption. This paper performs a comprehensive analysis of sfqCoDel for Active Queue Management and compares it with CoDel; sfqCoDel appears to be much better than CoDel in certain areas where CoDel fails to perform well.

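    CoDel's published control law (5 ms target sojourn time, 100 ms interval, and drop spacing shrinking as interval/√count) is easy to sketch; sfqCoDel then applies this per stochastically hashed flow queue. The state machine below is a simplification of the real algorithm.

        # Sketch of CoDel's drop law: once packet sojourn time has stayed
        # above TARGET for a full INTERVAL, drop, and schedule the next drop
        # at INTERVAL/sqrt(count); a simplification of the real state machine.
        from math import sqrt

        TARGET, INTERVAL = 0.005, 0.100   # seconds

        class CoDel:
            def __init__(self):
                self.first_above = None   # when sojourn first exceeded TARGET
                self.dropping = False
                self.count = 0
                self.next_drop = 0.0

            def should_drop(self, now, sojourn):
                if sojourn < TARGET:
                    self.first_above, self.dropping = None, False
                    return False
                if self.first_above is None:
                    self.first_above = now
                if not self.dropping and now - self.first_above >= INTERVAL:
                    self.dropping, self.count = True, 1
                    self.next_drop = now + INTERVAL / sqrt(self.count)
                    return True
                if self.dropping and now >= self.next_drop:
                    self.count += 1
                    self.next_drop = now + INTERVAL / sqrt(self.count)
                    return True
                return False
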
  • Privacy protection in cloud using identity based group signature

    Publication Year: 2014, Page(s): 75-80
    PDF (137 KB) | HTML

    Cloud computing is one of the emerging computing technologies, in which costs are directly proportional to usage and demand. The very advantages of this technology, however, are also the source of its security and privacy problems: the data belonging to users is stored on cloud servers that are not under the users' own control, so the cloud services must authenticate the user. In general, most cloud authentication algorithms do not provide anonymity, and the cloud provider can track users easily. Privacy and authenticity are two critical issues in cloud security. In this paper, we propose a secure anonymous authentication method for cloud services using an identity-based group signature, which allows cloud users to prove that they are privileged to access the data without revealing their identities.

  • Secure two-party computation with AES-128: Generic approach and exploiting specific properties of functions approach

    Publication Year: 2014, Page(s): 87-91
    PDF (128 KB) | HTML

    Introduced by Yao in the early 1980s, secure computation has been one of the major areas of research interest among cryptologists. In three decades of growth, secure computation, called two-party or multiparty computation depending on the number of parties involved, has diversified widely. Research has pursued both a generic approach and approaches exploiting specific properties of functionalities to achieve efficient, practical secure computation protocols. This paper compares these two approaches for the secure two-party computation of AES-128.

  • Detection of thunderstorms using data mining and image processing

    Publication Year: 2014, Page(s): 226-231
    PDF (227 KB) | HTML

    A thunderstorm is a sudden electrical discharge manifested by a flash of lightning accompanied by thunder. It is one of the most spectacular mesoscale weather phenomena in the atmosphere and occurs seasonally. Yet predicting thunderstorms is considered the most complicated task in weather forecasting, owing to their limited spatial and temporal extent, both dynamically and physically. Every thunderstorm produces lightning, which kills more people every year than tornadoes, and heavy rain from thunderstorms leads to flash flooding, causing extensive loss of property and harm to other living organisms. Various scientific and technological research efforts aim to forecast this severe weather phenomenon in advance so as to reduce damage. Researchers have proposed methodologies such as the STP, MOM, CG, LM, QKP and DBD models for detection, but none of them provides an accurate prediction. The present research adopts clustering and wavelet transform techniques to improve the prediction rate to a greater extent. This is the first study of thunderstorm prediction using clustering and wavelet techniques, and the proposed model yields an average accuracy of 89.23% in identifying thunderstorms.

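    The abstract names clustering and the wavelet transform but not a concrete pipeline. One plausible sketch, not the paper's exact method: wavelet sub-band energies of a weather time series as features, clustered with k-means (PyWavelets and scikit-learn; all data invented).

        # Sketch (one plausible pipeline, not the paper's exact method):
        # wavelet sub-band energies of a weather time series as features,
        # then k-means clustering to separate storm-like from calm periods.
        import numpy as np
        import pywt
        from sklearn.cluster import KMeans

        def wavelet_energies(window, wavelet="db4", level=3):
            coeffs = pywt.wavedec(window, wavelet, level=level)
            return [float(np.sum(c ** 2)) for c in coeffs]

        rng = np.random.default_rng(1)
        calm = rng.normal(0.0, 0.2, size=(20, 64))      # toy pressure traces
        stormy = rng.normal(0.0, 1.5, size=(20, 64))
        X = np.array([wavelet_energies(w) for w in np.vstack([calm, stormy])])

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        print(labels)                                   # two regimes emerge
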
  • A multi-objective differential evolution approach for the question selection problem

    Publication Year: 2014, Page(s): 219-225
    PDF (1180 KB) | HTML

    Examinations are important tools for assessing student performance and are commonly used as a metric for student quality. However, examination question paper composition is a multi-constraint concurrent optimization problem, and question selection plays a key role in a question paper composition system. Traditional systems handle question selection with a specified question paper format listing the weightages to be allotted to each unit or module of the syllabus; they do not consider other constraints such as the total time allowed for the paper, the total number of questions, question types, knowledge points and the difficulty level of questions. In this paper we propose an innovative evolutionary approach that handles multiple constraints while generating question papers from a very large question bank. The proposed Multi-objective Differential Evolution Approach (MDEA) has the advantages of a simple structure, ease of use, good computational speed and robustness, and it is found to be more suitable for combinatorial problems than the commonly used genetic algorithm. Experimental results indicate that the proposed approach is efficient and effective in generating near-optimal or optimal question papers that satisfy the specified requirements.

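    A minimal DE/rand/1/bin loop, thresholding a real-valued vector into a question-selection mask. The single marks-gap objective is a stand-in for the paper's multiple constraints; genuine multi-objective handling (e.g. Pareto ranking) is omitted.

        # Sketch: DE/rand/1/bin selecting questions from a bank. A real vector
        # is thresholded at 0.5 into a selection mask; the marks-gap objective
        # is a stand-in for the paper's multiple constraints.
        import numpy as np

        rng = np.random.default_rng(0)
        marks = rng.integers(1, 6, size=40)        # question bank (toy data)
        TARGET_MARKS, F, CR = 50, 0.8, 0.9

        def cost(vec):
            mask = vec > 0.5
            return abs(marks[mask].sum() - TARGET_MARKS)

        pop = rng.random((30, len(marks)))
        fit = np.array([cost(p) for p in pop])
        for _ in range(200):
            for i in range(len(pop)):
                a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
                mutant = a + F * (b - c)                     # mutation
                cross = rng.random(len(marks)) < CR          # binomial crossover
                trial = np.where(cross, mutant, pop[i])
                if cost(trial) <= fit[i]:                    # greedy selection
                    pop[i], fit[i] = trial, cost(trial)
        best = pop[fit.argmin()] > 0.5
        print("selected questions:", np.flatnonzero(best), "gap:", fit.min())
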
  • Hybrid Background Subtraction in video using Bi-level CodeBook model

    Publication Year: 2014, Page(s): 124-130
    PDF (243 KB) | HTML

    Detection of objects in video is a highly demanding area of research, and background subtraction algorithms can yield good results in foreground object detection. This work presents a hybrid codebook-based background subtraction method to extract the foreground ROI from the background. Codebooks store compressed information, demanding less memory and allowing high-speed processing. The hybrid method uses both block-based and pixel-based codebooks to provide efficient detection results: the high-speed processing of block-based background subtraction and the high precision of pixel-based background subtraction are exploited to yield an efficient background subtraction system. The block stage produces a coarse foreground area, which is then refined by the pixel stage. The system's performance is evaluated with different block sizes and different block descriptors such as the 2D DCT and FFT. Experimental analysis based on statistical measurements yields precision, recall, similarity and F-measure of 88.74%, 91.09%, 81.66% and 89.90% respectively for the hybrid system, demonstrating its efficiency.

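    A coarse-to-fine sketch in the spirit of the block-then-pixel design: a block stage flags candidate blocks cheaply, and a pixel stage refines only inside them. A static background mean stands in here for the actual block and pixel codebook models.

        # Sketch of the coarse-to-fine idea: the block stage flags candidate
        # blocks, the pixel stage refines only within them. A static
        # background stands in for the block/pixel codebook models.
        import numpy as np

        def hybrid_subtract(frame, background, block=8, t_block=12.0, t_pix=25.0):
            h, w = frame.shape
            fg = np.zeros((h, w), dtype=bool)
            for by in range(0, h, block):
                for bx in range(0, w, block):
                    win = np.s_[by:by + block, bx:bx + block]
                    diff = np.abs(frame[win].astype(float) -
                                  background[win].astype(float))
                    if diff.mean() > t_block:            # coarse block stage
                        fg[win] = diff > t_pix           # pixel refinement
            return fg

        rng = np.random.default_rng(0)
        background = rng.integers(0, 40, size=(64, 64)).astype(np.uint8)
        frame = background.copy()
        frame[16:32, 16:32] += 120                       # a moving object
        mask = hybrid_subtract(frame, background)
        print(mask.sum(), "foreground pixels")
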
  • A novel receiver window function for ICI reduction in OFDM system

    Publication Year: 2014, Page(s): 18-21
    PDF (247 KB) | HTML

    In this paper a novel receiver window function for the reduction of Inter-Carrier Interference (ICI) in Orthogonal Frequency Division Multiplexing (OFDM) systems is proposed. The performance of the proposed window is compared with the raised-cosine (RC), better-than-raised-cosine (BTRC), rectangular and modified Bartlett-Hanning (MBH) windows with respect to ICI and Signal-to-Interference Ratio (SIR). It is observed that the ICI power of the proposed window is 17.74 dB better than that of the BTRC window and 7.3 dB better than that of the MBH window at a normalized frequency offset of 0.05.

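    The figures of merit used above can be computed numerically for any candidate receiver window from its DFT leakage coefficients S(m); a sketch, with a Hann-type raised-cosine window against the rectangular baseline (N and the offset are illustrative).

        # Sketch: ICI power and SIR of a receiver window at normalized
        # frequency offset eps, via the window's DFT leakage coefficients
        # S(m) = (1/N) * sum_n w[n] * exp(j*2*pi*n*(m + eps)/N).
        import numpy as np

        def sir_db(window, eps):
            n_fft = len(window)
            w = window * n_fft / window.sum()        # normalize sum(w) = N
            n = np.arange(n_fft)
            m = np.arange(-n_fft // 2, n_fft // 2)
            s = np.array([np.sum(w * np.exp(2j * np.pi * n * (mi + eps) / n_fft))
                          for mi in m]) / n_fft
            sig = np.abs(s[m == 0][0]) ** 2          # desired coefficient S(0)
            ici = np.sum(np.abs(s) ** 2) - sig       # leakage to other carriers
            return 10 * np.log10(sig / ici)

        N, eps = 64, 0.05
        rect = np.ones(N)
        raised_cos = 0.5 * (1 - np.cos(2 * np.pi * np.arange(N) / (N - 1)))
        print("rectangular SIR:", round(sir_db(rect, eps), 2), "dB")
        print("raised-cosine SIR:", round(sir_db(raised_cos, eps), 2), "dB")
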
  • An agent-based linked data integration system

    Publication Year: 2014, Page(s): 113-117
    PDF (459 KB) | HTML

    With the advent of the Web of Linked Data, new challenges to federated query processing are emerging. Unlike traditional federated database systems, which perform static data integration, the Web of Data is open and ever-changing. In this paper, we present an agent-based architecture providing a flexible and decoupled solution for federated queries over Linked Data. Based on this architecture, a Linked Data Management System (LDMS) has been developed. LDMS manages Linked Data virtually, i.e., it does not load remote data into a local data store. With an application scenario, we demonstrate the scalability and extensibility of the presented architecture.

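    The virtual (non-materialized) integration the abstract describes can be sketched with the standard SPARQL HTTP protocol: one agent per endpoint dispatches the query and results are merged on the fly. The endpoint URLs are placeholders, and the agent negotiation layer of LDMS is omitted.

        # Sketch: virtual federated querying over the standard SPARQL HTTP
        # protocol; results are merged on the fly, nothing is loaded into a
        # local store. Endpoint URLs are placeholders.
        import json
        import urllib.parse
        import urllib.request

        ENDPOINTS = [
            "https://example.org/sparql",      # placeholder endpoints
            "https://example.net/sparql",
        ]
        QUERY = ("PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> "
                 "SELECT ?s ?label WHERE { ?s rdfs:label ?label } LIMIT 5")

        def ask(endpoint, query):
            url = endpoint + "?" + urllib.parse.urlencode({"query": query})
            req = urllib.request.Request(
                url, headers={"Accept": "application/sparql-results+json"})
            with urllib.request.urlopen(req, timeout=10) as resp:
                return json.load(resp)["results"]["bindings"]

        def federated(query):
            for endpoint in ENDPOINTS:         # one agent per endpoint in LDMS
                try:
                    yield from ask(endpoint, query)
                except OSError:                # skip unreachable endpoints
                    pass

        for row in federated(QUERY):
            print(row)
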
  • Reduction of semantic gap using relevance feedback technique in image retrieval system

    Publication Year: 2014, Page(s): 148-153
    PDF (300 KB) | HTML

    This paper proposes a novel content-based image retrieval system incorporating the relevance feedback technique. To improve the retrieval accuracy of content-based image retrieval systems, research focus has shifted to reducing the semantic gap between visual features and human semantics. The five major techniques available to narrow the semantic gap are (a) object ontology, (b) machine learning, (c) relevance feedback, (d) semantic templates and (e) web image retrieval. This paper focuses on the relevance feedback technique, by which the semantic gap can be reduced to improve the retrieval efficiency of the system. The major challenges facing existing relevance feedback techniques are the number of iterations and the execution time; the proposed algorithm provides a better solution to both. The efficiency of the system is measured using precision and recall.

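    The abstract does not specify the feedback update; classic Rocchio query-point movement is the textbook instance of the technique and conveys the mechanics: each iteration pulls the query vector toward features of images the user marks relevant and away from non-relevant ones.

        # Sketch of Rocchio-style relevance feedback on feature vectors
        # (the classic instance of the technique; the paper's own update
        # rule is not specified in the abstract).
        import numpy as np

        def rocchio(query, relevant, non_relevant,
                    alpha=1.0, beta=0.75, gamma=0.25):
            q = alpha * query
            if len(relevant):
                q += beta * np.mean(relevant, axis=0)
            if len(non_relevant):
                q -= gamma * np.mean(non_relevant, axis=0)
            return q

        def retrieve(query, features, k=5):
            d = np.linalg.norm(features - query, axis=1)  # feature distance
            return np.argsort(d)[:k]

        rng = np.random.default_rng(0)
        features = rng.random((100, 16))                  # image feature database
        query = rng.random(16)
        for _ in range(3):                                # feedback iterations
            top = retrieve(query, features)
            relevant = features[top[:2]]                  # user marks top 2 relevant
            non_relevant = features[top[2:]]
            query = rocchio(query, relevant, non_relevant)
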
  • Survivable of multicast traffic grooming against single link failures in WDM mesh networks

    Publication Year: 2014, Page(s): 250-255
    PDF (129 KB) | HTML

    In Wavelength Division Multiplexing (WDM) optical networks, the failure of a network resource (e.g., a fiber link or node) can disrupt the transmission of information to several destination nodes of a light-tree-based multicast session. It is therefore essential to protect multicast sessions by reserving resources along backup trees, so that if the primary tree fails, the backup tree forwards the message to the desired destinations. In this paper, we address the problem of survivable multicast routing and wavelength assignment with sub-wavelength traffic demands in WDM mesh networks. We extend the segment-disjoint protection methodology to groom multicast sessions in order to protect them from single-link failures. We propose an efficient approach for protecting multicast sessions, the light-tree-based shared segment protection grooming (LTSSPG) scheme, and compare it with the existing multicast traffic grooming with segment protection (MTG-SP) approach. In MTG-SP, each segment of the primary tree is protected by a disjoint segment in the backup tree, and sharing of edges or segments occurs only among backup trees, whereas in LTSSPG segments are shared between the primary and backup trees. The main objective of this work is to minimize cost in terms of the number of wavelengths and optical splitters required, as well as to minimize the blocking probability of network resources. The performance of the various algorithms is evaluated through extensive simulations on standard networks.

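    As a much-reduced illustration of protection against single link failures, the sketch below computes a backup path that avoids every link of the primary path for one source-destination pair; LTSSPG's light-trees, grooming and segment sharing are not modelled.

        # Sketch: link-disjoint backup computation for one destination of a
        # multicast session (a simplification; LTSSPG's segment sharing and
        # grooming are not modelled here).
        import networkx as nx

        g = nx.Graph()
        g.add_edges_from([("s", "a"), ("a", "d"), ("s", "b"),
                          ("b", "c"), ("c", "d"), ("a", "c")])

        primary = nx.shortest_path(g, "s", "d")            # s -> a -> d
        backup_graph = g.copy()
        backup_graph.remove_edges_from(zip(primary, primary[1:]))
        backup = nx.shortest_path(backup_graph, "s", "d")  # survives a link failure
        print(primary, backup)
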
  • Droid permission miner: Mining prominent permissions for Android malware analysis

    Publication Year: 2014, Page(s): 81-86
    PDF (171 KB) | HTML

    In this paper, we propose static analysis of Android malware files by mining prominent permissions. The proposed technique is implemented by extracting permissions from 436 .apk files. Feature pruning is carried out to investigate the impact of feature length on accuracy. The prominent features that lead to less misclassification are determined using the Bi-Normal Separation (BNS) and Mutual Information (MI) feature selection techniques. Results suggest that Droid permission miner can be used for the preliminary classification of Android package files.

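    Ranking permissions by Mutual Information over a binary app-by-permission matrix is directly available in scikit-learn; Bi-Normal Separation would be computed analogously from per-permission true/false positive rates. The matrix below is invented toy data.

        # Sketch: ranking Android permissions by Mutual Information over a
        # binary app-by-permission matrix (toy data; BNS would be computed
        # analogously from per-permission rates via the normal inverse CDF).
        import numpy as np
        from sklearn.feature_selection import SelectKBest, mutual_info_classif

        rng = np.random.default_rng(0)
        n_apps, n_perms = 436, 30
        X = rng.integers(0, 2, size=(n_apps, n_perms))   # permission present?
        y = rng.integers(0, 2, size=n_apps)              # 1 = malware
        X[y == 1, 3] = 1                                 # plant a telltale permission

        selector = SelectKBest(mutual_info_classif, k=5).fit(X, y)
        print("prominent permissions:", np.flatnonzero(selector.get_support()))
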
  • SentenceRank — A graph based approach to summarize text

    Publication Year: 2014, Page(s): 177-182
    PDF (147 KB) | HTML

    We introduce a graph- and intersection-based technique that uses statistical and semantic analysis to compute the relative importance of textual units in large data sets in order to summarize text. Current implementations consider only mathematical/statistical approaches to summarization (such as frequency or TF-IDF), yet in many cases two completely different textual units may be semantically related. We overcome this problem by exploiting the resources of WordNet and by using semantic graphs that represent the semantic dissimilarity between any pair of sentences. Ranking is usually performed on statistical information alone; our algorithm instead constructs semantic graphs using implicit links based on the semantic relatedness between text nodes and then ranks the nodes using a ranking algorithm.

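    The pipeline above (build a sentence graph from pairwise relatedness, then rank nodes) can be sketched with networkx's PageRank; Jaccard word overlap stands in for the paper's WordNet-based relatedness measure.

        # Sketch: graph-based sentence ranking (Jaccard word overlap stands
        # in for the WordNet-based relatedness used by SentenceRank).
        import networkx as nx

        def similarity(s1, s2):
            w1, w2 = set(s1.lower().split()), set(s2.lower().split())
            return len(w1 & w2) / (len(w1 | w2) or 1)   # Jaccard overlap

        sentences = [
            "The cat sat on the mat.",
            "A cat was sitting on a mat.",
            "Stock prices fell sharply on Monday.",
            "Markets dropped as stock prices fell.",
        ]
        g = nx.Graph()
        for i, si in enumerate(sentences):
            for j in range(i + 1, len(sentences)):
                w = similarity(si, sentences[j])
                if w > 0:
                    g.add_edge(i, j, weight=w)

        scores = nx.pagerank(g, weight="weight")        # rank sentence nodes
        summary = max(scores, key=scores.get)
        print(sentences[summary])
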
  • Finger vein extraction and authentication based on gradient feature selection algorithm

    Publication Year: 2014, Page(s): 143-147
    PDF (217 KB) | HTML

    Nowadays, biometric systems are used for personal verification. Alongside existing biometric technologies such as fingerprint recognition and voice/face recognition, vein patterns can be used for personal identification. The finger vein is a promising biometric pattern for personal identification and authentication in terms of security and convenience, and it has gained much attention among researchers for combining accuracy, universality and cost efficiency. We propose a method of personal identification based on finger-vein patterns. An image of a finger captured under infrared light contains not only the vein pattern but also irregular shading produced by the varying thickness of the finger bones and muscles. The proposed method extracts the finger-vein pattern from this unclear image using a gradient feature extraction algorithm and performs template matching using the Euclidean distance, aiming at a better Equal Error Rate (EER), of 0.05%, than existing vein pattern recognition algorithms.

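    A minimal version of the two stages named above, assuming gradient-magnitude features and a Euclidean acceptance threshold chosen purely for illustration:

        # Sketch: gradient-magnitude features plus Euclidean template
        # matching; the acceptance threshold is an illustrative assumption.
        import numpy as np

        def gradient_features(image):
            gy, gx = np.gradient(image.astype(float))
            mag = np.hypot(gx, gy)             # vein edges show high gradient
            feat = mag.ravel() - mag.mean()    # zero-mean for comparability
            return feat / (np.linalg.norm(feat) + 1e-12)

        def authenticate(probe, template, threshold=0.8):
            dist = np.linalg.norm(gradient_features(probe) -
                                  gradient_features(template))
            return dist < threshold, dist

        rng = np.random.default_rng(0)
        enrolled = rng.random((64, 48))        # stand-in infrared image
        genuine = enrolled + rng.normal(0, 0.01, enrolled.shape)
        impostor = rng.random((64, 48))
        print(authenticate(genuine, enrolled))   # small distance -> accept
        print(authenticate(impostor, enrolled))  # large distance -> reject
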
  • Automated colour segmentation of Tuberculosis bacteria thru region growing: A novel approach

    Publication Year: 2014, Page(s): 154-159
    PDF (359 KB) | HTML

    Medical image analysis is very challenging due to the idiosyncrasies of the medical profession. Object recognition with data mining techniques has helped doctors in medical emergencies with image analysis, pattern identification and treatment. As per WHO statistics, over 180 million people have died and more than one third of the population carries the Mycobacterium tuberculosis (TB) bacterium [1-5]. Segmentation of TB bacteria from the stained background is very challenging due to noise and debris in the image. In this paper, an automated segmentation of the tuberculosis bacterium using image processing techniques is presented. Colour segmentation with a region-growing watershed algorithm is proposed for bacterial identification.

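    Seeded region growing on colour distance, the core of the segmentation step, in a few lines; the seed position and threshold are assumptions, and the watershed coupling is omitted.

        # Sketch: seeded region growing on colour distance (the watershed
        # coupling is omitted; seed position and threshold are assumptions).
        import numpy as np

        def region_grow(image, seed, threshold=30.0):
            h, w, _ = image.shape
            seed_colour = image[seed].astype(float)
            mask = np.zeros((h, w), dtype=bool)
            stack = [seed]
            while stack:
                y, x = stack.pop()
                if mask[y, x]:
                    continue
                if np.linalg.norm(image[y, x].astype(float) - seed_colour) > threshold:
                    continue
                mask[y, x] = True
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                        stack.append((ny, nx))
            return mask

        img = np.zeros((64, 64, 3), dtype=np.uint8)
        img[...] = (180, 160, 200)               # stained background
        img[20:30, 15:40] = (200, 40, 60)        # a red-stained bacillus
        print(region_grow(img, (25, 20)).sum(), "pixels in the grown region")
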
  • Post-search query modeling in federated web scenario

    Publication Year: 2014, Page(s): 183-188
    PDF (279 KB) | HTML

    As opposed to query reformulation, in which a user changes a query to specify the information need more precisely, post-search query modeling is a technique that exploits syntax variations of a gradually extended query which, depending on factors such as the resource, the database or the keyword alignment, facilitates the searching process. The study of modeling queries submitted to search engines that use different translation semantic paradigms is motivated by the real-world challenge of retrieving heterogeneous textual documents from the web. For a couple of language pairs, we develop a user-centered framework for optimizing Hidden Web traffic; in the literature, the Hidden Web is the facet of the World Wide Web usually missed by standard information systems. Our data set contains a variety of query types submitted to translingual systems that perform syntax-driven indexing, evaluated by constructing a precision trend function that strengthens the relevance of the system responses while dramatically reducing those outside the user's interest.

  • Searching application for Southern Thailand Travel Guide on iPhone

    Publication Year: 2014, Page(s): 195-200
    PDF (267 KB) | HTML

    Recently, travel guides have become an important tool supporting tourists around the world, who search for tourist attractions and travel information in books, brochures and websites. In this paper, a searching application for a Southern Thailand Travel Guide on the iPhone is proposed. The application is developed to search for travel guide information on different provinces of Southern Thailand. The search function operates in two modes, online and offline, and the interface can be displayed in two languages, Thai and English. This paper presents the design and implementation using Apple's iOS Software Development Kit. The results show that tourists can search tourist attractions including history, pictures, addresses, phone numbers, websites, maps and travel details. The maps can show the user's current location and can be displayed in three modes: standard, satellite and hybrid. All information on a tourist attraction can be shared via Facebook, Twitter and e-mail. The proposed application better supports both foreign and Thai tourists who use an iPhone.

  • Novel mutual authentication protocol for cloud computing using secret sharing and steganography

    Publication Year: 2014, Page(s): 101-106
    PDF (209 KB) | HTML

    Proper authentication is an essential technology for cloud-computing environments, in which connections to external environments are common and risks are high. Here, a new scheme is proposed for mutual authentication, in which the user and the cloud server authenticate one another. The protocol is designed to use steganography as an additional encryption scheme, and it achieves authentication using secret sharing. Secret sharing allows part of the secret to be kept on each side; only when combined do the parts form the complete secret, which contains information about both parties involved. Further, out-of-band authentication is used, providing additional security. The proposed protocol provides mutual authentication and session key establishment between the users and the cloud server, gives users the flexibility to change their passwords, and offers strong security features that make it well suited to the cloud environment.

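    The abstract does not give the concrete sharing scheme; a minimal 2-of-2 XOR split illustrates the property it relies on: either share alone reveals nothing, and only combining both reconstructs the secret used for mutual authentication.

        # Sketch: a 2-of-2 XOR secret split (a minimal illustration of the
        # property used above; the paper's concrete scheme is not given).
        import os

        def split(secret: bytes):
            share_user = os.urandom(len(secret))             # random share
            share_server = bytes(a ^ b for a, b in zip(secret, share_user))
            return share_user, share_server

        def combine(share_user: bytes, share_server: bytes) -> bytes:
            return bytes(a ^ b for a, b in zip(share_user, share_server))

        secret = b"session-seed:user42|cloud-srv"
        u, s = split(secret)            # user keeps u, cloud server keeps s
        assert combine(u, s) == secret  # authentication combines both halves
        print(combine(u, s))
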
  • A distributed system design for next generation storage and remote replication

    Publication Year: 2014, Page(s): 22-27
    PDF (256 KB) | HTML

    Business continuity is essential for any enterprise application; remote replication enables customers to store data on a Logical Disk (LDisk) at the local site and replicate it at remote locations. In case of a disaster at the local site, the replicated LDisk (remote copy) at the remote site is marked as the primary copy and made available without any downtime. Replication to the destination is configured in either sync mode or async mode. In async mode, host IOs are first processed by the source array at the local site; a snapshot of the LDisk is triggered periodically, and the new snapshot is replicated to the destination array at the remote site. In this configuration, one particular node of the source array becomes loaded with ongoing host IOs, snapshot and replication activities. In the scale-out model, a storage array consists of multiple nodes, so replication tasks and responsibilities can be distributed to a different node. We propose a cloning mechanism called DeltaClone, which replicates the incremental changes of an LDisk across nodes. Ownership of an LDisk and of its DeltaClone is assigned to two different nodes, called the master node and the slave node respectively. When the periodic request to synchronize the LDisk data with its remote copy is triggered, the current DeltaClone is frozen and merged with the remote copy. The replication tasks are thus carried out at the slave node without affecting the performance of the master node or the ongoing host IOs. The slave node is re-elected periodically to ensure dynamic load balancing across the nodes. Our distributed design improves overall storage performance, and simulation results show that the proposed method outperforms traditional methods.

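    A toy model of the delta-tracking cycle described above: writes accumulate changed blocks in the current DeltaClone; a periodic sync freezes it and merges only those blocks into the remote copy while a fresh delta keeps absorbing new writes.

        # Toy model of the DeltaClone cycle: track changed blocks, freeze
        # the delta at sync time, merge it into the remote copy while a
        # fresh delta keeps absorbing new writes.
        class LDisk:
            def __init__(self, n_blocks):
                self.blocks = [b"\x00"] * n_blocks
                self.delta = {}                      # block index -> new data

            def write(self, index, data):            # host IO on the master node
                self.blocks[index] = data
                self.delta[index] = data

            def sync(self, remote_blocks):           # runs on the slave node
                frozen, self.delta = self.delta, {}  # freeze, start a new delta
                for index, data in frozen.items():   # merge only the changes
                    remote_blocks[index] = data

        local = LDisk(8)
        remote = [b"\x00"] * 8
        local.write(2, b"A")
        local.write(5, b"B")
        local.sync(remote)
        print(remote)   # only blocks 2 and 5 were shipped to the remote copy
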