2012 Second International Conference on Digital Information and Communication Technology and its Applications (DICTAP)

Date: 16-18 May 2012

Displaying Results 1 - 25 of 97
  • Recent developments in MIMO channel estimation techniques

    Publication Year: 2012, Page(s): 1-6

    The ever-increasing demand for bandwidth and for services related to high-performance broadband wireless communications has opened the door to the use of multiple antennas at both the transmitter and the receiver. The wireless channel is dynamic by nature, being both frequency selective and time dependent. MIMO channel estimation has been a prime focus of research over the past two decades, emerging as one of the novel techniques for supporting the ever-increasing demand for bandwidth and the high data rates required by upcoming telecom and multimedia services. Combining MIMO with OFDM and UWB techniques has shown improved performance, providing high data rates and channel capacity with low-complexity receivers. Recently, researchers have also shown interest in applying MIMO-OFDM to estimating the channel parameters of underwater acoustic channels. The objective of the present paper is to survey the developments researchers have made in MIMO channel estimation techniques in recent years.

  • WSeH: Proposal for an adaptive monitoring framework for WSNs, with enhanced security and QoS support

    Publication Year: 2012, Page(s): 7-12

    Wireless sensor networks (WSNs) are highly distributed, self-organized systems, with the associated research growing at a tremendous pace and targeting various application domains. The successful implementation of such networks depends on the enabling technologies (such as digital electronics and wireless communications), as well as on the provisioning of Quality of Service (QoS) and various security features in the networks. This paper focuses on the main characteristics and current development status of the management and monitoring, security, and QoS topics, giving an overview of recent progress. The paper examines and discusses the challenges of an adaptive monitoring framework (the WSeH framework) with enhanced security and QoS support for WSNs, proposing a generic architecture and identifying open research issues.

  • An efficient routing technique that maximizes the lifetime and coverage of wireless sensor networks

    Publication Year: 2012, Page(s): 13-18

    Wireless sensor networks (WSNs) have become very popular in the last few years. One key issue in WSNs is that sensor nodes have a limited battery capacity, so it is important to develop energy-efficient solutions that keep these networks functioning for the longest possible period of time. Since most of a node's energy is spent on data transmission, many routing techniques have been proposed to extend the network lifetime, such as the Online Maximum Lifetime heuristic (OML) and capacity maximization (CMAX); OML has obtained the best lifetime in the literature. In this paper, we introduce an efficient routing power management heuristic that achieves higher lifetime and increased coverage by managing power at the node level. We accomplish this by dividing the node energy into two shares: one for data originated by the sensor node itself (α) and the other for data relayed from other sensors (β). This heuristic, called ERPMT (Efficient Routing Power Management Technique), has been applied to OML and CMAX. Results from extensive simulation runs reveal the superiority of the new ERPMT methodology over existing heuristics: ERPMT increases the lifetime by up to 56.7% in the best case, achieved when α = 50% of the total energy.

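The α/β energy split described in the ERPMT abstract can be sketched as a per-node admission rule. This is a minimal illustration of the idea, not the authors' implementation; the node model, energy costs, and the `alpha` parameter are assumptions.

```python
class SensorNode:
    """Node whose battery is split into a share for its own data (alpha)
    and a share for relayed data (1 - alpha), as in the ERPMT idea."""

    def __init__(self, energy, alpha=0.5):
        self.own_budget = energy * alpha          # for self-originated packets
        self.relay_budget = energy * (1 - alpha)  # for packets relayed for others

    def send_own(self, cost):
        """Transmit a packet the node generated itself."""
        if self.own_budget < cost:
            return False
        self.own_budget -= cost
        return True

    def relay(self, cost):
        """Forward a packet for a neighbor; refuse once the relay share is
        spent, so relaying can never starve the node's own sensing traffic."""
        if self.relay_budget < cost:
            return False
        self.relay_budget -= cost
        return True


node = SensorNode(energy=100.0, alpha=0.5)
# Heavy relay load exhausts only the beta share; own traffic still goes through.
relayed = sum(node.relay(10.0) for _ in range(8))
print(relayed)              # 5 relays accepted, then the beta budget is gone
print(node.send_own(10.0))  # True: the alpha share is untouched
```

The point of the split is visible in the example: even after the relay budget is exhausted, the node can still deliver its own readings, which is how the heuristic preserves coverage.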
  • 2-Connected relay placement problem in wireless sensor networks

    Publication Year: 2012, Page(s): 19-24

    This paper addresses a more reliable and fault-tolerant version of the standard relay placement problem (RPP) in the design and deployment of wireless sensor networks. Given a set of sensors in a Euclidean plane, the 2-connected relay placement problem (2CRPP) is to place a minimum number of relays such that each sensor can communicate with at least one relay and all relays jointly form a 2-connected network. Since 2CRPP is proven to be NP-hard, we propose a polynomial-time approximation algorithm for the problem and prove mathematically that its worst-case approximation ratio is bounded by (4+ε).

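As a side note on the feasibility condition in 2CRPP: a relay topology is 2-connected exactly when it is connected, has at least three nodes, and survives the removal of any single vertex. A small stdlib-only checker of that property (an illustration of the constraint, not the paper's approximation algorithm; a brute-force O(V·(V+E)) check rather than a linear-time articulation-point algorithm):

```python
def is_two_connected(adj):
    """True if the undirected graph (dict: node -> set of neighbors) is
    2-connected: connected, >= 3 nodes, and no articulation points."""
    nodes = list(adj)
    if len(nodes) < 3:
        return False

    def reachable(start, banned=None):
        """Nodes reachable from `start`, optionally ignoring one vertex."""
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v != banned and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    if len(reachable(nodes[0])) != len(nodes):
        return False  # not even connected
    # Removing any single vertex must leave the remaining graph connected.
    for cut in nodes:
        rest = [u for u in nodes if u != cut]
        if len(reachable(rest[0], banned=cut)) != len(rest):
            return False
    return True


ring = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}  # a cycle: 2-connected
chain = {0: {1}, 1: {0, 2}, 2: {1}}                  # a path: node 1 is a cut vertex
print(is_two_connected(ring))   # True
print(is_two_connected(chain))  # False
```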
  • Performance enhancement of mobile networks using cooperative MIMO technique

    Publication Year: 2012, Page(s): 25-29

    It is well known that Multiple-Input Multiple-Output (MIMO) systems provide both spatial multiplexing and diversity gains by using multiple antennas at both the mobile and base stations. However, the mobile station cannot afford multiple antennas because of its small, limited dimensions, so the MIMO technique cannot be operated successfully. In this light, the cooperative MIMO technique enables a mobile user with a single antenna to realize a virtual antenna array by using the antennas of neighboring users. As a result, the mobile user can experience the full benefits of a MIMO system. In the literature, the cooperative MIMO technique using Space-Time Block Codes (STBC) is created through the relay capability of neighboring users, with the devices of all users having the same potential to detect symbols. This paper therefore proposes an STBC scheme that changes the symbol order from source to relay so that MIMO detection can be performed easily at the destination. The simulation results reveal that the proposed scheme makes it possible to obtain better error performance as well as higher channel capacity for mobile cellular networks.

  • Multipath load balancing & rate based congestion control for mobile ad hoc networks (MANET)

    Publication Year: 2012, Page(s): 30-35

    In mobile ad hoc networks (MANETs), congestion is one of the most important constraints that deteriorate the performance of the whole network. Multipath routing can balance load better than single-path routing in ad hoc networks, reducing congestion by dividing the traffic among several paths. This paper presents a new approach, Multipath Load Balancing and Rate-Based Congestion Control (MLBRBCC), based on a rate-control mechanism for avoiding congestion in network communication flows. The proposed approach contains an adaptive rate-control technique in which the destination node copies the estimated rate from the intermediate nodes and the feedback is forwarded to the sender through an acknowledgement packet. Since the sending rate is adjusted based on the estimated rate, this technique is better than traditional congestion control. Simulation results show that the proposed technique achieves a better packet delivery ratio and improved throughput, and controls congestion more effectively.

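The feedback loop the MLBRBCC abstract describes (intermediate nodes stamp a rate estimate, the destination echoes it in the ACK, the sender adapts) can be sketched generically. The bottleneck rule, the smoothing step, and all numbers here are assumptions for illustration, not the paper's exact mechanism.

```python
def bottleneck_feedback(path_rates):
    """Destination-side view: the rate the path can sustain is the minimum
    of the per-hop estimates stamped by the intermediate nodes."""
    return min(path_rates)


def adjust_sender_rate(current_rate, feedback_rate, step=0.25):
    """Move the sending rate toward the rate echoed in the ACK, instead of
    waiting for packet loss as classical congestion control does."""
    return current_rate + step * (feedback_rate - current_rate)


rate = 10.0  # Mbps, hypothetical starting rate
for _ in range(3):
    fb = bottleneck_feedback([8.0, 4.0, 6.0])  # per-hop estimates in the header
    rate = adjust_sender_rate(rate, fb)
print(round(rate, 3))  # 6.531, converging toward the 4.0 Mbps bottleneck
```

Because the sender reacts to an explicit rate rather than to loss, queues at the bottleneck never need to overflow for the flow to slow down, which is the claimed advantage over traditional schemes.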
  • Investigating QoS support in WiMAX over Metro-Ethernet backhaul

    Publication Year: 2012, Page(s): 36-41

    Given the necessity of the Internet and the increasing demand for various services, WiMAX networks, with their high bandwidth and suitable transfer speeds, can be considered a solution for public access. With versions ranging from fixed to portable, produced in considerable volumes, this technology shows notable progress in wireless connectivity. Its Quality of Service (QoS) support provides user satisfaction for real-time and interactive services over WiMAX networks. The purpose of this study is to investigate WiMAX network performance over a Metro-Ethernet backhaul. As QoS is of prime importance in WiMAX networks, we investigate whether this quality can be carried and maintained across the backhaul. A method for WiMAX and Metro-Ethernet integration is suggested and its performance evaluated by simulation.

  • A decomposition method for pilot power planning in UMTS systems

    Publication Year: 2012, Page(s): 42-47

    Pilot power management is an important issue in coverage planning for UMTS systems. We consider the problem of minimizing the pilot power subject to the constraint of full service coverage. For this planning problem, which is NP-hard, effective methods able to deal with large-scale networks of heterogeneous cell coverage patterns are highly desirable. We propose an integer linear optimization formulation and a decomposition method that exploits the problem structure using a Dantzig-Wolfe reformulation. We report numerical results for networks of various sizes. The proposed method efficiently finds near-optimal solutions that yield substantial savings in power consumption compared to baseline approaches.

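A generic form of the coverage-constrained pilot power problem can be written as an integer linear program. This is our own schematic formulation with assumed notation (cells i, discrete pilot power levels l with power p_l, coverage areas j), not necessarily the exact model in the paper:

```latex
% a_{ijl} = 1 if cell i transmitting at level l covers area j;
% binary x_{il} selects one pilot power level per cell.
\begin{align}
\min_{x} \quad & \sum_{i} \sum_{l} p_{l}\, x_{il} \\
\text{s.t.} \quad & \sum_{l} x_{il} = 1 && \forall i \quad \text{(one pilot level per cell)} \\
& \sum_{i} \sum_{l} a_{ijl}\, x_{il} \ge 1 && \forall j \quad \text{(full service coverage)} \\
& x_{il} \in \{0, 1\}.
\end{align}
```

The coverage constraint makes this a set-cover-like structure, which is why the problem is NP-hard and why a Dantzig-Wolfe decomposition, which prices out per-cell level choices, fits it naturally.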
  • Services composition in IMS environment: An evolved SCIM based approach

    Publication Year: 2012, Page(s): 48-51

    The IP Multimedia Subsystem (IMS) has been widely accepted by the telecom industry as a prolific platform for providing next-generation telecom services, and massive deployment has finally been taking off. However, most services in the IMS environment are still developed in a siloed fashion, which makes implementing new services a time-consuming and burdensome process. In this paper, we consider service composition as an alternative approach for rapid service creation, reusing service capabilities to implement new services. To this end, we analyze existing standards for IMS service composition management and show how the Service Capability Interaction Manager (SCIM) is used for brokering service capabilities and managing service composition. Based on this analysis, we propose a SCIM-based architecture evolution to enable agile service composition.

  • Possible schemes on baseband signal joint detection concept for multimode terminal

    Publication Year: 2012, Page(s): 52-57

    This paper presents an integrated architecture concept for detecting various wireless access networks. The process is performed in the physical layer, after carrier sensing, to collect as many of the services available around a multimode terminal as possible. The main idea is to recognize the unique signals transmitted by the different standards within a fixed period. The research found that these unique signals, either synchronization or preamble signals, can be employed as a representation of service availability. Furthermore, several possible correlation schemes for detecting those signals are also presented. Finally, from a complexity point of view, this paper proposes the lowest-complexity joint detection architecture for a multimode terminal handling GSM, WiMAX OFDM, and Wireless LAN services. The result shows that the proposed architecture requires only 60% of the computational resources needed when employing cross-correlation to detect all services.

  • LDPC coding for MIMO wireless sensor networks with clustering

    Publication Year: 2012, Page(s): 58-61

    Wireless sensor networks (WSNs) are used in various applications. Sensors acquire samples of physical data and send them, in different topologies, to a central node that processes the data and makes decisions. A main performance factor for a WSN is the battery life, which depends on the energy consumption of the sensor. To reduce energy consumption, an energy-efficient transmission technique is required. Multiple-Input Multiple-Output (MIMO) systems have shown good utilization of channel characteristics, which enhances transmission and hence reduces the energy consumed by the sensor. In MIMO systems, multiple signals are combined at the transmitter and transmitted using multiple antennas. This provides each receiver with the whole combined signal, and array processing techniques then help achieve better performance. To further enhance data transmission, a Low-Density Parity-Check (LDPC) coded MIMO wireless sensor network is proposed. The system implements space diversity through multiple antennas and temporal diversity through the LDPC code, and uses a clustering procedure to optimize the formation of the MIMO system. Results showed that if the number of sensors is greater than the number of receiving antennas, time or frequency multiplexing can maintain good performance for the devised system. Moreover, by controlling the encoder we can create a temporal and spatial code among the transmitted signals, enhancing the BER and resulting in longer battery life at the sensor nodes.

  • Fast fault detection in wireless sensor networks

    Publication Year: 2012, Page(s): 62-66

    Fault detection is of high importance: if faults are not diagnosed, the gathered data, and consequently the decisions based on it, may be incorrect. Sound sensors and defective sensors should therefore be separated from each other. In this paper we present a distributed algorithm for detecting faulty sensors based on neighbor voting, in which each sensor makes its own decision according to its neighbors' votes. Each sensor periodically sends its state, together with the value sensed from the environment, to its neighbors, and every sensor decides whether a given sensor is impaired based on the information received from its neighbors. The algorithm uses three parameters, θ1, θ2, and θ3, whose accurate values are very important, since any change in them changes the result: increasing θ2 reduces the faulty sensor detection accuracy (FSDA) and increases the false alarm rate (FAR). One difference between this approach and the previous method [1] is that the position is sent along with the data, which makes the algorithm run faster and reduces the amount of transmitted information and, as a result, the energy consumption.

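The neighbor-voting rule can be sketched roughly as follows. This is a minimal illustration with assumed semantics for the thresholds; the paper's exact use of θ1, θ2, and θ3 may differ.

```python
def neighbor_vote(own_value, neighbor_values, theta1=5.0, theta2=0.5):
    """Flag a sensor as faulty when its reading disagrees with too many
    neighbors. theta1: maximum tolerated difference between two readings.
    theta2: fraction of disagreeing neighbors above which a fault is
    declared. (Raising theta2 makes the test more lenient, which is the
    accuracy/false-alarm trade-off the abstract mentions.)"""
    disagree = sum(1 for v in neighbor_values if abs(v - own_value) > theta1)
    return disagree / len(neighbor_values) > theta2


# A sensor reading 40.0 among neighbors near 25.0 is flagged as faulty.
print(neighbor_vote(40.0, [24.8, 25.1, 25.3, 24.9]))  # True
print(neighbor_vote(25.2, [24.8, 25.1, 25.3, 24.9]))  # False
```

In the distributed setting each node runs this rule locally on the values its neighbors broadcast, so no central collector is needed, which is what makes the scheme scale.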
  • DFMC: Decentralized fault management mechanism for cluster based wireless sensor networks

    Publication Year: 2012, Page(s): 67-71

    Energy is one of the most constraining factors in wireless sensor networks, and node failures due to crashes and energy exhaustion are commonplace. To avoid degradation of service due to faults, the WSN must be able to detect faults early and initiate recovery actions. In this paper, we propose a decentralized method for fault detection and recovery in clustered wireless sensor networks, named DFMC, which is energy efficient and can improve network throughput. Simulation results show that the proposed algorithm is more efficient than CMATO.

  • Fast motion estimation using two-step bit-transform-based normalized partial distortion search algorithm

    Publication Year: 2012, Page(s): 72-76

    In this paper, we propose a two-step bit-transform-based normalized partial distortion search (TSB-NPDS) algorithm for fast motion estimation that exploits the characteristics of pattern-similarity matching errors. Two significant bit-plane features of the image block pattern, extracted by a two-bit transform, are used to determine the calculation order in the proposed TSB-NPDS. The experimental results indicate that TSB-NPDS achieves a speedup of about 14.4 to 16.0 times over the full search algorithm with negligible PSNR loss. Furthermore, the performance of TSB-NPDS is better than that of other modified NPDS-based algorithms.

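The idea behind normalized partial distortion search, on which TSB-NPDS builds, is to stop accumulating a candidate block's distortion as soon as its normalized partial sum already exceeds the best full distortion seen so far. A generic sketch of that early-termination test (not the two-bit-transform variant of the paper; block size and grouping are assumptions):

```python
def partial_sad(cur, ref, best_sad, groups=4):
    """Accumulate SAD over `groups` chunks of pixels; after each chunk,
    compare the normalized partial sum against the current best and bail
    out early if this candidate looks worse. (The normalized comparison is
    a heuristic, which is why NPDS incurs a small PSNR loss.)"""
    n = len(cur)
    step = n // groups
    sad = 0
    for g in range(1, groups + 1):
        for k in range((g - 1) * step, g * step):
            sad += abs(cur[k] - ref[k])
        # Scale best_sad by the fraction of pixels examined so far.
        if sad * groups > best_sad * g:
            return None  # early termination: candidate rejected
    return sad


cur = [10, 10, 10, 10, 10, 10, 10, 10]
good = [9, 10, 11, 10, 10, 9, 10, 11]   # full SAD = 4
bad = [50, 50, 50, 50, 50, 50, 50, 50]  # rejected after the first chunk
best = partial_sad(cur, good, best_sad=10**9)
print(best)                         # 4
print(partial_sad(cur, bad, best))  # None
```

The speedup comes from the `None` path: most candidates in a search window are rejected after examining only a fraction of their pixels.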
  • Robust video transmission for H.264 scalable video coding using unequal error protection

    Publication Year: 2012, Page(s): 77-81
    Cited by: Papers (2)

    In this work, we develop a novel robust scheme of two-dimensional unequal error protection (2D-UEP) for H.264 scalable video coding (H.264/SVC) with combined temporal and quality (SNR) scalability over packet-lossy networks. The proposed scheme combines error resilience techniques with importance measures in video coding. To avoid wasting bits and to obtain the best rate allocation, we develop a threshold-based UEP (TH-UEP) algorithm. TH-UEP designs a predefined threshold according to the packet length and the error-correcting ability of the RS code to achieve the best allocation. In addition, the proposed scheme derives a simple mathematical model to reduce the computational load of finding the best rate allocation. Experimental results demonstrate that the proposed H.264/SVC video transmission scheme using UEP provides strong robustness and improved video quality compared to other 2D-UEP schemes.

  • Study of ICA algorithm for separation of mixed images

    Publication Year: 2012, Page(s): 82-86

    Image data can be Gaussian, non-Gaussian, or both. If the data is Gaussian, the extraction and processing of image data is computationally less complex. For this reason, many existing techniques, such as factor analysis, Principal Component Analysis, and Gabor wavelets, assume the data to be Gaussian, and processing involves only second-order moments such as mean and variance. But if the data is non-Gaussian, extraction and processing become computationally more complex, involving higher-order moments like kurtosis and a measure of non-Gaussianity known as negentropy. In this paper, a more recently developed technique known as Independent Component Analysis (ICA) is applied to image data, and a detailed analysis of the step-wise output of the algorithm is given. In the context of adaptive neural networks, the ICA method tries to learn the non-Gaussianity instead of assuming the data to be Gaussian.

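The higher-order moments the abstract mentions can be illustrated with excess kurtosis, the classical non-Gaussianity measure used in ICA (negentropy is commonly approximated from such moments). A stdlib-only sketch:

```python
import statistics


def excess_kurtosis(xs):
    """Fourth standardized moment minus 3: near 0 for Gaussian data,
    negative for sub-Gaussian sources (e.g. uniform), positive for
    super-Gaussian (heavy-tailed) sources."""
    mu = statistics.fmean(xs)
    var = statistics.pvariance(xs, mu)
    m4 = sum((x - mu) ** 4 for x in xs) / len(xs)
    return m4 / var ** 2 - 3


# A two-point (maximally sub-Gaussian) signal has excess kurtosis -2.
print(excess_kurtosis([-1, 1, -1, 1]))  # -2.0
```

ICA exploits exactly this: a mixture of independent sources is closer to Gaussian than the sources themselves (central limit theorem), so unmixing directions can be found by maximizing |kurtosis| or negentropy.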
  • Faster and more accurate feature-based calibration for widely spaced camera pairs

    Publication Year: 2012, Page(s): 87-92

    The increasing demand for live multimedia systems in the gaming, art, and entertainment industries has resulted in the development of multiview capturing systems that use camera arrays. We investigate sparse (widely spaced) camera arrays for capturing scenes with a large volume space. A vital aspect of such systems is camera calibration, which provides an understanding of the scene geometry used for 3D reconstruction. Traditional algorithms make use of a calibration object or identifiable markers placed in the scene, but this is impractical and inconvenient for large spaces. Hence, we take the approach of feature-based calibration. Existing schemes based on SIFT (Scale-Invariant Feature Transform) exhibit lower accuracy than marker-based schemes due to false positives in feature matching, variations in baseline (the spatial displacement between the camera pair), and changes in viewing angle. We therefore propose a new method of SIFT-feature-based calibration that adopts a new technique for detecting and removing wrong SIFT matches and for selecting an optimal subset of matches. Experimental tests show that our proposed algorithm achieves higher accuracy and faster execution for baselines of up to ≈2 meters at an object distance of ≈4.6 meters, thereby enhancing the usability and scalability of multi-camera capturing systems for large spaces.

  • A framework of multi-objective particle swarm optimization in motion segmentation problem

    Publication Year: 2012, Page(s): 93-98

    Research in motion segmentation and robust tracking has been attracting more attention recently. In video sequences, motion segmentation is considered a multi-objective problem: better representation and processing of the standard images in the video sequence, together with an efficient segmentation algorithm, are required. Thus, multi-objective optimization is an appropriate method for solving the optimization problem in motion segmentation. In this paper, we present a new video surveillance framework for optimizing motion segmentation using a multi-objective particle swarm optimization (MOPSO) algorithm. An experiment based on benchmark test functions for MOPSO and PSO is evaluated with respect to the coverage metric of the best optimization value. The results indicate that MOPSO converges well towards the Pareto front and generates a well-distributed set of non-dominated solutions. Hence, it is a promising approach to the multi-objective motion segmentation problem in video surveillance applications.

  • Face recognition using Oriented Laplacian of Gaussian (OLOG) and Independent Component Analysis (ICA)

    Publication Year: 2012, Page(s): 99-103
    Cited by: Papers (2)

    This paper addresses the problem of face recognition using Laplacian pyramids with different orientations and independent components. Edge-like information is obtained using Oriented Laplacian of Gaussian (OLOG) methods with four different orientations (0°, 45°, 90°, and 135°); preprocessing is then done using Principal Component Analysis (PCA) before the independent components are obtained. The independent components produced by the ICA algorithms are used as feature vectors for classification, and a Euclidean distance (L2) classifier is used to classify test images. The algorithm is tested on two different face image databases with variations in illumination, facial expression, and facial pose up to a 180° rotation angle.

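The Euclidean-distance (L2) classifier in the final stage is simply nearest-neighbor matching in the ICA feature space. A generic sketch with made-up feature vectors (the gallery labels and values are illustrative assumptions):

```python
import math


def l2_classify(probe, gallery):
    """Return the label of the gallery feature vector nearest to `probe`
    under the Euclidean (L2) distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(gallery, key=lambda label: dist(probe, gallery[label]))


# Hypothetical ICA feature vectors for two enrolled subjects.
gallery = {"subject_A": [0.9, 0.1, 0.0], "subject_B": [0.1, 0.8, 0.3]}
print(l2_classify([0.85, 0.15, 0.05], gallery))  # subject_A
```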
  • Hierarchical requantization of depth data for 3D visual communications

    Publication Year: 2012, Page(s): 104-109

    Depth data is recognized as important information in 3D visual communications. While the representation, coding, and transmission of depth data remain an open problem, depth data quality also depends strongly on its bit precision, i.e., how many bits are used to represent the depth signal. This paper addresses the efficient requantization of n-bit depth data to a lower m-bit representation, in order to be compliant with classical video encoder inputs. The proposed depth mapping to a lower m-bit precision is carried out through a binary space partition whose construction is based on histogram analysis. The resulting constrained optimization problem can be solved in O(2^m) time. Experimental results show that this new depth requantization strategy leads to smaller quantization error and hence better synthesized novel views with Depth Image Based Rendering (DIBR).

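One simple histogram-driven way to map n-bit depth to m-bit is equal-population binning, which concentrates precision where the depth histogram has mass. This is a sketch of the general idea only, assuming equal-population boundaries; the paper's binary-space-partition construction optimizes the boundaries rather than fixing them at quantiles.

```python
def histogram_requantize(depths, m):
    """Map depth values to 2**m levels by placing bin boundaries so that
    each bin holds roughly the same number of samples, unlike uniform
    requantization, which ignores the depth distribution."""
    levels = 2 ** m
    ordered = sorted(depths)
    n = len(ordered)
    # Boundary for bin b sits at the b/levels quantile of the samples.
    bounds = [ordered[min(n - 1, (b * n) // levels)] for b in range(1, levels)]

    def quantize(d):
        code = 0
        for t in bounds:
            if d >= t:
                code += 1
        return code

    return [quantize(d) for d in depths]


# 8 samples clustered at two depth planes, requantized to 1 bit (2 levels):
codes = histogram_requantize([10, 11, 12, 10, 200, 205, 199, 201], m=1)
print(codes)  # [0, 0, 0, 0, 1, 1, 1, 1]
```

With two well-separated depth planes, even a 1-bit code keeps the planes distinct, whereas a uniform quantizer wastes its levels on the empty range between them.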
  • Puppet modeling for real-time and interactive virtual shadow puppet play

    Publication Year: 2012, Page(s): 110-114

    Traditional shadow puppet play has been a popular local performing art and storytelling tradition in many regions of South East Asia. Currently, this traditional show is slowly becoming less popular, as it can only be performed by professional puppeteers, of whom there are not many around nowadays. Furthermore, the theater requires high maintenance costs and a long, arduous preparation for each show. Therefore, various applications have been developed to allow users to perform the shadow puppet play virtually. Most previous related works involved creating a show or storyline in off-line mode, and those that allow interactive, real-time performance provide limited interactivity: the arm movement of the shadow puppet is not as realistic and free as an original shadow puppet's movement. In this paper, we propose a method that provides more interactivity and more realistic arm movement in real time by focusing on the holder attached to the wrist of the shadow puppet, which is a key element in moving the arm of the traditional shadow puppet during a performance. The method uses the Bone and Bind tool functions of the Flash development platform, Adobe® Flash® CS4 or higher, together with a texture mapping technique and the H-Anim standard model. In a preliminary evaluation of the method, both experts and non-experts gave encouraging responses and feedback with regard to faster and better control, and more realistic and smoother arm movement, of the virtual shadow puppets.

  • MATLAB based defect detection and classification of printed circuit board

    Publication Year: 2012, Page(s): 115-119
    Cited by: Papers (2)

    A variety of ways has been established to detect defects found on printed circuit boards (PCBs). In previous studies, defects were categorized into seven groups, with a minimum of one defect and a maximum of four defects in each group. Using MATLAB image processing tools, this research separates two of the existing groups containing two defects each into four new groups containing one defect each, by processing synthetic images of bare through-hole single-layer PCBs.

  • Development of system supervision and control software for a micromanipulation system

    Publication Year: 2012, Page(s): 120-124

    This paper presents the realization of a modular software architecture capable of handling the complex supervision structure of a multi-degree-of-freedom, open-architecture, reconfigurable micro-assembly workstation. The software architecture, initially developed for a micro-assembly workstation, is later structured into a framework and design guidelines for precise motion control and system supervision tasks, explained subsequently through an application on a micro-assembly workstation. The software is separated by design into two layers, one real-time and the other non-real-time. These two layers are composed of functional modules that form the building blocks for the precise motion control and system supervision of complex mechatronic systems.

  • An efficient algorithm for human cell detection in electron microscope images based on cluster analysis and vector quantization techniques

    Publication Year: 2012, Page(s): 125-129

    Automatic detection of human cells is one of the most common investigation methods and may be used as part of a computer-aided medical decision-making system. In this paper we present an efficient algorithm, based on cluster analysis and vector quantization techniques, for human cell image detection. First, we perform edge detection to delineate the desired region of any object in the image, and then apply vector quantization to cluster the property approximations of human cells. Our proposed algorithm is applied to two sample datasets, from our research laboratory and from the Imamreza laboratory in Mashhad, which contain 196 normal electron microscope images. Experimental results show that this model is both accurate and fast, with a detection rate of around 86.69 percent. Our proposed method does not require any under-segmentation.

  • Hybrid approach for georeferencing RadarSat2 images

    Publication Year: 2012, Page(s): 130-134

    Target geolocation in radar images is becoming more crucial with the advent of high-resolution sensors and the variety of acquisition modes. Each image acquisition system produces specific geometric distortions in its raw images, and consequently the geometry of these images corresponds neither to the terrain nor to a specific map projection required by end users. The geometric distortions vary according to factors such as the acquisition system (the platform, the sensor, and other measuring instruments) and the observed surface (the choice of earth model, the effects of its rotation, and the problems inherent to the relief [8]). Nevertheless, it is possible to categorize these distortions in general terms. For RADARSAT-2, image acquisition can be performed in a wide range of viewing directions [6] and at different resolutions, which implies different geometric distortions specific to each acquisition mode. We also have to take into account the range nonlinearities caused by both the height of target regions and the side-looking acquisition mode of SAR images. We propose a georeferencing process for RADARSAT-2 images using image metadata. Three levels of treatment are performed: extracting and restructuring the orbital data from the header file; global modelling of the random distortions to achieve georeferencing; and finally rectifying the range position error caused by elevation in the slant-range plane. The process was tested on a set of image data acquired by the RADARSAT-2 satellite in both quad- and bi-polarisation, covering an area of the capital Algiers (Algeria).
