
2010 5th International Symposium on I/V Communications and Mobile Network (ISVC)

Date: Sept. 30 - Oct. 2, 2010


Displaying Results 1 - 25 of 106
  • Semantic image retrieval using relevance feedback and reinforcement learning algorithm

    Page(s): 1 - 4

    Due to recent improvements in digital photography and storage capacity, storing large collections of images has become possible, and efficient means to retrieve images matching a user's query are needed. Content-Based Image Retrieval (CBIR) systems automatically extract image contents based on image features, i.e. color, texture, and shape. Relevance feedback methods are applied to CBIR to integrate the user's perception and to reduce the gap between high-level image semantics and low-level image features. Over the past 30 years, relevance feedback (RF) has been an effective query-modification approach to improving the performance of information retrieval (IR) by interactively asking a user whether a set of documents is relevant to a given query concept. This paper develops a scheme for intelligent image retrieval using machine learning techniques and the information gathered from the user's feedback, which helps the system, on subsequent rounds of the retrieval process, to better approximate the present need of the user. We show that a powerful relevance feedback mechanism can be implemented using reinforcement learning algorithms: the user does not need to explicitly specify weights for the relationships between images and concepts, because the weights are formed implicitly by the system. The proposed relevance feedback technique is described, analyzed qualitatively, and visualized in the paper, and its performance is compared with a reference method. Experimental results demonstrate that the proposed technique is promising.
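
    A minimal sketch of how such implicit weighting could look, using a simple reward-style update over feature dimensions (the update rule and all names here are illustrative assumptions, not the paper's algorithm):

    ```python
    import numpy as np

    def update_weights(weights, query_feats, image_feats, relevant, lr=0.1):
        """Reinforce feature dimensions on which a relevant result agreed with the query."""
        agreement = query_feats * image_feats            # per-dimension similarity signal
        reward = 1.0 if relevant else -1.0               # user feedback as a scalar reward
        weights = np.clip(weights + lr * reward * agreement, 0.0, None)
        return weights / max(weights.sum(), 1e-9)        # renormalize to a distribution
    ```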

  • An efficient method for 3D objects retrieval based on fixed number of 2D views approach

    Page(s): 1 - 4

    In this paper an efficient method for 3D object indexing and retrieval is presented. It is based on a set of six 2D views extracted using the projection box after scale and pose normalization of the 3D models using PCA. Each 2D view associated with the 3D object is then binarized, and its external contour is extracted to represent the associated 2D shape. Similarity among the 2D shapes is computed using our previously proposed 2D shape descriptor based on multi-scale analysis. The experimental results illustrate the efficiency of the proposed method.
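
    The scale and pose normalization step is standard enough to sketch; a minimal version for a model given as an (N, 3) vertex array might look like the following (the function name and the unit-box scaling are assumptions):

    ```python
    import numpy as np

    def normalize_pose(vertices):
        """Center a 3D vertex cloud, rotate it to its principal axes, and rescale it."""
        centered = vertices - vertices.mean(axis=0)    # translate centroid to origin
        cov = np.cov(centered, rowvar=False)           # 3x3 covariance of coordinates
        _, eigvecs = np.linalg.eigh(cov)               # principal axes, ascending order
        aligned = centered @ eigvecs[:, ::-1]          # major axis first
        return aligned / np.abs(aligned).max()         # scale into the unit box
    ```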

  • Hypergraph coarsening for image superpixelization

    Page(s): 1 - 4

    Image segmentation is a hard task, and many methods have been developed to alleviate its difficulties. A common preprocessing step designed for this purpose is to compute an over-segmentation of the image, often referred to as superpixels. In this paper, we propose a new approach to superpixel computation. In a first step, a hypergraph-based representation of the image is built. Then, a coarsening procedure is applied to the resulting hypergraph to group pixels that belong to the same homogeneous region. This leads to a smaller hypergraph in which each component represents a superpixel of the image. Our approach is very fast and can deal with large images. Its reliability has been tested on several real images of natural scenes in comparison with other methods. We show in particular that hypergraphs offer a more accurate image representation than graphs.
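
    As a toy illustration of the coarsening idea, pixels sharing a homogeneous hyperedge can be collapsed under one representative label (the data structures and the homogeneity test are assumptions; the paper's scheme is more elaborate):

    ```python
    def coarsen(hyperedges, homogeneous):
        """hyperedges: {edge_id: set(pixel_ids)}; homogeneous: predicate on a pixel set."""
        label = {}
        for pixels in hyperedges.values():
            if homogeneous(pixels):
                rep = min(pixels)                  # representative pixel = superpixel id
                for p in pixels:
                    label.setdefault(p, rep)       # first homogeneous group wins
        return label
    ```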

  • Noisy ICA-based detection method for compound system MIMO-OFDM in CDMA context

    Page(s): 1 - 4

    A noisy Independent Component Analysis (ICA) based method for detection in a compound MIMO-OFDM system in the context of CDMA is proposed. The noisy-ICA algorithm is used as a post-processor attached to a subspace-based CDMA receiver in the presence of Gaussian noise. The proposed detector architecture reduces the bias caused by the channel noise and further decreases the noise through dimensionality reduction. An uplink CDMA-based MIMO-OFDM channel is investigated, given that only the code of the desired user is known, in a blind symbol separation setting. The proposed receiver is compared with the conventional matched filter (MF), the well-known linear MMSE detector, the ZF multi-user detector, and detection based on ordinary ICA. The simulation results obtained for the AWGN channel demonstrate that the proposed scheme achieves significant bit error rate performance and appears to be suitable for CDMA-based MIMO-OFDM systems.

  • Steganographic algorithm based on error-correcting codes for gray scale images

    Page(s): 1 - 4

    R. Crandall introduced the matrix encoding idea to improve the embedding efficiency of steganography. The steganographic algorithm F5 proposed by Westfeld is the first implementation of the matrix encoding concept to reduce modification of the quantized DCT coefficients. In this paper, a new construction of protocol steganography is considered; it is an extension of the error-correcting-code and steganography construction previously introduced by the authors. The proposed method uses majority-logic decoding for embedding the message in the cover image, while the extraction function remains based on syndrome coding.
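
    For concreteness, here is the classic (1, n, k) matrix-embedding construction that F5 popularized, using Hamming syndrome coding: with k = 3, three message bits go into n = 7 cover bits with at most one bit changed. This is a generic sketch of matrix encoding, not the authors' majority-logic variant:

    ```python
    import numpy as np

    def parity_check(k, n):
        # Column j of H is the binary representation of j, for j = 1..n
        return np.array([[(j >> i) & 1 for j in range(1, n + 1)] for i in range(k)])

    def embed(cover_bits, msg_bits):
        """cover_bits: 0/1 array of length n = 2**k - 1; msg_bits: 0/1 array of length k."""
        k = len(msg_bits)
        H = parity_check(k, 2**k - 1)
        syndrome = (H @ cover_bits) % 2
        delta = syndrome ^ msg_bits
        diff = int(''.join(map(str, delta[::-1])), 2)  # index of the column equal to delta
        stego = cover_bits.copy()
        if diff:
            stego[diff - 1] ^= 1                       # flip at most one cover bit
        return stego

    def extract(stego_bits, k):
        H = parity_check(k, 2**k - 1)
        return (H @ stego_bits) % 2                    # the message is the stego syndrome
    ```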

  • Correction of ECG baseline wander: application to the Pan & Tompkins QRS detection algorithm

    Page(s): 1 - 4

    The ECG signal is pseudo-periodic, since the amplitude of every wave varies from one cycle to the next during the same recording. This amplitude variation is related to the physiological and pathological conditions of the patient. During recording, however, the ECG signal is contaminated by various kinds of noise, such as the patient's muscle contractions, respiration, 60 Hz interference, and the place of recording (ambulatory recording), which can change the positions of the electrodes that record the signal. All these factors affect and disrupt the signal, yielding a wandering baseline. In order to obtain the best extraction of the QRS complex of an ECG signal, this baseline must be corrected and made horizontal. In this paper, we use this correction to apply a fixed threshold in the Pan & Tompkins algorithm instead of an adaptive one, which reduces the processing time and complexity of the algorithm.
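
    A common baseline-removal recipe, shown as a sketch: estimate the slowly varying baseline with cascaded median filters and subtract it. The 200 ms/600 ms windows are typical values in the ECG literature, not necessarily the authors' choice:

    ```python
    import numpy as np
    from scipy.signal import medfilt

    def remove_baseline(ecg, fs):
        """ecg: 1-D signal array; fs: sampling rate in Hz."""
        w1 = int(0.2 * fs) | 1                 # ~200 ms window, forced odd
        w2 = int(0.6 * fs) | 1                 # ~600 ms window, forced odd
        baseline = medfilt(medfilt(ecg, w1), w2)
        return ecg - baseline                  # horizontal baseline: a fixed threshold works
    ```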

  • Wireless sensor networks: A new event notification approach

    Page(s): 1 - 4

    The use of mobile sinks has attracted many researchers in recent years. Indeed, when a static sink is deployed, nodes that are close to the sink drain their energy faster because, besides sending their own data, they are also responsible for forwarding data on behalf of nodes that are farther away. In this work, we propose a new event notification approach in which nodes that detect a certain event, such as flooding or an air-condition change, form a cluster and elect a clusterhead. The clusterhead is in charge of reporting the event occurrence to a mobile sink. It is worth noting that all sensors stay in "sleep" mode until an event occurs, which makes our approach highly energy-efficient.
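
    One plausible election rule, sketched below, picks the detecting node with the highest residual energy as clusterhead (this criterion is an assumption; the paper may use a different rule):

    ```python
    def elect_clusterhead(detecting_nodes):
        """detecting_nodes: list of (node_id, residual_energy) tuples."""
        cluster = sorted(detecting_nodes, key=lambda n: n[1], reverse=True)
        head = cluster[0][0]                   # highest-energy node reports to the mobile sink
        members = [n[0] for n in cluster[1:]]
        return head, members
    ```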

  • A new affine invariant representation method of 3D objects using multiple linear models

    Page(s): 1 - 3

    A new affine-invariant method for the representation and description of 3D objects is proposed using multiple linear models. The method extracts an invariant vector by applying the multiple linear parameter model to the 3D object in question; this vector is invariant to affine transformations of the object. The 3D objects concerned are transformations of 3D objects by an element of the overall transformation group. The set of transformations considered in this work is the general affine group.

  • An FPGA implementation of motion estimation algorithm for H.264/AVC

    Page(s): 1 - 4

    The H.264/AVC standard achieves much higher coding efficiency than previous video coding standards. Unfortunately, this comes at the cost of considerably increased encoder complexity, mainly due to motion estimation. Various fast algorithms have therefore been proposed to reduce computation, but they do not consider how they can be effectively implemented in hardware. In this paper, we propose a hardware architecture for a fast-search block-matching motion estimation algorithm using Line Diamond Parallel Search (LDPS) for the H.264/AVC video coding system. This architecture features pipeline processing, minimum latency, maximum throughput, and full utilization of hardware resources. The VHDL code has been tested and can run at high frequency on a Xilinx Virtex-5 FPGA.
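
    For reference, the block-matching core that search patterns such as LDPS accelerate minimizes a sum of absolute differences (SAD) over candidate displacements; below is a plain software version of full search (the paper's contribution is the VHDL architecture, not this code):

    ```python
    import numpy as np

    def best_motion_vector(cur_block, ref_frame, x, y, search=8):
        """Exhaustive SAD search around (x, y); fast patterns visit far fewer candidates."""
        N = cur_block.shape[0]
        cur = cur_block.astype(np.int32)
        best, best_sad = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy and 0 <= xx and yy + N <= ref_frame.shape[0] and xx + N <= ref_frame.shape[1]:
                    cand = ref_frame[yy:yy + N, xx:xx + N].astype(np.int32)
                    sad = np.abs(cur - cand).sum()       # sum of absolute differences
                    if sad < best_sad:
                        best_sad, best = sad, (dx, dy)
        return best, best_sad
    ```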

  • Recovery of ISI channels with Wavelet Packet Modulation using linear equalization and channel estimation

    Page(s): 1 - 4

    Multicarrier modulation has recently become the modulation technique of choice for wireless communications. Wavelet Packet Modulation (WPM) is applied as a novel alternative to Orthogonal Frequency Division Multiplexing (OFDM). In this paper, Wavelet Packet Transform based multicarrier modulation is presented with both Zero Forcing (ZF) and Minimum Mean Square Error (MMSE) equalizers, which are proposed and compared for wavelet-based multicarrier modulation. First, we evaluate the performance of classical OFDM and WPM in terms of Peak-to-Average Power Ratio (PAPR). Second, the performance of the proposed schemes is assessed for three types of wireless channels in the presence of Additive White Gaussian Noise (AWGN). Simulation results illustrate a notable enhancement in Bit Error Rate (BER) for MMSE over ZF equalization across the different channel types.
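
    Per subcarrier, the two equalizers reduce to the standard one-tap forms sketched below (generic textbook expressions, not the paper's code; H is the subcarrier channel response and sigma2 the noise-to-signal power ratio):

    ```python
    import numpy as np

    def zf_equalize(Y, H):
        return Y / H                                    # inverts the channel; amplifies noise in fades

    def mmse_equalize(Y, H, sigma2):
        W = np.conj(H) / (np.abs(H)**2 + sigma2)        # trades inversion against noise enhancement
        return W * Y
    ```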

  • New approach for oesophageal speech enhancement

    Page(s): 1 - 4

    This paper presents an oesophageal speech enhancement algorithm. This special type of voice results from the laryngectomy undergone by persons with larynx cancer, and it has extremely low intelligibility. This work proposes a method to improve its quality, which consists of stabilizing the poles of the vocal tract model's transfer function so as to improve the signal's formants. To this end, three techniques are used jointly: the Digital Wavelet Transform, Kalman filtering, and an algorithm that transforms the modulus and phase of the vocal tract's poles. In the Kalman filtering procedure, the speech signal is modelled as an autoregressive (AR) process and represented in the state-space domain. The final speech improvement has been measured with the Multi-Dimensional Voice Program (MDVP) tools.
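
    The AR state-space Kalman filter underlying the enhancement step can be sketched as follows; this is the generic formulation, with assumed process and measurement noise variances q and r (the paper adds wavelet and pole-transformation stages on top):

    ```python
    import numpy as np

    def kalman_ar(y, ar_coefs, q, r):
        """Filter noisy speech y given AR(p) coefficients ar_coefs."""
        p = len(ar_coefs)
        F = np.vstack([ar_coefs, np.eye(p - 1, p)])     # companion transition matrix
        Hm = np.zeros((1, p)); Hm[0, 0] = 1.0           # we observe the newest sample
        x, P = np.zeros(p), np.eye(p)
        out = []
        for yk in y:
            x, P = F @ x, F @ P @ F.T + q * np.eye(p)   # predict
            S = Hm @ P @ Hm.T + r                       # innovation variance
            K = (P @ Hm.T) / S                          # Kalman gain
            x = x + (K * (yk - Hm @ x)).ravel()         # update state estimate
            P = (np.eye(p) - K @ Hm) @ P
            out.append(x[0])                            # filtered sample
        return np.array(out)
    ```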

  • Statistical block-based skin detection

    Page(s): 1 - 4

    In this paper, we propose a novel approach to modeling human skin color. The underlying approach assigns to each pixel location a block-average value computed from its surrounding points. The generated skin data are found to follow a generalized Gaussian distribution (GGD) in the H color component and a mixture of GGDs in the S component. Next, the model parameters are estimated using the maximum-likelihood (ML) criterion applied to a set of training skin samples. Each pixel is then classified as skin or non-skin according to whether its joint likelihood ratio is above a threshold. Preliminary experimental results show that our model avoids excessive false detection while retaining a high rate of correct detection.
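
    For illustration, the GGD likelihood-ratio test might look as below, assuming parameters already estimated by ML; the parameter layout and the single-GGD treatment of both channels are simplifying assumptions (the paper uses a GGD mixture for S):

    ```python
    import numpy as np
    from scipy.special import gamma

    def ggd_pdf(x, mu, alpha, beta):
        """Generalized Gaussian density; alpha: scale, beta: shape (beta=2 is Gaussian)."""
        c = beta / (2 * alpha * gamma(1.0 / beta))
        return c * np.exp(-(np.abs(x - mu) / alpha) ** beta)

    def is_skin(h, s, skin_params, nonskin_params, threshold=1.0):
        """skin_params/nonskin_params: {'h': (mu, alpha, beta), 's': (mu, alpha, beta)}."""
        p_skin = ggd_pdf(h, *skin_params['h']) * ggd_pdf(s, *skin_params['s'])
        p_non = ggd_pdf(h, *nonskin_params['h']) * ggd_pdf(s, *nonskin_params['s'])
        return p_skin / p_non > threshold       # joint likelihood ratio against a threshold
    ```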

  • Colposcopic image registration using opponentSIFT descriptor

    Page(s): 1 - 4

    This work presents a colposcopic image registration system designed to assist physicians in cervical cancer diagnosis. The goal is to register the cervical tissue throughout the whole temporal sequence. Recent digital image processing works suggest using feature points to compute the tissue displacement. These methods achieve good results because they are fast and do not need any segmentation, but to date, all methods based on feature points are sensitive to the illumination changes and reflections that are frequent in colposcopic images. To solve this problem, we propose to apply the opponentSIFT descriptor, which describes feature points in the opponent color space. Experimental results show the robustness of this descriptor for colposcopic image registration in comparison with other descriptors.
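
    The opponent color transform on which opponentSIFT is built is standard (van de Sande et al.); SIFT descriptors are then computed on each of the three channels:

    ```python
    import numpy as np

    def to_opponent(rgb):
        """rgb: float array of shape (..., 3); returns the three opponent channels."""
        R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        O1 = (R - G) / np.sqrt(2)               # red-green opponent channel
        O2 = (R + G - 2 * B) / np.sqrt(6)       # yellow-blue opponent channel
        O3 = (R + G + B) / np.sqrt(3)           # intensity channel
        return np.stack([O1, O2, O3], axis=-1)
    ```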

  • Solving the 3D watershed over-segmentation problem using the generic adjacency graph

    Page(s): 1 - 4

    The watershed transformation is a useful tool for 3D segmentation. However, over-segmentation is the key problem of the conventional algorithm. This paper presents two new methods for solving this problem. The first method establishes a generic adjacency graph of the regions resulting from watershed segmentation and merges these regions according to a depth criterion. The second method, which works as a pre-treatment, uses the generic adjacency graph of minima to eliminate insignificant ones; a hybrid criterion of depth and concavity/convexity is applied to obtain the significant minima, which are subsequently passed to the watershed segmentation for 3D object partitioning. The results show the effectiveness of the proposed approach: the use of the adjacency graph reduces processing time, and our methods therefore yield a fast and efficient segmentation of 3D mesh models.
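
    The first method's merging step can be sketched with union-find over the region adjacency graph, merging neighbors whose separating basin depth falls below a threshold (an illustrative simplification of the paper's generic adjacency graph):

    ```python
    def merge_shallow_regions(edges, threshold):
        """edges: list of (region_a, region_b, depth) from the adjacency graph."""
        parent = {}
        def find(r):
            parent.setdefault(r, r)
            while parent[r] != r:
                parent[r] = parent[parent[r]]   # path halving
                r = parent[r]
            return r
        for a, b, depth in sorted(edges, key=lambda e: e[2]):
            find(a); find(b)                    # register both regions
            if depth < threshold:
                parent[find(a)] = find(b)       # merge the two basins
        return {r: find(r) for r in parent}     # region -> merged superregion label
    ```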

  • An area reduced design of the Context-Adaptive Variable-Length encoder suitable for embedded systems

    Page(s): 1 - 4

    In this paper a new Context-Adaptive Variable Length Coding (CAVLC) encoder architecture is proposed, aimed at implementation in embedded systems and field-programmable logic. The design proposes novel Arithmetic Table Elimination (ATE) techniques, along with a table compression technique applied to those tables that cannot be eliminated by arithmetic manipulations. These approaches halve the total number of tables required by the CAVLC algorithm and bring an overall memory saving of about 87% with respect to an unoptimized implementation of the tables. The computational performance of the encoder has been improved by increasing the degree of parallelism through priority cascading logic. With the proposed approaches, the CAVLC encoder is capable of real-time compression of 1080p HDTV video streams, coded in YCbCr 4:2:0, when implemented on a low-end Xilinx Spartan-3 FPGA, where it achieves an operating frequency of 63 MHz and occupies 2200 LUTs.

  • Throughput-delay optimisation with adaptive method in wireless ad hoc network

    Page(s): 1 - 4

    A mobile ad-hoc network (MANET) is a collection of intercommunicating mobile hosts forming a spontaneous network without using existing infrastructure. The mobility model represents the moving behavior of each mobile node in the MANET and should be realistic; it is a crucial part of MANET performance evaluation. In this paper, we study the effects of various random mobility models on the performance of Ad hoc On-Demand Distance Vector (AODV) routing and develop an adaptive method that gives better performance in terms of delay and throughput. For experimental purposes, we consider three mobility models: Random Waypoint, Random Direction, and Mobgen Steady-State. Experimental results illustrate that the performance of the routing protocol varies across mobility models and node densities.

  • Single mixture audio sources separation using ISA technique in EMD domain

    Page(s): 1 - 4

    This paper introduces a novel technique developed to separate audio sources from a single mixture. Audio signals, and musical signals in particular, can be well approximated by a sum of damped sinusoidal (modal) components. Based on this representation, Empirical Mode Decomposition (EMD) is employed to extract Intrinsic Mode Functions (IMFs) from the audio mixture signal. By applying Principal Component Analysis (PCA) to the extracted components, we obtain uncorrelated components that serve as artificial observations. We then obtain independent components by applying Independent Component Analysis (ICA) to the uncorrelated components. Finally, a k-means clustering algorithm groups the independent basis vectors into the number of component sources inside the mixture.
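
    The described chain maps naturally onto off-the-shelf components; here is a sketch using PyEMD and scikit-learn as stand-ins (these library choices are assumptions, not the authors' implementation):

    ```python
    import numpy as np
    from PyEMD import EMD
    from sklearn.decomposition import PCA, FastICA
    from sklearn.cluster import KMeans

    def separate(mixture, n_sources):
        """mixture: 1-D audio signal; returns n_sources estimated source signals."""
        imfs = EMD().emd(mixture)                         # (n_imfs, T) intrinsic mode functions
        uncorr = PCA(whiten=True).fit_transform(imfs.T)   # uncorrelated "observations"
        indep = FastICA(max_iter=1000).fit_transform(uncorr)
        labels = KMeans(n_clusters=n_sources, n_init=10).fit_predict(indep.T)
        return [indep[:, labels == k].sum(axis=1) for k in range(n_sources)]
    ```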

  • A SysML profile for wireless sensor networks modeling

    Page(s): 1 - 4

    Today, wireless sensor network designers do not fully benefit from the power of the widely recognized UML (and its profiles, including SysML) when it comes to modeling all aspects of a wireless sensor network. The modeling process is usually restricted to the software and therefore only partially supports the wireless sensor network's design. Among the UML profile family, there is a language that supports the modeling of many aspects of a system, including software and hardware: SysML (Systems Modeling Language). In this paper, we propose a SysML profile for wireless sensor networks that supports the modeling of such networks. This profile provides support for modeling structure, behavior, requirements, and energy consumption information, allowing designers to model the different aspects of a wireless sensor network.

  • Study of LEO satellite constellation systems based on quantum communications networks

    Page(s): 1 - 4

    Quantum cryptography, or more specifically quantum key distribution (QKD), is the first offspring of quantum information to have reached the stage of real-world application. Its security is based on the fact that a possible spy (Eve, the eavesdropper) cannot obtain information about the bits that Alice sends to Bob without introducing perturbations; the authorized partners can therefore detect the spy by estimating the amount of error in their lists. The central objective of this paper is to implement and improve practical systems for quantum cryptography. The essential work carried out in our research laboratory concerns software development for implementing a QKD network based on a number of LEO-orbit satellites, in order to reduce the risk of telecommunication interruption and thereby provide better communication quality, together with investigations into the causes of losses in the system and attempts to minimize the quantum bit error rate (QBER).
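
    The QBER the authors try to minimize has a standard definition: the fraction of disagreeing bits in a compared sample of the sifted key. A minimal sketch:

    ```python
    def qber(alice_sample, bob_sample):
        """Estimate QBER from a publicly compared sample of the sifted key."""
        errors = sum(a != b for a, b in zip(alice_sample, bob_sample))
        return errors / len(alice_sample)     # eavesdropping and channel losses raise this ratio
    ```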

  • A nonlinear adaptive filtering scheme based on a non-iterative orthogonalization procedure

    Page(s): 1 - 4

    With attention to the adequacy between algorithm and architecture for polynomial filtering with good numerical stability and convergence speed, we propose in this paper an adaptive filtering scheme based on a non-iterative polynomial orthogonalization procedure that enhances the performance of a real-time adaptive system. We explore the benefits of generalizing the closed-form expression, derived for the identification of optimal nonlinear RF power amplifiers, to the adaptive identification of audio transmission systems.

  • Accessible schematics content descriptors using image processing techniques for blind students learning

    Page(s): 1 - 4

    The documentation used when teaching electronics in engineering degrees is not accessible to blind or visually disabled students, because they can only obtain a text-to-speech conversion of the paragraphs, not a circuit-schematic-to-speech conversion. This work presents an open-source algorithm integrated into a tool compatible with Open Office. The algorithm applies digital image processing and computer vision techniques to any schematic circuit included in the document, providing an intelligent and automatic textual description of both the sequence of electronic components and their positions in the general schematic, in order to make it accessible to blind people. Lecturers and teachers will be able to create teaching materials with common applications such as PSpice or Oregano, and then easily convert the charts into accessible-content figures. The screen reader lets students navigate around the graph in order to access the information of a particular element in it. Assessment has been carried out with real trials and a satisfaction survey.

  • A reduced reference approach based on bidimensional empirical mode decomposition for image quality assessment

    Page(s): 1 - 4

    In this paper, we present a reduced-reference method for image quality assessment (RRIQA). The method is based on statistical properties of the bidimensional empirical mode decomposition (BEMD) components, called intrinsic mode functions (IMFs). First we decompose both the reference and distorted images into IMFs; then we compute the distance between the marginal probability distributions of each IMF's coefficients in the reference image and of the corresponding IMF in the distorted image, using the Kullback-Leibler divergence (KLD) as a measure of image distortion. To summarize the histogram of each IMF's coefficients in the reference image, we use the generalized Gaussian density (GGD) model, so that only a vector of a few features is needed for the image quality assessment (IQA) task. This method is simple to implement, and it outperforms some existing results, especially for blur and white-noise distortions.
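
    Between two zero-mean GGDs the KLD has a closed form (Do & Vetterli, 2002), which is what makes the per-IMF comparison cheap once the GGD parameters are known; a direct transcription:

    ```python
    import math

    def kld_ggd(a1, b1, a2, b2):
        """Closed-form KLD between GGD(alpha1, beta1) and GGD(alpha2, beta2)."""
        g = math.gamma
        return (math.log((b1 * a2 * g(1 / b2)) / (b2 * a1 * g(1 / b1)))
                + (a1 / a2) ** b2 * g((b2 + 1) / b1) / g(1 / b1)
                - 1 / b1)
    ```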

  • Experiments on acoustic model supervised adaptation and evaluation by K-Fold Cross Validation technique

    Page(s): 1 - 4

    This paper analyzes adaptation techniques for French acoustic models (hidden Markov models). The LVCSR engine Julius, the Hidden Markov Model Toolkit (HTK), and the K-Fold Cross Validation (CV) technique are used together to build three different adaptation methods: Maximum Likelihood a priori (ML), Maximum Likelihood Linear Regression (MLLR), and Maximum a posteriori (MAP). Experimental results, in terms of word and phoneme error rates, indicate that the best adaptation method depends on the adaptation data, and that acoustic model performance can be improved by using phoneme-level alignments and K-Fold CV. The well-known K-Fold CV technique points to the best adaptation technique to follow for each type of data.
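
    For reference, plain K-fold splitting as used to rotate adaptation and evaluation sets (a generic implementation; the HTK/Julius specifics are omitted):

    ```python
    def k_fold(items, k):
        """Yield (train, test) partitions of a list of utterances over k folds."""
        folds = [items[i::k] for i in range(k)]
        for i in range(k):
            test = folds[i]
            train = [x for j, f in enumerate(folds) if j != i for x in f]
            yield train, test
    ```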

  • A macroblock-based perceptually adaptive bit allocation for H264 rate control

    Page(s): 1 - 4

    Statistical methodologies are the main tools used in video compression, and this has led to a kind of stagnation in terms of performance. Solutions for increasing the visual performance of video compression therefore have to come from other fields, such as perception. One can notice that coding errors in highly textured areas are relatively less perceptible than errors in untextured ones because of the masking effect. The existing H.264/AVC bit allocation scheme does not take this phenomenon into account. To handle this problem, we propose an adaptive bit allocation based on spatial and temporal perceptual features, performed by determining a spatiotemporal importance factor that adjusts the number of allocated bits. The proposed spatial feature classifies regions into three categories: flat, textured, and edged. The proposed temporal feature assumes that the Human Visual System (HVS) is more sensitive to moving regions than to static ones. Thus, a small amount of bits is assigned to static textured regions, and a large one to moving regions that are spatially edged or flat. Experimental results show that the proposed macroblock (MB) level bit allocation, compared with the H.264/AVC reference software, improves the average peak signal-to-noise ratio (PSNR) by up to +1.10 dB and preserves details in the most perceptually prominent regions at low bitrates.
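
    A toy version of such an importance factor, scaling a macroblock's bit budget by its spatial class and motion state (the weights are invented for illustration; the paper derives its own factor):

    ```python
    def mb_bit_budget(base_bits, region_type, is_moving):
        """region_type in {'flat', 'textured', 'edged'}; weights are illustrative only."""
        spatial = {'flat': 1.2, 'textured': 0.7, 'edged': 1.3}[region_type]  # masking effect
        temporal = 1.3 if is_moving else 0.8        # HVS favors moving regions
        return int(base_bits * spatial * temporal)  # static textured MBs get the fewest bits
    ```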

  • Influence of distortions of key frames on video transfer in wireless networks

    Page(s): 1 - 4

    In this paper, it is shown that to substantially increase the quality of video delivery in wireless networks, two important enhancements to existing communication schemes are necessary: (i) the video player on the receiver side should selectively discard duplicated RTP packets, and (ii) the streaming video server should duplicate the packets containing key frame information. Coefficients of the mathematical model used to assess video quality have been found for WiFi and 3G standards-compliant wireless networks employing MPEG-2 and MPEG-4 (DivX) codecs. We also present a novel experimental technique that enabled us to collect and process the quantitative datasets used in our modeling study.
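
    Enhancement (i) amounts to tracking RTP sequence numbers in a sliding window; a minimal sketch (real players must also handle 16-bit sequence-number wraparound, omitted here):

    ```python
    from collections import deque

    class DuplicateFilter:
        def __init__(self, window=1024):
            self.seen = set()
            self.order = deque(maxlen=window)

        def accept(self, seq):
            """Return True for a fresh RTP sequence number, False for a duplicate."""
            if seq in self.seen:
                return False                          # duplicate: drop silently
            if len(self.order) == self.order.maxlen:
                self.seen.discard(self.order[0])      # evict the oldest tracked number
            self.seen.add(seq)
            self.order.append(seq)
            return True
    ```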
