14th International Workshop on Systems, Signals and Image Processing, 2007, and 6th EURASIP Conference focused on Speech and Image Processing, Multimedia Communications and Services

Date: 27-30 June 2007

Displaying Results 1 - 25 of 144
  • Two Fundamental Challenges in Perceptual Coding and Image Restoration

    Publication Year: 2007, Page(s): 1

    Summary form only given. Taking a human visual system (HVS) based approach, digital video image quality and perceptual coding (DVIQPC) outlines the principles, metrics and standards associated with perceptual coding, as well as the latest techniques and applications. It discusses the latest developments in vision research as they relate to HVS-based video and image coding, covering subjective and objective assessment methods, quantitative quality metrics (including vision-model-based digital video impairment metrics), and test criteria and procedures. It examines in detail the post-filtering and reconstruction issues associated with color bleeding, blocking, ringing and temporal fluctuation artifacts, along with methods to reduce or eliminate them, and also focuses on picture quality assessment criteria. It poses new challenges to vision research and to the transfer of vision science into imaging and visual communication systems engineering. It also poses an obvious theoretical and practical challenge: how to define psychovisual redundancy quantitatively, and how to establish a theoretical bound for perceptually lossless coding.
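
    Among the quantitative quality metrics the talk surveys, the simplest non-perceptual baseline is PSNR. As a point of reference only (it is not one of the HVS-based metrics discussed), a minimal NumPy sketch:

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Illustrative usage: compare an 8-bit image with a noisy copy of itself.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255)
print(f"PSNR: {psnr(img, noisy):.2f} dB")
```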

  • Error Resilient Transmission of Video over Mobile Networks

    Publication Year: 2007, Page(s): 2

    The deployment of third-generation mobile networks has enabled new real-time multimedia services such as video calling, conferencing and streaming. The real-time nature of these services excludes the possibility of end-to-end retransmissions; therefore, errors affecting the quality of the received service are inevitable. The aim of error resilience methods is to minimize the impact of errors on the end-user quality. In this talk, the effect of errors at different positions in the video stream and the possibility of their detection will be discussed. Typically, within an IP video stream, the presence of errors in one IP packet can be detected by means of a simple checksum; thus, the IP packet size determines the resolution of the error detection. In order to reduce the rate increase due to packet headers, the IP packets are rather large, and their loss results in the loss of a considerable part of a picture. Currently, erroneous IP packets are discarded at the receiver and the corresponding missing parts of the video sequence are concealed. However, the discarded IP packets may still contain correctly received information; if this information is used as well, an essential improvement in the end-user quality can be obtained. If the access network technology is known, an appropriate cross-layer design enables easier error detection and allows for further improvements of error resilience. This talk focuses on the UMTS access network. Error resilience methods can be further improved by an appropriate scheduling of the video stream; here, link-error-aware and distortion-aware concepts and their combinations will be discussed and their performance demonstrated.
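
    The "simple checksum" referred to above is the standard Internet checksum: a 16-bit one's-complement sum over the packet (RFC 1071). A minimal sketch of the computation; note that a single corrupted bit anywhere invalidates the whole packet, which is why the receiver discards it wholesale:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement checksum as used in IP/UDP/TCP headers (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

packet = b"example payload"
print(hex(internet_checksum(packet)))
```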

  • An Overview of Research in 3DTV

    Publication Year: 2007, Page(s): 3
    Cited by: Papers (3)

    3DTV is regarded by experts and the general public as the next major step in video technologies. The ghost-like images of remote persons or objects are already depicted in many futuristic movies; both entertainment applications and 3D video telephony are among the commonly imagined uses of such a technology. As with every product, there are various different technological approaches to 3DTV. 3D technologies are not new; the earliest 3DTV application was demonstrated within a few years of the invention of 2D TV. However, earlier 3D video relied on stereoscopy. Current work mostly focuses on advanced variants of stereoscopic principles such as goggle-free autostereoscopic multi-view devices. However, holographic 3DTV and its variants are the ultimate goal and will yield the envisioned high-quality ghost-like replicas of original scenes once the technological problems are solved. Stereoscopy is based on exploiting human perception: two views, taken at two slightly different angles, are guided to the left and right eyes. The two eyes, receiving the two different views of the same scene from two different angles, provide the visual signals to the brain, and the brain interprets the scene as 3D. However, there are many different 3D depth cues in perception, and usually there are contradictory signals received by the brain. Viewers experience a motion-sickness-like feeling as a consequence of such mismatches; this is the major reason that kept 3D from becoming a popular mode of visual communications. However, recent advances in end-to-end digital techniques have minimized such problems. Stereoscopic TV broadcasts have been conducted. Novel advances in stereoscopy brought viewing without goggles; however, the viewer and the monitor must have a fixed location and orientation with respect to each other for most autostereoscopic images. Multi-view autostereoscopic displays allow some horizontal parallax within a limited viewing angle. There are experiments in head-tracking autostereoscopic displays, as well as free-viewpoint video, which provide the right pair of images based on the location of the viewer. Holography is not based on human perception but targets perfect recording and reconstruction of light with all its properties. If such a reconstruction is achieved, the viewer, embedded in the same light distribution as the original, will of course see the same scene as the original. Classical holography aims at the physical duplication of light; integral imaging and classical holography may be classified under general holography. Holographic 3DTV can be achieved if the holographic recordings and the associated holographic display can be refreshed in real time. Currently, dynamic holographic capture by CCD arrays and dynamic holographic display by spatial light modulators (SLMs) have been demonstrated. However, due to the limited number of array elements and limitations regarding the pixel sizes, such holographic 3DTV displays have a very small angle of view (about 2 degrees) and are therefore far from satisfactory at present. Applications of 3D video technologies to fields like medicine, dentistry, navigation, cultural exhibits, art, science and education, in addition to the primary applications of entertainment and communications, will revolutionize the way we interact with visual data and will bring many benefits.
    A consortium of 19 European institutions, led by Bilkent University, has been focusing on all technical aspects of 3DTV since Aug 2004: 3D scene capture, representation, compression, transmission and display are the main technical building blocks. Fundamental signal processing issues associated with scalar wave propagation, diffraction and holography are also of prime interest. It is envisioned that future 3DTV systems will decouple the capture and display steps: 3D scenes will be captured by some means, like multi-camera systems, and this data will then be converted to …
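
    The stereoscopic principle described above has a simple geometric core: for a rectified camera pair with focal length f and baseline B, a point whose horizontal disparity between the two views is d lies at depth Z = fB/d. A minimal illustration (all numbers are made up):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole-stereo depth Z = f * B / d for rectified views."""
    d = np.asarray(disparity_px, dtype=float)
    return np.where(d > 0, focal_px * baseline_m / np.maximum(d, 1e-9), np.inf)

# Illustrative values: 1000 px focal length, 6.5 cm baseline (roughly eye separation).
print(depth_from_disparity([65, 13, 6.5], focal_px=1000, baseline_m=0.065))
# -> [ 1.  5. 10.]  metres: smaller disparity means a more distant point
```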

  • Robust Recognition-by-Parts Using Transduction and Boosting with Applications to Biometrics

    Publication Year: 2007, Page(s): 4

    Summary form only given. The ability to recognize objects in general, and living creatures in particular, in photographs or video clips is a critical enabling technology for a wide range of applications including health care, human-computer intelligent interaction, search engines for image retrieval and data mining, industrial and personal robotics, surveillance and security, and transportation. Despite almost 50 years of research, however, today's object recognition systems are still largely unable to handle the extraordinarily wide range of appearances assumed by common objects (including human faces) in typical images. Some of the challenges for modern pattern recognition that have to be addressed in order to advance and make practical both detection and categorization include open set recognition, occlusion and masking, change detection and time-varying imagery, lack of enough data for training, and proper performance evaluation and error analysis. Open set recognition operates under the assumption that not all the test (unknown) probes have mates in the gallery (training set); occlusion and masking hide and disguise parts of the input; image contents vary across both the spatial and temporal dimensions; the amount of data available for learning and adaptation is limited; and errors are not uniformly distributed across patterns. The recognition-by-parts approach proposed here to address these challenges is driven by transduction and boosting. Transduction employs local estimation and inference to find a compatible labeling of joined training and test data. Active learning further promotes the recognition process by making incremental choices about what is best to learn, and when, in order to accumulate the evidence needed to disambiguate among alternative interpretations. The interplay between labeled ("training") and unlabeled ("test") data points mediates between semi-supervised learning and transduction. The additional information coming from the unlabeled data points includes constraints and hints about the meaningful relations and regularities affecting their very discrimination. Boosting combines, in an iterative fashion, part-based, model-free, and non-parametric simple weak classifiers, whose contents and relative ranking are driven by their "strangeness" characteristics. The scope of the proposed approach also covers stream-based data points and includes change detection. The benefits of the proposed discriminative recognition-by-parts approach include a priori setting of rejection thresholds, no need for image segmentation, and robustness to occlusion, clutter, and disguise. Examples drawn from biometrics illustrate the proposed approach and show its feasibility and utility.
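
    The summary does not define the "strangeness" measure; a common k-NN formulation from the transduction literature scores a candidate label by the ratio of distances to same-class versus other-class neighbours. A minimal sketch under that assumption (data and k are illustrative):

```python
import numpy as np

def strangeness(x, X, y, label, k=3):
    """k-NN strangeness: sum of k nearest same-class distances divided by
    the sum of k nearest other-class distances; larger means stranger."""
    d = np.linalg.norm(X - x, axis=1)
    same = np.sort(d[y == label])[:k]
    other = np.sort(d[y != label])[:k]
    return same.sum() / max(other.sum(), 1e-12)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
probe = np.array([0.1, 0.2])
print(strangeness(probe, X, y, label=0))  # small: the label fits the point
print(strangeness(probe, X, y, label=1))  # large: the label is implausible
```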

  • Overview of Multi-view Video Coding

    Publication Year: 2007, Page(s): 5 - 12
    Cited by: Papers (13)

    With the advancement of computer graphics and computer vision technologies, realistic visual systems may become reality in the near future. A multi-view video system can provide augmented realism through a selective viewing experience. Multi-view video is a collection of multiple videos capturing the same 3D scene from different viewpoints. Since the data size of multi-view video increases proportionally with the number of cameras, it is necessary to compress multi-view video data for efficient storage and transmission. This paper provides an overview of multi-view video coding (MVC) and describes its applications, requirements, and the reference software model for MVC.
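
    The data-size argument is easy to make concrete: the raw rate grows linearly with the camera count. A back-of-the-envelope sketch with illustrative parameters:

```python
def raw_multiview_rate_mbps(cameras, width=1024, height=768, fps=30, bits_per_pixel=12):
    """Raw (uncompressed) multi-view data rate in Mbit/s, assuming 4:2:0 (12 bpp)."""
    return cameras * width * height * fps * bits_per_pixel / 1e6

for n in (1, 8, 16):
    print(f"{n:2d} cameras: {raw_multiview_rate_mbps(n):8.1f} Mbit/s")
# 1 camera ~283 Mbit/s, 16 cameras ~4530 Mbit/s: compression is unavoidable.
```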

  • Data Protection Techniques, Cryptographic Protocols and PKI Systems in Modern Computer Networks

    Publication Year: 2007, Page(s): 13 - 24

    This tutorial is devoted to an emerging topic in the domain of modern e-business systems: computer network security based on public key infrastructure (PKI) systems. We consider possible vulnerabilities of TCP/IP computer networks and possible techniques to eliminate them. We argue that only a general, multilayered security infrastructure can cope with possible attacks on computer network systems. We evaluate security mechanisms at the application, transport and network layers of the ISO/OSI reference model and give examples of today's most popular security protocols applied at each of these layers. We recommend secure computer network systems that combine security mechanisms at three different ISO/OSI reference model layers: application layer security based on strong user authentication, digital signatures, confidentiality protection, digital certificates and hardware tokens; transport layer security based on the establishment of a cryptographic tunnel between network nodes and a strong node authentication procedure; and network (IP) layer security providing bulk security mechanisms at the network level between network nodes. Strong user authentication procedures based on digital certificates and PKI systems are especially emphasized.
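
    As a concrete instance of one application-layer mechanism listed above (the digital signature), here is a minimal sketch using the Python cryptography package; a freshly generated RSA key stands in for the certificate-backed key pair a real PKI would supply:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# In a real PKI the key pair is certified by a CA; here we simply generate one.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"order #42: transfer approved"

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

# verify() raises InvalidSignature if the message or signature was altered.
private_key.public_key().verify(signature, message, pss, hashes.SHA256())
print("signature valid")
```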

  • Voice Quality Measurement in Modern Telecommunication Networks

    Publication Year: 2007, Page(s): 25 - 32

    Speech quality is one of the most visible and important aspects of quality of service (QoS) in telecommunication networks. Hence, the ability to monitor and design for this quality has become a top priority. Speech quality refers to the clearness of a speaker's voice as perceived by a listener; its measurement offers a means of adding the human end-user's perspective to traditional ways of performing network management evaluation of voice telephony services. Traditionally, measurement of users' perception of speech quality has been performed by expensive and time-consuming subjective listening tests. Over the last three decades, numerous attempts have been made to supplement subjective tests with objective measurements based on algorithms that can be computerised and automated. This paper describes the technicalities associated with speech quality measurement and presents a review of current subjective and objective speech quality evaluation methods and standards in telecommunications.

  • Analysis of Fused Ophthalmologic Image Data

    Publication Year: 2007, Page(s): 33 - 36
    Cited by: Papers (1)

    The contribution summarises the results of a long-term project concerning the processing and analysis of multimodal retinal image data, run in cooperation between the Department of Biomedical Engineering at Brno University of Technology and the Clinic of Ophthalmology at Erlangen University. From the medical application point of view, the main stimulus is the improvement of diagnostics (primarily of glaucoma, but of other diseases as well) by making the image segmentation and subsequent analysis reproducible and, as far as possible, independent of the evaluator. Concerning the methodology, different image processing approaches had to be combined and modified in order to achieve reliable, clinically applicable procedures.

  • Signal estimation from uncertain observations coming from multiple sensors

    Publication Year: 2007, Page(s): 37 - 40

    In this paper, the least-squares estimation problem of signals from uncertain observations coming from multiple sensors is addressed. Assuming that the probability distribution of the Bernoulli random variables modeling the uncertainty is not necessarily the same for all the sensors, recursive filtering and smoothing (fixed-point and fixed-interval) algorithms are proposed. The derivation of such algorithms does not require the knowledge of the signal state-space model, but only the covariance functions of the processes involved in the observation equations and the uncertainty probabilities. The application of the proposed algorithms is illustrated by a numerical simulation example wherein a signal is estimated from uncertain observations coming from two sensors with different uncertainty characteristics.
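
    The measurement model behind these algorithms (each sensor delivers the signal only with some sensor-specific probability, otherwise noise alone) can be sketched as follows; signal, noise levels and probabilities are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
z = np.sin(0.1 * np.arange(n))            # true signal
p = [0.9, 0.6]                            # per-sensor probability that z is present
obs = []
for p_i in p:
    gamma = rng.random(n) < p_i           # Bernoulli uncertainty, sensor-specific
    v = rng.normal(0, 0.2, n)             # additive observation noise
    obs.append(gamma * z + v)             # y_i(k) = gamma_i(k) z(k) + v_i(k)
obs = np.array(obs)
print(obs.shape)  # (2, 200): two sensors with different uncertainty characteristics
```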

  • New filtering algorithm using observations with one or two-step random delay

    Publication Year: 2007, Page(s): 41 - 44

    This paper discusses the least-squares linear filtering problem of discrete-time signals from observations, perturbed by additive white noise, which can be randomly delayed by one or two sampling times. It is assumed that the Bernoulli random variables modelling the delays are independent and that the delay probabilities are known. Using an innovation approach, a recursive linear filtering algorithm is obtained using only the covariance functions of the signal and the noise, and the delay probabilities. An illustrative example shows the performance of the proposed filtering estimators for different delay probabilities.
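
    A quick sketch of the delayed-observation model: each measurement is the signal delayed by zero, one or two steps, chosen at random with known probabilities (drawn directly here rather than via the paper's Bernoulli construction; all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
z = np.cos(0.05 * np.arange(n))                      # true signal
delays = rng.choice([0, 1, 2], size=n, p=[0.7, 0.2, 0.1])
k = np.arange(n)
y = z[np.maximum(k - delays, 0)] + rng.normal(0, 0.1, n)  # y(k) = z(k - d(k)) + noise
print(np.bincount(delays))  # empirical counts of 0-, 1- and 2-step delays
```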

  • Threshold Estimation for Wavelet Domain Filtering of Signal-dependent Noise

    Publication Year: 2007, Page(s): 45 - 48
    Cited by: Papers (2)

    This paper presents a method for signal denoising in the case of varying noise proportional to the local signal intensity. The signals are processed in a wavelet domain with a non-uniform threshold adjusted to the noise level.
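
    A generic sketch of the idea, soft-thresholding detail coefficients with a per-level threshold derived from a robust noise estimate, using PyWavelets; this illustrates the approach, not the paper's exact threshold rule:

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sin(8 * np.pi * t) * (1 + 4 * t)                    # growing amplitude
noisy = clean + 0.2 * np.abs(clean) * rng.normal(size=t.size)  # signal-dependent noise

coeffs = pywt.wavedec(noisy, "db4", level=5)
out = [coeffs[0]]                                  # keep the approximation untouched
for c in coeffs[1:]:
    sigma = np.median(np.abs(c)) / 0.6745          # robust per-level noise estimate
    thr = sigma * np.sqrt(2 * np.log(c.size))      # universal threshold, non-uniform
    out.append(pywt.threshold(c, thr, mode="soft"))
denoised = pywt.waverec(out, "db4")[: t.size]
# Denoising should typically reduce the error against the clean signal:
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```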

  • An Example of Adaptive Sampling and Reconstruction of Signals: Application of Chaikin's Algorithm

    Publication Year: 2007, Page(s): 49 - 52
    Cited by: Papers (1)

    In this paper we explore Chaikin's algorithm for the generation of arbitrary curves as a means of reconstructing non-uniformly sampled signals. The sampling is adapted to the signal shape, and the discrete Haar wavelet transform is used to estimate the sampling rate. The results of the reconstruction are presented graphically and compared with those obtained by reconstruction based on multiresolution.
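
    Chaikin's corner-cutting scheme itself is compact: each refinement step replaces every segment (p_i, p_{i+1}) by the two points 3/4·p_i + 1/4·p_{i+1} and 1/4·p_i + 3/4·p_{i+1}, converging to a quadratic B-spline. A minimal sketch (this plain variant cuts the endpoints too):

```python
import numpy as np

def chaikin(points: np.ndarray, iterations: int = 3) -> np.ndarray:
    """Chaikin corner cutting for an open polyline of shape (n, 2)."""
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        q = 0.75 * pts[:-1] + 0.25 * pts[1:]   # point 1/4 along each segment
        r = 0.25 * pts[:-1] + 0.75 * pts[1:]   # point 3/4 along each segment
        refined = np.empty((2 * len(q), 2))
        refined[0::2], refined[1::2] = q, r
        pts = refined
    return pts

poly = np.array([[0, 0], [1, 2], [3, 2], [4, 0]])
print(chaikin(poly, iterations=2).shape)  # (10, 2): smoothed version of the polyline
```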

  • Partial Realization of a Generalized Transfer Function

    Publication Year: 2007, Page(s): 53 - 56

    This paper addresses the partial realization of linear, discrete-time, time-invariant singular systems. An algorithm is developed for the determination of the generalized transfer function, and the necessary and sufficient conditions for the existence and uniqueness of the solution are given.

  • Extracting Self-affine (Fractal) Features from Physiologic Signals

    Publication Year: 2007, Page(s): 57 - 60

    It has been recognized that many biological systems exhibit complex behavior governed by fractal dynamical processes. To reveal such dynamics we propose a method based on principal component analysis (PCA) of the system's time series, or some other measured and ordered data sequence, embedded in a high-dimensional pseudo phase space. It is demonstrated that such a mapping, together with the projection of data vectors onto the successive subspaces, reveals scale-invariant statistics in a simple and clear fashion. To illustrate the method and to compare the results with others, such as detrended fluctuation analysis (DFA), we applied it to human heartbeat data series of different lengths and from various groups, such as healthy young subjects and subjects with congestive heart failure (PhysioBank data library). Results show that the proposed method is appropriate for the detection of fractal dynamics when analyzing limited scale intervals (dimensions) from smaller data sets.
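
    A generic sketch of the pipeline described: embed the series in a high-dimensional pseudo phase space and inspect the PCA eigenvalue spectrum for power-law (scale-invariant) behaviour. This illustrates the idea, not the paper's exact procedure:

```python
import numpy as np

def embed(x: np.ndarray, dim: int) -> np.ndarray:
    """Delay embedding: each row is a dim-long window of consecutive samples."""
    return np.lib.stride_tricks.sliding_window_view(x, dim)

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=4096))        # random walk: a simple self-affine series
V = embed(x, dim=64)
V = V - V.mean(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(V, rowvar=False))[::-1]
# For self-affine data the leading eigenvalues decay roughly as a power law;
# the slope in log-log coordinates reflects the scaling exponent.
slope = np.polyfit(np.log(np.arange(1, 11)), np.log(eigvals[:10]), 1)[0]
print(f"log-log slope of leading eigenvalues: {slope:.2f}")
```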

  • Application of Speaking Style Conversion in the Czech and Slovak TTS System with Cepstral Description

    Publication Year: 2007, Page(s): 277 - 280

    This contribution describes experiments with speaking style conversion performed on utterances produced by a TTS system with cepstral description and basic prosody generated by rules. Speaking style conversion was realized both as a post-processing operation on the output speech signal of the TTS system and as a real-time implementation directly within the TTS system. Speaking style prototypes representing three emotional states (sad, angry, and joyous) were obtained from sentences with the same information content. The problem of the differing frame lengths of the prototype and the target utterance was solved by linear time scale mapping. The results were evaluated by listening tests on the resynthesized utterances.
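
    The linear time scale mapping used to align the prototype and the target is essentially linear resampling of one per-frame contour onto the other's length; a minimal sketch (the contour values are made up):

```python
import numpy as np

def map_time_scale(prototype: np.ndarray, target_len: int) -> np.ndarray:
    """Linearly stretch or compress a per-frame contour to target_len frames."""
    src = np.linspace(0.0, 1.0, prototype.size)
    dst = np.linspace(0.0, 1.0, target_len)
    return np.interp(dst, src, prototype)

f0_prototype = np.array([120, 135, 150, 140, 110], dtype=float)  # 5 frames
print(map_time_scale(f0_prototype, target_len=8))                # mapped onto 8 frames
```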

  • MABox - Multimodal Microphone Array Algorithm Development System

    Publication Year: 2007, Page(s): 281 - 283
    Cited by: Papers (1)

    This work presents the design and realization of MABox, a multimodal microphone array algorithm development system intended for the development of new microphone array algorithms. The device incorporates a microphone array with four microphones, ADC cards and development software. The microphones are integrated in a separate directional box pointed at the speaker; the box is connected to the computer via USB and an analog line. The development software environment allows new beamforming algorithms to be tested, and since the system runs in real time, the structure of an algorithm and its parameters can be changed on the fly.
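
    As an illustration of the kind of beamforming algorithm such a system is meant to prototype, a basic delay-and-sum beamformer for a four-microphone linear array might look as follows (geometry, sample rate and integer-sample delays are simplifying assumptions):

```python
import numpy as np

def delay_and_sum(signals, mic_x, angle_deg, fs, c=343.0):
    """Delay-and-sum beamforming toward angle_deg for a linear array.

    signals: (n_mics, n_samples); mic_x: mic positions along the array axis (m).
    """
    delays = mic_x * np.sin(np.deg2rad(angle_deg)) / c        # seconds per mic
    shifts = np.round((delays - delays.min()) * fs).astype(int)
    n = signals.shape[1] - shifts.max()
    aligned = np.array([s[k:k + n] for s, k in zip(signals, shifts)])
    return aligned.mean(axis=0)                               # coherent sum

fs, n = 16000, 16000
mic_x = np.array([0.00, 0.05, 0.10, 0.15])   # four mics, 5 cm spacing
rng = np.random.default_rng(0)
signals = rng.normal(size=(4, n))            # stand-in for recorded channels
print(delay_and_sum(signals, mic_x, angle_deg=30, fs=fs).shape)
```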

  • Noise Reduction Algorithm for Robust Speech Recognition Using Minimum Statistics Method and Neural Network VAD

    Publication Year: 2007, Page(s): 284 - 287
    Cited by: Papers (1)

    In this paper we present the basic ideas of noise reduction for robust speech recognition using the minimum statistics algorithm and a VAD based on neural networks. Noise estimation is based on the minimum statistics procedure, and noise subtraction in the spectral domain is performed based on the neural network VAD output. Two different subtraction factors are used: if the VAD output indicates a noise frame, subtraction is carried out with one factor; if it indicates a speech frame, subtraction is carried out with the other. Research and tests were performed on the German part of the Aurora3 database, with performance tested according to the ETSI ES 201 108 standard. During testing, several combinations of parameters were tried and optimum values were determined.
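
    The two-factor subtraction rule can be sketched in a few lines: per-frame STFT magnitudes, a fixed noise estimate standing in for the minimum-statistics tracker, and a VAD flag that switches between an aggressive and a conservative factor (all constants illustrative; the VAD here is a stub, not the paper's neural network):

```python
import numpy as np

def spectral_subtract(mag, noise_mag, vad_is_speech, a_speech=1.0, a_noise=2.0, floor=0.02):
    """Subtract a scaled noise magnitude per frame; the factor depends on the VAD."""
    out = np.empty_like(mag)
    for t in range(mag.shape[1]):
        a = a_speech if vad_is_speech[t] else a_noise   # two subtraction factors
        out[:, t] = np.maximum(mag[:, t] - a * noise_mag, floor * noise_mag)
    return out

rng = np.random.default_rng(0)
bins, frames = 129, 50
mag = np.abs(rng.normal(1.0, 0.3, (bins, frames)))      # stand-in STFT magnitudes
noise_mag = np.full(bins, 0.3)                          # e.g. from minimum statistics
vad = rng.random(frames) > 0.5                          # stub for the neural-network VAD
print(spectral_subtract(mag, noise_mag, vad).shape)
```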

  • Influence of Features Extraction Methods in Performance of Continuous Speech Recognition for Romanian

    Publication Year: 2007, Page(s): 288 - 291

    This paper describes continuous speech recognition experiments for the Romanian language, based on statistical modelling using hidden Markov models. The experiments were made in order to select the most appropriate feature extraction method. The compared methods are cepstral and LPC analysis, in standard and perceptual versions. In our tests the cepstral coefficients perform better than the linear prediction ones in most situations, and the perceptual coefficients better than the standard ones.
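
    Both feature families compared here can be extracted with librosa in a couple of calls; a minimal sketch on a synthetic tone (parameters illustrative, and librosa is merely one convenient implementation, not the one used in the paper):

```python
import numpy as np
import librosa

sr = 16000
y = np.sin(2 * np.pi * 220 * np.arange(sr) / sr).astype(np.float32)  # 1 s test tone

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # mel-frequency cepstral coefficients
lpc = librosa.lpc(y, order=16)                      # linear-prediction coefficients
print(mfcc.shape, lpc.shape)                        # (13, n_frames) and (17,)
```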

  • Speech sampling by level-crossing and its reconstruction using spline-based filtering

    Publication Year: 2007, Page(s): 292 - 295
    Cited by: Papers (2)

    The paper discusses an original approach to speech processing. Traditionally, speech processing is organized around uniform sampling with a fixed sampling rate, followed by encoding to reduce the amount of data for transmission. The proposed approach is based on event-driven analog-to-digital conversion that produces data with low redundancy and allows the compression procedure to be omitted. The application of the level-crossing method provides samples without quantization errors in amplitude. However, since they are spaced non-equidistantly, a dedicated reconstruction algorithm is required. A recovery procedure using spline-based filtering with a time-varying bandwidth has been developed; the instantaneous maximum frequency of the signal is estimated from the local sampling density. Simulation results produced using the proposed method are presented. The developed approach can be implemented using asynchronous design techniques and is targeted at speech transmission over wireless networks.
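
    The level-crossing scheme itself is easy to sketch: fix a set of amplitude levels and emit a (time, level) pair whenever the signal crosses one; amplitudes are then exact and only the crossing times are non-uniform. An illustration in NumPy (a real design detects crossings in analog hardware):

```python
import numpy as np

def level_crossing_sample(x, t, levels):
    """Return (time, value) pairs where x crosses any of the given levels."""
    samples = []
    for lv in levels:
        s = np.sign(x - lv)
        idx = np.where(np.diff(s) != 0)[0]          # sign change means a crossing
        for i in idx:
            # linear interpolation of the crossing instant between samples i, i+1
            frac = (lv - x[i]) / (x[i + 1] - x[i])
            samples.append((t[i] + frac * (t[i + 1] - t[i]), lv))
    return sorted(samples)

t = np.linspace(0, 1, 2000)
x = np.sin(2 * np.pi * 5 * t) * np.exp(-2 * t)
pts = level_crossing_sample(x, t, levels=np.linspace(-0.8, 0.8, 9))
print(len(pts), "non-uniform samples, exact in amplitude")
```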

  • Tone discrimination in Mandarin Chinese

    Publication Year: 2007, Page(s): 296 - 299

    Intelligibility testing for Mandarin Chinese speech has, to date, consisted of an evaluative methodology derived by translating the methods of the English-language ANSI standard S3.2 diagnostic rhyme test and modified rhyme tests into Chinese. Language-specific tests for Chinese are important not only due to the rapidly increasing prevalence of Chinese speech conveyed by the world telecommunications system, but also due to several features of Chinese that differ markedly from English and many other European languages, and which are thus untested by English-language intelligibility evaluation methods. The previously published test methodologies, proposed as an international standard, have been used in the definition of speech coding systems specifically tailored to Chinese speakers, and in the evaluation of the ability of several current speech codec standards to cater for Chinese speech, and in particular Chinese lexical tones; usage is likely to grow in future. This paper presents new experimental evidence on the discrimination of Chinese tones that overturns assumptions made in the proposed Chinese intelligibility test standard, and leads to a new formulation of the intelligibility test relating to tone discrimination.

  • ASR for Romanian Language

    Publication Year: 2007, Page(s): 300 - 303

    In this paper we present the progress made in automatic speech recognition (ASR) for the Romanian language. The recognition system is based on statistical modelling with hidden Markov models (HMMs). The progress concerns enhancement of the modelling by taking the context into account in the form of triphones, improvement of speaker independence by applying gender-specific training, and enlargement of the feature categories used to describe speech sequences, derived not only from perceptual cepstral analysis but also from perceptual linear prediction.

  • Influence of the Baseband Transmission Channel Cut-off Parameter on the Transmission Quality in DVB Baseband Processing

    Publication Year: 2007, Page(s): 304 - 307

    The paper deals with the simulation of digital video transmission using a model of the baseband transmission channel. The paper introduces a baseband channel with a variable cut-off frequency and then two simulation experiments. The first applied DVB baseband processing, simulated in Matlab, to a set of test images with varying characteristics; the second used real digital video captured, compressed and processed again according to DVB. The transmitted data were protected against transmission errors by a forward error correction code. The achieved error rates and the corresponding picture quality measures are presented and compared as a function of the transmission channel cut-off.
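
    The core of such an experiment (pass a bit stream through a low-pass baseband channel with a variable cut-off and count bit errors) can be sketched with SciPy; this is a deliberately stripped-down stand-in for the paper's full DVB chain, with no FEC, simple NRZ signalling and illustrative constants:

```python
import numpy as np
from scipy import signal

def ber_for_cutoff(cutoff_rel, n_bits=20000, sps=8, seed=0):
    """BER of a unipolar NRZ stream after a Butterworth low-pass channel.

    cutoff_rel: cut-off frequency as a fraction of the Nyquist frequency.
    """
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    wave = np.repeat(bits.astype(float), sps)            # NRZ pulse shaping
    b, a = signal.butter(4, cutoff_rel)                  # baseband channel model
    rx = signal.lfilter(b, a, wave) + rng.normal(0, 0.3, wave.size)
    decided = rx[sps // 2::sps][:n_bits] > 0.5           # sample mid-bit, threshold
    return np.mean(decided != bits)

for c in (0.02, 0.05, 0.2):
    print(f"cut-off {c:4.2f} x Nyquist: BER = {ber_for_cutoff(c):.4f}")
```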

  • Image reception and control of IP-cam via digital video broadcasting

    Publication Year: 2007, Page(s): 308 - 310

    The multimedia home platform (MHP), standardized within digital video broadcasting (DVB), provides a new level of interactivity. Nowadays, TV viewers can vote, play games, shop and do many other things with DVB-MHP set-top boxes. This paper describes a universal MHP application for broadcasting, displaying and recording images or video from various remote IP cameras.

  • Using Telecommunications Middleware to Dynamically Adapt Multimedia Services

    Publication Year: 2007, Page(s): 311 - 314

    The IP multimedia subsystem (IMS), standardized by 3GPP, can be seen as a way of offering Internet services, such as Web access, electronic mail or instant messaging, through any access technology, in terms of both terminals and networks. This paper presents a proposal to solve the problem of dynamically adapting multimedia services provided in the Web context. The proposed solution is based on the real-time generation of user interfaces conditioned by the user context. It is mainly characterized by the approach used for resolving the existing dependencies among user interface variables and by the mechanism for acquiring the user context information, which uses the Parlay middleware.

  • Complementary Code Keying Implementation in the Wireless Networking

    Publication Year: 2007, Page(s): 315 - 318

    This paper deals with a Matlab-Simulink simulation of the IEEE 802.11b physical layer specification. A Matlab Simulink program was created that simulates the four data rates (1, 2, 5.5 and 11 Mbps) specified in the 802.11b standard. The program includes a Barker coding implementation for the lower data rates (1 and 2 Mbps) and CCK (complementary code keying) for the higher data rates of 5.5 and 11 Mbps. All data rates use DSSS (direct sequence spread spectrum). A basic description of the program is given, and the results are presented in graphical and numerical form as BER versus Eb/N0.
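
    For the 1 Mbps mode, the DSSS operation amounts to multiplying each data bit by an 11-chip Barker sequence and correlating against it at the receiver; a minimal sketch (one conventional form of the length-11 Barker code, values in {-1, +1}):

```python
import numpy as np

BARKER11 = np.array([1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1])  # 11-chip Barker sequence

def spread(bits):
    """Map bits {0,1} to {-1,+1} and spread each by the 11-chip Barker sequence."""
    symbols = 2 * np.asarray(bits) - 1
    return (symbols[:, None] * BARKER11).ravel()

def despread(chips):
    """Correlate each 11-chip block against the Barker code and slice the sign."""
    corr = chips.reshape(-1, 11) @ BARKER11
    return (corr > 0).astype(int)

bits = np.array([1, 0, 1, 1, 0])
rx = spread(bits) + np.random.default_rng(0).normal(0, 1.0, 55)  # noisy channel
print(despread(rx), "vs sent", bits)   # the 11x processing gain recovers the bits
```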
