Communications, IET

Issue 12 • Date August 14 2012

Displaying Results 1 - 25 of 30
  • Exploiting interest-based proximity for content recommendation in peer-to-peer networks

    Publication Year: 2012 , Page(s): 1595 - 1601

    The feasibility of content recommendation is studied over interest-aware unstructured peer-to-peer (P2P) systems in which peers sharing similar content are connected. The authors present a novel and simple general metric, obtained by extending the Sorgenfrei coefficient, to measure content similarity among peers. They provide two simple approximations of the proposed measure that can be calculated by aggregating only the pairwise Sorgenfrei similarities, relying on certain assumptions of statistical independence in the input data. The authors conduct experiments using a massive set of P2P file-sharing data to show that the new similarity measure can be a good predictor of recommendation quality in unstructured distributed systems. The feasibility of finding similar peers in a simple unstructured system is also examined by simulation. The authors conclude that in unstructured P2P networks, an efficient recommendation system can be built without relying on any centralised or structured architectural extensions.
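    The Sorgenfrei coefficient this paper extends is, in its common set form, the squared overlap normalised by both set sizes. A minimal sketch (the peer file sets below are illustrative, not from the paper):

```python
def sorgenfrei(a: set, b: set) -> float:
    """Sorgenfrei coefficient between two item sets: |A ∩ B|^2 / (|A| * |B|)."""
    if not a or not b:
        return 0.0
    inter = len(a & b)
    return inter * inter / (len(a) * len(b))

# Two peers sharing 2 of their files
p1 = {"f1", "f2", "f3", "f4"}
p2 = {"f2", "f3", "f5"}
print(sorgenfrei(p1, p2))  # 2^2 / (4 * 3) ≈ 0.333
```

    The coefficient is 1 only for identical non-empty sets, which is what makes it usable as a content-similarity score between peers.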

  • Measurements on movie distribution behaviour in peer-to-peer networks

    Publication Year: 2012 , Page(s): 1602 - 1610

    The peer-to-peer (P2P) mode dominates the way files are shared over the Internet today. A measurement study of user behaviour during P2P file sharing is important and helpful for better understanding and designing P2P networks. In this study, the authors developed a method to collect information about peers and connections in movie sharing at the BitTorrent client side. Movies were selected as the investigation object because of their immense popularity and large size among all the file types shared over P2P networks. The method proposed in this study can easily be applied to study the distribution behaviour of other types of files. Based on the collected data, the authors derived 10 observations in three categories: (i) distributions of peers and connections over global time and local time (after adjustment for time differences); (ii) distributions of peers and connections over geographic areas (at the levels of continents, countries and cities); and (iii) the influence of differences in population, gross domestic product (GDP) and lifestyle on the above distributions.

  • Cyclic entropy of collaborative complex networks

    Publication Year: 2012 , Page(s): 1611 - 1617

    Recent models of complex networks rely on the evaluation of degree-based properties; a new approach is proposed here based on another microstructure existing in networks, namely cycles (loops). Degree-based entropy measures the uncertainty in relationships, whereas cycle-based (cyclic) entropy measures the uncertainty associated with information feedback in a collaboration network, namely Wikipedia. On the basis of the values of cyclic and degree entropies measured in three different experiments on Wikipedia, the authors conclude that the citation activity level in Wikipedia is low, the specialisation level is high, and the tendency to contribute to topics with different authors is low.
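    For the degree-based entropy this abstract contrasts with cyclic entropy, a minimal sketch of the Shannon entropy of a network's degree distribution follows; the cyclic variant, computed over cycle (loop) counts rather than degrees, is not reproduced here:

```python
import math
from collections import Counter

def degree_entropy(degrees):
    """Shannon entropy (in bits) of a network's degree distribution."""
    counts = Counter(degrees)
    n = len(degrees)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Star graph on 5 nodes: one hub of degree 4, four leaves of degree 1
print(degree_entropy([4, 1, 1, 1, 1]))
```

    A regular network (all degrees equal) has zero degree entropy; heterogeneous degree sequences yield higher values.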

  • Identify content quality in online social networks

    Publication Year: 2012 , Page(s): 1618 - 1624

    The flooding of low-quality user-generated content (UGC) in online social networks (OSNs) has become a threat to web knowledge management systems. Recently, several domain-specific systems have been developed to address this problem, for example, predicting the correct answer in a QA community or recognising reliable comments in product review forums. A major drawback of most research efforts is the lack of a general framework applicable to all OSNs. In this study, the authors start by analysing the effects of distinguishing features on UGC quality in different types of OSNs. Extensive statistical analysis leads to the discovery of diverse patterns of human information-sharing activity in dissimilar OSNs. This discovery is employed as prior knowledge in the classification framework, which decomposes the original, highly imbalanced problem into several balanced sub-problems. Ensemble classifiers are adopted on samples from clusters generated by incompact features. Experiments show that the proposed framework is both effective and efficient for several OSNs. The contributions of this study are two-fold: (i) modelling posting activity in different types of OSNs; and (ii) proposing a novel classification framework to identify UGC quality.

  • Optimal data scheduling for P2P video-on-demand streaming systems

    Publication Year: 2012 , Page(s): 1625 - 1631
    Cited by:  Papers (2)

    Peer-to-peer (P2P) overlay-based streaming services have become more and more attractive. However, it is still challenging to provide scalable streaming services over a large-scale Internet environment because of the stringent quality-of-service requirements as well as the dynamic nature of the P2P overlay network. In this study, the authors focus on the optimisation of streaming data scheduling in a P2P video-on-demand (VoD) system, with the objective of minimising the server stress and maximising the playback continuity. The authors first model the data scheduling problem as a maximum network flow problem, in which scheduling is transformed into finding an optimal supplier-consumer relationship assignment among peers with minimal server stress, and then present two max-flow-based streaming data scheduling algorithms that combine the upload capacity of peers with the path capacity between peers. The authors prove that the computational complexity of the proposed scheduling algorithms is polynomial. The practicability of the proposal is evaluated via simulations. Simulation results indicate that the proposed scheduling scheme distributes the bandwidth load well among peers while keeping the node degree low. Simulation results also show that the proposal outperforms previous work in terms of server stress and playback continuity.
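    The reduction of supplier-consumer scheduling to maximum network flow described above can be illustrated with a generic Edmonds-Karp solver over a toy graph. The node names and capacities below are illustrative; this is a standard max-flow sketch, not the paper's actual scheduling algorithm:

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow; cap is a dict-of-dicts of edge capacities."""
    flow = defaultdict(lambda: defaultdict(int))
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in set(cap[u]) | set(flow[u]):
                if cap[u].get(v, 0) - flow[u][v] > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        # Collect the path, find its bottleneck, and augment
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u].get(v, 0) - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
        total += bottleneck

# Source -> suppliers (server, peer1) -> consumers -> sink;
# edge capacities stand in for upload limits and path capacities.
cap = defaultdict(dict)
cap["S"]["server"] = 2; cap["S"]["peer1"] = 3
cap["server"]["c1"] = 2; cap["peer1"]["c1"] = 1; cap["peer1"]["c2"] = 2
cap["c1"]["T"] = 3; cap["c2"]["T"] = 2
print(max_flow(cap, "S", "T"))  # 5
```

    In such a model, the value of the flow is the total streaming rate that can be assigned without exceeding any peer's upload or path capacity.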

  • Network service registration based on role-goal-process-service meta-model in a P2P network

    Publication Year: 2012 , Page(s): 1632 - 1639

    Service composition-based network software customisation is currently a research hotspot in the field of software engineering. A key problem is how to efficiently discover services distributed over the Internet. In the service-oriented architecture, service discovery suffers from the performance bottleneck of centralised universal description discovery and integration (UDDI) and from inaccurate matching of service semantics. In this study, the authors describe a novel method for service labelling, registration and discovery based on the role-goal-process-service meta-model. This approach enables accurate matching of service semantics by extending the web service description language with RGP demand information. The authors also suggest a peer-to-peer (P2P)-based service discovery architecture to address the UDDI bottleneck and the complexity of semantic computation. Using the proposed approach, an experimental prototype system has been designed and implemented in the Beijing municipal transportation system. The experimental results show that the proposed approach is effective in addressing the aforementioned problems.

  • Unveiling popularity of BitTorrent Darknets

    Publication Year: 2012 , Page(s): 1640 - 1650
    Cited by:  Papers (1)

    BitTorrent is today's most influential peer-to-peer content distribution system. Currently, BitTorrent has two very different operating models: (i) public trackers; and (ii) private trackers (a.k.a. PTs, Darknets). A PT can only be accessed by its registered users, and can provide ultra-high downloading speed because of its effective share-ratio enforcement (SRE) incentive mechanism, which stimulates users to upload as much content as possible. Although PTs are becoming more and more popular, they have received little attention in the research literature, possibly because they are operated underground. To understand the popularity of Darknets, the authors traced 17 PT sites, 2 public tracker sites and 1 BitTorrent search engine for over a year. The authors investigate these PT sites from several aspects and try to understand why they are so successful in attracting loyal users and providing high downloading speed. The authors then analyse the SRE mechanism and the ratio-free system commonly used by PTs. The results unveil the reasons for the popularity and effectiveness of PTs. These understandings are essential to the sustainable development of future BitTorrent content distribution systems.

  • Switched diversity strategies for dual-hop amplify-and-forward relaying systems

    Publication Year: 2012 , Page(s): 1651 - 1661

    This study investigates different receive single-branch switch-based diversity schemes for dual-hop amplify-and-forward relaying networks. Specifically, three receive processing algorithms are adopted, in which the receive branch is selected using the arbitrary selection algorithm, the switching algorithm, or the switching algorithm with post-examining best branch selection. The identification of the receive branch is carried out for two different system models. In the first model, a single-antenna relaying station is used in conjunction with a multiple-antenna transceiver, where the processing is performed independently of the first-hop fading conditions. The second model suggests the use of a parallel deployment of single-antenna relays to transfer information from a multiple-antenna transmitter to a single-antenna receiver, where the active relaying station is determined based on the pre-combining end-to-end fading conditions. Performance comparisons for various transmission scenarios on the first hop are presented using new formulations for the statistics of the combined signal-to-noise ratio. Simulation results are also provided to validate the mathematical development and to verify the numerical computations.
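    The switching algorithm mentioned above can be sketched generically as switch-and-examine branch selection: stay on the current branch while its SNR meets a threshold, otherwise examine the other branches in turn. This is an illustrative simplification under assumed semantics, not the paper's exact scheme:

```python
def switch_and_examine(branch_snrs, threshold, current=0):
    """Switch-and-examine combining: keep the current branch if its SNR
    meets the threshold; otherwise examine the remaining branches in order
    and stop at the first acceptable one. If no branch qualifies, remain
    on the last branch examined."""
    n = len(branch_snrs)
    if branch_snrs[current] >= threshold:
        return current
    for step in range(1, n):
        cand = (current + step) % n
        if branch_snrs[cand] >= threshold:
            return cand
    return (current + n - 1) % n  # last branch examined

print(switch_and_examine([3.0, 9.0, 5.0], threshold=6.0))  # 1
```

    The post-examining variant in the paper would additionally pick the best of the examined branches when none meets the threshold.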

  • Coordinated beamforming design using duality theory with dynamic cooperation clusters

    Publication Year: 2012 , Page(s): 1662 - 1669

    Uplink-downlink duality has emerged as an attractive approach for optimising the downlink beamforming problem with fixed cooperation clusters, where either all base stations serve all terminals or each base station serves only its own terminals. Although easily implementable for co-located base stations, the performance is still limited by out-of-cluster interference. To address these concerns, this study establishes an uplink-downlink duality for the multi-cell multi-user system with dynamic cooperation clusters, where each base station is responsible for the interference leaked to a set of terminals while serving only a subset of them with data. The multi-cell downlink problem of minimising the total transmit power subject to individual signal-to-interference-and-noise ratio (SINR) requirements under per-base-station power constraints is solved via a dual uplink problem. Conditions for beamforming optimality and the optimal downlink beamforming design are derived using Lagrange duality theory. The convergence behaviour of the proposed algorithm is shown. The percentage of power saved by the proposed algorithm is calculated subject to the user-specific SINR value achieved by zero-forcing (ZF), maximum-ratio transmission (MRT), virtual SINR (VSINR) and layered virtual SINR (LVSINR) under different cooperation scenarios, and the sum-rate performance is compared.

  • Field programmable gate arrays implementations of low complexity soft-input soft-output low-density parity-check decoders

    Publication Year: 2012 , Page(s): 1670 - 1675
    Cited by:  Papers (1)

    Low-density parity-check (LDPC) codes are very efficient error control codes that are being considered for use in many next-generation communication systems. In this study, low-complexity soft-input soft-output (SISO) field programmable gate array (FPGA) implementations of a novel logarithmic sum-product (LogSP) iterative LDPC decoder and a recently proposed simplified soft Euclidean distance (SSD) iterative LDPC decoder are presented, and their complexities and performance are compared. These implementations operate over any choice of parity check matrix (including those randomly generated or structurally generated, and either systematic or non-systematic) and can be parametrically adapted for any code rate. The proposed implementations are both of very low complexity, because they operate using only sums, subtractions, comparisons and look-up tables, which makes them particularly suitable for FPGA realisation. The SSD decoder has a lower implementation complexity than the LogSP LDPC decoder, and it also offers the advantage of not requiring knowledge of the channel signal-to-noise ratio, unlike most other LDPC decoders.

  • Performance of variable-power adaptive modulation with space-time block coding and imperfect channel state information over Rician fading channels

    Publication Year: 2012 , Page(s): 1676 - 1684

    The performance analysis of a space-time block coded multiple-input multiple-output system with variable-power (VP) adaptive modulation (AM) over Rician fading channels with imperfect channel state information is presented. The optimum switching thresholds for attaining maximum spectrum efficiency (SE) subject to a target bit error rate (BER) and an average power constraint are derived. In the derivation, a very tight BER expression for quadrature amplitude modulation (QAM) is used to develop a new power adaptation scheme that can fulfil the target BER even at low signal-to-noise ratio (SNR) and achieve higher SE than the scheme using the BER upper bound commonly adopted in the literature. The existence and uniqueness of the Lagrange multiplier used in the constrained optimisation are investigated; it is shown that the Lagrange multiplier is unique when it exists. Using the switching thresholds, the authors obtain closed-form expressions for the SE and average BER of the system. Computer simulations show that the theoretical analysis is in good agreement with the simulated results. The results show that the proposed VP-AM scheme provides better SE than its constant-power counterparts and than the VP-AM scheme using the commonly used BER bound.

  • Code-aided turbo synchronisation using irregular low-density parity check codes

    Publication Year: 2012 , Page(s): 1685 - 1691
    Cited by:  Papers (1)

    An efficient scheme for iterative carrier recovery using irregular low-density parity-check (LDPC) codes under low signal-to-noise ratio (SNR) conditions is proposed. Owing to the significant effect of decoding extrinsic information on carrier estimation algorithms, a deficiency of extrinsic information from decoding, caused by large residual frequency or phase offsets in the initial iterations, results in incorrect parameter estimation and compensation, which fails to converge toward a low BER in the decoding process. Irregular LDPC codes have inherent unequal error protection (UEP) capabilities whereby high-degree nodes tend to correct their values more quickly than low-degree nodes. The more reliable information provided by the high-degree nodes of an irregular LDPC code in the initial iterations can be used to offer more reliable soft information for carrier synchronisation even in poor channel conditions, which improves the carrier parameter estimation performance. The simulation results indicate that the more reliable information from irregular LDPC codes can cope with a large frequency and phase uncertainty region and speed up the convergence of iterative synchronisers compared with regular LDPC codes with the same parameters.

  • Symbol error probability of non-coherent M-ary frequency shift keying with postdetection selection and switched combining over Hoyt fading channel

    Publication Year: 2012 , Page(s): 1692 - 1701
    Cited by:  Papers (2)

    For mitigating the deleterious effects caused by time-varying multipath fading, most modern digital wireless systems employ some sort of diversity combining. Combining of the different diversity branches, however, may be performed either before demodulation (predetection combining) or after it (postdetection combining). It has been shown earlier that postdetection schemes outperform their predetection counterparts for Rayleigh and less-severe-than-Rayleigh (e.g. Rician) fading channels. In this study, the authors consider Hoyt fading, which characterises wireless environments experiencing more-severe-than-Rayleigh fading, and compare the error performance of postdetection combining with the results available for predetection combining. In particular, two combining variants, namely selection combining and switched combining, are investigated, and the symbol error probability of a non-coherent M-ary frequency shift keying receiver is derived for independent but not necessarily identical diversity branches. The comparison reveals that postdetection schemes perform better than predetection combiners when the average branch signal-to-noise ratio (SNR) exceeds some crossover SNR.

  • Integrating disruption-prone links into reliable networks: a transmission control protocol friendly approach

    Publication Year: 2012 , Page(s): 1702 - 1709

    Although progress has been made to satisfy mobility at the network's edge, much work remains to meet deeper mobility requirements. In many emergency or military applications, fixed network infrastructure may not be available or even possible. As mobile military data requirements grow and spectrum limitations increase, systems must better utilise the diminishing available bandwidth of wireless radio frequency and free space optical links. To achieve better utilisation, disruption-prone networks require disruption-tolerant protocols or localised buffering to mask disruptions. Transmission control protocol (TCP)/IP assumes reliable links and performs well in networks with congestion-dominated packet losses, but poorly with link-failure-dominated packet losses. Although TCP might be altered for disruptive environments, evolutionary reasons make it difficult to do so well without partitioning networks into reliable and disruption-tolerant systems. Instead, the authors examine transport-layer-aware helper protocols, with intermediate buffering in routers, to assist TCP across disruption-prone network portions. The buffering requires no TCP modifications at the communicating nodes and integrates well with existing routers (i.e. it is TCP friendly). Experimental results show that, using the buffering protocol, TCP can reliably establish and maintain connections under poor link availability; few TCP connections complete without it.

  • Generalised low-density parity-check codes with binary cyclic codes as component codes

    Publication Year: 2012 , Page(s): 1710 - 1715

    Irregular low-density parity-check (LDPC) codes outperform turbo codes for block lengths of 10^4 and above. This study introduces generalised LDPC (GLDPC) codes with binary cyclic codes as component codes, whose performance is better than that of irregular LDPC codes. The codes are found by optimising degree distributions. The authors also present simulation results, which show that the codes surpass the irregular LDPC codes.

  • Trie shifting scheme with depth adjusting for multiple virtual routers

    Publication Year: 2012 , Page(s): 1716 - 1723

    In network virtualisation, which enables multiple virtual routers to share one physical router, ensuring the scalability and performance of concurrent virtual routers is a challenging problem in virtual router design. Since the physical router has only limited memory resources, it is important to store the multiple forwarding tables of the virtual routers efficiently. Motivated by the idea of diminishing the dissimilarities between forwarding tables, the authors propose a novel trie shifting scheme with depth adjusting, finally obtaining a memory-efficient shared trie for all forwarding tables. In this scheme, by diminishing the dissimilarity in depth and in shape, memory is saved by reducing the number of trie nodes needed in the shared trie. In the simulation, the scheme needs only 10% of the trie nodes required when storing forwarding tables separately. Extensive simulation based on three sets of forwarding tables, collected from five backbone routers, shows that the scheme saves memory by reducing the number of trie nodes by between 4.7 and 7.8% compared with the latest scheme in related work. Moreover, the scheme achieves a greater improvement as the dissimilarity increases.

  • Digital compensation of cross-modulation distortion in multimode transceivers

    Publication Year: 2012 , Page(s): 1724 - 1733
    Cited by:  Papers (3)

    In a multimode transceiver, several communication standards may be active at the same time. Owing to the small size of the transceiver, the transmitter for one standard induces a large interference on the receiver for another. When this large interference passes through the inherently non-linear receiver front-end (FE), distortion products are generated. Among these products, the cross-modulation (CM) product is the most problematic, as it always has the same centre frequency as the desired signal. Increasing the FE linearity to lower the CM distortion leads to unacceptable power consumption for a handheld device. Considering the continuous increase of digital computation power governed by Moore's law, an attractive alternative approach is to compensate digitally for the CM distortion. An existing solution for compensating the CM distortion is tailored to single-mode transceivers and requires an auxiliary FE. By using the locally available transmitted interference in the multimode transceiver, the authors propose a CM compensation method that requires no additional analogue hardware. Hence, the power consumption and complexity of the multimode transceiver can be reduced significantly. The simulation results demonstrate that the proposed method can lower the distortion to a negligible amount at realistic interference levels.

  • Robust non-linear precoding for downlink multiuser multiple-input multiple-output orthogonal frequency-division multiplexing systems with limited feedback

    Publication Year: 2012 , Page(s): 1734 - 1741

    The authors consider robust Tomlinson-Harashima precoding (THP) for downlink multiuser multiple-input multiple-output orthogonal frequency-division multiplexing systems with quantised feedback. The authors discuss vector channel feedback strategies in the frequency and time domains, and develop a robust version of THP that takes into account the error statistics of the channel state information and consists of the optimal feedforward filters, feedback filters and receive filters. Feedback techniques are developed to exploit the spatial correlations in realistic 3GPP channel models by applying dimension reduction and scalar quantisation. Extensive simulation results are provided to demonstrate the performance of the proposed robust THP design as well as the channel feedback scheme.

  • New approach for evaluation of the performance of spectral amplitude coding-optical code division multiple access system on high-speed data rate

    Publication Year: 2012 , Page(s): 1742 - 1749
    Cited by:  Papers (2)

    In this study, the authors propose a new approach to evaluate the system performance of spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. The system performance is evaluated based on a new code called the dynamic cyclic shift (DCS) code. The bit-error-rate (BER) performance of the DCS code on high-speed SAC-OCDMA systems is analysed and reported in this study. The most remarkable trait of the newly developed DCS code is that the cross-correlation is variable between 1 and 0, and the phase-induced intensity noise is low. In order to evaluate the performance of the DCS code in a high-speed SAC-OCDMA system, a mathematical analysis has been extensively derived along with a simulation analysis at 10 Gbit/s, carried out using the Optisys ver. 9 simulation software from Optiwave. The results obtained for the DCS code were compared with those obtained from different coding schemes (e.g. the random diagonal code and the modified quadratic congruence code) for the same number of interfering users. It has been observed that the DCS code, at a BER of 10^-11, can support a high data rate (5 Gbit/s) over a 60 km transmission link.

  • Approach to the construction of regular low-density parity-check codes from group permutation matrices

    Publication Year: 2012 , Page(s): 1750 - 1756

    In this study, a new method for constructing low-density parity-check (LDPC) codes is presented. This construction is based on permutation matrices that come from a finite abstract group, and hence the codes constructed in this manner are called group permutation low-density parity-check (GP-LDPC) codes. A necessary and sufficient condition under which a GP-LDPC code has a cycle is given, and some properties of these codes are investigated. A class of flexible-rate GP-LDPC codes without cycles of length four is also introduced. Simulation results show that GP-LDPC codes perform very well under iterative decoding and can outperform their random-like counterparts.

  • Effective capacity of multiple antenna channels: correlation and keyhole

    Publication Year: 2012 , Page(s): 1757 - 1768
    Cited by:  Papers (1)

    In this study, the authors derive effective capacity limits for multiple antenna channels, which quantify the maximum achievable rate under a link-layer delay-bound violation probability constraint. Both correlated multiple-input single-output and multiple-input multiple-output keyhole channels are studied. Based on closed-form exact expressions for the effective capacity of both channels, the authors examine the asymptotic high and low signal-to-noise ratio regimes and derive simple expressions to gain more insight. The impact of spatial correlation on effective capacity is also characterised with the aid of a result from majorisation theory. It is revealed that antenna correlation reduces the effective capacity of the channels, and that a stringent quality-of-service requirement causes a severe reduction in the effective capacity, which can, however, be alleviated by increasing the number of antennas.

  • Exploiting primary retransmission to improve secondary throughput by cognitive relaying with best-relay selection

    Publication Year: 2012 , Page(s): 1769 - 1780
    Cited by:  Papers (2)

    In this paper, the authors propose two cognitive relaying schemes, based on overlay and underlay, which exploit the cooperation opportunities inherent in primary retransmission to improve secondary throughput. If a primary signal is not decoded by the primary receiver (PR), a secondary user (SU) can be selected to relay it invisibly along with the primary retransmission. In overlay cognitive relaying, SUs aim to reduce the primary retransmission time by relaying the primary message, so that more access opportunities become available. In underlay cognitive relaying, the SU allocates part of its power to help the primary user (PU) while the remaining power is used to transmit the secondary message simultaneously. By controlling the phase of the relay signal, the signals retransmitted from the primary transmitter (PT) and the SU can combine constructively at the PR. In both relaying schemes, best-relay selection is considered as well. Novel metrics are defined to evaluate the performance of the PU and SU: for the PU, the improvements in outage performance and the average transmitting time per packet are studied, while for the SU, the cooperation gain and cooperation efficiency are considered. Theoretical analysis and numerical results verify the validity of both schemes, and a comparison is made between them.

  • Effect of channel estimation error on performance of time reversal-UWB communication system and its compensation by pre-filter

    Publication Year: 2012 , Page(s): 1781 - 1794
    Cited by:  Papers (2)

    The time reversal (TR) technique is an effective and simple method for data transmission over extremely multipath indoor ultra-wideband (UWB) channels. The temporal focusing property of TR reduces receiver complexity by shortening the effective channel. Despite its good performance under perfect channel state information (CSI), the TR method is very sensitive to channel estimation error. This study considers the effect of channel imperfection on the TR technique. First, the bit error rate (BER) of the TR-UWB communication system with a simple matched filter (MF) receiver under channel estimation errors is derived in closed form. Then, for an optimal minimum mean square error (MMSE) estimator receiver, a pre-filter that improves the performance of the TR-UWB system under imperfect CSI is calculated in closed form. For comparison, a similar pre-filter calculation is carried out for the simple MF receiver; since no closed-form pre-filter solution could be derived in this case, a two-stage iteration-based algorithm is developed at the transmitter to compute the pre-filter for the MF receiver. This algorithm drives the channel estimation error towards zero within a few steps for the TR-UWB system with MF, using as its initial value the closed-form pre-filter calculated for the TR-UWB system with the optimal MMSE estimator. Finally, exhaustive simulations demonstrate the performance advantage attained by the proposed algorithms.
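
    The temporal focusing property that TR relies on can be sketched with the classic TR pre-filter, the time-reversed complex conjugate of the channel impulse response. This is the textbook TR baseline under perfect CSI, not the paper's error-compensating pre-filters; the channel here is a synthetic random CIR.

```python
import numpy as np

def tr_prefilter(h_est):
    """Classic time-reversal pre-filter: the time-reversed complex
    conjugate of the (estimated) channel impulse response, normalised
    to unit energy."""
    g = np.conj(h_est[::-1])
    return g / np.linalg.norm(g)

rng = np.random.default_rng(1)
# dense multipath channel impulse response (32 complex taps)
h = (rng.standard_normal(32) + 1j * rng.standard_normal(32)) / np.sqrt(2)

# effective channel seen by the receiver: pre-filter convolved with channel
eff = np.convolve(tr_prefilter(h), h)

# temporal focusing: the centre tap collects the channel's full energy,
# so a simple one-tap (matched filter) receiver captures most of it
peak = np.abs(eff[len(h) - 1])
```

    Under imperfect CSI the pre-filter is built from a noisy estimate of `h`, the focusing peak degrades, and compensation schemes such as those in the paper become necessary.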

  • Interference analysis of 3G/ad hoc integrated network

    Publication Year: 2012 , Page(s): 1795 - 1803

    A C3G-A network integrating a 3G network and an ad hoc network is proposed. In the C3G-A network, the 3G and ad hoc networks share the same frequency bands, which results in additional mutual interference. In this study, an interference model of the C3G-A network is presented; based on this model, the effects of the interference on network capacity are analysed and the corresponding capacity formulae are derived. Extensive simulation and numerical analysis show that network capacity is seriously degraded by the additional interference. To suppress these effects, an algorithm based on distance (ABD) is proposed to reduce the additional interference and maximise network capacity. The simulation results show that the ABD algorithm effectively overcomes the additional interference and approaches the network capacity achieved when the 3G and ad hoc networks operate in different frequency bands.
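
    The intuition behind a distance-based rule can be sketched with a simple power-law path-loss model. This is a generic illustration in the spirit of the ABD idea, not the paper's algorithm or interference model; the function names, the guard-radius rule, and all values are hypothetical.

```python
import numpy as np

def aggregate_interference(p_tx, distances, alpha=4.0):
    """Co-channel interference a 3G receiver sees from ad hoc nodes
    sharing the band, under a simple d^(-alpha) path-loss model."""
    d = np.asarray(distances, dtype=float)
    return float(np.sum(p_tx * d ** (-alpha)))

def interference_with_guard(p_tx, distances, guard_radius, alpha=4.0):
    """Distance-based rule: silence ad hoc transmitters inside a guard
    radius around the 3G receiver to cap the additional interference."""
    d = np.asarray(distances, dtype=float)
    return float(np.sum(p_tx * d[d >= guard_radius] ** (-alpha)))

# four ad hoc nodes at 10, 25, 50 and 200 m from the 3G receiver
i_all = aggregate_interference(1.0, [10, 25, 50, 200])
i_guarded = interference_with_guard(1.0, [10, 25, 50, 200], guard_radius=20)
```

    Because path loss grows steeply with distance, excluding only the nearest transmitters removes most of the aggregate interference while leaving distant nodes free to transmit.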

  • Analytic approximation to the largest eigenvalue distribution of a white Wishart matrix

    Publication Year: 2012 , Page(s): 1804 - 1811

    Eigenvalue distributions of Wishart matrices are given in the literature as functions or distributions defined in terms of matrix arguments that require numerical evaluation. As a result, the relationship between parameter values and statistics is not available analytically, and the complexity of the numerical evaluation involved may limit the implementation, evaluation and use of eigenvalue techniques based on Wishart matrices. This study presents analytic expressions that approximate the distribution of the largest eigenvalue of white Wishart matrices and the corresponding sample covariance matrices. It is shown that the desired expression follows from an approximation to the Tracy-Widom distribution in terms of the Gamma distribution. The approximation greatly simplifies computation and provides statistics such as the mean value and region of support of the largest eigenvalue distribution. Numerical results from the literature are compared with the approximation, and Monte Carlo simulation results are presented to illustrate the accuracy of the proposed analytic approximation.
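
    The numerical evaluation the abstract refers to can be sketched by sampling the largest eigenvalue directly. This Monte Carlo baseline is what an analytic (e.g. Gamma-based) approximation would replace; the function name and dimensions are illustrative.

```python
import numpy as np

def largest_wishart_eigenvalues(n, p, trials=2000, rng=None):
    """Monte Carlo sample of the largest eigenvalue of p x p white
    complex Wishart matrices X X^H, where X is p x n with i.i.d.
    CN(0,1) entries."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = (rng.standard_normal((trials, p, n))
         + 1j * rng.standard_normal((trials, p, n))) / np.sqrt(2)
    w = x @ np.conj(np.swapaxes(x, 1, 2))  # batch of Wishart matrices
    return np.linalg.eigvalsh(w)[:, -1]    # eigvalsh sorts ascending

samples = largest_wishart_eigenvalues(n=8, p=4)

# the bulk of the distribution sits near (sqrt(n) + sqrt(p))^2, the
# Marchenko-Pastur upper edge; an analytic fit approximates the
# Tracy-Widom fluctuations around this edge without any sampling
edge = (np.sqrt(8) + np.sqrt(4)) ** 2
```

    An analytic approximation makes quantities such as the mean and support of this distribution available in closed form, avoiding the per-parameter simulation cost shown here.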


Aims & Scope

IET Communications covers the theory and practice of systems, networks and applications involving line, mobile radio, satellite and optical technologies for telecommunications, and Internet and multimedia communications.

IET Research Journals
iet_com@theiet.org