
On the performance degradation from one-bit quantized detection

Authors: Willett, P. (Univ. of Connecticut, Storrs, CT, USA); Swaszek, P.F.

It is common signal detection practice to base tests on quantized data, and frequently, as in decentralized detection, this quantization is extreme: to a single bit. The accompanying performance degradation is well understood in certain cases, such as that of an additive signal model with an efficacy measure, but more general cases have received little treatment. In this correspondence we explore the possible performance loss from two perspectives. First, we examine the Chernoff exponent and establish a nontrivial lower bound on the relative efficiency of an optimized one-bit quantized detector compared with an unquantized one. Second, we examine the finite-sample case and establish a family of nontrivial bounds: upper bounds on the probability of detection achievable by an unquantized system given a specified quantized performance, with both systems operating at the same false-alarm rate.
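A minimal Monte Carlo sketch can illustrate the kind of comparison the abstract describes (this is an illustrative example, not the paper's method; the scenario, parameter values, and detector statistics are assumptions). It pits an unquantized sample-mean detector against a one-bit sign-quantized detector for a constant signal in Gaussian noise, with both calibrated to the same false-alarm rate:

```python
import numpy as np

# Hypothetical setup (not from the paper): detect a constant signal
# theta in zero-mean, unit-variance Gaussian noise from n samples.
rng = np.random.default_rng(0)
n, theta, alpha, trials = 100, 0.3, 0.05, 20000

x0 = rng.standard_normal((trials, n))          # H0: noise only
x1 = theta + rng.standard_normal((trials, n))  # H1: signal + noise

# Unquantized detector: sample-mean statistic, threshold set
# empirically so the false-alarm rate under H0 is alpha.
t0, t1 = x0.mean(axis=1), x1.mean(axis=1)
thr = np.quantile(t0, 1 - alpha)
pd_unquantized = np.mean(t1 > thr)

# One-bit detector: each sample is quantized to its sign; the
# statistic is the count of positive samples, thresholded at the
# same empirical false-alarm rate.
s0, s1 = (x0 > 0).sum(axis=1), (x1 > 0).sum(axis=1)
thr_q = np.quantile(s0, 1 - alpha)
pd_quantized = np.mean(s1 > thr_q)

print(f"P_D unquantized: {pd_unquantized:.3f}")
print(f"P_D one-bit:     {pd_quantized:.3f}")
```

At a common false-alarm rate, the unquantized detector attains a higher detection probability; the paper's bounds quantify how large that gap can be in general, beyond what a simulation at fixed parameters shows.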

Published in:

IEEE Transactions on Information Theory (Volume: 41, Issue: 6)