Information Theory, IRE Transactions on
This Transactions ceased production in 1962. The current retitled publication is IEEE Transactions on Information Theory.
Latest Published Articles

Capacity of a certain asymmetrical binary channel with finite memory
Feb 18, 2014
The utility of a communication channel and applications to suboptimal information handling procedures
Feb 18, 2014
Multiple error correction by means of parity checks
Feb 18, 2014
Prediction and filtering for random parameter systems
Feb 18, 2014
On signal parameter estimation
Feb 18, 2014
Popular Articles (October 2014)
Includes the top 50 most frequently downloaded documents for this publication according to the most recent monthly usage statistics.
1. Visual pattern recognition by moment invariants
Page(s): 179-187. In this paper a theory of two-dimensional moment invariants for planar geometric figures is presented. A fundamental theorem is established to relate such moment invariants to the well-known algebraic invariants. Complete systems of moment invariants under translation, similitude and orthogonal transformations are derived. Some moment invariants under general two-dimensional linear transformations are also included. Both theoretical formulation and practical models of visual pattern recognition based upon these moment invariants are discussed. A simple simulation program together with its performance is also presented. It is shown that recognition of geometrical patterns and alphabetical characters independently of position, size and orientation can be accomplished. It is also indicated that generalization is possible to include invariance with parallel projection.
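The position and orientation invariance claimed for the second-order moments can be checked in a small sketch (not from the paper; the point set and angle below are arbitrary): the quantity mu20 + mu02, the sum of squared distances to the centroid, is unchanged by translating and rotating the figure.

```python
import math

def central_second_moment_sum(pts):
    """mu20 + mu02 for a point set: the sum of squared distances to the
    centroid, which is invariant under translation and rotation."""
    n = len(pts)
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in pts)

# an arbitrary "figure", then the same figure rotated 37 deg and translated
pts = [(0.0, 0.0), (2.0, 0.5), (1.0, 3.0), (-1.0, 1.5)]
a = math.radians(37.0)
moved = [(x * math.cos(a) - y * math.sin(a) + 5.0,
          x * math.sin(a) + y * math.cos(a) - 2.0) for x, y in pts]

assert abs(central_second_moment_sum(pts) - central_second_moment_sum(moved)) < 1e-9
```

Full scale invariance additionally requires the normalization by powers of mu00 described in the paper.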

2. Low-density parity-check codes
Page(s): 21-28. A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number j ≥ 3 of 1's and each row contains a small fixed number k > j of 1's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed j. When used with maximum likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed j. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For j > 3 and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound.
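The (j, k)-regular structure in the definition can be sketched in a few lines, in the spirit of Gallager's stacking-of-permutations construction (the block sizes below are illustrative, not from the paper): a base block whose rows partition the columns is stacked j times under random column permutations, so every column gets weight j and every row weight k.

```python
import random

def regular_h(n_blocks, j, k, seed=1):
    """Build a (j,k)-regular parity-check matrix: one block of n_blocks
    rows whose size-k groups partition the n = n_blocks*k columns,
    stacked j times under random column permutations."""
    rng = random.Random(seed)
    n = n_blocks * k
    base = [[1 if c // k == r else 0 for c in range(n)] for r in range(n_blocks)]
    rows = []
    for _ in range(j):
        perm = list(range(n))
        rng.shuffle(perm)
        rows += [[row[perm[c]] for c in range(n)] for row in base]
    return rows

H = regular_h(n_blocks=4, j=3, k=5)                 # a 12 x 20 matrix
assert all(sum(row) == 5 for row in H)              # every row has k ones
assert all(sum(H[r][c] for r in range(len(H))) == 3
           for c in range(20))                      # every column has j ones
```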
3. The zero error capacity of a noisy channel
Page(s): 8-19. The zero error capacity C_0 of a noisy channel is defined as the least upper bound of rates at which it is possible to transmit information with zero probability of error. Various properties of C_0 are studied; upper and lower bounds and methods of evaluation of C_0 are given. Inequalities are obtained for the C_0 relating to the "sum" and "product" of two given channels. The analogous problem of zero error capacity C_0F for a channel with a feedback link is considered. It is shown that while the ordinary capacity of a memoryless channel with feedback is equal to that of the same channel without feedback, the zero error capacity may be greater. A solution is given to the problem of evaluating C_0F.
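Shannon's pentagon channel gives a concrete feel for why C_0 is subtle (a sketch, not from the paper; the two-letter code is the classical example from it): one channel use admits at most 2 mutually non-confusable letters, yet two uses admit 5 non-confusable pairs, so C_0 ≥ (1/2) log2 5 > log2 2.

```python
from itertools import combinations

def confusable(a, b):
    """In the pentagon channel, inputs a and b (mod 5) can produce the
    same output iff they are equal or adjacent on the 5-cycle."""
    return (a - b) % 5 in (0, 1, 4)

# One use: brute force shows the largest pairwise non-confusable set has size 2.
best1 = max(len(s) for r in range(1, 6) for s in combinations(range(5), r)
            if all(not confusable(a, b) for a, b in combinations(s, 2)))
assert best1 == 2

# Two uses: Shannon's code {(i, 2i mod 5)} has 5 words, pairwise non-confusable.
code = [(i, (2 * i) % 5) for i in range(5)]
for u, v in combinations(code, 2):
    assert not (confusable(u[0], v[0]) and confusable(u[1], v[1]))
```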
4. Quantizing for minimum distortion
Page(s): 7-12. This paper discusses the problem of the minimization of the distortion of a signal by a quantizer when the number of output levels of the quantizer is fixed. The distortion is defined as the expected value of some function of the error between the input and the output of the quantizer. Equations are derived for the parameters of a quantizer with minimum distortion. The equations are not soluble without recourse to numerical methods, so an algorithm is developed to simplify their numerical solution. The case of an input signal with normally distributed amplitude and an expected squared error distortion measure is explicitly computed and values of the optimum quantizer parameters are tabulated. The optimization of a quantizer subject to the restriction that both input and output levels be equally spaced is also treated, and appropriate parameters are tabulated for the same case as above.
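The two optimality conditions (boundaries at midpoints of levels, levels at the conditional mean of their region) can be sketched for the simplest Gaussian case (my sample-based iteration, not the paper's numerical algorithm): the optimal two-level quantizer for a unit Gaussian places its levels at ±sqrt(2/π) ≈ ±0.798.

```python
import math
import random

rng = random.Random(0)
samples = [rng.gauss(0.0, 1.0) for _ in range(100_000)]

# Iterate the two necessary conditions for minimum mean-squared distortion.
levels = [-1.5, 1.5]                       # arbitrary starting guess
for _ in range(20):
    t = 0.5 * (levels[0] + levels[1])      # boundary: midpoint of the levels
    lo = [x for x in samples if x <= t]
    hi = [x for x in samples if x > t]
    levels = [sum(lo) / len(lo), sum(hi) / len(hi)]  # levels: region centroids

# Known optimum for the unit Gaussian: +/- sqrt(2/pi)
assert abs(levels[1] - math.sqrt(2 / math.pi)) < 0.02
assert abs(levels[0] + math.sqrt(2 / math.pi)) < 0.02
```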

5. A statistical theory of target detection by pulsed radar
Page(s): 59-267. This report presents data from which one may obtain the probability that a pulsed-type radar system will detect a given target at any range. This is in contrast to the usual method of obtaining radar range as a single number, which can be taken mathematically to imply that the probability of detection is zero at any range greater than this number, and certainty at any range less than this number. Three variables, which have so far received little quantitative attention in the subject of radar range, are introduced in the theory: 1. The time taken to detect the target. 2. The average time interval between false alarms (false indications of targets). 3. The number of pulses integrated. It is shown briefly how the results for pulsed-type systems may be applied in the case of continuous-wave systems. Those concerned with systems analysis problems including radar performance may profitably use this work as one link in a chain involving several probabilities. For instance, the probability that a given aircraft will be detected at least once while flying any given path through a specified model radar network may be calculated using the data presented here as a basis, provided that additional probability data on such things as outage time, etc., are available. The theory developed here does not take account of interference such as clutter or man-made static, but assumes only random noise at the receiver input. Also, an ideal type of electronic integrator and detector are assumed. Thus the results are the best that can be obtained under ideal conditions. It is not too difficult, however, to make reasonable assumptions which will permit application of the results to the currently available types of radar. The first part of this report is a restatement of known radar fundamentals and supplies continuity and background for what follows. The mathematical part of the theory is not contained herein, but will be issued subsequently as a separate report (2a).

6. Three models for the description of language
Page(s): 113-124. We investigate several conceptions of linguistic structure to determine whether or not they can provide simple and "revealing" grammars that generate all of the sentences of English and only these. We find that no finite-state Markov process that produces symbols with transition from state to state can serve as an English grammar. Furthermore, the particular subclass of such processes that produce n-order statistical approximations to English do not come closer, with increasing n, to matching the output of an English grammar. We formalize the notions of "phrase structure" and show that this gives us a method for describing language which is essentially more powerful, though still representable as a rather elementary type of finite-state process. Nevertheless, it is successful only when limited to a small subset of simple sentences. We study the formal properties of a set of grammatical transformations that carry sentences with phrase structure into new sentences with derived phrase structure, showing that transformational grammars are processes of the same elementary type as phrase-structure grammars; that the grammar of English is materially simplified if phrase structure description is limited to a kernel of simple sentences from which all other sentences are constructed by repeated transformations; and that this view of linguistic structure gives a certain insight into the use and understanding of language.
7. An introduction to matched filters
Page(s): 311-329. In a tutorial exposition, the following topics are discussed: definition of a matched filter; where matched filters arise; properties of matched filters; matched-filter synthesis and signal specification; some forms of matched filters.
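A minimal illustration of the matched-filter idea (not from the tutorial; the Barker-13 template and noise level are my choices): correlating a noisy record against a known template produces a sharp peak at the template's true delay, because Barker-13's autocorrelation sidelobes never exceed 1.

```python
import random

template = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]   # Barker-13
delay = 80
rng = random.Random(3)
record = [0.1 * rng.gauss(0, 1) for _ in range(200)]      # background noise
for i, s in enumerate(template):
    record[delay + i] += s                                 # bury the signal

# Matched filtering == correlating the record against the known template.
corr = [sum(record[k + i] * template[i] for i in range(13))
        for k in range(len(record) - 12)]
assert corr.index(max(corr)) == delay                      # peak at true delay
```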

8. Envelopes and pre-envelopes of real waveforms
Page(s): 53-57. Rice's formula for the "envelope" of a given signal is very cumbersome; in any case where the signal is not a single sine wave, the analytical use and explicit calculation of the envelope is practically prohibitive. A different formula for the envelope is given herein which is much simpler and easier to handle analytically. We show precisely that if û(t) is the Hilbert transform of u(t), then Rice's envelope of u(t) is the absolute value of the complex-valued function u(t) + iû(t). The function u + iû is called the pre-envelope of u and is shown to be involved implicitly in some other usual engineering practices. The Hilbert transform û is then studied; it is shown that û has the same power spectrum as u and is uncorrelated with u at the same time instant. Further, the autocorrelation of the pre-envelope of u is twice the pre-envelope of the autocorrelation of u. By using the pre-envelope, the envelope of the output of a linear filter is easily calculated, and this is used to compute the first probability density for the envelope of the output of an arbitrary linear filter when the input is an arbitrary signal plus Gaussian noise. An application of pre-envelopes to the frequency modulation of an arbitrary waveform by another arbitrary waveform is also given.
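The pre-envelope is easy to sketch numerically (assumption: a DFT-based discrete Hilbert transform, which is not in the paper): suppress negative frequencies, double the positive ones, and take the magnitude. For a pure sinusoid the envelope comes out as the constant amplitude.

```python
import cmath
import math

N = 64
u = [math.cos(2 * math.pi * 5 * n / N) for n in range(N)]   # 5 cycles in the window

# Forward DFT, then build the discrete pre-envelope u + i*u_hat:
# zero out negative frequencies and double the positive ones.
U = [sum(u[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]
for k in range(N):
    if 0 < k < N // 2:
        U[k] *= 2
    elif k > N // 2:
        U[k] = 0
z = [sum(U[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
     for n in range(N)]

# The envelope |u + i*u_hat| of a unit cosine is identically 1.
assert all(abs(abs(zn) - 1.0) < 1e-9 for zn in z)
```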
9. Phase shift pulse codes with good periodic correlation properties
Page(s): 254-257. A method of generating phase shift pulse codes of arbitrarily long length with zero periodic correlation except for the peak for zero shift is presented. The codes are of length p^2, where p is any prime number, and p different phase shifts corresponding to the p-th roots of unity are necessary to generate them. Since p different phase shifts are required, these codes are not as easy to generate and process as the binary codes, but this does not seem to be a serious limitation to their usefulness. Application of these codes can be made as interpulse phase modulation for range resolution in pulse Doppler radars or for a method of synchronizing a pulse code communication system.
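The zero-periodic-correlation property can be checked numerically (a sketch assuming the Frank/Heimiller form s[ap + b] = exp(2πi·ab/p), built from p-th roots of unity, with p = 5):

```python
import cmath
import math

p = 5
N = p * p
# length-p^2 sequence over the p-th roots of unity
s = [cmath.exp(2j * math.pi * (a * b) / p) for a in range(p) for b in range(p)]

def periodic_autocorr(seq, shift):
    n = len(seq)
    return sum(seq[i] * seq[(i + shift) % n].conjugate() for i in range(n))

assert abs(periodic_autocorr(s, 0) - N) < 1e-9             # peak at zero shift
assert all(abs(periodic_autocorr(s, k)) < 1e-9             # zero elsewhere
           for k in range(1, N))
```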
10. Complementary series
Page(s): 82-87. A set of complementary series is defined as a pair of equally long, finite sequences of two kinds of elements which have the property that the number of pairs of like elements with any one given separation in one series is equal to the number of pairs of unlike elements with the same given separation in the other series. (For instance the two series, 1001010001 and 1000000110 have, respectively, three pairs of like and three pairs of unlike adjacent elements, four pairs of like and four pairs of unlike alternate elements, and so forth for all possible separations.) These series, which were originally conceived in connection with the optical problem of multislit spectrometry, also have possible applications in communication engineering, for when the two kinds of elements of these series are taken to be +1 and −1, it follows immediately from their definition that the sum of their two respective autocorrelation series is zero everywhere, except for the center term. Several propositions relative to these series, to their permissible number of elements, and to their synthesis are demonstrated.
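The cancellation property can be verified directly on the abstract's own example pair, mapping the two kinds of elements to +1 and −1 as the abstract suggests:

```python
# The example pair from the abstract, with 1 -> +1 and 0 -> -1.
a = [+1 if c == "1" else -1 for c in "1001010001"]
b = [+1 if c == "1" else -1 for c in "1000000110"]

def acorr(s, k):
    """Aperiodic autocorrelation at separation k: like pairs minus unlike pairs."""
    return sum(s[i] * s[i + k] for i in range(len(s) - k))

# The two autocorrelations cancel at every nonzero separation, add at zero.
assert acorr(a, 0) + acorr(b, 0) == 2 * len(a)
assert all(acorr(a, k) + acorr(b, k) == 0 for k in range(1, len(a)))
```

At separation 1, for example, acorr(a, 1) = 3 − 6 = −3 (three like, six unlike pairs) while acorr(b, 1) = 6 − 3 = +3, matching the counts quoted in the abstract.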

11. Phase shift pulse codes with good periodic correlation properties (Corresp.)
Page(s): 381-382. First page of the article.
12. A useful theorem for nonlinear devices having Gaussian inputs
Page(s): 69-72. If and only if the inputs to a set of nonlinear, zero-memory devices are variates drawn from a Gaussian random process, a useful general relationship may be found between certain input and output statistics of the set. This relationship equates partial derivatives of the (high-order) output correlation coefficient, taken with respect to the input correlation coefficients, to the output correlation coefficient of a new set of nonlinear devices bearing a simple derivative relation to the original set. Application is made to the interesting special cases of conventional cross-correlation and autocorrelation functions, and Bussgang's theorem is easily proved. As examples, the output autocorrelation functions are simply obtained for a hard limiter, linear detector, clipper, and smooth limiter.
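Bussgang's theorem, mentioned above, is easy to check by simulation (a sketch, not the paper's derivation; for the hard limiter g(y) = sgn(y) with unit-variance Gaussian input, the cross-correlation of input and output is sqrt(2/π) times the input cross-correlation):

```python
import math
import random

rng = random.Random(7)
rho = 0.6                     # chosen input correlation
n = 200_000
cross_g = cross = 0.0
for _ in range(n):
    y = rng.gauss(0, 1)                                     # limiter input
    x = rho * y + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)  # correlated Gaussian
    cross_g += x * (1 if y > 0 else -1)   # E[x * sgn(y)]
    cross += x * y                        # E[x * y]
cross_g /= n
cross /= n

# Bussgang: E[x*sgn(y)] = sqrt(2/pi) * E[x*y] for unit-variance Gaussian y.
assert abs(cross_g - math.sqrt(2 / math.pi) * cross) < 0.02
```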

13. On the Shannon theory of information transmission in the case of continuous signals
Page(s): 102-108. First page of the article.
14. On a moment theorem for complex Gaussian processes
Page(s): 194-195. A general theorem is provided for the moments of a complex Gaussian video process. This theorem is analogous to the well-known property of the multivariate normal distribution for real variables, which states that an n-th order central product moment is zero if n is odd and is equal to a sum of products of covariances when n is even.
15. Probability of detection for fluctuating targets
Page(s): 269-308. This report considers the probability of detection of a target by a pulsed search radar, when the target has a fluctuating cross section. Formulas for detection probability are derived, and curves of detection probability vs. range are given, for four different target fluctuation models. The investigation shows that, for these fluctuation models, the probability of detection for a fluctuating target is less than that for a nonfluctuating target if the range is sufficiently short, and is greater if the range is sufficiently long. The amount by which the fluctuating and nonfluctuating cases differ depends on the rapidity of fluctuation and on the statistical distribution of the fluctuations. Figure 18, p. 307, shows a comparison between the nonfluctuating case and the four fluctuating cases considered.

16. Spectral power density functions in pulse time modulation
Page(s): 40-46. Spectral power density functions corresponding to various types of pulse shapes, and probability distribution functions arising in the study of pulse time modulation problems, are computed. The results are presented in tabular form. The following cases are considered: PAM and PPM, for arbitrary pulse shape; PDM, for rectangular, Gaussian, and error-function pulse shapes; and SEM, for rectangular pulse shape.

17. On the factorization of rational matrices
Page(s): 172-189. Many problems in electrical engineering, such as the synthesis of linear n-ports and the detection and filtration of multivariable systems corrupted by stationary additive noise, depend for their successful solution upon the factorization of a matrix-valued function of a complex variable p. This paper presents several algorithms for effecting such decompositions for the class of rational matrices G(p), i.e., matrices whose entries are ratios of polynomials in p. The methods employed are elementary in nature and center around the Smith canonic form of a polynomial matrix. Several nontrivial examples are worked out in detail to illustrate the theory.
18. A note on the maximum flow through a network
Page(s): 117-119. This note discusses the problem of maximizing the rate of flow from one terminal to another, through a network which consists of a number of branches, each of which has a limited capacity. The main result is a theorem: The maximum possible flow from left to right through a network is equal to the minimum value among all simple cut-sets. This theorem is applied to solve a more general problem, in which a number of input nodes and a number of output nodes are used.
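The max-flow/min-cut theorem stated here can be sketched with a shortest-augmenting-path search (Edmonds-Karp, a later refinement not in the note; the four-node network below is made up): the computed maximum flow equals the capacity of a cut.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths
    in the residual network until the sink becomes unreachable."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:                                  # BFS over residual edges
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total                          # no augmenting path left
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += push
            flow[v][u] -= push                    # residual (undo) capacity
        total += push

# source 0, sink 3; capacities: 0->1:3, 0->2:2, 1->2:1, 1->3:2, 2->3:3
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
assert max_flow(cap, 0, 3) == 5
# the cut {0,1} | {2,3} has capacity 2 + 1 + 2 = 5, matching the max flow
assert cap[0][2] + cap[1][2] + cap[1][3] == 5
```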

19. Tests on a cell assembly theory of the action of the brain, using a large digital computer
Page(s): 80-93. Theories by D.O. Hebb and P.M. Milner on how the brain works were tested by simulating neuron nets on the IBM Type 704 Electronic Calculator. The formation of cell assemblies from an unorganized net of neurons was demonstrated, as well as a plausible mechanism for short-term memory and the phenomena of growth and fractionation of cell assemblies. The cell assemblies do not yet act just as the theory requires, but changes in the theory and the simulation offer promise for further experimentation.

20. Human memory and the storage of information
Page(s): 129-137. The amount of selective information in a message can be increased either by increasing the variety of the symbols from which it is composed or by increasing the length of the message. Psychological experiments indicate that the variety of the symbols is far less important than the length of the message in controlling what human subjects are able to remember. Two messages equal in length but differing in the amount of information per symbol are equally easy to memorize. This fact provides an opportunity for the effective use of recoding procedures and reveals the mental economy involved in organizing the materials we want to remember. An apparent exception to the rule that length, not variety, is the limiting factor in human memory occurs in the case of redundant messages. If two messages of the same length differ because one contains redundancy familiar to the learner and the other does not, the redundant message will usually be easier to learn and remember. In terms of the theory of information, redundancy can be viewed equally well as a reduction in the information per symbol or as a reduction in the effective length of the message. Psychologically, however, these two alternatives are not equivalent; redundancy permits a reorganization into familiar sequences in a way that effectively shortens the length of the message and so makes it easier to memorize, but this is not psychologically equivalent to reducing the amount of information per symbol. It is as if each storage register could accept any one of a tremendous variety of alternative symbols, but the number of registers available was quite limited. If we use these registers to store binary symbols, the storage is inefficient. If we group the binary symbols into sequences, give each sequence a different name, and store the recoded names, we can make much more efficient use of the registers. Familiar redundancy is helpful because it enables us to recode more efficiently. These results for human memory are all the more striking in view of the fact that the amount of information per symbol is a critically important variable controlling the accuracy of our perceptions.

21. Design and performance of phase-lock circuits capable of near-optimum performance over a wide range of input signal and noise levels
Page(s): 66-76. Phase-locked loops (PLLs) provide an efficient method for detection and tracking of narrowband signals in the presence of wideband noise. This paper explains how minimum-rms-error loops may be designed if the input-signal level, input-noise level, and a specification for transient performance are given. However, the system performance of PLLs departs rapidly from the best obtainable performance if either the signal or the noise levels are different from the design levels, and if no compensating changes are made in the PLL. A marked improvement results if the total input power is held constant, regardless of signal or noise levels. It will be demonstrated that a fixed-component loop preceded by a bandpass limiter yields near-optimum performance over a wide range of input signal and noise levels. The following topics are discussed: 1. An outline of the theoretical design of minimum-rms-error PLLs when input-signal level, input-noise level, and a specification for transient error are given. 2. The effects of different input levels of signal and noise: a. On a system having a fixed-component loop that is optimum only for an original set of levels. b. On a system in which loop components maintain optimum performance when the new levels are given. 3. Characteristics of a bandpass limiter. 4. A comparison of the effect of different signal and noise levels: a. On a loop using a fixed filter preceded by an automatic gain control (AGC) system that holds the signal level constant. b. On a fixed-filter loop preceded by a bandpass limiter. c. On a variable-filter loop continually adjusted to be optimum. 5. Experimental verification of the fixed-component loop preceded by a bandpass limiter.

22. Binary codes with specified minimum distance
Page(s): 445-450. Two n-digit sequences of binary digits, called "points," are said to be at distance d if exactly d corresponding digits are unlike in the two sequences. The construction of sets of points, called codes, in which some specified minimum distance is maintained between pairs of points is of interest in the design of self-checking systems for computing with or transmitting binary digits, the minimum distance being the minimum number of digital errors required to produce an undetected error in the system output. Previous work in the field had established general upper bounds for the number of n-digit points in codes of minimum distance d with certain properties. This paper gives new results in the field in the form of theorems which permit systematic construction of codes for given n, d; for some n, d, the codes contain the greatest possible numbers of points.
23. Design factors in the development of an optical character recognition machine
Page(s): 167-171. Interest in optical character recognition has grown tremendously during the past several years. When one considers the volume of printed information that must be accessible both to human readers and to data processing machines, the increasing interest is not at all surprising. Many optical character recognition principles that appear to be quite powerful are impractical or too costly to implement with available technology. Other principles that appear relatively economical require an input quality that is impractical to generate. The field for commercial development falls somewhere between these two extremes. Many factors determine the character recognition principles to be utilized in the development of a machine system. Of primary importance are: 1) The shapes of the symbols to be sensed, 2) The number of different symbols to be discriminated, 3) The print quality range that must be accommodated, and 4) The required performance of the system. Unfortunately, in the present state of the art, the relationship of these factors to each other and to a given principle can be only qualitatively defined. This paper describes and discusses the development of a practical optical character recognition system, the character recognition portion of the IBM 1418 Optical Character Reader. The considerations involved are applicable to the development of any character recognition system.

24. Picture coding using pseudorandom noise
Page(s): 145-154. In order to transmit television pictures over a digital channel, it is necessary to send a binary code which represents the intensity level at each point in the picture. For good picture quality using standard PCM transmission, at least six bits are required at each sample point, since the eye is very sensitive to the small intensity steps introduced by quantization. However, by simply adding some noise to the signal before it is quantized and subtracting the same noise at the receiver, the quantization steps can be broken up and the source rate reduced to three bits per sample. Pseudorandom number generators can be synchronized at the transmitter and receiver to provide the identical "noise" which makes the process possible. Thus, with the addition of only a small amount of equipment, the efficiency of a PCM channel can be doubled.
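The add-then-subtract dither scheme is a few lines of code (a sketch, not the authors' system; the uniform dither width and the 3-bit step are illustrative): synchronized pseudorandom generators let the receiver subtract exactly the noise the transmitter added, so the residual error stays within half a quantizer step.

```python
import random

STEP = 1 / 8                     # 3-bit quantizer step on [0, 1)

def quantize(v):
    return round(v / STEP) * STEP

# Transmitter and receiver run identical, synchronized pseudorandom
# generators, so the receiver reproduces the transmitter's dither exactly.
tx_noise = random.Random(2024)
rx_noise = random.Random(2024)
source = random.Random(1)

for _ in range(1000):
    x = source.random()                          # original sample in [0, 1)
    d = (tx_noise.random() - 0.5) * STEP         # dither in [-STEP/2, STEP/2)
    sent = quantize(x + d)                       # what crosses the channel
    y = sent - (rx_noise.random() - 0.5) * STEP  # receiver subtracts same dither
    assert abs(y - x) <= STEP / 2 + 1e-12        # error bounded by half a step
```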

25. Least-square synthesis of radar ambiguity functions
Page(s): 246-254. The synthesis of radar ambiguity functions is approached by minimizing the integrated square error between an arbitrary desired function and a realizable ambiguity function. The approximation is carried out via an orthonormal signal basis which generates an "induced basis" over the time-frequency plane consisting of all pairwise cross-ambiguity functions of the signal basis. The minimum mean-square error and the corresponding signal are determined through an eigenvalue problem for 1) approximation by complex auto-ambiguity function and 2) approximation by complex cross-ambiguity function. A new form of the realizability theorem shows that the conditions for cross-ambiguity functions differ from those for auto-ambiguity functions only in the absence of the symmetry condition F(τ, ω) = F*(−τ, −ω). Moreover, the approximations by cross- and auto-ambiguity functions coincide whenever the desired function has the above symmetry. When an ambiguity function realizable by a known signal is to be approximated on a finite basis, least-square approximation in signal space or in ambiguity-function space leads to equivalent results. The relation between the mean-square errors in the two spaces is obtained. For the phase incoherent radar case an iteration procedure is presented for successively modifying the arbitrary phase assigned to the desired function. Convergence is proved in the sense that the error is nonincreasing at each stage of the iteration, but arrival at the best approximation to the desired magnitude is not guaranteed. As an aid in numerical applications, a formulation based on a discrete sample grid in the time-frequency plane is derived. With the appropriate grid dimensions, the analytic procedures carry over directly into sampled-data representation.
26. Predictive coding I
Page(s): 16-24. Predictive coding is a procedure for transmitting messages which are sequences of magnitudes. In this coding method, the transmitter and the receiver store past message terms, and from them estimate the value of the next message term. The transmitter transmits, not the message term, but the difference between it and its predicted value. At the receiver this error term is added to the receiver prediction to reproduce the message term. This procedure is defined and messages, prediction, entropy, and ideal coding are discussed to provide a basis for Part II, which will give the mathematical criterion for the best predictor for use in the predictive coding of particular messages, will give examples of such messages, and will show that the error term which is transmitted in predictive coding may always be coded efficiently.
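The transmit/receive loop described here can be sketched with the simplest possible predictor, "predict the previous value" (my choice for illustration; Part II treats the criterion for the best predictor):

```python
# Order-1 predictor: predict each term as the previous reconstructed value;
# only the prediction error crosses the channel.
message = [10, 12, 13, 13, 15, 14, 14, 16]

# transmitter: send differences from the prediction
pred = 0
errors = []
for m in message:
    errors.append(m - pred)
    pred = m

# receiver: add each received error to its own (identical) prediction
pred = 0
decoded = []
for e in errors:
    val = pred + e
    decoded.append(val)
    pred = val

assert decoded == message                       # exact reconstruction
assert errors == [10, 2, 1, 0, 2, -1, 0, 2]     # small, compressible errors
```

The error terms cluster near zero, which is what makes them cheaper to encode than the raw message.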

27. Encoding and error-correction procedures for the Bose-Chaudhuri codes
Page(s): 459-470. Bose and Ray-Chaudhuri have recently described a class of binary codes which for arbitrary m and t are t-error correcting and have length 2^m − 1, of which no more than mt digits are redundancy. This paper describes a simple error-correction procedure for these codes. Their cyclic structure is demonstrated and methods of exploiting it to implement the coding and correction procedure using shift registers are outlined. Closer bounds on the number of redundancy digits are derived.
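The cyclic encoding and syndrome correction can be sketched for the smallest member of the family, the (7,4) code with m = 3, t = 1 (the generator polynomial and the table-lookup correction below are illustrative, not the paper's shift-register procedure):

```python
G = 0b1011                       # generator g(x) = x^3 + x + 1 over GF(2)

def poly_mod(a, g):
    """Remainder of GF(2) polynomial a modulo g (bits = coefficients)."""
    while a.bit_length() >= g.bit_length():
        a ^= g << (a.bit_length() - g.bit_length())
    return a

def encode(msg4):
    """Systematic encoding: shift the 4-bit message up 3 places and
    append the remainder, so every codeword is divisible by g(x)."""
    shifted = msg4 << 3
    return shifted | poly_mod(shifted, G)

def correct(word7):
    """Match the syndrome against the syndromes of single-bit errors."""
    s = poly_mod(word7, G)
    if s == 0:
        return word7
    for i in range(7):
        if poly_mod(1 << i, G) == s:
            return word7 ^ (1 << i)
    return word7

for msg in range(16):
    cw = encode(msg)
    assert poly_mod(cw, G) == 0           # codewords are multiples of g(x)
    for i in range(7):                    # every single error is corrected
        assert correct(cw ^ (1 << i)) == cw
```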
28. Visual Pattern Discrimination
Page(s): 84-92. Visual discrimination experiments were conducted using unfamiliar displays generated by a digital computer. The displays contained two side-by-side fields with different statistical, topological or heuristic properties. Discrimination was defined as that spontaneous visual process which gives the immediate impression of two distinct fields. The condition for such discrimination was found to be based primarily on clusters or lines formed by proximate points of uniform brightness. A similar rule of connectivity with hue replacing brightness was obtained by using varicolored dots of equal subjective brightness. The limitations in discriminating complex line structures were also investigated.

29. A matched filter detection system for complicated Doppler shifted signals
Page(s): 373-385. A matched filter system is described which was designed to detect complicated signals subject to a wide range of possible Doppler shifts. A 100-tap bandpass delay line was used in conjunction with a resistor weighting matrix to synthesize signals and filter characteristics. The system could handle a signal with a duration-bandwidth product of 100 over a range of Doppler frequency shifts 17 times the reciprocal of the signal duration. A theoretical discussion of the Doppler effect is given, making use of conjugate functions or Hilbert transforms. Various engineering compromises which simplify the construction of matched filters are suggested. The performance of the resulting signal detection system was within 5 dB of that of an ideal theoretical model.

30. Measurements on time-variant communication channels
Page(s): 229-236. We discuss the problems of making detailed measurements of instantaneous values and the statistical parameters of time-variant filters when observations are permitted at the filter terminals only. It appears that the product of the maximum time and frequency spreadings produced by the time-variant filter sets a limit on our ability to determine the instantaneous values unambiguously, even in the absence of additive noise. This limit can be relaxed when it comes to determining average or statistical parameters of the filter. For the determination of second-order filter statistics a fourth-moment method is presented that exhibits some novel aspects.

31. The response of a phase-locked loop to a sinusoid plus noise
Page(s): 136-142. The phase-locked loop is a practical device for separating a sinusoidal signal from additive noise. In this device the incoming signal-plus-noise is multiplied by a noise-free sinusoid generated by a voltage-controlled oscillator (vco). The filtered product is used to lock the phase of the vco output to that of the incoming signal, thus producing a relatively clean version of the incoming signal in which the noise manifests itself as a small phase modulation. Analysis of this noise-produced phase modulation is complicated by the presence of the multiplier at the input to the loop. This paper presents a perturbation method which reduces this inherently nonlinear servo analysis problem to the analysis of a series of linear systems, the first of which is related to the linear model used by previous authors. The perturbation technique permits the phase modulation resulting from an arbitrary noise input to be computed to any desired accuracy. This analysis is particularly useful in predicting loop performance when it is used as a narrowband receiver in a phase-comparison angle-measuring system.
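The locking behavior at the heart of the loop can be sketched in discrete time (illustrative only; the gain, phase offset, and first-order noise-free loop are my simplifications): the vco phase estimate moves in proportion to the multiplier output sin(phase error) and converges to the incoming phase.

```python
import math

true_phase = 2.0          # radians: constant phase of the incoming sinusoid
est = 0.0                 # vco phase estimate
gain = 0.5                # loop gain (0 < gain < 2 converges for |error| < pi)
for _ in range(200):
    err = math.sin(true_phase - est)   # multiplier/filter output
    est += gain * err                  # vco phase correction

assert abs(true_phase - est) < 1e-6    # loop has locked
```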

32. On the asymptotic efficiency of locally optimum detectors
Page(s): 67-71. A detector examines an unknown waveform to determine whether it is a mixture of signal and noise, or noise alone. The Neyman-Pearson detector is optimum in the sense that for given false alarm probability, signal-to-noise ratio, and number of observations, it minimizes the false dismissal probability. This detector is optimum for all values of the signal-to-noise ratio, and its implementation is usually quite complicated. In many situations it is desired to detect signals which are very weak compared to the noise. The locally optimum detector is defined as one which has optimum properties only for small signal-to-noise ratios. It is proposed as an alternative to the Neyman-Pearson detector, since in practice it is usually only necessary to have a near-optimum detector for weak signals, since strong signals will be detected with reasonable accuracy even if the detector is well below optimum. In order to evaluate the performance of the locally optimum detector, it is compared to the Neyman-Pearson detector. This comparison is based on the concept of asymptotic relative efficiency introduced by Pitman for comparing hypothesis testing procedures. On the basis of this comparison, it is shown that the locally optimum detector is asymptotically as efficient as the Neyman-Pearson detector. A number of applications to several detection problems are considered. It is found that the implementation of the locally optimum detector is less complicated than, or at most as complicated as, that of the Neyman-Pearson detector.

33. Predictive coding - II
Page(s): 24 - 33. In Part I predictive coding was defined and messages, prediction, entropy, and ideal coding were discussed. In the present paper the criterion to be used for predictors for the purpose of predictive coding is defined: that predictor is optimum in the information-theory (IT) sense which minimizes the entropy of the average error-term distribution. Ordered averages of distributions are defined, and it is shown that if a predictor gives an ordered average error-term distribution it will be a best IT predictor. Special classes of messages are considered for which a best IT predictor can easily be found, and some examples are given. The error terms which are transmitted in predictive coding are treated as if they were statistically independent. If this is indeed the case, or a good approximation, then it is still necessary to show that sequences of message terms which are statistically independent may always be coded efficiently, without impractically large memory requirements, in order to show that predictive coding may be practical and efficient in such cases. This is done in the final section of this paper.
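The entropy criterion can be demonstrated on a toy message source (a hypothetical illustration, not the paper's construction). Below, the source is a random walk on an alphabet of eight symbols; predicting each term by its predecessor concentrates the error-term distribution and lowers its empirical entropy well below that of the raw message.

```python
import math, random
from collections import Counter

def empirical_entropy(symbols):
    """Empirical entropy in bits of a list of symbols."""
    counts, n = Counter(symbols), len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def make_sequence(length=20000, m=8, seed=5):
    """Random walk on Z_m: each term is the previous one plus a step in
    {-1, 0, +1}, so the previous value is a good predictor."""
    rng = random.Random(seed)
    x, seq = 0, []
    for _ in range(length):
        x = (x + rng.choice([-1, 0, 0, 1])) % m
        seq.append(x)
    return seq

seq = make_sequence()
raw_H = empirical_entropy(seq)                        # near log2(8) = 3 bits
errors = [(b - a) % 8 for a, b in zip(seq, seq[1:])]  # error terms of the predictor
pred_H = empirical_entropy(errors)                    # near 1.5 bits
```

Transmitting the peaked error terms instead of the raw symbols is exactly the saving predictive coding exploits, provided the error terms can themselves be coded efficiently, which is the question the final section of the paper addresses.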

34. Quaternary codes for pulsed radar
Page(s): 400 - 408. A class of quaternary codes is described, and an algorithm for generating the codes is given. The codes have properties that make them useful for radar applications: 1) their autocorrelation consists of a single pulse, 2) their length can be any power of two, 3) each code can be paired with another code (its mate) of the same class in such a way that the cross-correlation of mates is identically zero, 4) coded waveforms can be generated in a simple network the number of whose elements is proportional to the base-2 logarithm of the code length, and 5) the same network can be readily converted to a matched filter for the coded waveform.
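The paper's quaternary construction is not reproduced here, but the power-of-two lengths and the log-length generation network are shared with the well-known binary complementary-pair recursion, which the following sketch illustrates: the two autocorrelations of a pair sum to a single pulse.

```python
def acorr(seq, k):
    """Aperiodic autocorrelation of seq at nonnegative lag k."""
    return sum(a * b for a, b in zip(seq, seq[k:]))

def golay_pair(log2_len):
    """Doubling recursion (a, b) -> (a||b, a||-b); each step yields a
    complementary pair of twice the length, so codes of length 2**log2_len
    are built in log2_len stages."""
    a, b = [1], [1]
    for _ in range(log2_len):
        a, b = a + b, a + [-x for x in b]
    return a, b

a, b = golay_pair(4)                 # a length-16 complementary pair
n = len(a)
sums = [acorr(a, k) + acorr(b, k) for k in range(n)]
# sums is [2n, 0, 0, ..., 0]: a single pulse
```

The staged doubling is also why a simple network proportional to the base-2 logarithm of the code length suffices for generation, and why the same structure can be run in reverse as a matched filter.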

35. Detection of fluctuating pulsed signals in the presence of noise
Page(s): 175 - 178. This paper treats the detection of pulsed signals in the presence of receiver noise for the case of randomly fluctuating signal strength. The system considered consists of a predetection stage, a square-law envelope detector, and a linear postdetection integrator. The main problem is the calculation of the probability density function of the output of the postdetection integrator. The analysis is carried out for a large family of probability density functions of the signal fluctuations and for very general types of correlation properties of the signal fluctuations. The effects of nonuniform beam shape and of nonuniform weighting of pulses by the postdetection integrator are also taken into account. The function which is actually evaluated is the Laplace transform of the probability density function of the integrator output. In many of the cases treated, the resulting Laplace transform has an inverse of known form. In such cases the evaluation of the probability density function requires only the computation of a finite number of constants; in practice this would usually call for computing machinery, but is perfectly feasible with presently available machines.

36. Communication theory vs. communications (Corresp.)
Page(s): 325 - 326.
37. Speech recognition: A model and a program for research
Page(s): 155 - 159. A speech recognition model is proposed in which the transformation from an input speech signal into a sequence of phonemes is carried out largely through an active or feedback process. In this process, patterns are generated internally in the analyzer according to an adaptable sequence of instructions until a best match with the input signal is obtained. Details of the process are given, and the areas where further research is needed are indicated.

38. An expansion for some second-order probability distributions and its application to noise problems
Page(s): 10 - 15. In this paper it is shown that, in general, second-order probability distributions may be expanded in a certain double series involving orthogonal polynomials associated with the corresponding first-order probability distributions. Attention is restricted to those second-order probability distributions which lead to a "diagonal" form for this expansion. When such distributions are joint probability distributions for samples taken from a pair of time series, some interesting results can be demonstrated. For example, it is shown that if one of the time series undergoes an amplitude distortion in a time-varying "instantaneous" nonlinear device, the covariance function after distortion is simply proportional to that before distortion. Some simple results concerning conditional expectations are given, and an extension of a theorem, due to Doob, on stationary Markov processes is presented. The relation between the "diagonal" expansion used in this paper and the Mercer expansion of the kernel of a certain linear homogeneous integral equation is pointed out, and in conclusion explicit expansions are given for three specific examples.
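The proportional-covariance result can be checked by simulation for the Gaussian case, where it is the well-known invariance under instantaneous nonlinearities of the covariance shape. The sketch below (illustrative parameters, not from the paper) passes one of two correlated Gaussian series through a hard limiter and verifies that the covariance ratio does not depend on the correlation.

```python
import math, random

def limiter_cov_ratio(rho, n=120000, seed=11):
    """cov(sign(x), y) / cov(x, y) for jointly Gaussian (x, y) with
    correlation rho, zero mean, unit variance. The ratio should be the
    same constant, sqrt(2/pi), for every rho."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = rng.gauss(0, 1)
        y = rho * x + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        acc += math.copysign(1.0, x) * y    # hard limiter applied to x
    return (acc / n) / rho

r1, r2 = limiter_cov_ratio(0.3), limiter_cov_ratio(0.8)
# both approximate sqrt(2/pi), about 0.80
```

Any other "instantaneous" distortion could be substituted for the limiter; only the proportionality constant changes, which is the essence of the result quoted in the abstract.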

39. Optimum sequential detection of signals in noise
Page(s): 5 - 18. A device which performs a sequential test on a mixture of signal and noise is called a sequential detector. With such a device, two thresholds are introduced, each of which is associated with a terminal decision. The length of the detection process (integration time) is not fixed in advance of the experiment but is a random variable, depending on the progress of the test. An optimum form of such a test exists and is characterized by the fact that detection is performed on the average faster than with conventional, i.e., fixed-sample-size (optimum or nonoptimum), devices. The sequential analysis developed by A. Wald is fully applied in this paper, but an important new feature is the treatment of correlated samples and its application to continuous sampling processes. In the introduction, the problem is presented within the framework of Wald's statistical decision theory, and the optimum properties of sequential detectors are discussed accordingly. It is pointed out that a sequential detector is defined in terms of conditional probabilities and hence its operation is essentially independent of a priori information, although the average risk or cost of detection necessarily depends on the a priori signal data. The general theory is illustrated with some cases of special interest. The simplest example of detection involves independent, discrete observations, e.g., the case of a pulsed carrier in normal noise. Here the optimum detector still has the well-known log I_0 structure, but it is shown that the square-law approximation for weak signals requires a bias correction due to the fourth-order term. Coherent sequential detection of causal signals in normal noise provides another illustration of the theory. An interesting result is that the probabilities of error do not depend on the shape of the filter, provided the proper computer is used. The use of RC-filtered noise illustrates the treatment of continuous detection processes. Finally, the reduction in minimum detectable signal level resulting from the use of a sequential detector is computed. A third example is the sequential detection of random signals in normal noise. It is shown that, although the optimum computer involves knowledge of the inverted correlation matrix, the average length of the test does not. Hence the curious result is obtained that in this instance detection can be performed in an arbitrarily short time. The paper concludes with a discussion of the practical necessity of truncating the detection process; exact expressions for the error probabilities of such truncated tests are derived and compared with Wald's original approximations.
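Wald's test with two thresholds and a random stopping time can be sketched as follows. The Gaussian-mean setting, the error targets, and the fixed-sample baseline of roughly 43 observations are illustrative assumptions, not the paper's examples.

```python
import math, random

def sprt_trial(true_mean, mu=0.5, alpha=0.05, beta=0.05, rng=None):
    """Wald's sequential probability ratio test of H0: mean 0 vs H1: mean mu
    for unit-variance Gaussian samples. Returns (accepted_H1, samples_used);
    the number of samples is random, stopping at the first threshold crossing."""
    upper = math.log((1 - beta) / alpha)   # crossing -> accept H1
    lower = math.log(beta / (1 - alpha))   # crossing -> accept H0
    llr, n = 0.0, 0
    while lower < llr < upper:
        x = rng.gauss(true_mean, 1.0)
        llr += mu * x - mu * mu / 2.0      # log-likelihood-ratio increment
        n += 1
    return llr >= upper, n

rng = random.Random(2)
runs = [sprt_trial(0.5, rng=rng) for _ in range(2000)]
detect_rate = sum(d for d, _ in runs) / len(runs)
avg_n = sum(n for _, n in runs) / len(runs)
# a fixed-sample-size test with the same error targets needs roughly
# ((z_0.05 + z_0.05) / 0.5)**2, about 43 samples; avg_n comes out well below
```

The average saving over the fixed-sample test is the "faster on the average" property the abstract describes; truncating the while-loop at a maximum n is the practical modification the paper analyzes.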
40. Information transmission with additional noise
Page(s): 293 - 304.
41. A linear coding for transmitting a set of correlated signals
Page(s): 41 - 46. A coding scheme is described for the transmission of n continuous correlated signals over m channels, m being equal to or less than n. Each of the m signals is a linear combination of the n original signals. The coefficients of this linear transformation, which constitute an m × n matrix, are constants of the coding scheme. For the purpose of decoding, the m signals are once more combined linearly into n output signals which approximate the input signals. The coefficients of the coding matrix which minimize the sum of the mean-square differences between the original signals and the reconstructed ones are shown to be the components of the eigenvectors of the matrix of the correlation coefficients of the original signals. The decoding matrix is the transpose of the coding matrix. As an example, the coding scheme is applied to a channel vocoder in which speech is transmitted by means of a set of signals proportional to the speech energy in the various frequency bands. These signals are strongly correlated, and the coding results in a substantial reduction in the number of signals necessary to transmit highly articulate speech. The coding theory can be extended to include the minimization of the expectation of any positive definite quadratic function of the differences between the original and reconstructed signals. In addition, if the signals are Gaussian, the sum of the channel capacities necessary to transmit the transformed signals is shown to be equal to or less than that necessary to transmit the original signals.
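The eigenvector solution can be sketched for the smallest nontrivial case, n = 2 signals coded into m = 1 channel. In this hypothetical example the coding row is the principal eigenvector of the sample correlation matrix, decoding uses its transpose, and the mean-square error equals the discarded eigenvalue.

```python
import math, random

def top_eigvec_2x2(c11, c12, c22):
    """Unit eigenvector and eigenvalue for the larger eigenvalue of the
    symmetric matrix [[c11, c12], [c12, c22]] (assumes c12 != 0)."""
    tr, det = c11 + c22, c11 * c22 - c12 * c12
    lam = tr / 2.0 + math.sqrt(tr * tr / 4.0 - det)
    norm = math.hypot(c12, lam - c11)
    return (c12 / norm, (lam - c11) / norm), lam

rng = random.Random(9)
pairs = []                       # two strongly correlated zero-mean signals
for _ in range(5000):
    s = rng.gauss(0, 1)
    pairs.append((s + 0.3 * rng.gauss(0, 1), s + 0.3 * rng.gauss(0, 1)))

n = len(pairs)
c11 = sum(a * a for a, _ in pairs) / n
c22 = sum(b * b for _, b in pairs) / n
c12 = sum(a * b for a, b in pairs) / n
(w1, w2), lam = top_eigvec_2x2(c11, c12, c22)

mse = 0.0
for a, b in pairs:
    ch = w1 * a + w2 * b         # the single coded channel signal
    mse += (a - w1 * ch) ** 2 + (b - w2 * ch) ** 2   # decode with the transpose
mse /= n
# mse equals the discarded (smaller) eigenvalue, c11 + c22 - lam
```

Because the two inputs share the common component s, almost all of their energy lies along the principal eigenvector, so one channel carries them with small mean-square error, which is the effect the vocoder example exploits.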
42. Golay's complementary series (Corresp.)
Page(s): 273 - 276.
43. Machine recognition of handsent Morse code
Page(s): 17 - 24. A transistorized special-purpose digital computer called MAUDE (Morse AUtomatic DEcoder) has been designed, built, and analyzed. This computer decodes a hand-sent Morse message, which is printed on a teletypewriter. MAUDE's decisions take place at a number of different levels, and its "knowledge" is not only that of the relative durations of dots and dashes, but also of the Morse code and even of certain elementary properties of language. MAUDE has successfully decoded the transmissions of between 90 and 95 per cent of 184 operators. A successful decoding is one which results in a text that can be easily read by a man who knows the language. It is felt that MAUDE can be a practical piece of equipment for a site with heavy traffic. Its performance will be inferior to that of a man until more sophisticated language rules, using at least a word vocabulary, are included.
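A toy version of the lowest decision level, classifying marks and spaces by duration, can be sketched as follows. The two-unit threshold and the small code table are illustrative assumptions; none of MAUDE's higher-level language rules are modelled.

```python
# A small subset of the International Morse code table
MORSE = {".-": "A", "-...": "B", "-.-.": "C", ".": "E", "..-.": "F",
         "....": "H", "..": "I", ".-..": "L", "--": "M", "-.": "N",
         "---": "O", ".-.": "R", "...": "S", "-": "T"}

def decode(durations, unit=1.0):
    """durations: alternating mark/space lengths, starting with a mark.
    A mark longer than 2 units is a dash; a space longer than 2 units ends
    the current letter (word spacing is ignored in this toy)."""
    letters, current = [], ""
    for i, d in enumerate(durations):
        if i % 2 == 0:                       # a mark
            current += "-" if d > 2 * unit else "."
        elif d > 2 * unit:                   # a letter space
            letters.append(MORSE.get(current, "?"))
            current = ""
    if current:
        letters.append(MORSE.get(current, "?"))
    return "".join(letters)

# "HI" (.... ..) with sloppy hand-sent timing
msg = decode([0.9, 1.1, 1.2, 0.8, 1.0, 1.1, 0.9, 3.2, 1.1, 0.9, 1.0])
```

Real hand keying drifts, so a fixed threshold fails where MAUDE's adaptive, multi-level decisions (and its elementary language knowledge) succeed.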

44. Non-mean-square error criteria
Page(s): 125 - 126. While in the engineering literature non-mean-square error criteria for predictors are often presented as physically significant and then shunted aside because of mathematical unmanageability, it is shown here that in the case of Gaussian processes all such criteria given in three recent textbooks yield the same predictor as the linear minimum mean-square predictor of Wiener.

45. Some comments on the detection of Gaussian signals in Gaussian noise
Page(s): 65 - 68. It is pointed out that a frequently used mathematical model of the detection problem yields detection with arbitrarily small probability of error in many cases of engineering interest. Some comments are made as to why the model is inadequate from an engineering standpoint.

46. On the problem of time jitter in sampling
Page(s): 226 - 236. There are many communication as well as control systems (in fact, an increasing number of them) in which at some stage a continuous data source is sampled "periodically," at the nominal rate of 2W samples a second, W being the highest frequency component of the data. There are, generally speaking, two kinds of errors introduced by the sampling mechanism: errors in amplitude and errors in timing, or "time jitter." This paper is concerned with the latter. We assume a random model for the jitter. We begin with a study of the properties of the jittered samples for both deterministic and stochastic signals. Depending on the stochastic properties of the jitter, the presence of a discrete component in the signal may give rise to new discrete components as a result of jitter. Generally speaking, however, the effect of jitter is to produce a (frequency) selective attenuation as well as a uniform spectral density component. The more correlation in the jitter, the less the spectral distribution is affected. Various measures of the "error" due to jitter are estimated. Thus the error may be the mean square in the fitted samples or some linear or nonlinear operation thereof. Also, weighted mean-square errors are considered, and general methods of estimating these errors are developed. The problem of optimal use of the jittered samples is next considered. Interpreting the optimality to be in the mean-square sense, an explicit solution for optimal linear operation is obtained. Also, for a wide class of signals it is shown that jitter does not affect the nature of the optimal operations; linear operations, for instance, remain linear, although with different weights. To illustrate the methods an example drawn from telemetry is given, where the timing is derived from the zero crossings of a sine wave and the time jitter is taken as due to noise. The jitter is highly correlated, and the results involve some lengthy calculations.
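The selective-attenuation effect can be illustrated for the simplest jitter model, independent Gaussian timing errors, where a sampled cosine at frequency omega is attenuated by the jitter's characteristic function exp(-omega^2 sigma^2 / 2). The parameters below are illustrative, and the independent-jitter assumption is the least-correlated extreme of the paper's model.

```python
import math, random

def jitter_attenuation(omega=1.0, sigma=0.5, T=0.7, n=20000, seed=4):
    """Correlate jittered samples of cos(omega*t) against the jitter-free
    samples; the average recovers the attenuation of that spectral line."""
    rng = random.Random(seed)
    acc = 0.0
    for k in range(n):
        t = k * T
        acc += 2.0 * math.cos(omega * (t + rng.gauss(0, sigma))) * math.cos(omega * t)
    return acc / n

est = jitter_attenuation()
predicted = math.exp(-0.5 * (1.0 * 0.5) ** 2)   # characteristic function of the jitter
```

The energy removed from the discrete line reappears as the uniform spectral-density component the abstract mentions; with correlated jitter the attenuation is weaker, consistent with "the more correlation in the jitter, the less the spectral distribution is affected."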
47. On binary channels and their cascades
Page(s): 19 - 27. A detailed analysis of the general binary channel is given, with special reference to capacity (both separately and in cascade), input and output symbol distributions, and probability of error. The infinite number of binary channels with the same capacity lie on double-branched equicapacity lines. Of the channels on the lower branch of a given equicapacity line, the symmetric channel has the smallest probability of error and the largest capacity in cascade, unless the capacity is small, in which case the asymmetric channel (with one noiseless symbol) has the smallest probability of error and the largest capacity in cascade. By simply reversing the designation of the output (or input) symbols, we can decrease the probability of error of any channel on the upper branch of the equicapacity line and increase the capacity in cascade of any asymmetric channel on the upper branch. In a binary channel neither symbol should be transmitted with a probability lying outside the interval [1/e, 1 - (1/e)] if capacity is to be achieved. The maximally asymmetric input symbol distributions are approached by certain low-capacity channels. For these channels, redundancy coding permits an appreciable fraction of capacity in cascade if sufficient delay can be tolerated.
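The input-probability bound can be checked numerically. The sketch below maximizes the mutual information of an illustrative asymmetric binary channel over a grid of input distributions and confirms that the capacity-achieving probability falls inside [1/e, 1 - (1/e)]; the crossover probabilities are arbitrary choices, not values from the paper.

```python
import math

def h2(x):
    """Binary entropy in bits."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def mutual_info(p, a, b):
    """I(X;Y) for a binary channel with P(Y=1|X=0) = a, P(Y=0|X=1) = b
    and input distribution P(X=1) = p."""
    q = (1 - p) * a + p * (1 - b)               # P(Y = 1)
    return h2(q) - (1 - p) * h2(a) - p * h2(b)

def capacity(a, b, grid=2000):
    """Grid search for the capacity-achieving input probability."""
    best = max(range(1, grid), key=lambda k: mutual_info(k / grid, a, b))
    return best / grid, mutual_info(best / grid, a, b)

p_star, cap = capacity(0.02, 0.3)   # a strongly asymmetric channel
```

Even for this quite lopsided channel the optimizing input probability stays well inside the interval, illustrating why maximally asymmetric input distributions are only approached in the low-capacity limit.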
48. Stability of circuits with randomly timevarying parameters
Page(s): 260 - 270. This paper is concerned with the stability, in a stochastic sense, of circuits or systems described by ordinary differential equations with randomly time-varying parameters. Sufficient conditions for stability in the mean square are obtained by an extension of "Lyapunov's Second Method" to stochastic problems. The general result, while applicable to nonlinear as well as linear systems, presents formidable computational difficulties except for a few special cases, which are tabulated. The linear case, with certain assumptions concerning the statistical independence of the parameter variation, is carried out.

49. A note on invariant relations for ambiguity and distance functions
Page(s): 164 - 167. Woodward's result for the ambiguity function, that the volume associated with its squared magnitude over the time-shift and frequency-shift plane is a constant, has been shown to be true also for a cross-ambiguity function for two time functions. If complex time functions have been obtained by means of a Hilbert transformation from real time functions, it is found for the cross-ambiguity function that the volumes under the squared real part and under the squared imaginary part are constant and contribute equally to the volume under the squared magnitude function. A "distance" function for two time functions is defined to be the integrated squared difference between these functions. The relation for the squared real part of the ambiguity function readily yields an invariant relation for the volume associated with this distance function in the case of Hilbert-derived complex time functions. An especially simple invariant relation for the "mean" distance, as computed over the time-shift and frequency-shift plane, exists for such time functions having finite energy and finite mean value.
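A discrete, periodic analogue of Woodward's volume invariance is easy to verify numerically: for a length-N sequence, the squared-magnitude volume of the cyclic ambiguity surface equals N times the squared energy, whatever the waveform. The sketch below uses a random complex sequence; the cyclic (rather than aperiodic) definition is an assumption made for this illustration.

```python
import cmath, random

def cyclic_ambiguity(s):
    """Discrete periodic ambiguity surface chi[k][l] of a complex sequence s:
    correlation at cyclic time shift k, modulated at frequency shift l."""
    N = len(s)
    return [[sum(s[n] * s[(n - k) % N].conjugate()
                 * cmath.exp(2j * cmath.pi * l * n / N)
                 for n in range(N))
             for l in range(N)]
            for k in range(N)]

rng = random.Random(6)
s = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(8)]
chi = cyclic_ambiguity(s)
volume = sum(abs(c) ** 2 for row in chi for c in row)
energy = sum(abs(x) ** 2 for x in s)
# volume == len(s) * energy**2 for any waveform
```

This is the discrete face of the radar designer's dilemma: sharpening the central peak of the ambiguity surface cannot reduce the total volume, only move it around the shift plane.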

50. Locally stationary random processes
Page(s): 182 - 187. A new kind of random process, the locally stationary random process, is defined, which includes the stationary random process as a special case. Numerous examples of locally stationary random processes are exhibited. By the generalized spectral density Psi(omega, omega') of a random process is meant the two-dimensional Fourier transform of the covariance of the process; as is well known, in the case of stationary processes, Psi(omega, omega') reduces to a positive mass distribution on the line omega = omega' in the (omega, omega') plane, a fact which is the gist of the familiar Wiener-Khintchine relations. In the case of locally stationary random processes, a relation is found between the covariance and the spectral density which constitutes a natural generalization of the Wiener-Khintchine relations.
Further Links
Persistent Link: http://ieeexplore.ieee.org/servlet/opac?punumber=4547527
Frequency: 6 issues per year
ISSN: 0096-1000
Subjects
- Communication, Networking & Broadcasting
- Signal Processing & Analysis