
Cramér–Rao-Bound-Based Sensitivity Analysis and Verification of a Digital Microwave Radiometer


Abstract:

Radiometric sensitivity describes the resolution capabilities of a radiometric system and is, therefore, one of the most important properties of a total power radiometer. This study investigates how different nonidealities influence this quality measure, both theoretically and through simulations and measurements. The novelty of this work lies in applying the theoretical framework of the Cramér–Rao lower bound to derive the sensitivity of a digital total power radiometer. Various receiver chain imperfections, such as additive receiver noise, gain variations, phase noise, and quantization noise, are incorporated into the derivation. Additionally, a suboptimal, yet practically relevant estimator is evaluated for its estimation capabilities.
Published in: IEEE Transactions on Microwave Theory and Techniques ( Volume: 73, Issue: 3, March 2025)
Page(s): 1356 - 1367
Date of Publication: 22 January 2025

SECTION I.

Introduction

Radiometers are very sensitive passive microwave power detectors that usually measure the radiation of physical objects described by Planck’s law [1]. The frequency range used can span the L-, C-, X-, Ku-, and Ka-bands [2], the F-, D-, G-, Y-, and J-bands [3], [4], or even the sub-THz range [5]. These devices are used in various fields of application, such as medical applications [6], [7], [8], [9], remote sensing [10], [11], [12], industrial applications [13], [14], forest fire detection [15], and human presence detection [16], to name a few. The radiometric sensitivity is a critical property of a microwave radiometer, as it determines the smallest distinguishable power level at the input of the radiometer device and, therefore, its resolution capabilities.

Mathematically, the sensitivity can be defined [17] as
\begin{equation*} \Delta T = \frac{\mathrm{std}\lbrace \hat{T}\rbrace}{\dfrac{\partial \mathbb{E}\lbrace \hat{T}\rbrace}{\partial T}}. \tag{1}\end{equation*}
Usually, one uses T = T_{\mathrm{sys}}, which is defined [1] as
\begin{equation*} T_{\mathrm{sys}} = T_{\mathrm{ant}} + T_{\mathrm{rec}} \tag{2}\end{equation*}
where T_{\mathrm{ant}} is the equivalent antenna temperature and T_{\mathrm{rec}} is the equivalent receiver noise temperature referred to the input of the device. For an unbiased estimator, \mathbb{E}\lbrace \hat{T}\rbrace = T, the sensitivity coincides with the standard deviation of the estimated temperature, \mathrm{std}\lbrace \hat{T}\rbrace.

Since resolving small temperature differences is essential in various fields of application, the radiometric sensitivity of analog devices has been studied [1] and is shown to be
\begin{equation*} \Delta T_{\mathrm{sys}} = T_{\mathrm{sys}} \sqrt{\frac{1}{B\tau} + \left(\frac{\Delta G}{G}\right)^{2}} \tag{3}\end{equation*}
for the total power architecture, where B is the bandwidth of the receiver chain, \tau is the integration or averaging time, G is the power gain of the receiver, and \Delta G is the rms value of the randomly fluctuating power gain.
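As a quick numerical illustration of (3), the following Python sketch evaluates the analog total power sensitivity; the parameter values are arbitrary examples chosen by us, not values from the measurement setup.

```python
import numpy as np

def delta_t_sys(T_sys, B, tau, dG_over_G):
    """Analog total power radiometer sensitivity according to (3)."""
    return T_sys * np.sqrt(1.0 / (B * tau) + dG_over_G**2)

# Example: T_sys = 500 K, B = 256 MHz, tau = 10 ms, Delta G / G = 1e-3
print(delta_t_sys(500.0, 256e6, 10e-3, 1e-3))  # approx. 0.59 K
```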

The improvement of analog-to-digital converters (ADCs) enables the shift of signal processing to the digital domain, which means that the signal is digitized directly after downconversion. This brings many advantages over the analog domain, such as increased stability, reconfigurability, and the possibility of more sophisticated signal-processing algorithms, like evaluating the fourth moment for interference detection [18]. Due to these advantages, more and more radiometers are being built in a so-called direct-sampling or digital fashion [19], [20], [21].

In our previous work [22], we published an implementation of a passive digital microwave radiometer (PDMR) and laid out some advantages of the processing in the digital domain, like the possibility of evaluating the cross correlation for all possible lag values. The architecture of the previously published dual-channel PDMR can be seen in Fig. 1.

Fig. 1.

Architecture of the previously published dual-channel PDMR. For a detailed description, see [22]. The signal is digitally downconverted and its data rate is reduced using a numerically controlled oscillator (NCO) and a digital down converter (DDC), respectively, before processing.

This raises the question of how the sensitivity of these direct-sampling radiometers compares to their analog counterparts. Some work in this direction has been done in the following publications (listed from oldest to most recent).

  1. Ohlson and Swett [23] derived how sensitivity is influenced by the quantization and sampling process, using the statistics of the noise processes and the transfer functions of the system devices. However, no measurements are shown, and receiver gain variations are not incorporated.

  2. In [24], a theoretical derivation is provided on how sampling, sampling jitter, dc bias, and quantization influence the correlation output. However, receiver gain variations are not incorporated, and no measurements are presented.

  3. In [25], a PDMR prototype with a center frequency of 1.4 GHz and a bandwidth of 20 MHz is presented, and the influence of sampling on the correlator signal-to-noise ratio is demonstrated. However, receiver gain variations are not incorporated.

  4. Lu et al. [26] provided a thorough theoretical derivation incorporating receiver gain variations, but the transfer function of the system is required. No measurements are conducted, and the signal model does not model quadrature downconversion.

In Table I, a summarized comparison of these publications is provided.

TABLE I Comparison of Existing Literature for the Sensitivity of Digital Radiometers

Unfortunately, to the best of the authors’ knowledge, no other publications on this topic are available. With this publication, we aim to fill that gap in the literature by providing a solid theoretical foundation for the sensitivity and estimation capabilities of a digital radiometer that incorporates receiver gain variations, quantization, additive noise, and phase noise, along with simulations and measurements that confirm the theoretical derivations. The main benefits of this work are as follows.

  1. This article offers a relatively simple derivation of the sensitivity of a digital radiometer using the well-known framework of the Cramér–Rao lower bound (CRLB).

  2. Only the variances of the involved random processes need to be known—no transfer functions or higher statistical moments are required.

  3. Different nonidealities are incorporated into a single model.

  4. Insights are given on how the modeling of the amplifier can change the estimation problem.

  5. Real-world measurements and simulations validate the derived theoretical equations.

This publication is structured as follows: the CRLB theory is briefly reviewed in Section II. The signal model and the CRLB are derived in Section III. The simulation approach is described in Section IV. In Section V-A, the measurement setup is introduced. In Section V-B, the results are shown and discussed. A conclusion follows in Section VI.

In the following, italic variables, for example, x, are scalars; lowercase bold symbols, \boldsymbol {x} , are vectors; and uppercase bold symbols, \boldsymbol {X} , are matrices. {\mathbb {E}}\lbrace \cdot \rbrace , {\mathrm {var}}\lbrace \cdot \rbrace , and {\mathrm {std}}\lbrace \cdot \rbrace denote the expectation, variance, and standard deviation of a random variable, respectively. \boldsymbol {X}^{\top } and \boldsymbol {X}^{-1} denote the transpose and inverse of a matrix \boldsymbol {X} , respectively.

Quantities marked with a hat, \hat {x} , are estimated values. {p}(x; {\theta }) describes the probability density function (pdf) of a random variable parameterized by a parameter \theta .

\mathcal {N}(\mu, \sigma ^{2}) describes a Gaussian distribution with mean \mu and variance \sigma ^{2} . \mathcal {U}(a,b) describes a uniform distribution on the interval from a to b. \mathcal {R}\lbrace \cdot \rbrace and \mathcal {I}\lbrace \cdot \rbrace denote the real and imaginary parts of a complex number, respectively.

SECTION II.

CRLB-Theory

In this section, a short overview of the CRLB theory is given. Assume a vector parameter \boldsymbol{\theta} = \begin{bmatrix} \theta_{1}, \theta_{2}, \ldots, \theta_{p} \end{bmatrix}^{\top}, and let \hat{\boldsymbol{\theta}} be an unbiased estimator of \boldsymbol{\theta}, that is, \mathbb{E}\lbrace \hat{\boldsymbol{\theta}}\rbrace = \boldsymbol{\theta}. Then, a lower bound on the variance of the unbiased estimator is given [27] as
\begin{equation*} \mathrm{var}\lbrace \hat{\theta}_{i}\rbrace \ge \lbrack \boldsymbol{I}(\boldsymbol{\theta})^{-1}\rbrack_{ii} \tag{4}\end{equation*}
where \boldsymbol{I}(\boldsymbol{\theta}) is the Fisher information matrix, defined [27] as
\begin{equation*} \lbrack \boldsymbol{I}(\boldsymbol{\theta})\rbrack_{ij} = - \mathbb{E}\left\lbrace \frac{\partial^{2} \ln p(\boldsymbol{x}; \boldsymbol{\theta})}{\partial \theta_{i}\, \partial \theta_{j}} \right\rbrace \tag{5}\end{equation*}
where \boldsymbol{x} = \begin{bmatrix} x_{0}, x_{1}, \ldots, x_{N-1} \end{bmatrix}^{\top} is the measurement vector consisting of the measurement samples. For all the following considerations, we assume that the parameterized pdfs satisfy all required regularity conditions [27].

Furthermore, a minimum variance unbiased (MVU) estimator \boldsymbol{g}(\boldsymbol{x}) attaining the bound can be found if and only if a function \boldsymbol{g}(\boldsymbol{x}) exists that fulfills [27]
\begin{equation*} \frac{\partial \ln p(\boldsymbol{x}; \boldsymbol{\theta})}{\partial \boldsymbol{\theta}} = \boldsymbol{I}(\boldsymbol{\theta}) \left(\boldsymbol{g}(\boldsymbol{x}) - \boldsymbol{\theta}\right). \tag{6}\end{equation*}
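As a small numerical illustration of (4)–(6) (our own example, not part of the cited derivation), consider estimating the variance \theta = \sigma^{2} of zero-mean Gaussian samples. The Fisher information is N/(2\theta^{2}), so the bound is 2\theta^{2}/N, and the sample mean of x^{2} attains it:

```python
import numpy as np

rng = np.random.default_rng(0)

theta, N, trials = 2.0, 1024, 5000
crlb = 2.0 * theta**2 / N                     # CRLB for the variance of N(0, theta)

x = rng.normal(0.0, np.sqrt(theta), size=(trials, N))
theta_hat = np.mean(x**2, axis=1)             # unbiased estimator of theta

print(np.var(theta_hat), crlb)                # both approx. 7.8e-3
```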

SECTION III.

Theory

A. Signal Model

In our previous work [22], we introduced a dual-channel PDMR. The architecture of our implemented PDMR is depicted in Fig. 1. One of the two channels is now modeled mathematically. A possible simplified block diagram of a single channel can be seen in Fig. 2(a). In Fig. 2(b) and (c), two different models of the low-noise-block (LNB) converter are shown. The appearing quantities are {i}(t) as the received input signal, {g} as the voltage gain of the LNB, {\Delta g}(t) as the voltage gain variation, {r_{\textrm {i}}}(t) as the additive noise of the LNB modeled at the input of the amplifier, {r_{\textrm {o}}}(t) as the additive noise of the LNB modeled at the output of the amplifier, {l}(t) as the local oscillator (LO) signal of the LNB, {q}\lbrack n\rbrack as the quantization noise, and {s_{\textrm {I}}}\lbrack n\rbrack and {s_{\textrm {Q}}}\lbrack n\rbrack as the in-phase and quadrature components of the complex baseband signal, respectively.

Fig. 2.

(a) System model of one of the channels of the dual-channel PDMR. The input signal {i}(t) is amplified, downconverted, and quantized. In the digital domain, the signal is quadrature downconverted, which results in a complex baseband signal {s}\lbrack n\rbrack = {s_{\textrm {I}}}\lbrack n\rbrack + {j} {s_{\textrm {Q}}}\lbrack n\rbrack . There are two different ways of modeling the additive receiver noise of the LNB. (b) Additive noise is added at the output of the amplifier, which means that the noise is not affected by the receiver gain variations. This implies that the noise figure F of the amplifier is independent of the receiver gain variations {\Delta g}(t) . (c) Additive receiver noise is referred to the input of the amplifier, which models a case where the noise figure F of the amplifier is not independent of the receiver gain variations {\Delta g}(t) .

Fig. 2(b) and (c) differ in how the additive receiver noise is modeled and added. In Fig. 2(b), the receiver noise is modeled at the output of the amplifier. The variance of the added noise would then be
\begin{equation*} \mathrm{var}\lbrace r_{\textrm{o}}(t)\rbrace = g^{2} k_{\textrm{B}} T_{\mathrm{rec}} B \tag{7}\end{equation*}
where T_{\mathrm{rec}} is the equivalent receiver noise temperature, which depends on the noise figure of the amplifier [28]
\begin{equation*} T_{\mathrm{rec}} = (F-1)\, T_{0}. \tag{8}\end{equation*}
This models a case where the additive receiver noise is independent of the receiver gain variations introduced with {\Delta g}(t) .

In Fig. 2(c), the additive receiver noise is modeled at the input of the amplifier. In this case, the variance of the added noise would be
\begin{equation*} \mathrm{var}\lbrace r_{\textrm{i}}(t)\rbrace = k_{\textrm{B}} T_{\mathrm{rec}} B. \tag{9}\end{equation*}
This model corresponds to a case where the noise figure of the device depends on the receiver gain variations introduced with {\Delta g}(t) . The relationship given in (3) also models this case because the system temperature is multiplied by a factor depending on the receiver gain variations. In general, identifying the correct model is complicated because of the uncertainty regarding the relationship between receiver gain variations and the noise figure, particularly in the absence of detailed knowledge about the architecture of the employed amplifier.

In the subsequent sections, we will adopt the model illustrated in Fig. 2(b), as it aligns more closely with our measurement setup. In this model framework, the concept of system temperature, {T_{\mathrm {sys}}} = {T_{\mathrm {ant}}} + {T_{\mathrm {rec}}} , lacks practical relevance since we do not attribute the additive noise of the receiver to the system input. Consequently, our estimators will focus on determining the equivalent noise temperature of the input signal, {i}(t) , and performance metrics will be assessed accordingly. It is important to acknowledge that while our analysis focuses on this particular model, similar derivations can be applied to alternative models with minimal impact on the results.

The received signal {i}(t) is a Gaussian noise random process with a bandwidth larger than or equal to the bandwidth B of the receiver chain. The received signal can be described as
\begin{equation*} i(t) = w(t)\, e^{j(2\pi f_{\mathrm{rf}} t + \varphi_{0})} \tag{10}\end{equation*}
where f_{\mathrm{rf}} is the center frequency of the RF input range and w(t) = w_{\textrm{I}}(t) + j w_{\textrm{Q}}(t) is a zero-mean complex baseband Gaussian noise process with
\begin{equation*} \sigma_{w}^{2} = \mathrm{var}\lbrace w_{\textrm{I}}(t)\rbrace = \mathrm{var}\lbrace w_{\textrm{Q}}(t)\rbrace = \frac{1}{2} k_{\textrm{B}} T_{\mathrm{ant}} B. \tag{11}\end{equation*}
Here, T_{\mathrm{ant}} is an equivalent antenna noise temperature. The input signal is amplified and the additive noise is added, which gives
\begin{equation*} \left(g + \Delta g(t)\right) w(t)\, e^{j(2\pi f_{\mathrm{rf}} t + \varphi_{0})} + r_{\textrm{o}}(t)\, e^{j(2\pi f_{\mathrm{rf}} t + \varphi_{1})} \tag{12}\end{equation*}
where r_{\textrm{o}}(t) = r_{\textrm{o,I}}(t) + j r_{\textrm{o,Q}}(t) describes the additive complex baseband receiver noise with
\begin{equation*} \mathrm{var}\lbrace r_{\textrm{o,I}}(t)\rbrace = \mathrm{var}\lbrace r_{\textrm{o,Q}}(t)\rbrace = \frac{1}{2} g^{2} k_{\textrm{B}} T_{\mathrm{rec}} B. \tag{13}\end{equation*}

This signal is downconverted to the IF range by the LNB with the LO signal {l}(t) = \cos(2\pi f_{\mathrm{lo}} t + \varphi_{\mathrm{lo}} + \Delta\varphi(t)), where \varphi_{\mathrm{lo}} is the unknown starting phase of the LO signal and \Delta\varphi(t) denotes the phase noise process of the LO of the LNB. After that, the signal is quantized, which adds the quantization noise {q}\lbrack n\rbrack = {q}_{\textrm{I}}\lbrack n\rbrack + {j} {q}_{\textrm{Q}}\lbrack n\rbrack . A simple model for the quantization noise assumes a uniform distribution of {q}_{\textrm{I}}\lbrack n\rbrack and {q}_{\textrm{Q}}\lbrack n\rbrack . The variance of the quantization noise is then given [29] as
\begin{equation*} \mathrm{var}\lbrace q_{\textrm{I}}\lbrack n\rbrack\rbrace = \mathrm{var}\lbrace q_{\textrm{Q}}\lbrack n\rbrack\rbrace = \frac{1}{2 Z_{0}}\frac{\Delta^{2}}{12} \tag{14}\end{equation*}
where \Delta is the voltage step of the least significant bit of the ADC and Z_{0} is the impedance on which this noise is realized.
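For instance, the per-component quantization noise variance of (14) can be evaluated as follows (a sketch; the full-scale range used here is a placeholder, not a value taken from the article):

```python
def quantization_noise_variance(full_scale_vpp, n_bits, Z0=50.0):
    """Per-component quantization noise variance per (14): Delta^2 / (12 * 2 * Z0)."""
    delta = full_scale_vpp / 2**n_bits   # voltage step of the least significant bit
    return delta**2 / (12.0 * 2.0 * Z0)

print(quantization_noise_variance(full_scale_vpp=1.0, n_bits=14))  # approx. 3.1e-12
```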

The quantization process introduces additional effects. For example, the random process before quantization exhibits specific covariance and correlation properties, which may not retain the same covariance and correlation when discretized. However, for simplicity, we will disregard these effects. Subsequent measurements will demonstrate that these effects are negligible in real-world scenarios.

After downconversion and low-pass filtering, the signal becomes
\begin{align*} & \frac{1}{2} \left(g + \Delta g\lbrack n\rbrack\right) w\lbrack n\rbrack\, e^{j\left(2\pi \frac{f_{\mathrm{if}}}{f_{\textrm{s}}} n + \varphi_{0} + \Delta\varphi\lbrack n\rbrack\right)} \\ & \quad + \frac{1}{2} r_{\textrm{o}}\lbrack n\rbrack\, e^{j\left(2\pi \frac{f_{\mathrm{if}}}{f_{\textrm{s}}} n + \varphi_{1} + \Delta\varphi\lbrack n\rbrack\right)} + q\lbrack n\rbrack\, e^{j\left(2\pi \frac{f_{\mathrm{if}}}{f_{\textrm{s}}} n + \varphi_{2}\right)} \tag{15}\end{align*}
where f_{\mathrm{if}} = f_{\mathrm{rf}} - f_{\mathrm{lo}} and f_{\textrm{s}} is the sampling frequency. The signal is digitally quadrature downconverted to the baseband. The digital downconversion is assumed to be ideal. With that, the baseband signal can be written as
\begin{align*} s\lbrack n\rbrack & = \frac{1}{2} \left(g + \Delta g\lbrack n\rbrack\right) w\lbrack n\rbrack\, e^{j\left(\varphi_{0} + \Delta\varphi\lbrack n\rbrack\right)} \\ & \quad + \frac{1}{2} r_{\textrm{o}}\lbrack n\rbrack\, e^{j\left(\varphi_{1} + \Delta\varphi\lbrack n\rbrack\right)} + q\lbrack n\rbrack\, e^{j\varphi_{2}}. \tag{16}\end{align*}
The in-phase component can be calculated to be
\begin{align*} s_{\textrm{I}}\lbrack n\rbrack & = \mathcal{R}\lbrace s\lbrack n\rbrack\rbrace \\ & = \frac{1}{2} \left(g + \Delta g\lbrack n\rbrack\right) \left(w_{\textrm{I}}\lbrack n\rbrack \cos\left(\varphi_{0} + \Delta\varphi\lbrack n\rbrack\right) - w_{\textrm{Q}}\lbrack n\rbrack \sin\left(\varphi_{0} + \Delta\varphi\lbrack n\rbrack\right)\right) \\ & \quad + \frac{1}{2} \left(r_{\textrm{o,I}}\lbrack n\rbrack \cos\left(\varphi_{1} + \Delta\varphi\lbrack n\rbrack\right) - r_{\textrm{o,Q}}\lbrack n\rbrack \sin\left(\varphi_{1} + \Delta\varphi\lbrack n\rbrack\right)\right) \\ & \quad + \left(q_{\textrm{I}}\lbrack n\rbrack \cos\left(\varphi_{2}\right) - q_{\textrm{Q}}\lbrack n\rbrack \sin\left(\varphi_{2}\right)\right) \tag{17}\end{align*}
and the quadrature component as
\begin{align*} s_{\textrm{Q}}\lbrack n\rbrack & = \mathcal{I}\lbrace s\lbrack n\rbrack\rbrace \\ & = \frac{1}{2} \left(g + \Delta g\lbrack n\rbrack\right) \left(w_{\textrm{I}}\lbrack n\rbrack \sin\left(\varphi_{0} + \Delta\varphi\lbrack n\rbrack\right) + w_{\textrm{Q}}\lbrack n\rbrack \cos\left(\varphi_{0} + \Delta\varphi\lbrack n\rbrack\right)\right) \\ & \quad + \frac{1}{2} \left(r_{\textrm{o,I}}\lbrack n\rbrack \sin\left(\varphi_{1} + \Delta\varphi\lbrack n\rbrack\right) + r_{\textrm{o,Q}}\lbrack n\rbrack \cos\left(\varphi_{1} + \Delta\varphi\lbrack n\rbrack\right)\right) \\ & \quad + \left(q_{\textrm{I}}\lbrack n\rbrack \sin\left(\varphi_{2}\right) + q_{\textrm{Q}}\lbrack n\rbrack \cos\left(\varphi_{2}\right)\right). \tag{18}\end{align*}
From here on, the factor 1/2 stemming from the real downconversion is neglected for readability. In a measurement scenario, this factor is simply absorbed into the gain of the system during the calibration process. We continue by calculating the expectation and variance of the in-phase and quadrature components. The expectation of both components is
\begin{equation*} \mathbb{E}\lbrace s_{\textrm{I}}\lbrack n\rbrack\rbrace = \mathbb{E}\lbrace s_{\textrm{Q}}\lbrack n\rbrack\rbrace = 0 \tag{19}\end{equation*}
because the random processes are assumed to be independent and zero mean, that is, \mathbb{E}\lbrace w\lbrack n\rbrack\rbrace = \mathbb{E}\lbrace r_{\textrm{o}}\lbrack n\rbrack\rbrace = \mathbb{E}\lbrace q\lbrack n\rbrack\rbrace = \mathbb{E}\lbrace \Delta g\lbrack n\rbrack\rbrace = 0.

The variances of the in-phase and quadrature components follow as
\begin{align*} \mathrm{var}\lbrace s_{\textrm{I}}\lbrack n\rbrack\rbrace & = \left(g^{2} + \mathrm{var}\lbrace \Delta g\lbrack n\rbrack\rbrace\right) \mathrm{var}\lbrace w_{\textrm{I}}\lbrack n\rbrack\rbrace + \mathrm{var}\lbrace r_{\textrm{o,I}}\lbrack n\rbrack\rbrace + \mathrm{var}\lbrace q_{\textrm{I}}\lbrack n\rbrack\rbrace \\ & = \left(g^{2} + \sigma_{g}^{2}\right) \sigma_{w}^{2} + \sigma_{\mathrm{ro}}^{2} + \sigma_{q}^{2} \tag{20}\\ \mathrm{var}\lbrace s_{\textrm{Q}}\lbrack n\rbrack\rbrace & = \left(g^{2} + \mathrm{var}\lbrace \Delta g\lbrack n\rbrack\rbrace\right) \mathrm{var}\lbrace w_{\textrm{Q}}\lbrack n\rbrack\rbrace + \mathrm{var}\lbrace r_{\textrm{o,Q}}\lbrack n\rbrack\rbrace + \mathrm{var}\lbrace q_{\textrm{Q}}\lbrack n\rbrack\rbrace \\ & = \left(g^{2} + \sigma_{g}^{2}\right) \sigma_{w}^{2} + \sigma_{\mathrm{ro}}^{2} + \sigma_{q}^{2} \tag{21}\end{align*}
where it is assumed that the appearing random processes are mutually independent. In the derivation, the identity [30]
\begin{equation*} \mathrm{var}\lbrace X Y\rbrace = \lbrack\mathbb{E}\lbrace X\rbrace\rbrack^{2} \mathrm{var}\lbrace Y\rbrace + \lbrack\mathbb{E}\lbrace Y\rbrace\rbrack^{2} \mathrm{var}\lbrace X\rbrace + \mathrm{var}\lbrace X\rbrace\, \mathrm{var}\lbrace Y\rbrace \tag{22}\end{equation*}
which holds for independent random variables, together with
\begin{equation*} \mathbb{E}\Big\lbrace \cos^{2}\left(\varphi + \Delta\varphi\lbrack n\rbrack\right) + \sin^{2}\left(\varphi + \Delta\varphi\lbrack n\rbrack\right)\Big\rbrace = 1 \tag{23}\end{equation*}
and
\begin{equation*} \mathbb{E}\lbrace X^{2}\rbrace = \mathrm{var}\lbrace X\rbrace + \lbrack\mathbb{E}\lbrace X\rbrace\rbrack^{2} \tag{24}\end{equation*}
were used. Interestingly, neither the start phases of the signals nor the phase noise influences the statistics of the baseband signal.

We assume that both the in-phase and quadrature components are normally distributed, {s_{\textrm {I}}}\lbrack n \rbrack \sim \mathcal {N}(0, \sigma _{\textrm {I}}^{2}) and {s_{\textrm {Q}}}\lbrack n \rbrack \sim \mathcal {N}(0,\sigma _{\textrm {Q}}^{2}) , with \sigma ^{2} = \sigma _{\textrm {Q}}^{2} = \sigma _{\textrm {I}}^{2} = {\mathrm {var}}\lbrace s_{\textrm {I}}\lbrack n\rbrack \rbrace = {\mathrm {var}}\lbrace s_{\textrm {Q}}\lbrack n\rbrack \rbrace . One should note that this assumption may not hold exactly if one of the nonidealities dominates the others, since this can change the resulting distribution; such cases require separate consideration. For typical scenarios, this effect is negligible.

We further assume that {s_{\textrm {I}}}\lbrack n \rbrack and {s_{\textrm {Q}}}\lbrack n \rbrack are independent. The pdf of {s}\lbrack n\rbrack then follows as the product of {p}_{\textrm {I}}({s_{\textrm {I}}}\lbrack n\rbrack) and {p}_{\textrm {Q}}({s_{\textrm {Q}}}\lbrack n \rbrack) :
\begin{equation*} p_{\textrm{I,Q}}\left(s_{\textrm{I}}\lbrack n\rbrack, s_{\textrm{Q}}\lbrack n\rbrack; T_{\mathrm{ant}}\right) = \frac{1}{2\pi \sigma^{2}(T_{\mathrm{ant}})} \exp\left(-\frac{s_{\textrm{I}}^{2}\lbrack n\rbrack + s_{\textrm{Q}}^{2}\lbrack n\rbrack}{2 \sigma^{2}(T_{\mathrm{ant}})}\right). \tag{25}\end{equation*}
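To make the signal model concrete, the following Python sketch draws complex baseband samples according to (16). The Gaussian models chosen here for the gain fluctuation, quantization noise, and phase noise, as well as all numerical values, are our own simplifying placeholders, not the authors' simulation code.

```python
import numpy as np

rng = np.random.default_rng(1)
kB = 1.380649e-23

def cnoise(std, N):
    """Complex noise with independent zero-mean real/imag parts of the given std."""
    return rng.normal(0.0, std, N) + 1j * rng.normal(0.0, std, N)

def simulate_baseband(N, T_ant, T_rec, B, g, var_dg=0.0, var_q=0.0, std_dphi=0.0):
    """Draw N complex baseband samples s[n] following (16); the factor 1/2 from
    the real downconversion is dropped, as in the text."""
    w = cnoise(np.sqrt(0.5 * kB * T_ant * B), N)          # antenna noise, eq. (11)
    ro = cnoise(np.sqrt(0.5 * g**2 * kB * T_rec * B), N)  # receiver noise, eq. (13)
    q = cnoise(np.sqrt(var_q), N)                         # quantization noise
    dg = rng.normal(0.0, np.sqrt(var_dg), N)              # gain fluctuation Delta g[n]
    dphi = rng.normal(0.0, std_dphi, N)                   # LO phase noise Delta phi[n]
    phi0, phi1, phi2 = rng.uniform(0.0, 2.0 * np.pi, 3)   # unknown start phases
    return ((g + dg) * w * np.exp(1j * (phi0 + dphi))
            + ro * np.exp(1j * (phi1 + dphi))
            + q * np.exp(1j * phi2))

s = simulate_baseband(2**16, T_ant=300.0, T_rec=150.0, B=230.1e6, g=10**(42.7 / 20))
```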

B. CRLB Derivation

As we have derived the pdf of a complex baseband sample {s}\lbrack n\rbrack = {s_{\textrm {I}}}\lbrack n\rbrack + {j} {s_{\textrm {Q}}}\lbrack n\rbrack , we are now able to express the pdf of the measurement vector \boldsymbol {s} = \begin{bmatrix} {s}\lbrack 0 \rbrack, {s}\lbrack 1 \rbrack, \ldots, {s}\lbrack N-1 \rbrack \end{bmatrix}^{\top } if we assume that the samples {s}\lbrack n \rbrack are independent. Then, the joint pdf is given as the product of the sample pdfs. Using (4), (5), and (25), one can calculate the CRLB of T_{\mathrm {ant}} as
\begin{align*} \mathrm{var}\lbrace \hat{T}_{\mathrm{ant}}\rbrace & \ge \frac{\left(\sigma^{2}(T_{\mathrm{ant}})\right)^{2}}{N} \left(\frac{\partial \sigma^{2}(T_{\mathrm{ant}})}{\partial T_{\mathrm{ant}}}\right)^{-2} \\ & \ge \frac{1}{N} \left(T_{\mathrm{ant}} + \frac{g^{2} T_{\mathrm{rec}}}{g^{2} + \sigma_{\textrm{g}}^{2}} + \frac{2 \sigma_{\textrm{q}}^{2}}{\left(g^{2} + \sigma_{\textrm{g}}^{2}\right) k_{\textrm{B}} B}\right)^{2}. \tag{26}\end{align*}
It is interesting to see that the variance of the estimated input noise temperature decreases with increasing variance of the receiver gain variations. This can be explained by the fact that the receiver gain variations add power to the input signal or, rather, lead to additional gain of the system.

The same result can be stated in terms of the standard deviation
\begin{equation*} \mathrm{std}\lbrace \hat{T}_{\mathrm{ant}}\rbrace \ge \frac{1}{\sqrt{N}} \left(T_{\mathrm{ant}} + \frac{g^{2} T_{\mathrm{rec}}}{g^{2} + \sigma_{\textrm{g}}^{2}} + \frac{2 \sigma_{\textrm{q}}^{2}}{\left(g^{2} + \sigma_{\textrm{g}}^{2}\right) k_{\textrm{B}} B}\right) \tag{27}\end{equation*}
which is more commonly used in microwave radiometry [see (1)]. For {T_{\mathrm {rec}}} = 0 and no quantization noise, this reduces to
\begin{equation*} \mathrm{std}\lbrace \hat{T}_{\mathrm{ant}}\rbrace \ge \frac{T_{\mathrm{ant}}}{\sqrt{N}} \tag{28}\end{equation*}
which is exactly the known sensitivity of a total power radiometer [31]
\begin{equation*} \Delta T_{\mathrm{ant}} = \frac{T_{\mathrm{ant}}}{\sqrt{B\tau}} \tag{29}\end{equation*}
if we use {f_{\textrm {s}}} = {B} and N = {\tau } {f_{\textrm {s}}} .
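The bound (27) is simple to evaluate numerically; a minimal helper (our own sketch), together with a check of the limiting case (28):

```python
import numpy as np

kB = 1.380649e-23

def crlb_std_T_ant(N, T_ant, T_rec, B, g, var_dg=0.0, var_q=0.0):
    """Lower bound on std{T_ant_hat} according to (27)."""
    denom = g**2 + var_dg
    return (T_ant + g**2 * T_rec / denom + 2.0 * var_q / (denom * kB * B)) / np.sqrt(N)

# With T_rec = 0 and no quantization noise, (28) is recovered:
print(crlb_std_T_ant(N=2**24, T_ant=300.0, T_rec=0.0, B=256e6, g=1e3))  # 300/4096, approx. 0.073 K
```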

The MVU estimator that attains the CRLB follows from (6) as
\begin{align*} \hat{T}_{\mathrm{ant,MVU}} & = \frac{1}{N} \frac{1}{k_{\textrm{B}} B \left(g^{2} + \sigma_{\textrm{g}}^{2}\right)} \sum_{n=0}^{N-1} |s\lbrack n\rbrack|^{2} \\ & \quad - \frac{g^{2} T_{\mathrm{rec}}}{g^{2} + \sigma_{\textrm{g}}^{2}} - \frac{2 \sigma_{\textrm{q}}^{2}}{\left(g^{2} + \sigma_{\textrm{g}}^{2}\right) k_{\textrm{B}} B}. \tag{30}\end{align*}
This MVU estimator is straightforward to interpret. The noise processes {r_{\textrm {o}}}\lbrack n\rbrack and {q}\lbrack n\rbrack contribute additional noise power to the signal, which is subsequently subtracted from the estimated power. Additionally, {\Delta g}\lbrack n\rbrack introduces multiplicative noise. Consequently, the estimated power is normalized by the variance associated with this noise process.
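A direct implementation of (30) could look as follows (a sketch; s is assumed to be an array of complex baseband samples, e.g., from the signal-model sketch above, and all receiver parameters are assumed to be known):

```python
import numpy as np

kB = 1.380649e-23

def t_ant_mvu(s, B, g, var_dg, T_rec, var_q=0.0):
    """MVU estimator (30): scaled mean of |s[n]|^2 minus the known noise terms."""
    denom = g**2 + var_dg
    p_hat = np.mean(np.abs(s)**2)             # estimate of E{|s[n]|^2}
    return (p_hat / (kB * B * denom)
            - g**2 * T_rec / denom
            - 2.0 * var_q / (denom * kB * B))
```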

In practice, the parameters required for implementing the derived MVU are often unavailable. Therefore, we will now examine how a simpler estimator might be affected by these factors.

C. Scaled Variance Estimator

We now investigate how the proposed scaled variance estimator (SVE)
\begin{equation*} \hat{T}_{\mathrm{ant,SVE}} = \frac{1}{N}\frac{1}{k_{\textrm{B}} g^{2} B} \sum_{n=0}^{N-1} |s\lbrack n\rbrack|^{2} \tag{31}\end{equation*}
performs. To this end, we evaluate the expectation and variance of this estimator. We start with the expectation
\begin{align*} \mathbb{E}\lbrace \hat{T}_{\mathrm{ant,SVE}}\rbrace & = \frac{1}{N}\frac{1}{k_{\textrm{B}} g^{2} B} \sum_{n=0}^{N-1} \mathbb{E}\lbrace |s\lbrack n\rbrack|^{2}\rbrace \\ & = \frac{g^{2} + \sigma_{\textrm{g}}^{2}}{g^{2}} T_{\mathrm{ant}} + T_{\mathrm{rec}} + \frac{2 \sigma_{\textrm{q}}^{2}}{k_{\textrm{B}} g^{2} B}. \tag{32}\end{align*}
The estimator is biased, and the bias grows with the power of the random processes {\Delta g}\lbrack n\rbrack , {r_{\textrm {o}}}\lbrack n\rbrack , and {q}\lbrack n\rbrack .
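For comparison with the MVU sketch above, (31) needs only the nominal gain and the bandwidth (again a sketch with s as an array of complex baseband samples):

```python
import numpy as np

kB = 1.380649e-23

def t_ant_sve(s, B, g):
    """Scaled variance estimator (31)."""
    return np.mean(np.abs(s)**2) / (kB * g**2 * B)
```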

The variance follows as
\begin{align*} \mathrm{var}\lbrace \hat{T}_{\mathrm{ant,SVE}}\rbrace & = \mathrm{var}\Big\lbrace \frac{1}{N} \frac{1}{k_{\textrm{B}} g^{2} B} \sum_{n=0}^{N-1} |s\lbrack n\rbrack|^{2} \Big\rbrace \\ & = \left(\frac{1}{N}\frac{1}{k_{\textrm{B}} g^{2} B}\right)^{2} \sum_{n=0}^{N-1} \mathrm{var}\lbrace |s\lbrack n\rbrack|^{2}\rbrace. \tag{33}\end{align*}

To continue the derivation, we need to find an expression for the variance of the magnitude squared of the complex baseband samples.

We have assumed that {s_{\textrm {I}}}\lbrack n\rbrack and {s_{\textrm {Q}}}\lbrack n\rbrack follow a Gaussian distribution. With that, we find
\begin{align*} \mathrm{var}\lbrace |s\lbrack n\rbrack|^{2}\rbrace & = \mathrm{var}\lbrace s_{\textrm{I}}^{2}\lbrack n\rbrack + s_{\textrm{Q}}^{2}\lbrack n\rbrack\rbrace \\ & = \mathbb{E}\lbrace s_{\textrm{I}}^{4}\lbrack n\rbrack\rbrace - \mathbb{E}^{2}\lbrace s_{\textrm{I}}^{2}\lbrack n\rbrack\rbrace + \mathbb{E}\lbrace s_{\textrm{Q}}^{4}\lbrack n\rbrack\rbrace - \mathbb{E}^{2}\lbrace s_{\textrm{Q}}^{2}\lbrack n\rbrack\rbrace \\ & = 3\left(\sigma^{2}\right)^{2} - \left(\sigma^{2}\right)^{2} + 3\left(\sigma^{2}\right)^{2} - \left(\sigma^{2}\right)^{2} = 4\left(\sigma^{2}\right)^{2}. \tag{34}\end{align*}
Inserting this into (33) gives
\begin{equation*} \mathrm{var}\lbrace \hat{T}_{\mathrm{ant,SVE}}\rbrace = \frac{1}{N} \left(\frac{g^{2} + \sigma_{\textrm{g}}^{2}}{g^{2}} T_{\mathrm{ant}} + T_{\mathrm{rec}} + \frac{2\sigma_{\textrm{q}}^{2}}{k_{\textrm{B}} g^{2} B}\right)^{2} \tag{35}\end{equation*}
or, again in terms of the standard deviation,
\begin{equation*} \mathrm{std}\lbrace \hat{T}_{\mathrm{ant,SVE}}\rbrace = \frac{1}{\sqrt{N}} \left(\frac{g^{2} + \sigma_{\textrm{g}}^{2}}{g^{2}} T_{\mathrm{ant}} + T_{\mathrm{rec}} + \frac{2\sigma_{\textrm{q}}^{2}}{k_{\textrm{B}} g^{2} B}\right). \tag{36}\end{equation*}

In this case, the standard deviation does not decrease with increasing variance of the receiver gain variations. On the contrary, the system performance degrades with increasing power of the randomly fluctuating receiver gain process.

D. Considerations on Sensitivity

The sensitivity is generally defined as given in (1). Applying this definition to the derived estimators yields
\begin{align*} \Delta T_{\mathrm{ant,MVU}} & = \frac{\mathrm{std}\lbrace \hat{T}_{\mathrm{ant,MVU}}\rbrace}{\dfrac{\partial \mathbb{E}\lbrace \hat{T}_{\mathrm{ant,MVU}}\rbrace}{\partial T_{\mathrm{ant}}}} = \mathrm{std}\lbrace \hat{T}_{\mathrm{ant,MVU}}\rbrace \\ & = \frac{1}{\sqrt{N}} \left(T_{\mathrm{ant}} + \frac{g^{2} T_{\mathrm{rec}}}{g^{2} + \sigma_{\textrm{g}}^{2}} + \frac{2 \sigma_{\textrm{q}}^{2}}{\left(g^{2} + \sigma_{\textrm{g}}^{2}\right) k_{\textrm{B}} B}\right) \tag{37}\end{align*}
for the MVU, where \mathrm{std}\lbrace \hat{T}_{\mathrm{ant,MVU}}\rbrace is given by (27). For the SVE,
\begin{align*} \Delta T_{\mathrm{ant,SVE}} & = \frac{\mathrm{std}\lbrace \hat{T}_{\mathrm{ant,SVE}}\rbrace}{\dfrac{\partial \mathbb{E}\lbrace \hat{T}_{\mathrm{ant,SVE}}\rbrace}{\partial T_{\mathrm{ant}}}} = \frac{\mathrm{std}\lbrace \hat{T}_{\mathrm{ant,SVE}}\rbrace}{\dfrac{g^{2} + \sigma_{\textrm{g}}^{2}}{g^{2}}} \\ & = \frac{1}{\sqrt{N}} \left(T_{\mathrm{ant}} + \frac{g^{2} T_{\mathrm{rec}}}{g^{2} + \sigma_{\textrm{g}}^{2}} + \frac{2 \sigma_{\textrm{q}}^{2}}{\left(g^{2}+\sigma_{\textrm{g}}^{2}\right) k_{\textrm{B}} B}\right) \tag{38}\end{align*}
using (32) and (36). Comparing (37) and (38), one can see that the sensitivity of both estimators is the same even though their estimation capabilities are inherently different.

E. Receiver Gain Variations

Understanding the variances of the involved random processes, namely {\Delta g}\lbrack n\rbrack , {r_{\textrm {o}}}\lbrack n \rbrack , and {q}\lbrack n\rbrack , is crucial for assessing the estimation performance. The variances of the additive receiver noise {r_{\textrm {o}}}\lbrack n\rbrack and the quantization noise {q}\lbrack n\rbrack have already been specified in (7) and (14), respectively.

Similarly, a range for the variance of the receiver gain variation {\Delta g}\lbrack n\rbrack should be given now. For standard low-noise microwave amplifiers, [1] states that
\begin{equation*} \frac{\Delta G_{\textrm{s},\text{rms}}}{G_{\textrm{s}}} \in \lbrack 10^{-4}, 10^{-2}\rbrack \tag{39}\end{equation*}
where G_{\textrm{s}} is the system power gain and \Delta G_{\textrm{s},\text{rms}} is the rms value of the randomly fluctuating system power gain.

Mathematically, the rms value can be calculated as
\begin{equation*} \Delta G_{\textrm{s},\text{rms}} = \lim_{T\rightarrow\infty} \sqrt{\frac{1}{T}\int_{0}^{T} \Delta G_{\textrm{s}}^{2}(t)\,\mathrm{d}t} \tag{40}\end{equation*}
where \Delta G_{\textrm{s}}(t) is the randomly fluctuating power gain component. Assuming that the input and output impedance of the amplifier are the same, the voltage gain and power gain are related by G = g^{2}. Using this relationship in (40) gives
\begin{align*} \Delta G_{\textrm{s},\text{rms}} & = \lim_{T\rightarrow\infty} \sqrt{\frac{1}{T}\int_{0}^{T} \Delta g^{4}(t)\,\mathrm{d}t} = \sqrt{\lim_{T\rightarrow\infty} \frac{1}{T}\int_{0}^{T} \Delta g^{4}(t)\,\mathrm{d}t} \\ & = \sqrt{\mathbb{E}\lbrace \Delta g^{4}(t)\rbrace} \tag{41}\end{align*}
where we assumed ergodicity of {\Delta g}(t) .

This expression depends on the distribution of {\Delta g}(t) . For example, if {\Delta g}(t) follows a Gaussian distribution, the expectation can be evaluated to be
\begin{equation*} \mathbb{E}\lbrace \Delta g^{4}(t)\rbrace = 3\left(\mathrm{var}\lbrace \Delta g(t)\rbrace\right)^{2}. \tag{42}\end{equation*}
Combining (39), (41), and (42) gives a range for the variance of the receiver gain variations
\begin{equation*} \mathrm{var}\lbrace \Delta g(t)\rbrace \in \frac{\lbrack 10^{-4}, 10^{-2}\rbrack}{\sqrt{3}}\, G_{\textrm{s}}. \tag{43}\end{equation*}
For example, for a system gain of G_{\textrm{s}} = {\mathrm{60~dB}} , this leads to
\begin{equation*} \mathrm{var}\lbrace \Delta g(t)\rbrace \in \lbrack 57.74,\ 5773.50\rbrack. \tag{44}\end{equation*}
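The numbers in (44) can be reproduced directly:

```python
import numpy as np

G_s = 10**(60 / 10)                       # 60 dB system power gain, linear scale
bounds = np.array([1e-4, 1e-2]) * G_s / np.sqrt(3)
print(bounds)                             # [57.74, 5773.50], cf. (44)
```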

SECTION IV.

Simulation

To verify the derived equations, a Monte Carlo simulation was conducted using the signal model from (16) to generate realizations of the baseband signal. The simulation results are presented in the following result section.
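A compact, self-contained version of such a Monte Carlo check (our own sketch with placeholder parameters, restricted to the case Δg = 0 and no quantization noise so that (27) and (36) coincide) could look as follows:

```python
import numpy as np

rng = np.random.default_rng(2)
kB = 1.380649e-23

T_ant, T_rec, B = 300.0, 150.0, 230.1e6       # placeholder temperatures and bandwidth
g, N, trials = 10**(42.7 / 20), 2**14, 2000
sw = np.sqrt(0.5 * kB * T_ant * B)            # per-component std of w[n], eq. (11)
sr = np.sqrt(0.5 * g**2 * kB * T_rec * B)     # per-component std of r_o[n], eq. (13)

est = np.empty(trials)
for m in range(trials):
    # Start phases are omitted: they do not affect |s[n]|^2.
    s = (g * (rng.normal(0, sw, N) + 1j * rng.normal(0, sw, N))
         + rng.normal(0, sr, N) + 1j * rng.normal(0, sr, N))
    est[m] = np.mean(np.abs(s)**2) / (kB * g**2 * B)   # SVE, eq. (31)

print(np.std(est))                            # Monte Carlo standard deviation
print((T_ant + T_rec) / np.sqrt(N))           # theoretical value from (36)/(27)
```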

One should note that when comparing the simulation to the measurement results, not all influences on the real system are modeled in the theoretical framework. For example, the temperature dependency of components and unknown receiver gain variations are not included in the model, which can lead to some differences between the measurements and simulations. To reduce these differences, each element of the receiver chain could be characterized more thoroughly, with these findings incorporated into the simulation. However, in this work, we only considered the artificially added receiver gain variations.

SECTION V.

Measurements

In Section III, we developed expressions for the expectation and standard deviation of estimators. In the following, these models are validated using real-world measurements.

A. Setup

The general measurement setup is depicted in Fig. 3. It consists of the RFSoC board with the digital hardware designs introduced in [22], the analog part shown in Fig. 3(a), and a noise diode as the noise source. A detailed description of the whole PDMR is given in [22]; a short summary follows here.

Fig. 3.

(a) Analog part of the measurement setup is embedded in a temperature-controlled chamber. (b) Whole setup. (c) Schematic of the measurement setup. The temperature stability of the temperature-controlled chamber is specified as ±0.3 K in the datasheet.

The used LNB has an input frequency range from 21.2 to 22.2 GHz and an LO frequency of f_{\mathrm {lo}}= {\mathrm {20.25}}~{\mathrm{GHz}} . The LNB has a nominal power gain of G = 60 dB and a nominal noise figure of F = 1.6 dB. The digital backend consists of the Xilinx RFSoC 4\times 2 evaluation board. The IF signal is directly digitized using a 14-bit ADC and processed using a combination of digital hardware and software. A detailed description of the digital hardware designs and software used can be found in a previous publication [22]. For readability, a brief summary is provided here. There are two different digital hardware designs. In the first, complex time-domain samples are stored in memory, where they can be accessed by a processor running software to process the samples. In the second design, dedicated digital hardware calculates the squared mean of the complex time-domain samples and stores the result in memory, allowing software to access it as needed. All used devices and components are listed in Table II.

TABLE II List of All Used Devices in the Measurement Setup Displayed in Fig. 3

The analog part of our PDMR is housed in a temperature-controlled chamber, as a defined environment is crucial for this high-precision measurement task. To visualize the important role of a stable environment, the estimated equivalent input noise temperature in comparison to the physical temperature of the housing of the LNB is shown in Fig. 4.

Fig. 4.

Measured average physical temperature on the surface of the LNB and estimated average system temperature \hat {T_{\mathrm {sys}}} over several hours. The estimated input noise temperature decreases with an increase in LNB temperature. This is due to the decrease in gain. Measurement parameters: {B} = {\mathrm {256}}~{\mathrm{MHz}} , N = 2^{24} , and 50 \cdot 2^{14} estimated T_{\mathrm {sys}} values.

It is evident that as the temperature of the LNB increases, the estimated system temperature decreases. This decline can be attributed to the decrease in gain resulting from the elevated physical temperature of the LNB.

Before the measurements were conducted, a calibration process took place with a noise diode as a known signal source. From this calibration and the gain from the datasheet of the LNB, an equivalent bandwidth was estimated. For this reason, the stated bandwidth may vary slightly between the upcoming measurement results.

As depicted in Fig. 3(a), a controllable attenuator was added at the input of the LNB. This allows receiver gain variations to be introduced artificially. To accurately create these artificial receiver gain variations with a defined {\mathrm {var}}\lbrace {\Delta g}(t)\rbrace , the system gain was measured over a range of control voltages U_{\textrm {c}} of the attenuator. The result is shown in Fig. 5.

Fig. 5.

Measured system power gain over different control voltages of the attenuator. A linear function is fit to the measurements. This fit is used to calculate the needed control voltage U_{\textrm {c}} for a given {\mathrm {var}}\lbrace {\Delta g}(t)\rbrace .

To realize the desired gain function with the added gain fluctuations, the attenuator is controlled by an arbitrary waveform generator (AWG). The control signal for different receiver gain variations {\mathrm {var}}\lbrace {\Delta g}(t)\rbrace can be seen in Fig. 6.

Fig. 6.

Control voltage U_{\textrm {c}} of the attenuator for different voltage gain variations {\mathrm {var}}\lbrace {\Delta g}(t)\rbrace . The uniformly distributed voltage gain variation leads to the depicted signals for controlling the attenuator, which is operating in the logarithmic power gain domain.

The generated receiver gain variation {\Delta g}\lbrack n\rbrack is uniformly distributed in an interval around zero, where the bounds of the interval are defined by the variance of the random process. Because of bandwidth and modulation speed limitations of the used hardware, the generated noise samples were sorted by amplitude. This makes the samples correlated, but it is not a dominant factor in the measurements. Furthermore, the calculated voltage gain signal is transformed into a voltage suited to control the attenuator, which has a linear attenuation on a logarithmic scale. This leads to the control voltages depicted in Fig. 6.
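A sketch of how such a control signal could be generated; the linear fit coefficients slope and offset are hypothetical stand-ins for the fit of Fig. 5, and the mapping itself is our own illustration rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

def attenuator_control_voltage(var_dg, g, n_samples, slope, offset):
    """Generate an AWG control signal: draw a uniformly distributed voltage-gain
    fluctuation Delta g with the desired variance, sort the samples by amplitude
    (as described in the text), convert the resulting gain to an attenuation in
    dB, and map it to a control voltage via a linear fit U_c = slope*att_dB + offset."""
    a = np.sqrt(3.0 * var_dg)                  # U(-a, a) has variance a^2 / 3
    dg = np.sort(rng.uniform(-a, a, n_samples))
    att_db = -20.0 * np.log10((g + dg) / g)    # attenuation relative to the nominal gain
    return slope * att_db + offset
```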

B. Results

The introduced setup is used to validate the derived formulas given in Section III. All measurements are done using only the RFSoC 4\times 2 evaluation board. No analog measurement hardware is used. One should note that for applying the estimators given with (30) and (31), the quantities {g}^{2}, {B}, {\mathrm {var}}\lbrace {r_{\textrm {o}}}\lbrack n\rbrack \rbrace , and {\mathrm {var}}\lbrace {q}\lbrack n\rbrack \rbrace need to be known. For the dependencies of the estimator measures on the receiver gain variation, we assume that {\mathrm {var}}\lbrace {q}\lbrack n\rbrack \rbrace = 0 , which is a valid assumption as we are discretizing with a 14-bit resolution and the input signal to the ADC is large enough to exercise a sufficient number of ADC levels.

The additive receiver noise is estimated using the Y-factor method [28] with the ON- and OFF-states of the noise diode to determine the noise figure of the setup; (8) then gives the equivalent receiver noise temperature T_{\mathrm {rec}} . The gain-bandwidth product is estimated using the ON-state of the noise diode and its specified excess noise ratio
\begin{equation*} g^{2} B = \frac{P_{\mathrm{on}}}{k_{\textrm{B}} T_{\mathrm{on}}} \tag{45}\end{equation*}
where P_{\mathrm {on}} is the measured power when the noise diode is on and T_{\mathrm {on}} is the equivalent noise temperature of the diode in the ON-state, specified via the excess noise ratio.
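In code, the two calibration steps could look like this (a sketch of the standard Y-factor relations and of (45); the assumption that the diode OFF-state presents the reference temperature T0 = 290 K is ours):

```python
kB = 1.380649e-23
T0 = 290.0

def t_rec_y_factor(P_on, P_off, enr_db):
    """Y-factor method [28]: T_rec from the powers measured with the noise
    diode on/off and its specified excess noise ratio (ENR)."""
    T_on = T0 * (10**(enr_db / 10.0) + 1.0)   # equivalent hot-source temperature
    y = P_on / P_off
    return (T_on - y * T0) / (y - 1.0)

def gain_bandwidth_product(P_on, enr_db):
    """Gain-bandwidth product g^2 * B according to (45)."""
    T_on = T0 * (10**(enr_db / 10.0) + 1.0)
    return P_on / (kB * T_on)
```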

The standard deviation decreases as the number of samples N of the squared mean increases. To check whether this behaves as expected, the simulated and measured variance of the estimators over different integration lengths N is plotted relative to the variance at N=2^{16} for one fixed configuration, {\mathrm {var}}\lbrace {\Delta g}(t)\rbrace = 0 . This is shown in Fig. 7. The result is the same for both estimators, as all factors cancel out when calculating the ratio.

Fig. 7.

Measured variance of the estimators over different integration lengths N relative to the measured variance at N=2^{16} . Theoretically, one would expect that the variance is decreasing with 1/N . This is backed by the simulation and measurement data shown in this plot.

The influence of the receiver gain variation is validated next. The expectation and standard deviation of the estimators versus the variance of the receiver gain variation can be seen in Fig. 8. The theoretical values match the simulation and measurement data. The slight offset can be explained by effects not modeled during the measurement, for example, the receiver gain variation of the LNB itself, and by the fact that the simulated and measured samples may not adhere fully to a Gaussian distribution, which was assumed during the derivation. Consistent with theoretical expectations, the MVU provides an unbiased estimate of the input power for all {\mathrm {var}}\lbrace {\Delta g}(t)\rbrace , while the estimate by the SVE exhibits increasing bias with higher {\mathrm {var}}\lbrace {\Delta g}(t)\rbrace .

Fig. 8.

Expectation and standard deviation of the estimators for different receiver gain variations \sigma _{\text {g}}^{2} = {\mathrm {var}}\lbrace {\Delta g}(t)\rbrace . The theoretically expected values are given in (32) and as {\mathbb {E}}\lbrace \hat {T_{\mathrm {ant}}}\rbrace = {T_{\mathrm {ant}}} for the MVU. The expected values are measured by taking the average over M=2^{16} estimates for different realizations of the measurement vector. The simulation results were obtained using the Monte Carlo method. As one can see, the SVE exhibits a bias, as this estimator calculates a scaled version of the signal power. Used parameters: {T_{\mathrm {ant}}}= {\mathrm {12140}}~{\mathrm{K}} , {T_{\mathrm {rec}}}= {\mathrm {22480}}~{\mathrm{K}} , {g}^{2} = {\mathrm {42.7~dB}} , and {B}= {\mathrm {230.1}}~{\mathrm{MHz}} . (a) Expectation of the estimators. (b) Standard deviation of the estimators.

Another influence on the performance of the estimators is the quantization noise {q}\lbrack n\rbrack , or rather the variance/power of the quantization noise, as can be seen in the equations developed in Section III. This influence is now validated through measurements.

As it is not possible to change the physical bitwidth of the ADC on the RFSoC evaluation board, the quantization for different bitwidths is performed in software by rounding s_{\textrm {I}}\lbrack n\rbrack and s_{\textrm {Q}}\lbrack n\rbrack :
\begin{equation*} s_{\textrm{I},\text{quant}}\lbrack n\rbrack = \mathrm{round}\left(\frac{s_{\textrm{I}}\lbrack n\rbrack}{\Delta}\right) \times \Delta \tag{46}\end{equation*}
where \mathrm{round}(\cdot) rounds to the nearest integer.

The relationship between bitwidth and quantization noise power is given by (14).
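A sketch of this software re-quantization, applied separately to the real and imaginary parts (the full-scale value is an assumed placeholder, not a documented property of the setup):

```python
import numpy as np

def requantize(x, n_bits, full_scale=1.0):
    """Re-quantize samples to the grid of an n_bits ADC per (46)."""
    delta = 2.0 * full_scale / 2**n_bits       # LSB step of the coarser grid
    return np.round(x / delta) * delta

# Applied to complex baseband samples s:
# s_q = requantize(s.real, 8) + 1j * requantize(s.imag, 8)
```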

The impact of the quantization process on the expected values of the estimators is depicted in Fig. 9.

Fig. 9.

Influence of the bitwidth of the ADC on the expected value and standard deviation of the estimators given in (30) and (31). The simulation data were obtained using the Monte Carlo method. The measurement results are estimated by averaging over M=2^{12} different realizations of the measurement vector per bitwidth. It is assumed that {\Delta g}\lbrack n\rbrack = 0 . Measured parameters: {T_{\mathrm {rec}}}= {\mathrm {1459.6}}~{\mathrm{K}} , {T_{\mathrm {ant}}}= {\mathrm {12140}}~{\mathrm{K}} , {g}^{2} = {\mathrm {54.5}}~{\mathrm{dB}} , and {B}= {\mathrm {241.1}}~{\mathrm{MHz}} . (a) Expectation of the estimators. (b) Standard deviation of the estimators.

While the MVU provides an unbiased estimate of the input power, the SVE tends to yield a higher equivalent input temperature. Notably, the expected value remains relatively consistent across different integration times N, as predicted by (32) and by {\mathbb {E}}\lbrace \hat {T}_{\mathrm {ant,MVU}}\rbrace = {T_{\mathrm {ant}}} .

Furthermore, the standard deviation of the estimators is also affected by quantization noise, as illustrated in Fig. 9(b). The assumption {\Delta g}\lbrack n\rbrack = 0 may not be entirely realistic due to inherent receiver gain variations within the LNB, which results in a slight deviation from the theoretical expectations. Additionally, this assumption leads to identical outcomes for both the MVU and the SVE, as evidenced by the comparison between (27) and (36).

Another influence on the performance of the estimators is the additive receiver noise modeled by T_{\mathrm {rec}} . As it was not possible to validate this effect through measurements with the proposed setup, only simulations were conducted. The result is displayed in Fig. 10. Due to the assumption of {\Delta g}\lbrack n\rbrack = 0 , the results for the standard deviation of the MVU and the SVE coincide. The simulation results match the theoretical values very well.

Fig. 10.

Influence of the additive receiver noise modeled by T_{\mathrm {rec}} on the expected value and standard deviation of the estimators given in (30) and (31). The simulation data were obtained using the Monte Carlo method. It is assumed that {\Delta g}\lbrack n\rbrack = 0 . Used parameters: {T_{\mathrm {ant}}}= {\mathrm {12140}}~{\mathrm{K}} , {g}^{2} = {\mathrm {42.7}}~{\mathrm{dB}} , and {B}= {\mathrm {230.1}}~{\mathrm{MHz}} . (a) Expectation of the estimators. (b) Standard deviation of the estimators.

The last investigated influence is phase noise. During the theoretical derivation, the phase noise of the LNB completely cancels and therefore should not influence the expectation or standard deviation of the estimators. To check this behavior, a simulation was conducted, where the phase noise was created by using the specified phase noise spectrum of the used LNB. The phase noise spectrum was shifted vertically in the frequency domain and the variance of the time-domain process was calculated by integrating over the spectral representation. The simulation result is shown in Fig. 11.
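One possible way to generate such a spectrum-shaped phase-noise realization (our own simplified sketch with hypothetical spectrum points, ignoring the special handling of the DC and Nyquist bins) is to interpolate the specified single-sideband spectrum onto the FFT grid, shape complex white noise with it, and transform back to the time domain:

```python
import numpy as np

rng = np.random.default_rng(4)

def phase_noise_from_psd(f_offsets, L_dbc_hz, fs, N):
    """Synthesize a phase-noise realization Delta phi[n] from a single-sideband
    spectrum L(f) in dBc/Hz tabulated at a few offset frequencies. The scaling is
    chosen so that var{dphi} approximately equals the integral of the one-sided
    PSD S_phi(f) = 2 * 10^(L(f)/10) over frequency."""
    f = np.fft.rfftfreq(N, d=1.0 / fs)
    f[0] = f[1]                                   # avoid log10(0) at DC
    L = np.interp(np.log10(f), np.log10(f_offsets), L_dbc_hz)
    S_phi = 2.0 * 10**(L / 10.0)                  # one-sided PSD in rad^2/Hz
    X = np.sqrt(S_phi * fs * N / 4.0) * (rng.normal(size=f.size)
                                         + 1j * rng.normal(size=f.size))
    return np.fft.irfft(X, n=N)

# Hypothetical spectrum points (offset frequency in Hz, level in dBc/Hz):
dphi = phase_noise_from_psd([1e3, 1e4, 1e5, 1e6], [-70, -80, -95, -110],
                            fs=256e6, N=2**16)
```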

Fig. 11.

Influence of the phase noise of the LNB on the expected value and standard deviation of the estimators given in (30) and (31). The simulation data were obtained using the Monte Carlo method. Used parameters: {T_{\mathrm {ant}}}= {\mathrm {12140}}~{\text {K}} , {g}^{2} = {\mathrm {42.7}}~{\mathrm{dB}} , {B}= {\mathrm {230.1}}~{\mathrm{MHz}} , and \sigma _{\text {g}}^{2} = 1000 . (a) Expectation of the estimators. (b) Standard deviation of the estimators.

As expected by the theoretical results, the phase noise of the LNB does not influence the estimation result.

SECTION VI.

Conclusion

In this article, a novel way of deriving the sensitivity of a digital total power radiometer using the CRLB was introduced. The receiver of the digital total power radiometer was mathematically modeled, and the MVU estimator was derived. This derived MVU estimator assumes knowledge of the statistics of various random processes. For a practical implementation, knowledge of these statistics cannot be assumed. Therefore, a suboptimal but simple estimator, which works without knowledge of the noise parameters, was introduced. This estimator was investigated regarding its performance when imperfections are present.

Theoretical results were validated using measurements and simulations. The measurements showed the importance of a controlled environment for this kind of highly sensitive device. This became especially evident with a long-term measurement of the antenna temperature in comparison to the physical temperature of the LNB.

The theoretical expressions for the variance, standard deviation, and expectation of the introduced estimators were validated using measurements and simulations, especially the dependence of these quantities on the receiver gain variations, quantization noise, additive noise, and phase noise. This provides novel insights and a deeper understanding of digital total power radiometers.

In future work, this theoretical framework and the developed estimators could be applied to the design of digital radiometers, providing insights into how various nonidealities affect device performance and sensitivity. This approach could help optimize radiometer sensitivity under real-world conditions and guide improvements in system design.
