Introduction
Radiometers are very sensitive passive microwave power detectors that usually measure the radiation of physical objects described by Planck’s law [1]. The frequency ranges used span from the L-, C-, X-, Ku-, and Ka-bands [2] to the F-, D-, G-, Y-, and J-bands [3], [4], or even the sub-THz range [5]. These devices are used in many fields of application, such as medical applications [6], [7], [8], [9], remote sensing [10], [11], [12], industrial applications [13], [14], forest fire detection [15], and human presence detection [16], to name a few. The radiometric sensitivity is a critical property of a microwave radiometer, as it determines the smallest distinguishable power level at the input of the radiometer device and, therefore, its resolution capabilities.
Mathematically, the sensitivity can be defined [17] as\begin{equation*} \Delta T = \frac {{\mathrm {std}}\lbrace \hat {T}\rbrace }{\dfrac {\partial {\mathbb {E}}\lbrace \hat {T}\rbrace }{\partial T}} \tag {1}\end{equation*}
where \hat{T} denotes an estimate of the temperature T. In the radiometric context, the temperature of interest is the system temperature, which is composed of the antenna temperature and the equivalent receiver noise temperature\begin{equation*} {T_{\mathrm {sys}}} = {T_{\mathrm {ant}}} + {T_{\mathrm {rec}}}. \tag {2}\end{equation*}
Since resolving small temperature differences is essential in various fields of application, the radiometric sensitivity for analog devices has been studied [1] and is shown to be\begin{equation*} {\Delta {T_{\mathrm {sys}}}} = {T_{\mathrm {sys}}} \sqrt {\frac {1}{ {B}{\tau }} + \left ({{ \frac {\Delta {G}}{G} }}\right)^{2}} \tag {3}\end{equation*}
where {B} denotes the predetection bandwidth, {\tau } the integration time, and \Delta G / G the relative fluctuation of the receiver gain.
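As a quick numerical illustration of (3), the following Python sketch evaluates the analog sensitivity for assumed example values of B, \tau, T_{\mathrm{sys}}, and \Delta G/G; these numbers are purely illustrative and are not taken from this work.
\begin{verbatim}
import math

# Illustrative example values (not from this work)
B = 1e9            # predetection bandwidth in Hz
tau = 1e-3         # integration time in s
T_sys = 500.0      # system noise temperature in K
dG_over_G = 1e-3   # relative receiver gain fluctuation

# Radiometric sensitivity according to (3)
delta_T = T_sys * math.sqrt(1.0 / (B * tau) + dG_over_G**2)
print(f"Delta T_sys = {delta_T:.3f} K")   # approx. 0.707 K
\end{verbatim}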
The improvement of analog-to-digital converters (ADCs) enables the shift of signal processing to the digital domain, which means that the signal is directly digitized after downconversion. This brings many advantages over purely analog processing, such as increased stability, reconfigurability, and the possibility of more sophisticated signal-processing algorithms, for example, evaluating the fourth moment for interference detection [18]. Due to these advantages, more and more radiometers are being built in a so-called direct-sampling or digital fashion [19], [20], [21].
In our previous work [22], we published an implementation of a passive digital microwave radiometer (PDMR) and laid out some advantages of the processing in the digital domain, like the possibility of evaluating the cross correlation for all possible lag values. The architecture of the previously published dual-channel PDMR can be seen in Fig. 1.
Architecture of the previously published dual-channel PDMR. For a detailed description, see [22]. The signal is digitally downconverted and its data rate is reduced using a numerically controlled oscillator (NCO) and a digital down converter (DDC), respectively, before processing.
This raises the question of how the sensitivity of these direct-sampling radiometers compares to their analog counterparts. Some work in this direction has been done in the following publications (listed from oldest to most recent).
Ohlson and Swett [23] derived how sensitivity is influenced by the quantization and sampling process, using the statistics of the noise processes and the transfer functions of the system devices. However, no measurements are shown, and receiver gain variations are not incorporated.
In [24], a theoretical derivation is provided on how sampling, sampling jitter, dc bias, and quantization influence the correlation output. However, receiver gain variations are not incorporated, and no measurements are presented.
In [25], a PDMR prototype with a center frequency of 1.4 GHz and a bandwidth of 20 MHz is presented, and the influence of sampling on the correlator signal-to-noise ratio is demonstrated. However, receiver gain variations are not incorporated.
Lu et al. [26] provided a thorough theoretical derivation incorporating receiver gain variations, but the transfer function of the system is required. No measurements are presented, and the signal model does not include quadrature downconversion.
In Table I, a summarized comparison of these publications is provided.
Unfortunately, to the best of the authors’ knowledge, no other publications on this topic are available. With this publication, we aim to fill that gap in the literature by providing a solid theoretical foundation for the sensitivity and estimation capabilities of a digital radiometer that incorporates receiver gain variations, quantization, additive noise, and phase noise, along with simulations and measurements that confirm the theoretical derivations. The main benefits of this work are as follows.
This article offers a relatively simple derivation of the sensitivity of a digital radiometer using the well-known framework of the Cramér–Rao lower bound (CRLB).
Only the variances of the involved random processes need to be known—no transfer functions or higher statistical moments are required.
Different nonidealities are incorporated into a single model.
Insights are given on how the modeling of the amplifier can change the estimation problem.
Real-world measurements and simulations validate the derived theoretical equations.
This publication is structured as follows. The CRLB theory is briefly reviewed in Section II. The signal model and the CRLB are derived in Section III. The conducted simulations are described in Section IV. In Section V-A, the measurement setup is introduced, and in Section V-B, the results are shown and discussed. A conclusion follows in Section VI.
In the following, italic variables, for example, x, are scalars; lowercase bold symbols, for example, \boldsymbol{x}, denote vectors; and uppercase bold symbols, for example, \boldsymbol{I}, denote matrices. Quantities marked with a hat, for example, \hat{T}, denote estimates.
CRLB Theory
In this section, a short overview of the CRLB theory is given. Assume a vector parameter \boldsymbol{\theta} that is to be estimated from the data \boldsymbol{x}. The variance of any unbiased estimator \hat{\theta}_{i} of the ith element of \boldsymbol{\theta} is then bounded by\begin{equation*} {\mathrm {var}}\lbrace \hat {\theta }_{i}\rbrace \ge \lbrack {\boldsymbol {I}}\left ({{\boldsymbol {\theta }}}\right)^{-1} \rbrack _{ii} \tag {4}\end{equation*}
where {\boldsymbol {I}}\left ({{\boldsymbol {\theta }}}\right) denotes the Fisher information matrix with the elements\begin{equation*} \lbrack {\boldsymbol {I}}\left ({{\boldsymbol {\theta }}}\right) \rbrack _{ij} = - {\mathbb {E}}\left \lbrace \frac {\partial ^{2} \ln {p}\left ({{{\boldsymbol {x}}; {\boldsymbol {\theta }}}}\right)}{\partial {\theta }_{i} \partial {\theta }_{j}} \right \rbrace \tag {5}\end{equation*}
and {p}\left ({{{\boldsymbol {x}}; {\boldsymbol {\theta }}}}\right) denotes the probability density function (pdf) of the data parameterized by \boldsymbol{\theta}.
Furthermore, a minimum variance unbiased (MVU) estimator \hat{\boldsymbol{\theta}} = \boldsymbol{g}\left ({{\boldsymbol {x}}}\right) that attains the bound exists if and only if the derivative of the log-likelihood function can be factored as\begin{equation*} \frac {\partial \ln {p}\left ({{ {\boldsymbol {x}}; {\boldsymbol {\theta }}}}\right)}{\partial {\boldsymbol {\theta }}} = {\boldsymbol {I}}\left ({{\boldsymbol {\theta }}}\right) \left ({{\boldsymbol {g}\left ({{\boldsymbol {x}}}\right) - {\boldsymbol {\theta }}}}\right). \tag {6}\end{equation*}
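To make the use of (4)-(6) concrete, the following sketch numerically checks the bound for the closely related textbook problem of estimating the variance of zero-mean real Gaussian samples, for which the CRLB equals 2\sigma^{4}/N and the sample mean of the squared data attains it; all parameter values are arbitrary examples.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, sigma2, trials = 1000, 2.0, 20000   # arbitrary example values

# Zero-mean Gaussian data; the unknown parameter is the variance sigma2
x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))

# Unbiased estimator (1/N) * sum(x^2); it attains the CRLB 2*sigma2^2/N
est = np.mean(x**2, axis=1)
crlb = 2.0 * sigma2**2 / N

print(np.var(est))   # empirical estimator variance, approx. 0.008
print(crlb)          # CRLB, exactly 0.008 for these values
\end{verbatim}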
Theory
A. Signal Model
In our previous work [22], we introduced a dual-channel PDMR. The architecture of our implemented PDMR is depicted in Fig. 1. One of the two channels is now modeled mathematically. A possible simplified block diagram of a single channel can be seen in Fig. 2(a). In Fig. 2(b) and (c), two different models of the low-noise-block (LNB) converter are shown. The quantities that appear in these figures are introduced in the following.
(a) System model of one of the channels of the dual-channel PDMR is shown. The input signal
Fig. 2(b) and (c) differ in how the additive receiver noise is modeled and added. In Fig. 2(b), the receiver noise is modeled at the output of the amplifier. The variance of the added noise would then be\begin{equation*} {\mathrm {var}}\lbrace {r_{\textrm {o}}}\left ({{t}}\right)\rbrace = {g}^{2} {k_{\textrm {B}}} {T_{\mathrm {rec}}} {B} \tag {7}\end{equation*}
where {g} denotes the gain of the amplifier, {k_{\textrm {B}}} the Boltzmann constant, {B} the bandwidth, and {T_{\mathrm {rec}}} the equivalent receiver noise temperature\begin{equation*} {T_{\mathrm {rec}}} = \left ({{{F}-1}}\right) T_{0} \tag {8}\end{equation*}
with the noise figure {F} and the standard reference temperature T_{0} = 290 K.
In Fig. 2(c), the additive receiver noise is modeled at the input of the amplifier. In this case, the variance of the added noise would be\begin{equation*} {\mathrm {var}}\lbrace {r_{\textrm {i}}}\left ({{t}}\right)\rbrace = {k_{\textrm {B}}} {T_{\mathrm {rec}}} {B}. \tag {9}\end{equation*}
In the subsequent sections, we will adopt the model illustrated in Fig. 2(b), as it aligns more closely with our measurement setup. In this model framework, the concept of system temperature,
The received signal at the antenna is modeled as the bandpass noise process\begin{equation*} {i}\left ({{t}}\right) = {w}\left ({{t}}\right) e^{{j} \left ({{2\pi f_{\mathrm {rf}} t + \varphi _{0}}}\right)} \tag {10}\end{equation*}
where {w}\left ({{t}}\right) = {w}_{\textrm {I}}\left ({{t}}\right) + {j} {w}_{\textrm {Q}}\left ({{t}}\right) is a complex Gaussian noise process whose in-phase and quadrature components have the variance\begin{equation*} \sigma _{w}^{2} = {\mathrm {var}}\lbrace {w}_{\textrm {I}}\left ({{t}}\right)\rbrace = {\mathrm {var}}\lbrace {w}_{\textrm {Q}}\left ({{t}}\right)\rbrace = \frac {1}{2} {k_{\textrm {B}}} {T_{\mathrm {ant}}} {B}. \tag {11}\end{equation*}
After amplification with the fluctuating gain {g} + {\Delta g}\left ({{t}}\right) and the addition of the receiver noise {r_{\textrm {o}}}\left ({{t}}\right), the signal at the output of the amplifier reads\begin{equation*} \left ({{ {g} + {\Delta g}\left ({{t}}\right)}}\right) {w}\left ({{t}}\right) e^{{j} \left ({{2\pi f_{\mathrm {rf}} t + \varphi _{0}}}\right)} + {r_{\textrm {o}}}\left ({{t}}\right) e^{{j} \left ({{2\pi f_{\mathrm {rf}} t + \varphi _{1}}}\right)} \tag {12}\end{equation*}
where the in-phase and quadrature components of the receiver noise have the variance\begin{equation*} {\mathrm {var}}\lbrace r_{\textrm {o,I}}\left ({{t}}\right)\rbrace = {\mathrm {var}}\lbrace r_{\textrm {o,Q}}\left ({{t}}\right)\rbrace = \frac {1}{2} {g}^{2} {k_{\textrm {B}}} {T_{\mathrm {rec}}} {B}. \tag {13}\end{equation*}
This signal is downconverted to the IF range by the LNB with the LO signal, which introduces the phase noise {\Delta \varphi }\left ({{t}}\right), and is subsequently sampled and quantized by the ADC. The quantization is modeled by the additive noise process {q}\lbrack n\rbrack, whose in-phase and quadrature components have the variance\begin{equation*} {\mathrm {var}}\lbrace {q}_{\textrm {I}}\lbrack n\rbrack \rbrace = {\mathrm {var}}\lbrace {q}_{\textrm {Q}}\lbrack n\rbrack \rbrace = \frac {1}{2 Z_{0}}\frac {\Delta ^{2}}{12} \tag {14}\end{equation*}
where \Delta denotes the quantization step size and Z_{0} the reference impedance.
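As a small sketch of (14), the per-component quantization noise variance can be evaluated for an assumed ADC bitwidth, full-scale voltage range, and reference impedance; all three values below are assumptions for illustration and not the parameters of the ADC used later.
\begin{verbatim}
bits = 12        # ADC bitwidth (assumed)
v_fs = 1.0       # full-scale voltage range in V (assumed)
Z0 = 50.0        # reference impedance in ohm (assumed)

delta = v_fs / 2**bits                 # quantization step size
sigma_q2 = delta**2 / (12 * 2 * Z0)    # per-component variance, cf. (14)
print(f"sigma_q^2 = {sigma_q2:.3e} W")
\end{verbatim}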
The quantization process introduces additional effects. For example, the random process before quantization exhibits specific covariance and correlation properties, which may not be retained after quantization. However, for simplicity, we will disregard these effects. Subsequent measurements will demonstrate that they are negligible in real-world scenarios.
After downconversion, low-pass filtering, and sampling, the digitized IF signal can be written as\begin{align*} & \frac {1}{2} \left ({{ {g} + {\Delta g}\lbrack n \rbrack }}\right) {w}\lbrack n \rbrack e^{{j}\left ({{2\pi \frac {f_{\mathrm {if}}}{ {f_{\textrm {s}}}} n + \varphi _{0} + {\Delta \varphi }\lbrack n \rbrack }}\right)} \\ & \quad + \frac {1}{2} {r_{\textrm {o}}}\lbrack n \rbrack e^{{j}\left ({{2\pi \frac {f_{\mathrm {if}}}{f_{\textrm {s}}} n + \varphi _{1} + {\Delta \varphi }\lbrack n \rbrack }}\right)} + {q}\lbrack n \rbrack e^{{j}\left ({{2\pi \frac {f_{\mathrm {if}}}{f_{\textrm {s}}} n + \varphi _{2}}}\right)} \tag {15}\end{align*}
After the digital downconversion to the complex baseband, the signal becomes\begin{align*} s\lbrack n \rbrack & =\frac {1}{2} \left ({{ {g} + {\Delta g}\lbrack n \rbrack }}\right) {w}\lbrack n \rbrack e^{{j} \left ({{\varphi _{0} + {\Delta \varphi }\lbrack n \rbrack }}\right)} \\ & \quad + \frac {1}{2} {r_{\textrm {o}}}\lbrack n \rbrack e^{{j} \left ({{\varphi _{1} + {\Delta \varphi }\lbrack n \rbrack }}\right)} + {q}\lbrack n \rbrack e^{{j} \varphi _{2}}. \tag {16}\end{align*}
The in-phase and quadrature components of the baseband signal follow as\begin{align*} & {s_{\textrm {I}}}\lbrack n \rbrack \\ & = \mathcal {R}\lbrace s\lbrack n\rbrack \rbrace \\ & = \frac {1}{2} \left ({{ {g}+ {\Delta g}\lbrack n\rbrack }}\right) \\ & \quad \times \left ({{ {w}_{\textrm {I}} \lbrack n\rbrack \cos \left ({{\varphi _{0} + {\Delta \varphi }\lbrack n\rbrack }}\right) - {w}_{\textrm {Q}}\lbrack n \rbrack \sin \left ({{\varphi _{0} + {\Delta \varphi }\lbrack n\rbrack }}\right)}}\right) \\ & \quad + \frac {1}{2} \left ({{r_{\textrm {o,I}}\lbrack n\rbrack \cos \left ({{\varphi _{1} + {\Delta \varphi }\lbrack n\rbrack }}\right) - r_{\textrm {o,Q}}\lbrack n\rbrack \sin \left ({{\varphi _{1} + {\Delta \varphi }\lbrack n\rbrack }}\right)}}\right) \\ & \quad + \left ({{ {q}_{\textrm {I}}\lbrack n\rbrack \cos \left ({{\varphi _{2}}}\right) - {q}_{\textrm {Q}}\lbrack n\rbrack \sin \left ({{\varphi _{2}}}\right) }}\right) \tag {17}\end{align*}
and\begin{align*} & {s_{\textrm {Q}}}\lbrack n \rbrack \\ & = \mathcal {I}\lbrace s\lbrack n\rbrack \rbrace \\ & = \frac {1}{2} \left ({{ {g}+ {\Delta g}\lbrack n\rbrack }}\right) \\ & \quad \times \left ({{ {w}_{\textrm {I}} \lbrack n\rbrack \sin \left ({{\varphi _{0} + {\Delta \varphi }\lbrack n\rbrack }}\right) + {w}_{\textrm {Q}}\lbrack n \rbrack \cos \left ({{\varphi _{0} + {\Delta \varphi }\lbrack n\rbrack }}\right)}}\right) \\ & \quad + \frac {1}{2} \left ({{r_{\textrm {o,I}}\lbrack n\rbrack \sin \left ({{\varphi _{1} + {\Delta \varphi }\lbrack n\rbrack }}\right) + r_{\textrm {o,Q}}\lbrack n\rbrack \cos \left ({{\varphi _{1} + {\Delta \varphi }\lbrack n\rbrack }}\right)}}\right) \\ & \quad + \left ({{ {q}_{\textrm {I}}\lbrack n\rbrack \sin \left ({{\varphi _{2}}}\right) + {q}_{\textrm {Q}}\lbrack n\rbrack \cos \left ({{\varphi _{2}}}\right) }}\right). \tag {18}\end{align*}
Since all involved noise processes are zero mean, the expectation of both components is\begin{equation*} {\mathbb {E}}\lbrace {s_{\textrm {I}}}\lbrack n\rbrack \rbrace = {\mathbb {E}}\lbrace {s_{\textrm {Q}}}\lbrack n\rbrack \rbrace = 0. \tag {19}\end{equation*}
The variances of the in-phase and quadrature components follow as\begin{align*} & {\mathrm {var}}\lbrace {s_{\textrm {I}}}\lbrack n \rbrack \rbrace \\ & = \left ({{ {g}^{2}+ {\mathrm {var}}\lbrace {\Delta g}\lbrack n\rbrack \rbrace }}\right) {\mathrm {var}}\lbrace {w}_{\textrm {I}}\lbrack n\rbrack \rbrace + {\mathrm {var}}\lbrace r_{\textrm {o,I}}\lbrack n \rbrack \rbrace \\ & \quad + {\mathrm {var}}\lbrace {q}_{\textrm {I}}\lbrack n\rbrack \rbrace \\ & = \left ({{ {g}^{2} + \sigma _{g}^{2}}}\right) \sigma ^{2}_{w} + \sigma ^{2}_{\mathrm {ro}} + \sigma ^{2}_{q} \tag {20}\\ & {\mathrm {var}}\lbrace {s_{\textrm {Q}}}\lbrack n \rbrack \rbrace \\ & = \left ({{ {g}^{2}+ {\mathrm {var}}\lbrace {\Delta g}\lbrack n\rbrack \rbrace }}\right) {\mathrm {var}}\lbrace {w}_{\textrm {Q}}\lbrack n\rbrack \rbrace + {\mathrm {var}}\lbrace r_{\textrm {o,Q}}\lbrack n \rbrack \rbrace \\ & \quad + {\mathrm {var}}\lbrace {q}_{\textrm {Q}}\lbrack n\rbrack \rbrace \\ & = \left ({{ {g}^{2} + \sigma _{g}^{2}}}\right) \sigma ^{2}_{w} + \sigma ^{2}_{\mathrm {ro}} + \sigma ^{2}_{q} \tag {21}\end{align*}
In this derivation, the relation for the variance of a product of two independent random variables X and Y\begin{align*} & {\mathrm {var}}\lbrace X Y \rbrace \\ & = \lbrack {\mathbb {E}}\lbrace X\rbrace \rbrack ^{2} {\mathrm {var}}\lbrace Y \rbrace + \lbrack {\mathbb {E}}\lbrace Y\rbrace \rbrack ^{2} {\mathrm {var}}\lbrace X \rbrace + {\mathrm {var}}\lbrace X \rbrace {\mathrm {var}}\lbrace Y \rbrace \tag {22}\end{align*}
the identity\begin{equation*} {\mathbb {E}}\Big \lbrace \cos ^{2}\left ({{\varphi + {\Delta \varphi }\lbrack n\rbrack }}\right) + \sin ^{2}\left ({{\varphi + {\Delta \varphi }\lbrack n\rbrack }}\right) \Big \rbrace = 1 \tag {23}\end{equation*}
and the relation\begin{equation*} {\mathbb {E}}\lbrace X^{2} \rbrace = {\mathrm {var}}\lbrace X \rbrace + \lbrack {\mathbb {E}}\lbrace X\rbrace \rbrack ^{2} \tag {24}\end{equation*}
were used.
We assume that both the in-phase and quadrature components are normally distributed, each with zero mean and the common variance \sigma^{2}\left ({{T_{\mathrm {ant}}}}\right) given by (20) and (21). We further assume that the in-phase and quadrature components, as well as successive samples, are statistically independent. The joint pdf of one complex baseband sample then follows as\begin{align*} & {p}_{\textrm {I,Q}}\left ({{ {s_{\textrm {I}}}\lbrack n \rbrack, {s_{\textrm {Q}}}\lbrack n \rbrack; {T_{\mathrm {ant}}}}}\right) \\ & = \frac {1}{2\pi \sigma ^{2}\left ({{T_{\mathrm {ant}}}}\right)} \exp \left ({{-\frac { {s_{\textrm {I}}}^{2}\lbrack n\rbrack + {s_{\textrm {Q}}}^{2}\lbrack n\rbrack }{2 \sigma ^{2}\left ({{T_{\mathrm {ant}}}}\right)}}}\right). \tag {25}\end{align*}
B. CRLB Derivation
As we have derived the pdf of a complex baseband sample, the CRLB for the antenna temperature follows from (4) and (5). For N independent samples, it is given by\begin{align*} {\mathrm {var}}\lbrace \hat {T}_{\mathrm {ant}}\rbrace & \ge \frac {\left ({{\sigma ^{2}\left ({{T_{\mathrm {ant}}}}\right)}}\right)^{2}}{N} \left ({{\frac {\partial \sigma ^{2}\left ({{T_{\mathrm {ant}}}}\right)}{\partial {T_{\mathrm {ant}}}}}}\right)^{-2} \\ & \ge \frac {1}{N} \left ({{ {T_{\mathrm {ant}}} + \frac { {g}^{2} {T_{\mathrm {rec}}}}{ {g}^{2} + \sigma _{\textrm {g}}^{2}} + \frac {2 \sigma ^{2}_{\textrm {q}}}{\left ({{ {g}^{2} + \sigma ^{2}_{\textrm {g}}}}\right) {k_{\textrm {B}}} B}}}\right)^{2}. \tag {26}\end{align*}
The same result can be stated in terms of standard deviation\begin{align*} & {\mathrm {std}}\lbrace \hat {T}_{\mathrm {ant}}\rbrace \\ & \ge \frac {1}{\sqrt {N}} \left ({{ {T_{\mathrm {ant}}} + \frac { {g}^{2} {T_{\mathrm {rec}}}}{ {g}^{2} + \sigma _{\textrm {g}}^{2}} + \frac {2 \sigma _{\textrm {q}}^{2}}{\left ({{ {g}^{2} + \sigma _{\textrm {g}}^{2}}}\right) {k_{\textrm {B}}} B}}}\right) \tag {27}\end{align*}
For an ideal receiver without receiver noise and quantization noise ({T_{\mathrm {rec}}} = 0 and \sigma _{\textrm {q}}^{2} = 0), this reduces to\begin{equation*} {\mathrm {std}}\lbrace \hat {T}_{\mathrm {ant}}\rbrace \ge \frac {T_{\mathrm {ant}}}{\sqrt {N}} \tag {28}\end{equation*}
which, with N = {B}{\tau } independent complex samples, corresponds to the well-known sensitivity of an ideal total power radiometer\begin{equation*} \Delta T_{\mathrm {ant}} = \frac {T_{\mathrm {ant}}}{\sqrt { {B} {\tau }}}. \tag {29}\end{equation*}
The MVU estimator that attains the CRLB follows from (6) as\begin{align*} \hat {T}_{\mathrm {ant,MVU}} & = \frac {1}{N} \frac {1}{ {k_{\textrm {B}}} {B}\left ({{ {g}^{2}+\sigma _{\textrm {g}}^{2}}}\right)} \sum _{n=0}^{N-1} | {s}\lbrack n\rbrack |^{2} \\ & \quad - \frac { {g}^{2} {T_{\mathrm {rec}}}}{ {g}^{2}+ \sigma _{\textrm {g}}^{2}} - \frac {2 \sigma _{\textrm {q}}^{2}}{\left ({{ {g}^{2} + \sigma _{\textrm {g}}^{2}}}\right) {k_{\textrm {B}}} B}. \tag {30}\end{align*}
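A minimal implementation sketch of the MVU estimator (30) is given below; it assumes the complex baseband samples are available as a NumPy array and that g, \sigma_{\textrm{g}}^{2}, T_{\mathrm{rec}}, \sigma_{\textrm{q}}^{2}, and B are known, which, as noted in the following, is rarely the case in practice.
\begin{verbatim}
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant in J/K

def t_ant_mvu(s, g, sigma_g2, t_rec, sigma_q2, bandwidth):
    """MVU estimate of the antenna temperature following (30)."""
    mean_power = np.mean(np.abs(s)**2)
    return (mean_power / (K_B * bandwidth * (g**2 + sigma_g2))
            - g**2 * t_rec / (g**2 + sigma_g2)
            - 2.0 * sigma_q2 / ((g**2 + sigma_g2) * K_B * bandwidth))
\end{verbatim}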
In practice, the parameters required for implementing the derived MVU are often unavailable. Therefore, we will now examine how a simpler estimator might be affected by these factors.
C. Scaled Variance Estimator
We are going to investigate how the proposed scaled variance estimator (SVE)\begin{equation*} \hat {T}_{\mathrm {ant,SVE}} = \frac {1}{N}\frac {1}{ {k_{\textrm {B}}} {g}^{2} {B}} \sum _{n=0}^{N-1} | {s}\lbrack n\rbrack |^{2} \tag {31}\end{equation*}
performs under the modeled nonidealities. Its expectation follows as\begin{align*} & {\mathbb {E}}\lbrace \hat {T}_{\mathrm {ant,SVE}}\rbrace \\ & = \frac {1}{N}\frac {1}{ {k_{\textrm {B}}} {g}^{2} {B}} \sum _{n=0}^{N-1} {\mathbb {E}}\lbrace | {s}\lbrack n\rbrack |^{2} \rbrace \\ & = \frac { {g}^{2} + \sigma _{\textrm {g}}^{2}}{ {g}^{2}} {T_{\mathrm {ant}}} + {T_{\mathrm {rec}}} + \frac {2 \sigma _{\textrm {q}}^{2}}{ {k_{\textrm {B}}} {g}^{2} B}. \tag {32}\end{align*}
The variance follows as\begin{align*} {\mathrm {var}}\lbrace \hat {T}_{\mathrm {ant,SVE}} \rbrace & = {\mathrm {var}}\Big \lbrace \frac {1}{N} \frac {1}{ {k_{\textrm {B}}} {g}^{2} {B}} \sum _{n=0}^{N-1} | {s}\lbrack n\rbrack |^{2} \Big \rbrace \\ & = \left ({{\frac {1}{N}\frac {1}{ {k_{\textrm {B}}} {g}^{2} {B}}}}\right)^{2} \sum _{n=0}^{N-1} {\mathrm {var}}\lbrace | {s}\lbrack n\rbrack |^{2}\rbrace. \tag {33}\end{align*}
To continue the derivation, we need to find an expression for the variance of the magnitude squared of the complex baseband samples.
We have made the assumption that the in-phase and quadrature components are independent and normally distributed. Using the Gaussian fourth-moment relation {\mathbb {E}}\lbrace x^{4}\rbrace = 3\left ({{\sigma ^{2}}}\right)^{2}, the variance of the magnitude squared follows as\begin{align*} {\mathrm {var}}\lbrace | {s}\lbrack n\rbrack |^{2}\rbrace & = {\mathrm {var}}\lbrace s_{\textrm {I}}^{2}\lbrack n\rbrack + s_{\textrm {Q}}^{2}\lbrack n\rbrack \rbrace \\ & = {\mathbb {E}}\lbrace s_{\textrm {I}}^{4}\lbrack n\rbrack \rbrace - {\mathbb {E}}^{2}\lbrace s_{\textrm {I}}^{2}\lbrack n\rbrack \rbrace + {\mathbb {E}}\lbrace s_{\textrm {Q}}^{4}\lbrack n\rbrack \rbrace - {\mathbb {E}}^{2}\lbrace s_{\textrm {Q}}^{2}\lbrack n\rbrack \rbrace \\ & = 3 \left ({{\sigma ^{2}}}\right)^{2} - \left ({{\sigma ^{2}}}\right)^{2} + 3 \left ({{\sigma ^{2}}}\right)^{2} - \left ({{\sigma ^{2}}}\right)^{2}= 4 \left ({{\sigma ^{2}}}\right)^{2}. \tag {34}\end{align*}
Inserting (34) into (33) yields\begin{align*} & {\mathrm {var}}\lbrace \hat {T}_{\mathrm {ant,SVE}} \rbrace \\ & = \frac {1}{N} \left ({{\frac { {g}^{2} + \sigma _{\textrm {g}}^{2}}{ {g}^{2}} {T_{\mathrm {ant}}} + {T_{\mathrm {rec}}} + \frac {2\sigma _{\textrm {q}}^{2}}{ {k_{\textrm {B}}} {g}^{2} B}}}\right)^{2} \tag {35}\end{align*}
or, in terms of the standard deviation,\begin{align*} & {\mathrm {std}}\lbrace \hat {T}_{\mathrm {ant,SVE}}\rbrace \\ & = \frac {1}{\sqrt {N}} \left ({{\frac { {g}^{2} + \sigma _{\textrm {g}}^{2}}{ {g}^{2}} {T_{\mathrm {ant}}} + {T_{\mathrm {rec}}} + \frac {2\sigma _{\textrm {q}}^{2}}{ {k_{\textrm {B}}} {g}^{2} B}}}\right). \tag {36}\end{align*}
In this case, the standard deviation does not decrease with an increasing variance of the receiver gain variations. On the contrary, the system performance degrades with increasing power of the randomly fluctuating receiver gain process.
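For comparison with the MVU sketch above, a sketch of the SVE (31) follows; it only requires the nominal gain g and the bandwidth B, and, according to (32), its output is biased by the gain-fluctuation, receiver-noise, and quantization terms.
\begin{verbatim}
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant in J/K

def t_ant_sve(s, g, bandwidth):
    """Scaled variance estimate of the antenna temperature following (31)."""
    return np.mean(np.abs(s)**2) / (K_B * g**2 * bandwidth)

# Expected bias relative to T_ant, cf. (32):
#   ((g^2 + sigma_g^2)/g^2 - 1) * T_ant + T_rec + 2*sigma_q^2 / (K_B * g^2 * B)
\end{verbatim}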
D. Considerations on Sensitivity
The sensitivity is generally defined as given in (1). If we apply this expression to the derived estimators, we find\begin{align*} & \Delta T_{\mathrm {ant,MVU}} = \frac {\mathrm {std}\lbrace \hat {T}_{\mathrm {ant,MVU}} \rbrace }{\dfrac {\partial {\mathbb {E}}\lbrace \hat {T}_{\mathrm {ant,MVU}} \rbrace }{\partial {T_{\mathrm {ant}}}}} = \mathrm {std}\lbrace \hat {T}_{\mathrm {ant,MVU}} \rbrace \\ & =\frac {1}{\sqrt {N}} \left ({{ {T_{\mathrm {ant}}} + \frac { {g}^{2} {T_{\mathrm {rec}}}}{ {g}^{2} + \sigma _{\textrm {g}}^{2}} + \frac {2 \sigma _{\textrm {q}}^{2}}{\left ({{ {g}^{2} + \sigma _{\textrm {g}}^{2}}}\right) {k_{\textrm {B}}} B}}}\right) \tag {37}\end{align*}
\begin{align*} \Delta T_{\mathrm {ant,SVE}} & = \frac {\mathrm {std}\lbrace \hat {T}_{\mathrm {ant,SVE}} \rbrace }{\dfrac {\partial {\mathbb {E}}\lbrace \hat {T}_{\mathrm {ant,SVE}} \rbrace }{\partial {T_{\mathrm {ant}}}}} = \frac {\mathrm {std}\lbrace \hat {T}_{\mathrm {ant,SVE}} \rbrace }{\dfrac { {g}^{2} + \sigma _{\textrm {g}}^{2}}{ {g}^{2}}} \\ & = \frac {1}{\sqrt {N}} \left ({{ {T_{\mathrm {ant}}} + \frac { {g}^{2} {T_{\mathrm {rec}}}}{ {g}^{2} + \sigma _{\textrm {g}}^{2}} + \frac {2 \sigma _{\textrm {q}}^{2}}{\left ({{ {g}^{2}+\sigma _{\textrm {g}}^{2}}}\right) {k_{\textrm {B}}} B}}}\right) \tag {38}\end{align*}
E. Receiver Gain Variations
Understanding the variances of the involved random processes, namely the receiver gain variation {\Delta g}\left ({{t}}\right), the additive receiver noise, and the quantization noise, is required to evaluate the derived expressions. The receiver noise and quantization noise variances follow from the receiver noise temperature in (8) and the ADC parameters in (14), respectively. Similarly, a range for the variance of the receiver gain variation {\Delta g}\left ({{t}}\right) can be derived from the relative rms fluctuation of the system power gain G_{\textrm {s}}, which typically lies in the range\begin{equation*} \frac {\Delta G_{\textrm {s},\text {rms}}}{G_{\textrm {s}}} \in \lbrack 10^{-4}, 10^{-2} \rbrack. \tag {39}\end{equation*}
Mathematically, the rms-value can be calculated as\begin{equation*} \Delta G_{\textrm {s},\text {rms}} = \lim _{T\rightarrow \infty } \sqrt {\frac {1}{T}\int _{0}^{T} \Delta G_{\textrm {s}}^{2}\left ({{t}}\right) {d}t}. \tag {40}\end{equation*}
Expressing the fluctuation of the power gain as \Delta G_{\textrm {s}}\left ({{t}}\right) = {\Delta g}^{2}\left ({{t}}\right) and assuming ergodicity, this becomes\begin{align*} \Delta G_{\textrm {s},\text {rms}} & = \lim _{T\rightarrow \infty } \sqrt {\frac {1}{T}\int _{0}^{T} {\Delta g}^{4}\left ({{t}}\right) {d}t} = \sqrt {\lim _{T\rightarrow \infty } \frac {1}{T}\int _{0}^{T} {\Delta g}^{4}\left ({{t}}\right) {d}t} \\ & = \sqrt { {\mathbb {E}}\lbrace {\Delta g}^{4}\left ({{t}}\right) \rbrace }. \tag {41}\end{align*}
This expression depends on the distribution of {\Delta g}\left ({{t}}\right). For a zero-mean Gaussian process, the fourth moment is\begin{equation*} {\mathbb {E}}\lbrace {\Delta g}^{4}\left ({{t}}\right) \rbrace = 3 \left ({{ {\mathrm {var}}\lbrace {\Delta g}\left ({{t}}\right)\rbrace }}\right)^{2}. \tag {42}\end{equation*}
Combining (39), (41), and (42) yields\begin{equation*} {\mathrm {var}}\lbrace {\Delta g}\left ({{t}}\right)\rbrace \in \frac {\lbrack 10^{-4}, 10^{-2} \rbrack }{\sqrt {3}} G_{\textrm {s}}. \tag {43}\end{equation*}
For a system power gain of G_{\textrm {s}} = 10^{6}, that is, 60 dB, this evaluates to\begin{equation*} {\mathrm {var}}\lbrace {\Delta g}\left ({{t}}\right) \rbrace \in \lbrack 57.74, 5773.50 \rbrack. \tag {44}\end{equation*}
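The following short sketch reproduces the numbers in (44) from (39) and (43); the system power gain of 10^6 (60 dB) is the value implied by (44) and is used here purely as an example.
\begin{verbatim}
import numpy as np

G_s = 1e6                          # system power gain, 60 dB (example value)
rel_rms = np.array([1e-4, 1e-2])   # relative rms gain fluctuation range, cf. (39)

var_dg = rel_rms / np.sqrt(3) * G_s   # variance range of Delta g, cf. (43)
print(var_dg)                         # approx. [57.74, 5773.50], cf. (44)
\end{verbatim}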
Simulation
To verify the derived equations, a Monte Carlo simulation was conducted using the signal model from (16) to generate realizations of the baseband signal. The simulation results are presented together with the measurement results in Section V-B.
One should note that when comparing the simulation to the measurement results, not all influences on the real system are modeled in the theoretical framework. For example, the temperature dependency of components and unknown receiver gain variations are not included in the model, which can lead to some differences between the measurements and simulations. To reduce these differences, each element of the receiver chain could be characterized more thoroughly, with these findings incorporated into the simulation. However, in this work, we only considered the artificially added receiver gain variations.
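A condensed sketch of such a Monte Carlo run is shown below. It generates baseband samples following (16), with the constant 1/2 downconversion factor dropped so that (30) and (31) can be applied without rescaling, and compares the MVU estimate with the SVE; all parameter values are assumed examples, not the settings of the measurement setup.
\begin{verbatim}
import numpy as np

K_B = 1.380649e-23
rng = np.random.default_rng(1)

# Assumed example parameters (not the values of the measurement setup)
N = 1_000_000                   # number of complex baseband samples
B = 100e6                       # bandwidth in Hz
T_ant, T_rec = 300.0, 400.0     # antenna / receiver noise temperature in K
g = 1e3                         # amplitude gain (60 dB power gain)
sigma_g2 = 1e3                  # variance of the gain fluctuation Delta g
sigma_q2 = 1e-15                # per-component quantization noise variance in W

def cgauss(var_per_component, n):
    """Complex Gaussian noise with the given per-component variance."""
    std = np.sqrt(var_per_component)
    return rng.normal(0.0, std, n) + 1j * rng.normal(0.0, std, n)

w = cgauss(0.5 * K_B * T_ant * B, N)            # antenna noise, cf. (11)
r_o = cgauss(0.5 * g**2 * K_B * T_rec * B, N)   # receiver noise, cf. (13)
q = cgauss(sigma_q2, N)                         # quantization noise, cf. (14)
dg = rng.normal(0.0, np.sqrt(sigma_g2), N)      # gain fluctuation Delta g[n]
dphi = np.cumsum(rng.normal(0.0, 1e-4, N))      # simple random-walk phase noise (assumed)
phi0, phi1, phi2 = rng.uniform(0.0, 2.0 * np.pi, 3)

# Baseband samples following (16), without the constant 1/2 factor
s = ((g + dg) * w * np.exp(1j * (phi0 + dphi))
     + r_o * np.exp(1j * (phi1 + dphi))
     + q * np.exp(1j * phi2))

mean_power = np.mean(np.abs(s)**2)
t_mvu = (mean_power / (K_B * B * (g**2 + sigma_g2))
         - g**2 * T_rec / (g**2 + sigma_g2)
         - 2.0 * sigma_q2 / ((g**2 + sigma_g2) * K_B * B))
t_sve = mean_power / (K_B * g**2 * B)

print(t_mvu)   # approx. 300 K, unbiased as in (30)
print(t_sve)   # approx. 700 K, biased as predicted by (32)
\end{verbatim}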
Measurements
In Section III, we developed expressions for the expectation and standard deviation of estimators. In the following, these models are validated using real-world measurements.
A. Setup
The general measurement setup is depicted in Fig. 3. It consists of the RFSoC board, with the digital hardware designs introduced in [22], the analog part shown in Fig. 3(a), and a noise diode as a noise source. A detailed description of the whole PDMR is given in [22]; a short summary follows here.
(a) Analog part of the measurement setup is embedded in a temperature-controlled chamber. (b) Whole setup. (c) Schematic of the measurement setup. The temperature stability of the temperature-controlled chamber is specified as ±0.3 K in the datasheet.
The used LNB has an input frequency range from 21.2 to 22.2 GHz and an LO frequency of
The analog part of our PDMR is housed in a temperature-controlled chamber, as a defined environment is crucial for this high-precision measurement task. To visualize the important role of a stable environment, the estimated equivalent input noise temperature in comparison to the physical temperature of the housing of the LNB is shown in Fig. 4.
Measured average physical temperature on the surface of the LNB and estimated average system temperature
It is evident that as the temperature of the LNB increases, the estimated system temperature decreases. This decline can be attributed to the decrease in gain resulting from the elevated physical temperature of the LNB.
Before the measurements were conducted, a calibration was performed with a noise diode as a known signal source. From this calibration and the gain stated in the datasheet of the LNB, an equivalent bandwidth was estimated. For this reason, the bandwidth may vary between the upcoming measurement results.
As depicted in Fig. 3(a), a controllable attenuator was added at the input of the LNB. This allows receiver gain variations to be introduced artificially. To be able to accurately create these artificial receiver gain variations with a defined variance, the system power gain was first measured as a function of the attenuator control voltage.
Measured system power gain over different control voltages of the attenuator. A linear function is fit to the measurements. This fit is used to calculate the needed control voltage
To realize the desired gain function with the added gain fluctuations, the attenuator is controlled by an arbitrary waveform generator (AWG). The control signal for different receiver gain variations
Control voltage
The generated receiver gain variation
B. Results
The introduced setup is used to validate the derived formulas given in Section III. All measurements are done using only the RFSoC
The additive receiver noise is estimated using the Y-factor method [28] with the ON and OFF states of the noise diode: the noise figure of the setup is estimated first, and (8) is then used to calculate the equivalent receiver noise temperature {T_{\mathrm {rec}}}. The gain-bandwidth product is estimated from the power measured in the ON state as\begin{equation*} {g}^{2} {B} = \frac {P_{\mathrm {on}}}{ {k_{\textrm {B}}} T_{\mathrm {on}}} \tag {45}\end{equation*}
where T_{\mathrm {on}} denotes the equivalent input noise temperature with the noise diode switched on.
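A hedged sketch of this calibration step is shown below; the ON/OFF powers and the diode noise temperatures are placeholder values (an ENR of roughly 15 dB is assumed), and the definition of T_on as the total equivalent input noise temperature in the ON state is an assumption of this sketch.
\begin{verbatim}
K_B = 1.380649e-23   # Boltzmann constant in J/K
T_0 = 290.0          # standard reference temperature in K

# Placeholder measurement values (not from this work)
P_on = 1.37e-6       # measured power, noise diode ON, in W
P_off = 1.00e-7      # measured power, noise diode OFF, in W
T_hot = 9460.0       # diode noise temperature in the ON state (about 15 dB ENR)
T_cold = T_0         # noise temperature in the OFF state (assumed)

Y = P_on / P_off                           # Y-factor
T_rec = (T_hot - Y * T_cold) / (Y - 1.0)   # receiver noise temperature
F = 1.0 + T_rec / T_0                      # noise figure, inverse of (8)

T_on = T_hot + T_rec       # total input noise temperature, ON state (assumption)
g2B = P_on / (K_B * T_on)  # gain-bandwidth product, cf. (45)
print(T_rec, F, g2B)
\end{verbatim}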
The standard deviation decreases as the number of averaged samples N increases. To check whether this behaves as expected, the measured variance of the estimators for
Measured variance of the estimators over different integration lengths N relative to the measured variance at
The influence of the receiver gain variation is validated next. The expectation and standard deviation of the estimators versus the variance of the receiver gain variation can be seen in Fig. 8. The theoretical values match the simulation and measurement data. The slight offset can be explained by effects that are not modeled in the measurement, for example, the receiver gain variation of the LNB itself, and by the fact that the simulated and measured samples may not fully adhere to a Gaussian distribution, which was assumed during the derivation. Consistent with theoretical expectations, the MVU provides an unbiased estimate of the input power for all
Expectation and standard deviation of the estimators for different receiver gain variations
Another influence on the performance of the estimators is the quantization noise {q}\lbrack n\rbrack.
As it is not possible to change the physical bitwidth of the ADC on the RFSoC evaluation board, we resorted to performing the quantization for different bitwidths in software, using rounding, that is,\begin{equation*} s_{\textrm {I},\text {quant}}\lbrack n\rbrack = \mathrm {round}\left ({{\frac {s_{\textrm {I}}\lbrack n\rbrack }{\Delta }}}\right) \times \Delta \tag {46}\end{equation*}
and analogously for the quadrature component.
The relationship between bitwidth and quantization noise power is given by (14).
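A minimal sketch of this software quantization (46) and its agreement with the \Delta^{2}/12 error model underlying (14) is given below; the bitwidth, full-scale range, and signal level are assumed example values.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

bits = 8                         # target bitwidth (assumed)
v_fs = 1.0                       # full-scale voltage range in V (assumed)
delta = v_fs / 2**bits           # quantization step size

s_i = rng.normal(0.0, 0.05, 1_000_000)        # example in-phase samples in V
s_i_quant = np.round(s_i / delta) * delta     # rounding quantizer, cf. (46)

err_var = np.var(s_i - s_i_quant)
print(err_var, delta**2 / 12.0)   # both approx. 1.27e-6 V^2
\end{verbatim}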
The impact of the quantization process on the expected values of the estimators is depicted in Fig. 9.
Influence of the bitwidth of the ADC on the expected value and standard deviation of the estimators given in (30) and (31). The simulation data were generated using the Monte Carlo method. The measurement results are estimated by averaging over
While the MVU provides an unbiased estimate of the input power, the SVE tends to yield a higher equivalent input temperature. Notably, the expected value remains relatively consistent across different integration times N, as predicted by (32) and
Furthermore, the standard deviation of the estimators is also affected by quantization noise, as illustrated in Fig. 9(b). Although it is assumed that
Another influence on the performance of the estimators is the additive receiver noise modeled by {r_{\textrm {o}}}\left ({{t}}\right).
Influence of the additive receiver noise modeled by
The last investigated influence is phase noise. In the theoretical derivation, the phase noise of the LNB cancels completely and, therefore, should not influence the expectation or standard deviation of the estimators. To check this behavior, a simulation was conducted in which the phase noise was generated from the phase noise spectrum specified for the used LNB. The phase noise spectrum was shifted vertically in the frequency domain, and the variance of the time-domain process was calculated by integrating over the spectral representation. The simulation result is shown in Fig. 11.
Influence of the phase noise of the LNB on the expected value and standard deviation of the estimators given in (30) and (31). The simulation data were generated using the Monte Carlo method. Used parameters:
As expected by the theoretical results, the phase noise of the LNB does not influence the estimation result.
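The phase-noise realizations for such a simulation can be synthesized, for example, by shaping white Gaussian noise with the one-sided phase-noise PSD in the frequency domain; the sketch below uses this standard approach with a placeholder phase-noise mask (the offsets and dBc/Hz values are assumptions and not the datasheet values of the LNB).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

fs = 100e6        # complex sampling rate in Hz (assumed)
N = 2**20         # number of samples
f = np.fft.rfftfreq(N, d=1.0 / fs)

# Placeholder phase-noise mask in dBc/Hz at the given offsets (assumed values)
offsets = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
l_dbc = np.array([-70.0, -80.0, -95.0, -110.0, -120.0])

# Interpolate the mask on the FFT grid and convert to a one-sided PSD in rad^2/Hz
l_interp = np.interp(np.log10(np.maximum(f, f[1])), np.log10(offsets), l_dbc)
s_phi = 2.0 * 10.0**(l_interp / 10.0)

# Shape white Gaussian noise in the frequency domain and transform back
X = (rng.normal(size=f.size) + 1j * rng.normal(size=f.size)) * np.sqrt(s_phi * fs * N / 4.0)
X[0] = 0.0                     # no DC component
X[-1] = X[-1].real             # Nyquist bin must be real
dphi = np.fft.irfft(X, n=N)    # time-domain phase-noise realization in rad

df = f[1] - f[0]
print(np.var(dphi), np.sum(s_phi[1:]) * df)   # sample variance vs. integrated PSD
\end{verbatim}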
Conclusion
In this article, a novel way of deriving the sensitivity of a digital total power radiometer using the CRLB was introduced. The receiver of the digital total power radiometer was mathematically modeled, and the MVU estimator was derived. The derived MVU estimator assumes knowledge of the statistics of various random processes. For a practical implementation, knowledge of these statistics cannot be assumed. Therefore, a suboptimal but simple estimator, which works without knowledge of the noise parameters, was introduced. This estimator was investigated regarding its performance when imperfections are present.
Theoretical results were validated using measurements and simulations. The measurements showed the importance of a controlled environment for this kind of highly sensitive device. This became especially evident with a long-term measurement of the antenna temperature in comparison to the physical temperature of the LNB.
The theoretical expressions for the variance, standard deviation, and expectation of the introduced estimators were validated using measurements and simulations, especially the dependence of these quantities on the receiver gain variations, quantization noise, additive noise, and phase noise. This provides novel insights and a deeper understanding of digital total power radiometers.
In future work, this theoretical framework and the developed estimators could be applied to the design of digital radiometers, providing insights into how various nonidealities affect device performance and sensitivity. This approach could help optimize radiometer sensitivity under real-world conditions and guide improvements in system design.