Validation of satellite land surface temperature (LST) is a challenge because of the spectral, spatial, and temporal variability of land surface emissivity. Highly accurate in situ LST measurements are required for validating satellite LST products but are very hard to obtain, except at discrete points or over very short time periods (e.g., during field campaigns). Comparing these field-measured point data with moderate-resolution (1 km) satellite products requires a scaling process that can introduce errors that ultimately exceed those in the satellite-derived LST products whose validation is sought. This paper presents a new method of validating the Geostationary Operational Environmental Satellite (GOES) R-Series (GOES-R) Advanced Baseline Imager (ABI) LST algorithm. The method accounts for the error structures of both the ground and satellite data sets: it applies a linear fitting model to the satellite data and coregistered "match-up" ground data to estimate the precisions of both data sets. In this paper, GOES-8 Imager data were used as a proxy for the GOES-R ABI data for the satellite LST derivation. The in situ data set was obtained from the National Oceanic and Atmospheric Administration's SURFace RADiation budget (SURFRAD) network using a stringent match-up process. The data cover one year of GOES-8 Imager observations over six SURFRAD sites. For each site, more than 1000 cloud-free match-up data pairs were obtained for day and night to ensure statistical significance. The average precision over all six sites was found to be 1.58 K, as compared to the GOES-R LST required precision of 2.3 K. The least precise comparison at an individual SURFRAD site was 1.8 K. The conclusion is that, for these ground truth sites, the GOES-R LST algorithm meets the specifications and that an upper boundary on the precision of the satellite LSTs can be determined.
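The match-up approach described above can be illustrated with a minimal sketch: fit a straight line between coregistered ground and satellite LST pairs and take the residual scatter about that line as a precision estimate. The data below are synthetic placeholders with an assumed noise level, not SURFRAD or GOES-8 measurements, and the residual-standard-deviation metric is one plausible reading of the paper's "precision", not a reproduction of its exact estimator.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical match-up pairs: ground-site LST (K) and satellite LST (K).
# The ~1.5 K Gaussian noise is an illustrative assumption, not a SURFRAD value.
n = 1000
lst_ground = rng.uniform(270.0, 320.0, size=n)
lst_sat = lst_ground + rng.normal(0.0, 1.5, size=n)

# Linear fit of satellite LST against coregistered ground LST.
slope, intercept = np.polyfit(lst_ground, lst_sat, 1)

# Precision estimate: standard deviation of residuals about the fitted line
# (ddof=2 accounts for the two fitted parameters).
residuals = lst_sat - (slope * lst_ground + intercept)
precision = residuals.std(ddof=2)

print(f"slope={slope:.3f}, intercept={intercept:.2f} K, precision={precision:.2f} K")
```

With a large number of cloud-free pairs per site, as in the paper, the residual standard deviation becomes a statistically stable quantity, which is why the abstract stresses obtaining more than 1000 match-ups for both day and night.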