Earth observation satellites provide a valuable source of data that, when properly processed, can be used to better understand the dynamics of the Earth system. In this regard, one of the prerequisites for the analysis of satellite image time series is that the images be spatially coregistered so that the resulting multitemporal pixel entities offer a true temporal view of the area under study. This implies that all the observations must be mapped to a common system of grid cells. This process is known as gridding and, in practice, two common grids can be used as a reference: 1) a grid defined by some external data set (e.g., an existing land-cover map) or 2) a grid defined by one of the images in the time series. The aim of this paper is to study the impact that gridding has on the quality of satellite time series. More precisely, the impact of the so-called gridding artifacts is quantified using a time series of 12 images acquired over The Netherlands by the Medium Resolution Imaging Spectrometer (MERIS). First, the impact of selecting a reference grid is evaluated in terms of geolocation errors and pixel overlap. Then, the effect of observation geometry is studied, since nongeostationary satellites such as MERIS can acquire images of the same area from a number of different orbits. Finally, a high-resolution land-cover data set is used to assess temporal information consistency (pixel homogeneity in terms of land-cover composition). Results show an average pixel overlap with the nearest pixel of between 20% and 41%, depending on the selected reference grid and on the differences in observation geometry. These results indicate that inappropriate gridding might yield collocated time series that are inadequate for temporal studies at the pixel level (particularly over nonhomogeneous areas) and that, in any case, it is advisable to identify areas with low pixel overlap in order to further analyze the reliability of the products derived over these areas.
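The pixel-overlap figures above can be made concrete with a simplified geometric model. The sketch below (an illustration, not the paper's actual method) assumes axis-aligned square pixel footprints on a regular grid, whereas real MERIS footprints vary with viewing geometry; it computes the fractional area shared by an observed pixel and the nearest reference-grid cell, given the sub-pixel offset between them:

```python
def pixel_overlap(dx, dy, size=1.0):
    """Fractional area overlap between a square pixel of side `size`
    and a reference-grid cell offset by (dx, dy).

    Simplified model for illustration only: axis-aligned square
    footprints on a regular grid; actual satellite footprints are
    distorted by the observation geometry.
    """
    ox = max(0.0, size - abs(dx))  # overlap width along x
    oy = max(0.0, size - abs(dy))  # overlap width along y
    return (ox * oy) / (size * size)

# A half-pixel shift in both directions leaves only 25% overlap,
# which is why sub-pixel geolocation error matters for time series.
print(pixel_overlap(0.5, 0.5))  # 0.25
print(pixel_overlap(0.0, 0.0))  # 1.0
```

Even under this idealized model, modest sub-pixel misregistration drives overlap toward the 20%-41% range reported above, so a multitemporal "pixel" may in fact sample substantially different ground areas from one acquisition to the next.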