The full chain of processing required to view a pixel includes antialiasing, offset sampling, color-space projection, reconstruction-filter compensation, compositing, gamma correction, and quantization and dithering. Looking across these operations, a pattern emerges: almost all of them throw away information. We filter out high frequencies, quantize intensities into bins, project a continuous color spectrum onto three numbers, and represent geometric edges with a single transparency value. Seen this way, an ordinary hardware pixel, whether refreshed on the screen or stored in a file, is simply a bad data-compression technique. Any rendering algorithm or image-processing operation that converts data to pixels generally loses information about its input: a few polygons become thousands of pixels; a high-resolution image becomes a low-resolution one. Conversion to pixels for viewing used to be a slow operation, but with faster processors we no longer need to generate the image offline for speed. We can recalculate the image whenever we need to look at it.
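The lossiness of two of these steps, gamma correction followed by quantization, can be made concrete with a small sketch. The function names and the gamma value of 2.2 below are illustrative assumptions, not anything defined in the text; the point is only that binning intensities into 8 bits is a many-to-one mapping, so the original data cannot be recovered from the stored pixel.

```python
def encode(linear, gamma=2.2):
    """Gamma-correct a linear intensity in [0, 1] and quantize to 8 bits."""
    corrected = linear ** (1.0 / gamma)
    return round(corrected * 255)  # quantization: many intensities -> one bin

def decode(pixel, gamma=2.2):
    """Invert the encoding as well as possible; the binning loss remains."""
    return (pixel / 255) ** gamma

a, b = 0.5000, 0.5005            # two distinct scene intensities
pa, pb = encode(a), encode(b)
assert pa == pb                  # both land in the same 8-bit bin...
assert decode(pa) != a           # ...so neither original is recoverable
```

Dithering trades this per-pixel error for noise spread across neighboring pixels, but the information discarded by the bins themselves is gone either way.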