An ever-growing number of real-world computer vision applications require classification, segmentation, retrieval, or realistic rendering of genuine materials. However, the appearance of real materials changes dramatically with variations in illumination and viewing conditions. Thus, the only reliable representation of a material's visual properties requires capturing its reflectance over as wide a range of light and camera position combinations as possible. This is the principle behind the most advanced recent texture representation, the bidirectional texture function (BTF). A multispectral BTF is a seven-dimensional function that depends on the wavelength, the view and illumination directions, and the planar texture coordinates. A BTF is typically obtained by measuring thousands of images covering many combinations of illumination and viewing angles. However, the large size of such measurements prohibited their practical exploitation in any sensible application until recently. During the last few years, the first BTF measurement, compression, modeling, and rendering methods have emerged. In this paper, we categorize, critically survey, and psychophysically compare these approaches, published in this newly emerging and important area of computer vision and graphics.
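The seven-dimensional structure and the resulting data volume can be sketched as follows; this is a minimal illustration, not any particular measurement setup, and all axis sizes (81 direction samples, a 64x64 texture patch, 3 spectral bands) are hypothetical placeholders:

```python
import numpy as np

# Hypothetical discretized BTF: a stack of measured images indexed by
# sampled illumination and view directions. All sizes are illustrative.
N_ILLUM, N_VIEW = 81, 81        # sampled illumination/view (theta, phi) pairs
H, W, BANDS = 64, 64, 3         # planar texture resolution, spectral bands

# Synthetic stand-in for thousands of measured images:
# btf[i_illum, i_view, y, x, band] -> reflectance sample.
rng = np.random.default_rng(0)
btf = rng.random((N_ILLUM, N_VIEW, H, W, BANDS), dtype=np.float32)

def sample_btf(btf, i_illum, i_view, x, y):
    """Return the multispectral reflectance at planar position (x, y)
    for one illumination/view direction combination."""
    return btf[i_illum, i_view, y, x]

pixel = sample_btf(btf, i_illum=40, i_view=12, x=10, y=20)
print(pixel.shape)                     # one value per spectral band
print(round(btf.nbytes / 2**20, 1))    # raw size in MiB
```

Even at this modest resolution the raw tensor occupies roughly 300 MiB, which illustrates why compression and compact modeling are essential before BTF data can be used in practice.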