Traffic sign inventories for road safety and maintenance are created from street-level panoramic images. Because of the large capturing interval, large viewpoint deviations occur between the different capturings of the same sign. These viewpoint variations complicate the classification stage, which must select the correct sign type out of a large number of nearly similar sign types, and they typically lead to misclassifications. This paper describes a novel approach that incorporates viewpoint information into the classification procedure: the sign orientation is estimated based on dense matching, each sample is then corrected to a frontal viewpoint and classified, and the final sign type is obtained by weighted voting. Large-scale experiments involving 2,224 traffic signs show that this approach reduces the misclassification rate by about 33% compared to the single-view case.
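The final weighted-voting step of the pipeline can be sketched as follows. This is a minimal illustration, not the authors' exact method: the function name, the input format, and the choice of weighting each view by the cosine of its estimated orientation angle (so near-frontal views count more) are assumptions made here for clarity.

```python
import math

def weighted_vote(per_view_predictions):
    """Combine per-view classifications by weighted voting.

    per_view_predictions: list of (sign_type, orientation_deg) pairs,
    where orientation_deg is the estimated angle between the camera
    axis and the sign's surface normal (0 = frontal view).
    The weight cos(orientation) is an illustrative choice: views
    closer to frontal are less distorted by the rectification and
    are therefore trusted more.
    """
    scores = {}
    for sign_type, orientation_deg in per_view_predictions:
        weight = max(math.cos(math.radians(orientation_deg)), 0.0)
        scores[sign_type] = scores.get(sign_type, 0.0) + weight
    # Return the sign type with the highest accumulated weight.
    return max(scores, key=scores.get)

# Three capturings of the same sign: two near-frontal views agree,
# while a steep-angle view misclassifies but carries little weight.
views = [("B01", 10.0), ("B01", 25.0), ("C14", 70.0)]
print(weighted_vote(views))  # -> B01
```

Under this weighting, the two near-frontal votes for "B01" accumulate roughly 1.9 while the steep view contributes only about 0.34, so the single-view misclassification is outvoted.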