Human detection has long been an important task in computer vision, but many implementations lack the real-time performance that real-world applications require. This paper presents a real-time implementation of human detection in video using the state-of-the-art histograms of oriented gradients (HOG) method. Each frame of the video sequence is tested at multiple scales using a sliding window. A histogram of oriented gradients is computed for each window and passed to a support vector machine, which classifies the window as human or not. The HOG method is implemented on a GPU using the NVIDIA CUDA architecture. The implementation significantly speeds up computation, achieving approximately 38 frames per second on VGA video while testing 11,160 windows per frame, with accuracy comparable to that of the CPU implementation. The flexibility and computational power that the GPU affords are also discussed; these discussions should benefit researchers interested in using a GPU for high-performance computing tasks.
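To make the descriptor stage concrete, the following is a minimal NumPy sketch of the core HOG computation described above: per-pixel gradients are accumulated into orientation histograms over small cells. It is an illustrative simplification, not the paper's CUDA implementation; block normalization, gradient interpolation, the sliding-window loop, and the SVM classifier are omitted, and the cell size and bin count are conventional Dalal-Triggs defaults assumed here.

```python
import numpy as np

def hog_cell_histograms(image, cell_size=8, n_bins=9):
    """Simplified HOG sketch: unnormalized orientation histograms per cell.

    image: 2-D grayscale array. Returns an array of shape
    (rows // cell_size, cols // cell_size, n_bins).
    """
    img = image.astype(np.float64)
    # Centered gradients in x and y (borders left at zero)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    h, w = img.shape
    cy, cx = h // cell_size, w // cell_size
    hist = np.zeros((cy, cx, n_bins))
    bin_width = 180.0 / n_bins
    # Vote each pixel's gradient magnitude into its cell's orientation bin
    for i in range(cy * cell_size):
        for j in range(cx * cell_size):
            b = int(ang[i, j] // bin_width) % n_bins
            hist[i // cell_size, j // cell_size, b] += mag[i, j]
    return hist
```

In the full detector, these cell histograms are grouped into overlapping blocks, contrast-normalized, and concatenated into one feature vector per window before being scored by the SVM; the GPU implementation parallelizes this work across the thousands of windows tested in each frame.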