The transmission and analysis of video is widely used for applications outside the entertainment sector, where it generally serves a specific task. The Quality of Experience (QoE) of video content used for entertainment differs significantly from the QoE of video used for recognition tasks, because in the latter case the subjective satisfaction of the user depends on accomplishing a given task, e.g. event detection or object recognition. Moreover, video quality as perceived by a human observer is distinct from the objective video quality relevant to computer processing (Computer Vision). The Video Quality in Public Safety (VQiPS) Working Group, established in 2009 and supported by the U.S. Department of Homeland Security's Office for Interoperability and Compatibility, has been developing a user guide for public safety video applications. According to VQiPS, the ability to accomplish a recognition task is influenced by many parameters, five of which have been selected as particularly important: usage time-frame, discrimination level, target size, lighting level, and level of motion. Together, these parameters define what are referred to as Generalized Use Classes (GUCs). The aim of our research was to develop algorithms that automatically classify input sequences into one of the GUCs; the target size and lighting level parameters were addressed. The experiment described here reveals the ambiguity and hesitation of experts during manual target size determination. Nevertheless, the developed automatic methods of target size classification determine the GUC parameter with satisfactory efficiency, reaching 70% agreement with end-user opinion. The lighting level of an entire sequence can be classified with an efficiency of up to 93%.
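The abstract does not specify how the lighting-level classifier works; the following is a minimal sketch of one plausible approach, assuming the class is decided from the mean luma of the sequence's frames. The class names and the threshold value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def classify_lighting(frames, threshold=80.0):
    """Classify a sequence's lighting level from mean luma.

    frames: iterable of 2-D arrays of 8-bit luma (Y) values.
    threshold: hypothetical mean-luma cutoff separating the
    assumed 'low light' and 'normal light' classes.
    """
    mean_luma = float(np.mean([np.mean(f) for f in frames]))
    return "normal light" if mean_luma >= threshold else "low light"

# Synthetic example: a uniformly dark and a uniformly bright sequence.
dark = [np.full((4, 4), 20, dtype=np.uint8) for _ in range(3)]
bright = [np.full((4, 4), 180, dtype=np.uint8) for _ in range(3)]
```

In practice such a classifier would operate on the luma plane of decoded frames, and the threshold would be calibrated against expert-labelled sequences rather than fixed a priori.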