Classification of video sequences into specified Generalized Use Classes of target size and lighting level

2 Author(s)
Witkowski, M.; Leszczuk, M.I. (AGH Univ. of Sci. & Technol., Kraków, Poland)

Video transmission and analysis are frequently employed in a variety of applications outside the entertainment sector, generally to perform specific tasks. The Quality of Experience (QoE) concept for video content used for entertainment differs significantly from the QoE of video used for recognition tasks, because in the latter case the subjective satisfaction of the user depends on achieving the given task, e.g. event detection or object recognition. Additionally, video quality as perceived by a human observer is distinct from the objective video quality used in computer processing (Computer Vision). The VQiPS (Video Quality in Public Safety) Working Group, established in 2009 and supported by the U.S. Department of Homeland Security's Office for Interoperability and Compatibility, has been developing a user guide for public safety video applications. According to VQiPS, the ability to achieve a recognition task is influenced by many parameters, five of which have been selected as being of particular importance: usage time-frame, discrimination level, target size, lighting level, and level of motion. These parameters form what are referred to as Generalized Use Classes, or GUCs. The aim of our research was to develop algorithms that automatically classify input sequences into one of the GUCs; the target size and lighting level parameters were addressed. The described experiment reveals the ambiguity and hesitation of the experts during the manual target size determination process. Nevertheless, the developed automatic methods of target size classification make it possible to determine GUC parameters with satisfactory efficiency, reaching a compliance level of 70% with end-users' opinions. The lighting level of an entire sequence can be classified with an efficiency reaching 93%.
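The abstract does not describe the classification algorithms themselves. Purely as an illustration of the kind of decision rule such a classifier might apply, here is a minimal sketch of threshold-based classifiers for the lighting level (mean luma of the sequence) and target size (target-to-frame area ratio) parameters. All threshold values, class labels, and function names below are assumptions for illustration, not values or methods taken from the paper:

```python
def classify_lighting(frames, dark_threshold=50.0, bright_threshold=170.0):
    """Classify a sequence's lighting level from its mean luma.

    frames: list of frames; each frame is a list of rows of 8-bit
    luma samples (0-255). Thresholds are illustrative assumptions.
    """
    total = 0
    count = 0
    for frame in frames:
        for row in frame:
            total += sum(row)
            count += len(row)
    mean_luma = total / count
    if mean_luma < dark_threshold:
        return "low"
    if mean_luma > bright_threshold:
        return "high"
    return "medium"


def classify_target_size(target_area, frame_area,
                         small_threshold=0.02, large_threshold=0.10):
    """Classify target size by the ratio of target area to frame area.

    Areas are in pixels; the ratio thresholds are illustrative assumptions.
    """
    ratio = target_area / frame_area
    if ratio < small_threshold:
        return "small"
    if ratio > large_threshold:
        return "large"
    return "medium"


# Synthetic examples: three 4x4 frames of uniform luma 20 (dark)
# and of uniform luma 200 (bright).
dark_seq = [[[20] * 4 for _ in range(4)] for _ in range(3)]
bright_seq = [[[200] * 4 for _ in range(4)] for _ in range(3)]
print(classify_lighting(dark_seq))          # low
print(classify_lighting(bright_seq))        # high
print(classify_target_size(100, 100000))    # small
print(classify_target_size(20000, 100000))  # large
```

A real system would of course derive luma from decoded video frames and obtain the target area from a detector or from manual annotation; the sketch only shows the final thresholding step.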

Published in:

2012 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB)

Date of Conference:

27-29 June 2012