An important and still unsolved problem today is the automatic quantification of the quality of video flows transmitted over packet networks. In particular, the ability to perform this task in real time (typically for streams that are themselves sent in real time) is especially interesting. The problem remains unsolved because many parameters affect video quality, and their combined effect is neither well identified nor well understood. Among these parameters are the source bit rate, the encoded frame type, the frame rate at the source, the packet loss rate in the network, etc. Only subjective evaluations give good results but, by definition, they are not automatic. We have previously explored the possibility of using artificial neural networks (NNs) to automatically quantify the quality of video flows, and we showed that they can give results well correlated with human perception. In this paper, our goal is twofold. First, we report on a significant enhancement of our method by means of a new neural approach, the random NN model, and its learning algorithm, both of which offer better performance for our application. Second, we follow our approach to study and analyze the behavior of video quality under wide-range variations of a set of selected parameters. This may help in developing control mechanisms to deliver the best possible video quality given the current network situation, and in better understanding QoS aspects of multimedia engineering.
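To make the random NN idea concrete, the following is a minimal sketch (not the authors' implementation) of the steady-state computation of a feedforward random neural network in the Gelenbe style, mapping stream parameters such as bit rate and loss rate to a quality score. All weight values, layer sizes, feature names, and the final mapping to a MOS-like scale are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rnn_forward(x, Wp_ih, Wn_ih, Wp_ho, Wn_ho):
    """Steady-state firing probabilities of a 3-layer feedforward
    random neural network (Gelenbe model).

    x      : external positive-signal arrival rates at the input layer.
    Wp_*   : non-negative excitatory weight matrices.
    Wn_*   : non-negative inhibitory weight matrices.
    """
    # A neuron's firing rate is the total weight leaving it.
    r_in = (Wp_ih + Wn_ih).sum(axis=1)       # input-layer firing rates
    r_hid = (Wp_ho + Wn_ho).sum(axis=1)      # hidden-layer firing rates

    # Input neurons receive only external excitation.
    q_in = np.minimum(x / r_in, 1.0)

    # Hidden neurons: excitatory and inhibitory flows from the inputs.
    lam_p = q_in @ Wp_ih
    lam_n = q_in @ Wn_ih
    q_hid = np.minimum(lam_p / (r_hid + lam_n), 1.0)

    # Single output neuron with an assumed fixed firing rate.
    r_out = 1.0
    lam_p_o = q_hid @ Wp_ho
    lam_n_o = q_hid @ Wn_ho
    q_out = np.minimum(lam_p_o / (r_out + lam_n_o), 1.0)
    return float(q_out[0])

# Illustrative example: 4 hypothetical stream parameters
# (bit rate, frame rate, packet loss rate, I-frame ratio),
# 6 hidden neurons, 1 output neuron; random weights stand in
# for weights that would come from the learning algorithm.
rng = np.random.default_rng(0)
Wp_ih = rng.uniform(0.1, 1.0, (4, 6))
Wn_ih = rng.uniform(0.1, 1.0, (4, 6))
Wp_ho = rng.uniform(0.1, 1.0, (6, 1))
Wn_ho = rng.uniform(0.1, 1.0, (6, 1))

x = np.array([2.0, 1.0, 0.5, 0.2])
q = rnn_forward(x, Wp_ih, Wn_ih, Wp_ho, Wn_ho)
mos = 1.0 + 4.0 * q   # assumed linear mapping to a 1..5 MOS scale
print(q, mos)
```

In the actual method, the weights would be fitted so that the network's output correlates with subjective (human) quality scores; the closed-form steady-state equations above are what distinguish the random NN from a conventional sigmoid network.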