Learning Probabilistic Automata for Model Checking

Authors (6): Hua Mao (Dept. of Comput. Sci., Aalborg Univ., Aalborg, Denmark); Yingke Chen; M. Jaeger; T. D. Nielsen; et al.

Obtaining accurate system models for verification is a hard and time-consuming process, which industry sees as a hindrance to adopting otherwise powerful model-driven development techniques and tools. In this paper, we pursue an alternative approach in which an accurate high-level model is automatically constructed from observations of a given black-box embedded system. We adapt algorithms for learning finite probabilistic automata from observed system behaviors. We prove that in the limit of large sample sizes the learned model is an accurate representation of the data-generating system. In particular, in the large-sample limit, the learned model and the original system define the same probabilities for linear temporal logic (LTL) properties. Thus, we can perform PLTL model checking on the learned model to infer properties of the system. We perform experiments learning models from system observations at different levels of abstraction. The experimental results show that the learned models provide very good approximations for relevant properties of the original system.
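The abstract does not spell out the learning procedure. As an illustration only, the sketch below (assuming traces of observation symbols, a hypothetical END marker, and a made-up prob_eventually helper) builds a frequency prefix tree from sampled behaviors and evaluates a bounded "eventually" property on the estimated transition probabilities. It is a heavily simplified stand-in for the state-merging learners and PLTL model checking described in the paper, not the authors' algorithm.

from collections import defaultdict, Counter

END = "$"  # hypothetical end-of-trace marker so termination mass is represented

def build_fpta(traces):
    # Frequency prefix tree acceptor: counts of next symbols after each observed prefix.
    counts = defaultdict(Counter)
    for trace in traces:
        t = tuple(trace) + (END,)
        for i, sym in enumerate(t):
            counts[t[:i]][sym] += 1
    return counts

def prob_eventually(counts, target, horizon):
    # Probability, under the learned model, that `target` is observed within `horizon` steps.
    def rec(prefix, depth):
        successors = counts.get(prefix)
        if depth == horizon or not successors:
            return 0.0
        total = sum(successors.values())
        p = 0.0
        for sym, c in successors.items():
            weight = c / total  # maximum-likelihood transition estimate
            if sym == target:
                p += weight
            else:
                p += weight * rec(prefix + (sym,), depth + 1)
        return p
    return rec((), 0)

# Hypothetical traces sampled from the black-box system under test.
traces = [("req", "ok"), ("req", "ok"), ("req", "err"), ("req", "ok", "req", "err")]
fpta = build_fpta(traces)
print(prob_eventually(fpta, "err", horizon=3))  # estimate of P(err observed within 3 steps)

In the paper's setting, a state-merging step would then generalize such a tree into a compact probabilistic automaton before model checking; the sketch omits that step.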

Published in:

2011 Eighth International Conference on Quantitative Evaluation of Systems (QEST)

Date of Conference:

5-8 Sept. 2011