I. Introduction
With the development and application of electrification, connectivity, intelligence, and sharing, autonomous vehicles have become a strategic direction of the global automotive industry, and their safety has gradually become a focus of public attention. In July 2019, eleven companies, including Baidu, BMW, and Intel, jointly released the white paper "Safety First for Automated Driving" [1], which aims to promote safety research on autonomous vehicles. As the functional architecture of autonomous driving systems has become increasingly complex, the analysis of functional safety problems caused by failure risks, as covered by the international standard ISO 26262, can no longer satisfy the safety analysis requirements of such highly complex systems, and safety risks that arise even when the system does not fail have attracted more and more attention. ISO/PAS 21448 attributes such risks to the Safety of the Intended Functionality (SOTIF) [2], [3] and defines them in detail: the safety risks caused by performance limitations and functional insufficiencies of the perception and actuation systems of automated driving, by defects in the perception, decision, and control algorithms, by special weather or traffic conditions, or by reasonably foreseeable misuse by persons.

For example, in the two consecutive Boeing 737 MAX crashes in 2018 and 2019, the angle-of-attack sensor on the nose of the aircraft produced incorrect aircraft attitude signals due to improper calibration; the automatic anti-stall system took over the aircraft after receiving the erroneous signals and issued wrong commands, which led to the tragic crashes and deaths. In another case, a Tesla Model S hit a road sweeper on an expressway without taking any emergency braking measures, and the driver of the sweeper died on the spot. These are typical severe aviation and traffic accidents caused by defects in the interaction and coordination of software and hardware.