A Turing test is a promising way to validate AI systems, which usually offer no way to prove correctness. However, the human experts (validators) who must participate are often too busy, and their opinions can differ between individuals as well as across validation sessions. To cope with these problems and increase validation dependability, a validation knowledge base (VKB) for Turing test-like validation is proposed. The VKB is constructed and maintained across validation sessions. Its primary benefits are (1) decreasing validators' workload, (2) refining the methodology itself, e.g. selecting dependable validators using the VKB, and (3) increasing AI systems' dependability through dependable validation, e.g. supporting the identification of optimal solutions. Finally, validation expert software agents (VESAs) are introduced to further overcome the limitations of human validators' dependability. Each VESA is a software agent corresponding to a particular human validator. This suggests the ability to systematically "construct" human-like validators by keeping personal validation knowledge per corresponding validator, bringing a new dimension to dependable AI systems.
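To make the VKB/VESA idea concrete, the following is a minimal illustrative sketch, not the paper's implementation: all class names, fields, and the majority-vote policy are assumptions. It stores per-validator judgments across sessions, computes a crude dependability score, and lets an agent replay a validator's accumulated verdicts.

```python
from dataclasses import dataclass, field

@dataclass
class Judgment:
    case_id: str   # hypothetical identifier of the AI output being validated
    verdict: bool  # validator's pass/fail decision
    session: int   # validation session number

@dataclass
class ValidationKnowledgeBase:
    # judgments accumulated across sessions, keyed by validator name
    records: dict = field(default_factory=dict)

    def add(self, validator: str, judgment: Judgment) -> None:
        self.records.setdefault(validator, []).append(judgment)

    def consistency(self, validator: str) -> float:
        """Fraction of cases on which the validator never contradicted an
        earlier verdict; one crude notion of validator dependability."""
        by_case = {}
        for j in self.records.get(validator, []):
            by_case.setdefault(j.case_id, set()).add(j.verdict)
        if not by_case:
            return 0.0
        return sum(len(v) == 1 for v in by_case.values()) / len(by_case)

class VESA:
    """A software agent standing in for one human validator: here it simply
    replays that validator's majority verdict recorded in the VKB."""
    def __init__(self, vkb: ValidationKnowledgeBase, validator: str):
        self.vkb, self.validator = vkb, validator

    def judge(self, case_id: str):
        verdicts = [j.verdict
                    for j in self.vkb.records.get(self.validator, [])
                    if j.case_id == case_id]
        if not verdicts:
            return None  # no personal knowledge yet; defer to the human
        return verdicts.count(True) >= verdicts.count(False)
```

For example, after two sessions in which validator "alice" judged case "diagnosis-1" as passing, her VESA returns `True` for that case and `None` for a case she has never seen, deferring back to the human.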