Modeling the performance of large-scale systems is the core idea of this paper. We focus on modeling the performance-specific behavior of LarKC (the Large Knowledge Collider), a platform for large-scale integrated reasoning and Web search. A set of instrumentation and monitoring tools is employed to collect metrics related to execution time, resource usage, and platform-specific measurements such as running workflows and plug-ins. Our method applies machine learning to the instrumented data and tries to find relations between input metrics and output metrics that describe the instrumentation observations of the LarKC platform, its plug-ins, or its workflows. The proposed method is a combination of clustering and regression techniques.
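The cluster-then-regress idea can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation): observations are grouped by an input metric with a tiny 1-D k-means, and a separate linear model is then fitted per cluster to predict an output metric such as execution time. The synthetic data, the metric names, and the helper functions are all assumptions made for the example.

```python
import random

def kmeans_1d(xs, k=2, iters=20):
    """Tiny 1-D k-means with deterministic init at the extremes.
    Returns (centers, labels). Illustrative only."""
    centers = [min(xs), max(xs)]
    labels = [0] * len(xs)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: abs(x - centers[c])) for x in xs]
        for c in range(k):
            members = [x for x, l in zip(xs, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers, labels

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx if sxx else 0.0
    return a, my - a * mx

# Synthetic "instrumented" data: an input metric (e.g. workload size)
# vs. an output metric (e.g. execution time), with two distinct
# performance regimes that a single global regression would blur.
random.seed(0)
data = [(x, 2.0 * x + random.gauss(0, 0.1)) for x in range(1, 20)]
data += [(x, 5.0 * x - 50 + random.gauss(0, 0.1)) for x in range(30, 50)]

xs = [x for x, _ in data]
ys = [y for _, y in data]
centers, labels = kmeans_1d(xs, k=2)

# Fit one regression model per cluster.
models = {}
for c in range(2):
    cx = [x for x, l in zip(xs, labels) if l == c]
    cy = [y for y, l in zip(ys, labels) if l == c]
    models[c] = fit_line(cx, cy)

for c, (a, b) in sorted(models.items()):
    print(f"cluster {c}: time = {a:.2f} * size + {b:.2f}")
```

Clustering first lets each regression capture one regime of the platform's behavior, which is the advantage of the combined approach over a single global model.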