Understanding data-modeling performance can provide valuable lessons for the selection, training, research, and development of data models. Data modeling is the process of transforming loose natural-language descriptions into formal diagrammatic or tabular representations. While researchers generally agree that abstraction levels can explain broad performance differences across models, empirical studies have reported many construct-level results that remain unexplained. To explore further explanations, we develop a set of model-specific construct complexity values grounded in both theoretical and empirical work from complexity research in databases and other areas. We find that abstraction levels and complexity values together provide a consistent explanation of our laboratory experiment data, which were drawn from three models: the relational model, the extended entity-relationship model, and the object-oriented model. With the newly developed complexity measures, a consistent explanation can also be given for findings from other studies that provide sufficient model detail for complexity values to be calculated.