We describe an experiment in which art and illustration experts evaluated six 2D vector visualization methods. We found that these expert critiques mirrored previously recorded experimental results; this finding suggests that having artists, visual designers, and illustrators critique scientific visualizations can be faster and more productive than quantitative user studies. Our participants successfully evaluated how well the given methods would let users complete a given set of tasks. Our results show a statistically significant correlation with a previous objective study: the designers' subjective predictions of user performance with these methods match the users' measured performance. The experts also went beyond the objective study by providing insights into why each visualization method was or was not effective and by suggesting specific improvements.