This paper proposes a simulation model of software testing to assess the cost-effectiveness of test-effort allocation strategies based on fault prediction results. The simulation model estimates the number of discoverable faults given the available test resources, the resource allocation strategy, the set of modules to be tested, and the fault prediction results. In a case study applying fault prediction for a small system to acceptance testing in the telecommunication industry, our simulation model showed that the best strategy was to allocate test effort proportionally to "the number of expected faults in a module × log(module size)". Using this strategy with our best fault prediction model, the test effort could be reduced by 25% while still detecting as many faults as were normally discovered in testing, although the company spent about 6% of the test effort on metrics collection and modeling. The simulation results also indicate that the lower bound of acceptable prediction accuracy is around 0.78 in terms of an effort-aware measure, Norm(Popt). These results indicate that fault prediction can reduce test effort only if an appropriate test strategy is employed with sufficiently high prediction accuracy.
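The best-performing allocation strategy can be sketched as follows. This is a minimal illustration, not the paper's implementation: the module names, fault counts, and sizes below are hypothetical, and the function simply normalizes the weight "expected faults × log(module size)" over all modules to split a fixed effort budget.

```python
import math

def allocate_effort(total_effort, modules):
    """Split a fixed test-effort budget across modules, giving each module
    a share proportional to (expected faults) * log(module size).
    `modules` maps a module name to (predicted_faults, size_in_loc)."""
    weights = {name: faults * math.log(size)
               for name, (faults, size) in modules.items()}
    total_weight = sum(weights.values())
    return {name: total_effort * w / total_weight
            for name, w in weights.items()}

# Hypothetical modules: name -> (predicted faults, size in lines of code)
modules = {"A": (3.0, 1000), "B": (1.0, 5000), "C": (0.5, 200)}
print(allocate_effort(100.0, modules))
```

Under this weighting, a module with many predicted faults receives more effort, but size only enters logarithmically, so large, low-fault modules do not dominate the budget.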