Monte-Carlo simulation provides very accurate leakage current analysis, but at high computational cost. In this paper, we present a Monte-Carlo-based leakage estimation method implemented on a GPU using the CUDA platform. By running the Monte-Carlo method on a GPU, we obtain not only high simulation accuracy but also high estimation efficiency. Because a GPU contains hundreds of computational cores, it can perform the Monte-Carlo simulation for leakage estimation efficiently once the algorithm is parallelized. When the GPU executes the program, a large number of threads are generated in parallel; each Monte-Carlo sample is assigned to one thread, and all threads execute simultaneously. Because the proposed method is based on Monte-Carlo simulation, any leakage model and any distribution of process parameter variation can be used. In the experiments, for simplicity, a first-order polynomial leakage model is used. The simulation results show that our approach is about 130 times faster than the conventional CPU-based Monte-Carlo simulation.
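The one-thread-per-sample scheme described above can be sketched as a CUDA kernel. This is a minimal illustration, not the authors' implementation: the kernel draws one Gaussian process-parameter deviation per thread with the cuRAND device API and evaluates a first-order polynomial leakage model; the coefficient values (`a0`, `a1`), the variation magnitude `sigma`, and the sample count are all illustrative assumptions.

```cuda
#include <cstdio>
#include <curand_kernel.h>

// One Monte-Carlo sample per thread: draw a process-parameter deviation dL
// and evaluate a first-order polynomial leakage model
//     I_leak = a0 + a1 * dL
// (a0, a1, and sigma are placeholder values, not taken from the paper).
__global__ void leakage_mc(float *leak, int n_samples,
                           float a0, float a1, float sigma,
                           unsigned long long seed)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n_samples) return;

    curandState state;
    curand_init(seed, tid, 0, &state);          // independent RNG stream per thread
    float dL = sigma * curand_normal(&state);   // Gaussian parameter variation
    leak[tid] = a0 + a1 * dL;                   // first-order leakage model
}

int main()
{
    const int n = 1 << 20;                      // 2^20 Monte-Carlo samples
    float *d_leak;
    cudaMalloc(&d_leak, n * sizeof(float));

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    leakage_mc<<<blocks, threads>>>(d_leak, n, 1.0f, 0.2f, 0.1f, 1234ULL);
    cudaDeviceSynchronize();

    // Reduce on the host for brevity; a real implementation
    // would accumulate the statistics on the GPU as well.
    float *h_leak = new float[n];
    cudaMemcpy(h_leak, d_leak, n * sizeof(float), cudaMemcpyDeviceToHost);
    double mean = 0.0;
    for (int i = 0; i < n; ++i) mean += h_leak[i];
    printf("mean leakage = %f\n", mean / n);

    delete[] h_leak;
    cudaFree(d_leak);
    return 0;
}
```

Because every sample is independent, the kernel needs no synchronization between threads, which is what makes the Monte-Carlo loop map so naturally onto hundreds of GPU cores.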