In this paper, we describe an accelerated implementation of the Method of Moments (MoM). A framework is proposed that exploits the computing power of the graphics processing unit (GPU) through the Compute Unified Device Architecture (CUDA) software platform. The mixed-potential integral-equation formulation, applied to microstrip circuit modeling, is adopted, and both the impedance-matrix computation and the linear-system solution of the MoM are accelerated. The latter is handled by considering both a direct dense (LU-based) solver and an iterative sparse (biconjugate gradient stabilized, BiCGSTAB) solver. The best-suited method is selected automatically during a thresholding phase, which renders the impedance matrix as sparse as possible. The effectiveness of the GPU-enabled MoM has been extensively tested in the analysis of planar circuits and components. The results confirm the validity, accuracy, and high performance of the proposed approach.
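The solver-selection idea described above can be illustrated with a minimal CPU-side sketch: threshold the impedance matrix, measure the resulting fill ratio, and dispatch to a dense LU solver or a sparse BiCGSTAB solver accordingly. This is an assumption-laden illustration, not the paper's implementation (which runs on the GPU via CUDA); the function name, threshold, and fill cutoff below are hypothetical, and SciPy stands in for the GPU kernels.

```python
# Illustrative sketch only: the paper's actual solvers run on the GPU via CUDA.
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import bicgstab

def solve_mom_system(Z, v, threshold=1e-6, fill_cutoff=0.5):
    """Threshold the MoM impedance matrix Z, then solve Z x = v with the
    solver best suited to the resulting sparsity (names/values hypothetical)."""
    # Drop entries negligible relative to the largest magnitude in Z.
    Zt = Z.copy()
    Zt[np.abs(Zt) < threshold * np.abs(Zt).max()] = 0.0
    fill = np.count_nonzero(Zt) / Zt.size  # fraction of non-zeros remaining
    if fill > fill_cutoff:
        # Matrix stayed mostly dense: direct LU factorization and solve.
        return lu_solve(lu_factor(Zt), v)
    # Matrix became sparse: iterative BiCGSTAB on compressed (CSR) storage.
    x, info = bicgstab(csr_matrix(Zt), v)
    if info != 0:
        raise RuntimeError("BiCGSTAB did not converge")
    return x
```

In the paper's framework both branches execute on the GPU; the point of the sketch is only the automatic dispatch driven by the post-thresholding fill ratio.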