The Weather Research and Forecasting (WRF) model is a next-generation mesoscale numerical weather prediction system designed for both operational forecasting and atmospheric research. WRF offers multiple physics options that can be combined in any way. One of these physics options is radiation computation. Solar radiation is the major energy source for the Earth's climate; it is therefore imperative to model the horizontal and vertical distribution of solar heating accurately. The Goddard solar radiative transfer model includes absorption due to water vapor, O3, O2, CO2, clouds, and aerosols. The model computes the interactions among absorption and scattering by clouds, aerosols, molecules, and the surface. Finally, fluxes are integrated over the entire shortwave spectrum from 0.175 μm to 10 μm. In this paper, we develop an efficient graphics processing unit (GPU) based implementation of the Goddard shortwave radiative transfer scheme. The GPU-based Goddard shortwave scheme was compared to its single-threaded CPU counterpart on a computational domain of 422 × 297 horizontal grid points with 34 vertical levels. Both the original FORTRAN code on the CPU and the CUDA C code on the GPU use double-precision floating-point arithmetic. The processing time for the Goddard shortwave scheme on the CPU is 22106 ms. On 4 GPUs, the GPU-accelerated scheme runs in 208.8 ms with I/O and 157.1 ms without I/O. Thus, the speedups are 116× with data I/O and 141× without I/O on two NVIDIA GTX 590 cards. Using single-precision and less accurate arithmetic modes, the speedups increase to 536× without I/O and 259× with I/O.
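As a rough consistency check, the without-I/O speedup follows directly from the quoted timings (a minimal sketch; the with-I/O figure would require the CPU time including I/O, which is not quoted here):

```python
# Timings quoted in the abstract (double precision, 4 GPUs).
cpu_ms = 22106.0        # single-threaded CPU time, Goddard shortwave scheme
gpu_ms_no_io = 157.1    # GPU time without data I/O

speedup = cpu_ms / gpu_ms_no_io
print(round(speedup))   # ≈ 141, matching the quoted 141x speedup
```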