A computational approach based on differential transformation is proposed for solving optimal control problems of dynamical systems. The optimal control law is constructed by solving either a two-point boundary-value problem or a Hamilton-Jacobi-Bellman partial differential equation. Using differential transformation, the governing ordinary or partial differential equations are converted into a system of nonlinear algebraic equations. By applying the inverse transformation, the optimal solution is computed in the form of a finite series in a chosen basis system. The differential transformation approach is simple to implement, flexible in handling optimal control problems with various types of dynamics, and computationally efficient. Its performance is demonstrated through numerical examples.
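The core idea, converting a differential equation into algebraic recurrence relations on series coefficients and then inverting to a finite series, can be illustrated on a toy problem. The following sketch is not the paper's optimal-control solver; it only demonstrates the differential transformation mechanism on the scalar ODE y'(t) = y(t), y(0) = 1, whose transform Y(k) = y^(k)(0)/k! satisfies the algebraic relation (k + 1) Y(k + 1) = Y(k).

```python
import math

def dt_solve_exponential(n_terms):
    """Apply the differential transformation to y' = y, y(0) = 1.

    The ODE becomes the algebraic recurrence (k + 1) * Y(k + 1) = Y(k),
    which is solved term by term for the transform coefficients Y(k).
    """
    Y = [1.0]  # Y(0) = y(0), the initial condition
    for k in range(n_terms - 1):
        Y.append(Y[k] / (k + 1))
    return Y

def inverse_dt(Y, t):
    """Inverse transformation: the finite power series y(t) ~ sum_k Y(k) t^k."""
    return sum(c * t**k for k, c in enumerate(Y))

Y = dt_solve_exponential(15)
approx = inverse_dt(Y, 1.0)
print(abs(approx - math.e))  # truncation error of the finite series is tiny
```

Truncating the series at a finite number of terms is exactly the "finite series of a chosen basis system" step in the abstract, here with the monomial basis {1, t, t^2, ...}.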