Many camera sensors suffer from limited dynamic range, resulting in a lack of clear detail in displayed images and videos. This paper describes our approach to generating high dynamic range (HDR) video from an image sequence while modifying the exposure time of each new frame. For this purpose, we propose an FPGA-based architecture that produces real-time HDR video from successive image acquisitions. Our hardware platform is built around a standard low dynamic range CMOS sensor and a Virtex-5 FPGA board. The CMOS sensor is an EV76C560 provided by e2v; this 1.3-megapixel device offers novel pixel integration/readout modes and embedded image pre-processing capabilities, including multiframe acquisition with various exposure times. Our approach consists of a pipeline of algorithmic stages: automatic exposure control during image capture, alignment of successive images to compensate for camera and object motion, construction of an HDR image by combining the multiple frames, and final tone mapping for viewing on an LCD display. Our aim is to achieve a real-time video rate of 25 frames per second at the full sensor resolution of 1,280 × 1,024 pixels.
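The HDR-construction and tone-mapping stages of the pipeline can be illustrated in software. The sketch below is not the hardware implementation described in the paper: it assumes a linear sensor response, a simple triangle weighting that trusts mid-range pixels most, and a basic global Reinhard-style tone-mapping curve; the function names and test data are illustrative only.

```python
import numpy as np

def merge_hdr(frames, exposure_times):
    """Combine differently exposed LDR frames into one HDR radiance map.

    frames: list of float arrays in [0, 1], all the same shape.
    exposure_times: exposure time of each frame (seconds).
    Assumes a linear sensor response (a simplification).
    """
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for img, t in zip(frames, exposure_times):
        # Triangle weight: peaks at mid-gray, zero at under/over-exposure.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * img / t   # weighted radiance estimate from this frame
        den += w
    return num / np.maximum(den, 1e-6)

def tone_map(radiance):
    """Simple global operator L / (1 + L), scaled to 8 bits for display."""
    ldr = radiance / (1.0 + radiance)
    return (255.0 * ldr).astype(np.uint8)

# Three simulated exposures of the same linear scene.
scene = np.linspace(0.0, 3.5, 8).reshape(2, 4)   # "true" scene radiance
times = [0.25, 1.0, 4.0]
frames = [np.clip(scene * t, 0.0, 1.0) for t in times]

hdr = merge_hdr(frames, times)   # recovers radiance where some frame is unsaturated
print(tone_map(hdr))
```

In this toy example the merged radiance map matches the simulated scene wherever at least one exposure is neither clipped nor fully dark, which is the basic rationale for capturing successive frames with varying exposure times.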