In the 1980s, the industry's transition from bipolar to CMOS technology was driven by the rising power demands of high-performance circuits. Although the reduced power consumption of CMOS technology initially came at the cost of performance, the density advantage of CMOS was soon turned into a performance benefit. In the early years of CMOS technology, it was inconceivable that the power limit would be reached again. Simple scaling rules dictated the progress of the technology, doubling the transistor count every two years. However, a significant transition occurs in the 130-nm to 65-nm node regime, in which passive power density moves from forming a minor part of the total to becoming dominant. This has effectively halted traditional scaling in CMOS and forced technologists and designers to deal once more with power-constrained device and chip design. At the core of this dilemma is the MOSFET, which must deliver a maximum drive current at the lowest possible off-leakage current. The undesired leakage from the device subthreshold regime is a manifestation of degraded short-channel control. Another source of leakage is the increase in gate current due to thinner gate dielectrics. The latter is responsible for the fact that scaling of traditional oxide-based gate dielectrics comes to a halt at the 90-nm node, with a physical dielectric thickness of about 1 nm. Of course, this has a direct impact on gate-length scaling and short-channel control.
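The tension between drive current and off-leakage described above can be made concrete with the standard textbook expression for subthreshold conduction (not taken from this article, but consistent with its argument). Below threshold, the drain current depends exponentially on the gate overdrive:

\[
I_{\mathrm{sub}} \;\approx\; I_{0}\,\exp\!\left(\frac{V_{GS}-V_{T}}{n\,kT/q}\right)\left(1-\exp\!\left(-\frac{V_{DS}}{kT/q}\right)\right),
\]

where \(I_{0}\) is a technology-dependent prefactor, \(V_{T}\) is the threshold voltage, and \(n \ge 1\) is the body-effect (ideality) factor. The corresponding subthreshold swing,

\[
S \;=\; n\,\frac{kT}{q}\,\ln 10 \;\gtrsim\; 60~\mathrm{mV/decade} \quad (T = 300~\mathrm{K}),
\]

sets a floor on how sharply the device turns off. Lowering \(V_{T}\) to gain drive current therefore raises off-leakage exponentially, and degraded short-channel control (larger effective \(n\), drain-induced barrier lowering) makes the trade-off worse, which is precisely the dilemma the text identifies.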
Note: The Institute of Electrical and Electronics Engineers, Incorporated is distributing this Article with permission of the International Business Machines Corporation (IBM) who is the exclusive owner. The recipient of this Article may not assign, sublicense, lease, rent or otherwise transfer, reproduce, prepare derivative works, publicly display or perform, or distribute the Article.