
SECTION I

INTRODUCTION

Physical defects like shorts, opens, and transistor defects may occur during the fabrication of semiconductor devices. To detect such defects, fault models have been proposed and used for the generation of test patterns. The most well-known fault models include stuck-at (SA) [1], bridge [2]–[4], transition (TR) [5], [6], N-detect [7], gate-exhaustive (GE) [8], and embedded-multi-detect (EMD) [9], as well as timing-aware [10] and layout-aware [11] fault models on interconnect lines. Furthermore, stuck-short and stuck-open faults have been addressed by transistor switch-level simulation [12]. Notwithstanding the great successes of these fault models, customers report that they increasingly receive too many defective parts from their suppliers, and hence demand higher quality tests during production. We have investigated this problem for the past few years and found that many defects that escape testing are in fact defects within the standard library cells. Many of those cell-internal defects remain undetected when using traditional ATPG tools and the well-known fault models mentioned above. The pattern fault model, introduced about 10 years ago, addresses the problem only partially. Successful fault modeling approaches and analog simulations based on SPICE netlists containing just transistors and no parasitic objects, but already considering the physical layout [13], have used inductive fault or contamination analysis, also for standard library cells [14], [15]. Basic approaches that model physical defects at the transistor level by means of SPICE simulations without considering parasitic objects [12], [16], or at the logic level [17], have shown how low-level defect information can be used to build better and highly efficient fault models. Transistor-level ATPG solutions have been developed [18], [19], but these quickly become impractical when applied to multimillion-transistor designs. Our research over the past five years resulted in the new cell-aware test (CAT) fault model. This fault model is based on a post-layout transistor-level netlist including parasitic objects, resulting in a defect-based ATPG approach that can be applied to large, state-of-the-art designs. An introduction to the CAT methodology was published in [20]. We compared CAT and GE patterns in [21]. CAT has proven to be effective in detecting cell-internal defects, as demonstrated through high-volume production test results presented in [22]–[27]. In these cases, state-of-the-art fault models were shown to be insufficient; only CAT achieved the demanded low defective-parts-per-million (DPPM) rates.

In this paper, we will give a complete overview of the CAT method in Section III. In Section IV, we present CAT view generation results from a library with 1,940 cells. In Section V, we outline the CAT application to various industrial designs with detailed information about the pattern count, the overall coverage gain, as well as which cell types from the standard cell library contribute most to the defect rate reduction. In Section VI, we present high-volume production test results from a 32 nm notebook processor after testing 800,000 parts. Production test results from 1,000,000 parts of a 350 nm automotive design are shown in Section VII. In Section VIII, we give an overview of the diagnosis flow based on the CAT method, including physical failure analysis (PFA) results from one selected 32 nm part. The advantages of the CAT methodology over traditional fault models for FinFET technologies are described in Section IX.

SECTION II

MOTIVATION

IC manufacturers are required to deliver well-tested ICs to their customers with a certain maximum defect rate. Customers perform incoming application-related acceptance tests, which often uncover a higher defect rate than permissible. A detailed analysis published in [28] has shown that the majority of test escapes have their root cause in insufficient detection of cell-internal defects. For this analysis, over one million parts were tested with traditional SA patterns, experimentally followed by EMD patterns. It was determined that numerous parts that were failing EMD patterns but not SA patterns were failing due to one missing cell input condition at a simple multiplexer with two data inputs. The missing cell input condition was $\text{D}0 = 0$, $\text{D}1 = 0$, and $\text{S} = 0$, as shown in Table I. Since traditional ATPG tools use gate-level primitives, it became obvious that the ATPG was not required to generate the 000 condition, which was necessary to detect the identified silicon defect. Furthermore, this pattern never occurred after random fill of the unspecified bits.

Table I Multiplexer Test Pattern at Cell Inputs

Table I summarizes the required multiplexer cell input patterns to detect all SA port faults, the actual pattern applied during the SA production test, as well as the experimental test pattern with the added test highlighted in the shaded table cells. After performing an analog fault simulation for that multiplexer, we found that one bridging fault required exactly the 000 condition, and that no other cell input condition would test that particular bridge defect. Similar situations also exist for open defects that require sequential cell input conditions; this was addressed in [28] as well. These findings motivated us to research and subsequently develop a new fault model that is based on actual cell layouts, that can be created fully automatically, and that forces the ATPG tool to deterministically generate a set of cell input combinations which detect all cell-internal defects. This methodology is named cell-aware test (CAT).

SECTION III

CELL-AWARE TEST METHODOLOGY

The CAT methodology consists of two major parts. The first part (see Fig. 1) is the technology-dependent CAT view generation flow, a one-time task performed once for each technology library.

Fig. 1. Cell-aware test library view generation.

The flow shown in Fig. 1 combines traditional functions and tools with new functions and algorithms developed for the CAT methodology. The flow starts with layout extraction, followed by analog fault simulation and fault model synthesis to create the CAT library models.

The second major part is the well-known design flow (see Fig. 2) where we use our CAT ATPG instead of a traditional ATPG.

Fig. 2. Well-known design flow.

Our CAT ATPG is a defect-based ATPG that uses the technology-dependent, transistor-level-based CAT view to generate high-quality test patterns that significantly reduce the defect level of delivered ICs. The CAT ATPG is able to generate patterns for very large multimillion-gate designs. Current results achieved with this new CAT methodology show a significant increase in defect coverage and, consequently, a significant reduction of the defect rate measured in DPPM, as presented in Sections VI and VII.

A. Layout Extraction

The first part of the flow in Fig. 1 is the layout extraction step, which reads the layout data (file F1) of the individual library cell and creates a SPICE transistor netlist in detailed standard parasitic format (DSPF), including parasitic elements like resistors and capacitors, which is stored in file F2. As an example, let us consider a 3-to-1 multiplexer cell from a 65 nm library. The corresponding layout with some defects highlighted is shown in Fig. 3.

Fig. 3. Defect extraction from cell layout.
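
To make the DSPF representation concrete, the following fragment sketches the kind of parasitic-annotated netlist stored in file F2. The syntax is only loosely DSPF-flavored, and all element names and values are invented for illustration; this is not the output of any particular extraction tool.

```python
# Loosely DSPF-flavored fragment of a parasitic-annotated cell netlist
# (file F2). All names and values are invented for illustration.
dspf_fragment = """\
*|DSPF 1.0
*|NET D0 0.0021PF
R1 D0 D0:1 12.5
R2 D0:1 D0:2 8.3
C1 D0:1 VSS 0.9FF
C2 D0:2 OUT 0.4FF
M1 OUT D0:2 VSS VSS NMOS W=0.2U L=0.06U
"""
print(dspf_fragment)
```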

B. Analog Fault Simulation

The second part of the CAT view generation flow shown in Fig. 1 is analog fault simulation, which starts with the extraction of the considered defects from the DSPF SPICE netlist. The resulting considered defects are stored in the cell-dependent defects file (file F3 in Fig. 1). The defects considered are single hard defects, not parametric variations from IC processing; they occur at sites where SPICE netlist components, both intentional and parasitic, exist. The CAT methodology differentiates the following six defect types.

  1. Open: Any cell-internal open defect, such as an open in poly, metal, diffusion, or vias. In the extracted SPICE netlist, these defects are matched to existing resistor elements by increasing their resistance values. Different open resistor values are considered as necessary.
  2. Bridge: Any cell-internal bridge defect such as bridges between adjacent objects in the same layer or different layers. In the extracted SPICE netlist, these defects are matched to existing capacitor elements by inserting resistors in parallel with them. Different bridge resistor values are considered as necessary.
  3. Tleak: Any cell-internal transistor defect that will switch a transistor partially on with a certain resistive value. Different leakage resistor values are considered as necessary.
  4. Tdrive: Any cell-internal defect that will switch a transistor partially off with a certain resistive value. Different drive strength resistor values are considered as necessary.
  5. PortBridge: A bridge between a port (e.g., D1) and VSS, VDD, or any other port of the cell. Different bridge resistor values are considered as necessary.
  6. PortOpen: A disconnected port (e.g., D1), to analyze the effect of cell-external disconnects to cell ports. Different open resistor values are considered as necessary.

SA and TR faults at the cell ports are also contained in the CAT defect list of each cell; as such, the defects that CAT considers are a superset of the SA and TR fault sites, since both DC and transient analog simulations are performed.

Also note that CAT covers all bridges between cell ports, even when they occur external to the cell. However, CAT does not consider cell-external bridges to other cell instances in the design. This means that layout-aware bridges on interconnect lines should still be targeted for a high-quality test, as is done in the experiment presented in Section VII.

A schematic representation of how and where such defects are inserted into the DSPF SPICE netlist is shown partially in Fig. 4.

Fig. 4. Extracted transistor netlist and inserted defects.
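
As a rough illustration of this insertion step (not the actual tool flow), the following Python sketch shows how an open might be modeled by raising the value of an existing parasitic resistor, and a bridge by adding a resistor in parallel with an existing coupling capacitor. All element names and resistance values are hypothetical.

```python
# Illustrative sketch of CAT defect injection into an extracted
# parasitic SPICE netlist. Names and values are hypothetical.

OPEN_VALUES_OHM = [1e4, 1e6, 1e9]    # resistive-open severities to simulate
BRIDGE_VALUES_OHM = [1e2, 1e3, 1e4]  # bridge resistances to simulate

def inject_open(netlist_lines, resistor_name, r_open):
    """Model an open by raising an existing parasitic resistor's value."""
    out = []
    for line in netlist_lines:
        tok = line.split()
        if tok and tok[0] == resistor_name:   # e.g. "Rpoly3 n1 n2 12.5"
            out.append(f"{tok[0]} {tok[1]} {tok[2]} {r_open:g}")
        else:
            out.append(line)
    return out

def inject_bridge(netlist_lines, cap_name, r_bridge):
    """Model a bridge by adding a resistor in parallel with an existing
    parasitic coupling capacitor between two nets."""
    for line in netlist_lines:
        tok = line.split()
        if tok and tok[0] == cap_name:        # e.g. "Cc12 netA netB 0.8f"
            bridge = f"Rbr_{cap_name} {tok[1]} {tok[2]} {r_bridge:g}"
            return netlist_lines + [bridge]
    raise ValueError(f"capacitor {cap_name} not found")

# One defect is inserted at a time; each variant netlist is then
# handed to the analog (SPICE) fault simulator.
cell = ["Rpoly3 n1 n2 12.5", "Cc12 netA netB 0.8f"]
variants = [inject_open(cell, "Rpoly3", r) for r in OPEN_VALUES_OHM]
variants += [inject_bridge(cell, "Cc12", r) for r in BRIDGE_VALUES_OHM]
```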

The next step is to perform an exhaustive analog simulation for each of the extracted defects to determine the complete set of cell input combinations that detect the defect. The resulting defect matrix for the particular library cell, which summarizes the detection results for each defect, is contained in file F4 as shown in Fig. 1.

The analog simulator environment that is used during the analog fault simulation step for a cell with three data inputs, one cell output, and the two power pins VDD and VSS is shown in Fig. 5. A single defect is inserted at a time.

Fig. 5. Analog simulation environment.

The performed simulations are analog transient analyses, which determine the voltage at the cell output at a calculated strobe time. Both a static (one-time-frame) and a delay (two-time-frame) analog simulation are performed.

A defect is considered detected if, for example, the cell’s output voltage deviates from the defect-free voltage by more than 60% of the supply voltage for at least one input combination. The deviation threshold, however, can be specified by the user. All analog simulations, including the creation of the stimuli, are fully automated by the CAT library view generation tool.
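
A minimal sketch of this detection criterion, assuming the strobe-time output voltages are already available from the simulator; the function name and the example voltages are illustrative:

```python
def is_detected(v_defective, v_defect_free, vdd, threshold=0.60):
    """Detection criterion as described in the text: the defective cell
    output deviates from the defect-free output by more than a
    user-settable fraction of the supply voltage (60% by default)."""
    return abs(v_defective - v_defect_free) > threshold * vdd

# Example with illustrative strobe-time voltages for a 1.2 V cell:
print(is_detected(v_defective=0.95, v_defect_free=0.05, vdd=1.2))  # True
print(is_detected(v_defective=0.30, v_defect_free=0.05, vdd=1.2))  # False
```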

For the static analysis, exhaustive one-time-frame stimuli are analyzed. For the delay analysis, robust two-time-frame stimuli are analyzed by default, but exhaustive two-time-frame stimuli can also be analyzed on user request.
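
The following sketch illustrates the stimuli spaces involved. The two-time-frame generator here is a deliberate simplification (every single-input toggle); the actual robust transition stimuli used by the tool are more constrained.

```python
from itertools import product

def static_stimuli(n_inputs):
    """Exhaustive one-time-frame stimuli: all 2^n input vectors."""
    return list(product([0, 1], repeat=n_inputs))

def delay_stimuli(n_inputs):
    """Simplified two-time-frame stimuli: every vector pair in which
    exactly one input toggles (a rough stand-in for the robust
    transition stimuli described in the text)."""
    pairs = []
    for v1 in product([0, 1], repeat=n_inputs):
        for i in range(n_inputs):
            v2 = list(v1)
            v2[i] ^= 1
            pairs.append((v1, tuple(v2)))
    return pairs

assert len(static_stimuli(3)) == 8
assert len(delay_stimuli(3)) == 24  # 8 vectors x 3 single-input toggles
```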

C. Cell-Aware Fault Model Synthesis

The goal of this step is to identify and store the cell input conditions that are useful in detecting the defects inside each cell. This third part of the CAT view generation flow shown in Fig. 1 is called CAT synthesis; it optimizes the newly created exhaustive defect matrix in order to generate the corresponding CAT library view, which is stored in the CAT view file (file F5 in Fig. 1). For each detected cell-internal defect, the CAT view file contains one or more alternative test conditions for detecting the corresponding defect. This ensures that the subsequent CAT ATPG still has the freedom to choose between all alternative test conditions for detecting a certain cell-internal defect, while maintaining a very compact test pattern set for a complete design.
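
A toy version of this synthesis step might look as follows; the defect-matrix layout (stimulus mapped to the set of defects it detects) and all defect names are illustrative, not the tool's internal representation.

```python
def synthesize_cat_view(defect_matrix):
    """Toy synthesis: from an exhaustive defect matrix, derive for each
    defect the list of alternative detecting cell input conditions that
    the CAT view stores."""
    cat_view = {}
    for stimulus, detected in defect_matrix.items():
        for defect in detected:
            cat_view.setdefault(defect, []).append(stimulus)
    return cat_view

matrix = {                      # hypothetical 2-input cell
    (0, 0): {"BR1"},
    (0, 1): {"BR1", "OP2"},
    (1, 0): set(),
    (1, 1): {"OP2"},
}
print(synthesize_cat_view(matrix))
# {'BR1': [(0, 0), (0, 1)], 'OP2': [(0, 1), (1, 1)]}
```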

D. Transistor-Level Defect-Based ATPG

As shown in Fig. 2, the CAT ATPG is part of the well-known design flow, which uses the gate-level netlist from logic synthesis without any transistor netlist and without any layout data, but adds the CAT views created up-front from the transistor-level netlists, including all parasitic elements, extracted from the cell layouts as shown in Fig. 1.

The major difference between the CAT ATPG and traditional SA and TR pattern generation is the modeling of the fault. We can demonstrate those differences using an example of a 3-to-1 multiplexer that is instantiated somewhere in the design.

Fig. 6 shows how the SA ATPG will generate a test for detecting a port fault; in this case, a SA 0 fault at the cell input D0.

Fig. 6. Normal ATPG process.

In a traditional SA ATPG engine, the fault position (initial fault injection) and the condition for the fault excitation are predefined for every ATPG primitive. In this example, the SA ATPG would justify $\text{D}0 = 1$, $\text{S}0 = 0$, and $\text{S}1 = 0$. The other inputs (D1 and D2) are not required. The generation process of the CAT ATPG for the same multiplexer is shown in Fig. 7. In this case, an intracell bridge is assumed between two nets A and B, as indicated in the layout.

Fig. 7. CAT ATPG for an intracell bridge defect.

The initial fault injection of a CAT defect is always at the cell output port. The conditions for the fault excitation and its propagation to the cell outputs are fully disconnected from any predefined ATPG primitive; the ATPG strictly applies the necessary conditions at the input ports of the library cell as defined by the corresponding CAT model.

Considering the bridge B1 in the above example, the necessary assignments at the cell inputs are D0 = 1, D2 = 0, S0 = 0, and S1 = 0. That means the CAT ATPG is forced to assign an additional cell input, which is in this case D2, in order to detect the bridging defect B1. As described earlier, a traditional SA ATPG would only be forced to assign one data input. In other words, in contrast to previous approaches, the CAT ATPG deterministically applies the conditions to detect all detectable intracell defects. Traditional ATPGs, however, may detect them only by chance.

To guarantee a very compact set of test patterns, the CAT ATPG algorithm makes use of all possible conditions given by the CAT view for detecting a certain defect.
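
A greedy set-cover sketch conveys why having all alternative detecting conditions per defect keeps the pattern set compact. This is illustrative only; the real CAT ATPG performs full justification and propagation on the complete design rather than per-cell selection.

```python
def greedy_condition_cover(cat_view):
    """Illustrative greedy compaction: repeatedly pick the cell input
    condition that detects the most still-undetected defects."""
    remaining = set(cat_view)
    chosen = []
    while remaining:
        # Count, per condition, how many remaining defects it detects.
        score = {}
        for defect in remaining:
            for cond in cat_view[defect]:
                score[cond] = score.get(cond, 0) + 1
        best = max(score, key=score.get)
        chosen.append(best)
        remaining -= {d for d in remaining if best in cat_view[d]}
    return chosen

view = {"BR1": [(0, 0), (0, 1)], "OP2": [(0, 1), (1, 1)], "TL3": [(1, 1)]}
print(greedy_condition_cover(view))  # e.g. [(0, 1), (1, 1)]
```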

SECTION IV

LIBRARY VIEW GENERATION RESULTS

To create the CAT views for a complete library (including both combinational and sequential cells), the CAT flow described in Section III is executed for each standard library cell. In this paper, we present results from 1,940 cells of a 65 nm technology. Other technologies show very similar results. Two important graphs are the defect coverage graph and the pattern graph, which are explained in the following subsections.

A. CAT Defect Coverage Graph

The CAT defect coverage graph presents coverage results with respect to each library cell in isolation without instantiating the cell in a design. The graph in Fig. 8 shows the deficiency of traditional SA and TR patterns for detecting detectable layout-based cell-internal defects.

Fig. 8. CAT defect coverage graph.

The horizontal axis represents the library cells, numbered from 1 to 1,940. The vertical axis represents three defect coverage rates in percent, as follows: the blue line is the defect coverage rate for detectable bridges, opens, and transistor defects achieved with state-of-the-art SA patterns; the red line is the corresponding defect coverage rate achieved with state-of-the-art TR patterns; and the green line is the defect coverage rate of CAT static and CAT delay patterns (which is always, by definition, at 100%, because only detectable cell-internal defects are considered). CAT static patterns use one-cycle test conditions, and CAT delay patterns use two-cycle conditions at the cell inputs. CAT static tests are typically performed at slow speed; CAT delay tests are typically applied at-speed. The graph shows that the defect coverage of SA patterns is less than 100% for about 50% of the cells, and some cells reach a defect coverage of only 46%. The situation is worse for TR patterns, where about 80% of the cells do not reach 100% defect coverage, about 200 cells have a defect coverage of less than 50%, and some cells reach just 20%.

B. CAT Pattern Graph

The CAT pattern graph in Fig. 9 presents results with respect to the number of test patterns generated on each library cell in isolation without instantiating the cell in a design.

Fig. 9. CAT pattern graph on standard cell level.

Again, the horizontal axis represents the individual library cells. The vertical axis represents the number of patterns for each cell. The cells are sorted from left to right in descending order of their traditional pattern count. The blue line is the sum of essential SA and TR patterns for each cell, and the green line is the sum of essential CAT static and CAT delay patterns. Complex cells like multiplexers with four (MUX4) and three data inputs (MUX3), scan flip-flops (SFF), and AND-OR (AO) type cells with up to six inputs all cluster on the left-hand side of the graph.

The peak in the graph at the x-axis position around 1200 is from full-adder (FA) type cells with three inputs, and the peak at the x-axis position around 1500 is from XOR gates with three inputs. Simple logic gates (AND, NAND, OR, NOR) with 2–4 inputs are shown as corresponding GATE cell types.

The graph in Fig. 9 shows that traditional SA and TR patterns are insufficient to detect all detectable cell-internal bridges, opens, and transistor defects. Nearly all cells require more patterns to detect all cell-internal defects, and the CAT ATPG is forced to generate those essential patterns.

SECTION V

EVALUATION WITH INDUSTRIAL DESIGNS

To evaluate the effectiveness and the quality of CAT patterns, we executed the evaluation flow as shown in Fig. 10 on 10 industrial multimillion gate designs. The design data is given in Table II.

Fig. 10. Industrial design evaluation.
Table II Design Data/Results of Industrial Designs

The evaluation flow starts with the generation of state-of-the-art SA and TR patterns, which give the test coverages ${\rm TC}_{\rm SA}$ and ${\rm TC}_{\rm TR}$. The second step in the flow is the fault grading of the SA and TR patterns with respect to the CAT fault model, considering all traditional SA and TR cell port faults and the cell-internal defects. This results in the defect coverages ${\rm DC}_{\rm SA}$ and ${\rm DC}_{\rm TR}$. The third step in the evaluation flow in Fig. 10 is the generation of CAT patterns to reach the maximum achievable defect coverage.

${\rm DC}_{\rm CA1}$ is the defect coverage achieved with CAT static patterns, and ${\rm DC}_{\rm CA2}$ is the defect coverage achieved with CAT delay patterns. For all three ATPG runs, the same random fill strategy was used, and a single detection was requested. In addition to comparing the defect coverage, we also compared the number of CAT test patterns with the number of SA and TR patterns.

The selected industrial designs are implemented in 65, 55, 32, and 28 nm technologies. The design data is shown in Table II.

As an example, design #10 has 5.6 million gates, 458k flip-flops, and 1,020 internal scan chains, resulting in 22.4 million SA faults and 92.6 million CA defects. All but two of the designs use on-chip test compression; the two designs without test compression are #2 and #6.

Looking at the faults and defects columns in Table II, it can be seen that the number of CAT defects is in all cases significantly higher than the number of SA faults. On average, there are about four times more CAT defects than there are SA faults.

A. CAT Static Defect Coverage Gain

Fig. 11 shows the defect coverage gain in percent [%] achieved with CAT static patterns over the defect coverage achieved with SA patterns. The static defect coverage gain is defined as $$ {\rm DC}_{\rm gain} = {\rm DC}_{\rm CA1} - {\rm DC}_{\rm SA}. $$

Fig. 11. CAT static defect coverage gain over SA.

Fig. 11 illustrates that the average defect coverage gain achieved by CAT static patterns compared to SA patterns is almost 1%. As explained before, the increase in defect coverage comes from the CAT ATPG targeting cell-internal defects explicitly, whereas a traditional ATPG neither targets these defects nor reliably covers them by chance.

B. CAT Delay Defect Coverage Gain

The defect coverage gain is even higher for the CAT delay patterns, where an average defect coverage gain of over 4% is achieved compared to TR patterns, as shown in Fig. 12. The delay defect coverage gain is defined as $$ {\rm DC}_{\rm gain} = {\rm DC}_{\rm CA2} - {\rm DC}_{\rm TR}. $$

Fig. 12. CAT delay defect coverage gain over TR.

The high-volume production test results (shown in Fig. 21) confirm these evaluation results; i.e., there are at least four times more CAT delay detections than there are CAT static detections.

Fig. 21. PPM reduction AMD 32 nm notebook processor.

C. Comparison of SA, TR, and CAT Patterns

In Table III, we show the number of additional CAT patterns and their corresponding defect coverage gain in relation to the control set of SA and TR patterns. Table III shows the defect coverage gain achieved for a 0% pattern increase, for 25% and 50% increases, and for the maximum number of additional patterns needed to reach the highest defect coverage gain. For this experiment, we performed a total of four different CAT ATPG runs. For the first run, we limited the number of CAT patterns to exactly the number of SA and TR patterns. For the second and third runs, we limited the number of CAT patterns to 25% and 50% more, respectively, than the number of SA and TR patterns. For the fourth run, we removed any pattern count limitation and generated as many patterns as needed to detect all cell-internal defects as well as the traditional cell port faults.

Table III Number of Test Patterns Related to Defect Coverage Gain

As can be seen in Table III, in the column titled “+0% pattern,” a significant defect coverage gain of about 2.5% on average can already be achieved without any pattern or test-time increase when using CAT patterns instead of SA and TR patterns. The average defect coverage gain increases up to 5% (1% static and 4% delay) when more CAT patterns are applied. For the maximum defect coverage, the average CAT static pattern increase is about 49% over the SA patterns. The comparison of TR and CAT delay patterns resulted in an average pattern increase of about 70%.

Overall, we can state that CAT patterns are significantly better than SA and TR patterns. Even without any pattern or test-time increase, a significant defect coverage gain is achieved by CAT patterns.

D. CAT Static Defect Coverage Gain Per Cell Type

To further investigate which cell types contribute most to the CAT defect coverage gain, we created coverage gain data not just for the complete design, but separately for each standard cell type as used in the evaluated designs. The results are shown in Figs. 13–15.

Fig. 13. CAT static defect coverage gain per cell type.
Fig. 14. CAT delay defect coverage gain per cell type.
Fig. 15. CAT-only detected defects per cell type.

Fig. 13 shows the defect coverage gain achieved with CAT static patterns on average over the selected designs, by cell type. The blue line shows the absolute CAT coverage gain in percent [%] in relation to the total number of chip defects for each cell type, and the red line shows the relative coverage gain in relation to the number of defects of the corresponding cell type.

The graph shows that the SFF cell type contributes most to the absolute CAT static coverage gain. The SFF cells contribute, on average, about 0.38% additional detected defects in total per design, and about 1.2% relative coverage gain in relation to the number of defects of the SFF cell type. The second most contributing cell type is the MUX4, followed by the AO and OR-AND (OA) cell types. The simple logic gates (AND, NAND, OR, NOR gates with 2–4 inputs) are summarized as the GATES cell type. The relative coverage gain of the MUX4 cell type, about 1.6%, is higher than the 1.2% of the SFF cell type, but the overall impact of MUX4 is smaller because there are fewer instances of MUX4 cells than of SFF cells. Because of that, the SFF cells contribute most to the coverage gain in all 10 designs. The half-adder (HA), FA, and XOR3 cells contribute least to the overall coverage gain because there are very few instances of those cells in the selected designs.

E. CAT Delay Defect Coverage Gain Per Cell Type

An overview of the defect coverage gain by cell type for CAT delay patterns is given in Fig. 14.

The blue line shows the absolute CAT coverage gain in percent [%] for each cell type, and the red line shows the relative coverage gain. The graph shows that the OA cell type achieves the highest absolute coverage gain of 1%. The second most contributing cell type, also with nearly 1% absolute coverage gain, is the AO cell type; the simple logic gates (GATES) are now third. The MUX4 cell type is now in fourth position with about 0.8% absolute gain, although its relative gain is the highest at about 10%. Again, the FA, HA, and XOR3 cell types contribute least to the overall coverage gain because there are very few instances of those cell types in the selected designs.

F. Total CAT-Only Detected Defects Per Cell Type

The total percentage of CAT-only detected defects per cell type from both the CAT static and CAT delay patterns is shown in Fig. 15.

Fig. 15 shows that the OA cell type contributes most to the additionally detected defects, with 1.08%. The MUX4 cell type contributes 1.07%, the AO cell type also adds 1.07%, and the simple logic GATES cell type contributes 0.90%.

G. Cell-Aware Detections in AO Cells

To further investigate the large number of CAT-only detected defects within the AO and OA cell types, we selected an AO cell with four inputs. The traditional (sum of SA + TR) test pattern count for this cell is 10, but CAT requires 23 patterns. Fig. 16 shows the layout of this AO cell and one of the defects (named D30) that is not guaranteed to be detected with traditional test patterns.

Fig. 16. Layout of the AO cells.

Fig. 16 shows the bridge defect D30 (irregular shaped red objects) on metal1 between the cell input “B” and a cell-internal net “3.” This bridge defect can occur in two different physical locations.

Fig. 17 shows the same bridge defect D30 between the cell input “B” and the cell-internal net “3,” mapped back to the transistor schematic. Of the 16 possible input patterns, only one guarantees the detection of this bridge defect: A = 1, B = 0, C = 0, and D = 0. A traditional SA ATPG is not required to generate this input combination. The CAT ATPG, however, is deterministically forced to generate this pattern, which will detect the bridge defect D30.

Fig. 17. Transistor schematic of the AO cell.

H. Cell-Aware Detections in SFF Cells

As shown previously in Fig. 13, SFF cells also contribute significantly to CAT-only static detected defects.

Fig. 18 shows a few basic SFFs connected to each other to form a scan chain. Their data inputs (D) and outputs (Q) are connected to a cloud of combinational logic. In typical configurations like this, there are usually some cell-internal defects within the SFFs that are not detected by the chain test and the SA/TR tests. This is because the chain test does not consider states at the D inputs, and traditional ATPG tools will assign the needed states at the D inputs of the SFF cells but are not required to also assign the needed state at the test input (TI) of the SFF.

Fig. 18. Undetected defects in SFF cells.

To further investigate the CAT-only detected defects in the SFF cell types, we analyzed the cell input patterns applied during the SA and TR tests. We noticed that in many cases all possible input combinations for D and TI are present in the huge pattern set, but often when the required input combination was applied, the clock was gated, preventing capture. Due to low-power requirements, there are often multiple stages of clock gating for nearly all flip-flops. In cases where the enable condition is complex, the capture happens in only very few patterns.

The CAT ATPG tool is forced to enable the clock gates when generating patterns to test all cell-internal defects, and so the required states are assigned to the D and the TI inputs of SFF cells.

In many cases, there are also more complex SFF cells that do not just have a simple multiplexer for D and TI, but also include much more logic like AO functions, hold functions, set and reset functions, etc.

The CAT ATPG will always be forced to make the necessary assignments at all required flip-flop inputs. In addition, there are typically many more SFF instances than instances of other cell types, which is also a root cause of the significant contribution of the SFF cells to the CAT-only static detected defects.

SECTION VI

PRODUCTION TEST OF A 32 NM DESIGN

To investigate the effectiveness of CAT patterns in relation to the SA and TR patterns used in the normal production test, we partnered with Advanced Micro Devices (AMD) to execute an experiment with a four-core AMD 32 nm notebook processor; see Fig. 19.

Fig. 19. AMD 32 nm notebook processor.

In this experiment, CAT patterns were added to the test program and the test flow was changed to log unique fails of the CAT patterns as shown in Fig. 20.

Fig. 20. Production test flow 32 nm notebook processor.

The production test consists of a TR N-detect test with an N-detection limit of five (TR-N-det5) and a static SA top-off test. The TR and CAT delay patterns were applied at-speed, and the SA static and CAT static patterns were applied in a slow-speed test. All experimental CAT patterns were in data-collection mode, otherwise known as “continue on fail.” The experimental tests were applied to all die where all four cores passed the existing ATE production test suite of wafer sort patterns.

After testing 800,000 ICs, the fail and PPM reduction results were summarized, as shown in Fig. 21.

This data shows that the CAT static patterns detect a total of 231 defects that state-of-the-art traditional SA patterns do not detect, and the CAT delay patterns detect a total of 609 defects that state-of-the-art traditional TR patterns do not detect. These fail counts can easily be transformed into PPM rates; i.e., the CAT static patterns reduce the defect rate by 292 PPM and the CAT delay patterns by 771 PPM. The Venn diagram also shows an overlap of 141 detected defects between the two tests. In total, the CAT patterns reduce the PPM rate for this 32 nm design by 885 PPM. Parametric tests and final tests performed later, including functional tests carried out at different voltage and temperature levels, confirmed 66% of the CAT-only fails. An additional 16% were confirmed by expensive system-level tests (SLTs). That means a total of 82% of the CAT-only failing parts were confirmed to be defective or too slow. Just 18% of those parts made it through the SLT and are to be further analyzed. This data confirms that CAT performed at wafer test detects real defects that are otherwise only detected by very expensive SLTs.

The test costs related to the analyzed fault models are directly related to the number of patterns generated for the fault models. The additional test cost of the CAT patterns for this design is about 43% of the existing structural production test cost.

These production test results were collected during 2012, and since then AMD has tested other designs in 32 and 28 nm technology in production with CAT patterns. The defect rate reduction is consistently high as shown in Fig. 21, at about 900 PPM for the 32 nm products and even higher for the 28 nm products at about 1500 PPM.

In addition, various CAT-only failing parts have been analyzed by PFA to prove that the CAT-only fails are real physical defects. For further details see Section VIII.

SECTION VII

PRODUCTION TEST OF A 350 NM DESIGN

To evaluate the effectiveness of CAT patterns in a 350 nm technology, we worked with ON Semiconductor to execute an experiment on an automotive design.

The design shown in Fig. 22 integrates high-performance, power-efficient analog and digital parts. The analog circuits implement the LIN physical layer and the analog front-end interface to the external sensor. The digital circuits implement the LIN data link layer, the LIN application layer, and the DSP logic responsible for processing the ADC bit stream generated by the analog front-end. The design also includes complex DFT logic to achieve top-class analog and digital testability.

Fig. 22. ON Semiconductor 350 nm automotive design.

To investigate the effectiveness of the CAT patterns in relation to the normal production test patterns, we added the experimental CAT patterns to the test program and changed the test flow to log unique fails of the CAT patterns as shown in Fig. 23.

Fig. 23. Production test flow 350 nm automotive design.

The production test consists of an IddQ test, followed by a SA static and layout-aware interconnect bridge test, and a single-detect TR test. The experimental patterns applied after the normal production tests consist of a CAT static test as well as a CAT delay test. As before, the TR and CAT delay tests were applied at-speed, and all other tests at slow speed. All experimental patterns were in data-collection mode, while the production patterns were executed in “stop on fail” mode. This means that any part failing the CAT patterns had completely passed the normal production tests.

The additional test cost of the CAT test patterns was 59% in total, mainly consumed by the 1,135 CAT delay patterns, while the normal production test comprised 2,019 patterns in total.

After enhancing the production test program as shown in Fig. 23, we applied the normal production patterns and the experimental CAT patterns during wafer sort for the 350 nm automotive design. Fig. 24 summarizes the reject counts and their projection onto the measured DPPM rate after testing 1,000,000 ICs.

Fig. 24. Measured PPM reduction 350 nm design.

This data shows that the CAT patterns detected a total of 114 rejected parts that the traditional SA, bridge, and TR patterns did not detect, resulting in a measured defect rate reduction of 114 PPM.

The CAT static patterns detected a total of nine parts. These reject counts can be directly translated into measured PPM rates; i.e., the CAT static patterns resulted in a measured defect rate reduction of 9 PPM.

The CAT delay patterns detected a total of 105 parts, resulting in a measured defect rate reduction of 105 PPM. The Venn diagram also shows the overlap between the two tests, which was zero in this experiment. The reason for this is mainly that defects that can be detected by CAT static tests are not targeted again by the CAT delay patterns.
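
The projection from reject counts to the measured PPM rate is a direct scaling; the following minimal sketch reproduces the numbers above.

```python
def ppm(unique_fails, parts_tested):
    """Defects per million: unique fails projected onto a million parts."""
    return unique_fails / parts_tested * 1_000_000

# Section VII counts: 1,000,000 parts, no overlap between the two tests.
print(ppm(9, 1_000_000))        # 9.0   -> CAT static reduction of 9 PPM
print(ppm(105, 1_000_000))      # 105.0 -> CAT delay reduction of 105 PPM
print(ppm(9 + 105, 1_000_000))  # 114.0 -> total reduction of 114 PPM
```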

The validation of the test results (retesting of the rejected parts using the standard and CAT flow) confirmed that all 114 rejects are permanent fails. The nine parts that failed the CAT static patterns were confirmed to be hard defects failing under all test conditions. The other 105 parts that failed the CAT delay patterns proved to be parametric outliers.

For ON Semiconductor, CAT provides a method to screen delay defects in the digital circuits without relying on over-constraining the design during synthesis and over-screening on the ATE. This will allow further optimization of the power dissipation and the digital area of the design. Based on this result, ON Semiconductor concluded that the CAT method detects various otherwise undetected defects (mainly parametric defects) and does improve overall test quality.

SECTION VIII

CELL-AWARE DIAGNOSIS

Based on the CAT method, we have also performed cell-aware (CA) diagnosis. One important component for achieving CA diagnosis is a SPICE netlist that includes physical properties like X, Y coordinates and layer information for each cell-internal defect. The complete CA diagnosis experimental flow is shown in Fig. 25.

Fig. 25. Cell-aware diagnosis experiment steps.

Step 1 extracts a SPICE transistor-level netlist in DSPF, which includes all transistors, all parasitic elements (resistors and capacitors), and all layout properties such as layer information and X, Y coordinates.

Step 2 performs an analog diagnosis fault simulation for all extracted defects. The stimuli used for this analog diagnosis fault simulation are just the failing and passing cell input conditions retrieved from the traditional gate-level electrical diagnosis run. This step deviates from the analog simulations shown in Fig. 1, where we simulate an exhaustive set of stimuli. For diagnosis, we only need to simulate the subset of stimuli that was actually applied in the test pattern set at the cell instance that was called out as a suspect.

Step 3 performs CA defect scoring. It calculates the probability that a specific defect is the root cause of the faulty behavior.

Step 4 creates a layout marker file such that a GDS viewer can be used to highlight the defects.

A. Step 1: Layout Extraction With Physical Properties

To enable a CA diagnosis, the layout in GDS format of a standard cell is used to extract a DSPF SPICE netlist, which now includes process layer information and the X, Y coordinates of each extracted resistor and transistor. A partial SPICE schematic is shown in Fig. 26.

Fig. 26. SPICE schematic with layout properties.

A fraction of the related SPICE netlist is shown in Fig. 27.

Fig. 27. SPICE netlist with physical properties.

Fig. 27 shows that the layout extraction tools produce a DSPF netlist in which all resistors and transistors have physical properties. However, the layout extraction tools do not provide physical properties for capacitors, so the CA diagnosis calculates those on its own, based on the physical properties of the resistors and transistors.
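
The paper does not spell out this calculation; one plausible heuristic, shown here purely as an assumption, is to place a coupling capacitor midway between the known coordinates of elements on its two nets.

```python
def estimate_cap_location(xy_on_net1, xy_on_net2):
    """Assumed heuristic (not confirmed by the paper): place a coupling
    capacitor midway between the known coordinates of resistor or
    transistor elements on its two nets."""
    (x1, y1), (x2, y2) = xy_on_net1, xy_on_net2
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

print(estimate_cap_location((1.20, 0.85), (1.20, 1.15)))  # (1.2, 1.0)
```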

B. Step 2: Analog Diagnosis Fault Simulation

To calculate the defect behaviors of all extracted potential CA defects, we performed an analog diagnosis fault simulation using the extracted DSPF SPICE netlist. For this, we fault-simulated all extracted defects for all failing and passing cell input patterns obtained from the usual electrical diagnosis step.

The considered CA defects are the same as used during the CAT library view generation as explained previously in Section III-B. These defects include opens, bridges, transistor leakage (tleak), transistor drive-strength (tdrive), port bridges, and port opens.

The output of this analog fault simulation step is a defect matrix containing the behavior of each potential defect. Like a traditional fault dictionary, the defect matrix has as many rows as there are simulated stimuli (the failing and passing cell input stimuli) and as many columns as there are defects. This matrix contains the information on whether a certain defect can be detected and, if so, which stimuli will detect it.

C. Step 3: Cell-Aware Defect Scoring

The third step of the CA diagnosis is to compute a defect score. The input for this step is the defect matrix created by the analog diagnosis fault simulation. In addition, we used the physical properties of each defect. Consider the following simplified formula, which compares the simulated defect behavior with the actual defect behavior: $$ S = \frac{AF}{RF} \times 100\%. $$

S is the score in percent, AF is the number of actual failing patterns that match the required failing patterns, and RF is the number of failing patterns required to detect the defect completely. The actual formula that we used also considers the physical properties, e.g., for bridges, the distance between the objects and the length of the bridging area. The result of this scoring process is a list containing all CA defects, sorted from the highest score to the lowest, as shown in Table IV.
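
A toy version of this scoring, ignoring the physical-property terms of the actual formula, could look as follows; the defect names, pattern indices, and per-defect fail sets are invented.

```python
def score_defects(defect_matrix, observed_fails):
    """Toy version of S = AF / RF * 100%. For each defect, RF is the
    number of applied patterns that the analog simulation says must
    fail if the defect is present, and AF is how many of those
    patterns actually failed on the tester."""
    scores = {}
    for defect, required_failing in defect_matrix.items():
        rf = len(required_failing)
        if rf == 0:
            continue  # defect not detectable by the applied patterns
        af = len(required_failing & observed_fails)
        scores[defect] = 100.0 * af / rf
    return sorted(scores.items(), key=lambda kv: -kv[1])

matrix = {                       # hypothetical per-defect fail sets
    "D30_bridge": {3, 7},
    "D12_open": {3},
    "D44_tleak": {1, 3, 7},
}
print(score_defects(matrix, observed_fails={3, 7}))
# [('D30_bridge', 100.0), ('D12_open', 100.0), ('D44_tleak', ~66.7)]
```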

Table IV Cell-Aware Defect Scoring Table

D. Step 4: Defect Highlighting

The fourth step of the CA diagnosis is to create a layout marker file such that the CA diagnosis results can be highlighted graphically. For this, the scored defect list created during the third step of the flow is used as one of the inputs, along with the physical properties of each defect. The marker file is then used as input for modern GDS viewers to highlight the scored CA defects in the original cell layout.

E. Cell-Aware Diagnosis Result

The CA diagnosis method described in the previous four subsections has been applied to various failing parts of the AMD 32 nm processor.

The traditional layout-aware diagnosis result for one of those failing parts contained just two potential failing instances.

One was an XNOR gate with two inputs, and the other was a NAND gate with four inputs. Both candidates are connected to each other and lie on an observation path to an SFF; see Fig. 28.

Fig. 28. Logic diagram at suspect defect location.

The results from the CA defect scoring table for the NAND4 gate are shown in Table IV.

Column 1 in Table IV is the defect identification number. Score is a measure of the defect probability as calculated by the equation detailed in Section VIII-C. The defect-type column lists the CA defect type. The Net1 and Net2 columns give the cell-internal net names in the transistor schematic. The Layer column contains the process layer (e.g., M01 = metal1, DIFF = diffusion, VIA = contact from metal1 to metal2). The X, Y column contains a rectangle identification (R) number and the X, Y coordinates of the rectangle's center. A graphical representation of the CA diagnosis result from Table IV is highlighted in Fig. 29.

Fig. 29. Cell-aware PFA guidance.

The potential defective areas calculated by the CA diagnosis experimental flow are all related to the C input of the NAND4 cell, specifically the area extending from the C input to the right toward its connected p-transistor named MP2. The red outlined rectangles at “C” are potential open defects. The shaded rectangles above and below “C” are potential bridge areas. These highlighted defects, together with their layer and X, Y coordinates, have been used to ease and guide the PFA process.

F. Physical Failure Analysis

To further prove that CAT patterns correctly detect physical defects and that the CA diagnosis is effective, the selected part was put through the PFA process. As a consequence of the destructive nature of PFA, we utilized a multistep approach. This process begins with a fault-isolation technique known as laser voltage probing (LVP) [29]. In LVP, the silicon is exposed to an infrared laser, and the tool generates waveforms by analyzing the reflected laser light, which is modulated by the electric field in the space-charge regions of the transistor.

Physical de-processing based on the fault isolation was conducted, and no physical anomaly was observed on any metal layer down to metal1. Nano-probing [30] was then performed on each suspected transistor in the NAND4 cell, which identified the pMOS transistor receiving the C input signal as being non-responsive to the gate voltage.

Finally, a focused ion beam (FIB) [31] cross section was performed to collect a lamella for scanning transmission electron microscopy (STEM) analysis [32]. The gate and the pMOS source contact within the lamella are shown in Fig. 30(a).

Fig. 30. (a) FIB section for STEM analysis. (b) STEM picture of cross section.

The STEM analysis in Fig. 30(b) clearly indicated a broken poly line in the region between the nMOS and the pMOS. The physical gate contact lies closer to the active region of the nMOS, so controlling the nMOS gate was still possible. However, the connection to the pMOS poly gate was broken. This proves that CAT correctly detected a real physical cell-internal defect, and it confirms the correct prediction of the CA diagnosis defect scoring, in which this defect was the third-highest-scoring candidate.

SECTION IX

FINFET TECHNOLOGIES

The CAT methodology fully supports FinFET technologies. The CAT view generation process for FinFET technologies is in principle the same as for other technologies. For FinFET transistors the analog fault simulation introduces transistor defects per fin, so that transistor drive-strength and leakage defects can be analyzed accurately per fin.

Because of the 3-D nature of a FinFET transistor, as shown in Fig. 31, each fin of the 3-D transistor can have defects of its own, which will result either in reduced drive strength, because one or more fins are not operating as they should, or in leakage current within one or more fins of the transistor.

Fig. 31. 3-D FinFET transistor.

The leakage defects are analyzed by CAT by inserting and simulating different leaking resistors from drain to source as shown in Fig. 32, indicated by the red resistors.

Fig. 32. FinFET leakage and drive-strength defects.

Drive-strength defects are analyzed by simulating different resistor values for the drain and source resistors as shown in Fig. 32, indicated by the green resistors R1 and R2.

The black falling edge in Fig. 32 represents a fault-free cell output waveform. Depending on the severity of a leakage or drive-strength defect, a larger delay will be observed at the cell output (see the green falling edges), and in addition, in the case of a leakage defect, the final settled state may not reach the required low or high level (see the red falling edges).

Simulating drive strength and leakage defects accurately per fin ensures that the CAT ATPG is forced to generate all needed cell input conditions to fully test the drive-strength and leakage defects for FinFET technologies.
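
As a sketch of this per-fin enumeration, with hypothetical netlist records and resistor values (the actual CAT tool's defect list format is not published), a FinFET modeled as parallel per-fin devices yields leakage and drive-strength candidates for each fin:

```python
# Sketch: enumerating per-fin defect candidates for a FinFET transistor
# modeled as parallel per-fin devices. Records and values are
# illustrative, not taken from the actual CAT tool.

LEAK_OHM = [1e3, 1e5, 1e7]   # drain-to-source leakage severities
DRIVE_OHM = [1e3, 1e5, 1e7]  # series drive-strength severities

def per_fin_defects(transistor, n_fins):
    defects = []
    for fin in range(n_fins):
        for r in LEAK_OHM:   # resistor from drain to source of one fin
            defects.append(("tleak", transistor, fin, r))
        for r in DRIVE_OHM:  # series resistor in drain/source of one fin
            defects.append(("tdrive", transistor, fin, r))
    return defects

print(len(per_fin_defects("MP2", n_fins=4)))  # 24 candidate defects
```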

SECTION X

FUTURE WORK

In the past years, we concentrated our research and development effort on methods and tools to ensure that CAT detects otherwise undetected defects. We have now proven, through high-volume production test results from over 50 million parts in 28, 32, and 350 nm technologies, that CAT uniquely detects otherwise undetected defects; the defect rate measured in DPPM is reduced significantly. We will concentrate our future research and development work on further decreasing the CAT pattern count and the related production test costs. We see various opportunities to achieve this goal, e.g., by taking defect probabilities, defect impact, and design-for-manufacturing aspects into account. Our final goal is to make SA and TR patterns obsolete by applying only CAT patterns, which are by definition a superset of SA and TR patterns. In addition, we will also continue our research and development work on CA diagnosis [33] to enable a fast yield ramp.

Acknowledgment

The authors would like to thank J. Rivers, A. Over (AMD), W. Howell, M. Patyra (INTEL), R. Arnold, M. Baby, M. Beck (INFINEON), Z. Susser (ON), C.P. Thomas, W. Ke (CSR), T. Fryars (TI), L. Richter (BROADCOM), T. Latzke, S. Karunanayake, B. Huynh (MARVELL), V. Vorisek, T. Sorokin (FREESCALE), T. Herrmann (GF), F. Yan (LSI), S. Eichenberger, T. Waayers, R. Hinze, B. Kruseman (NXP), R. Krenz-Baath (HSHL), as well as S. Komar, M. Laplante, O. Osmani, K. Maruo, H. Keller, A. Sticht, M. Wittke, G. Mueller, J. Schmerberg, S. Ochsenknecht, R. Press, E. Polyakov, H. Tang, M. Kassab, and B. Benware (MENTOR) for their assistance, valuable discussion, implementations, and insight over the course of developing the CAT method.

Footnotes

This paper was recommended by Associate Editor X. Wen.


Authors

Friedrich Hapke


Friedrich Hapke (M’08) received the Diploma in electrical engineering from the University of Applied Sciences, Hamburg, Germany.

He is the Director of Engineering Germany, at Mentor Graphics Silicon Test Solution Division. His primary focus is in research and development of new methods and tools for supporting defect-oriented cell-aware testing, IEEE P1687, logic BIST, boundary-scan, and cell-internal failure diagnosis. His interests also include electronic design automation in general for deep-submicron technologies. Before joining Mentor Graphics, he held various research and development management positions at NXP and Philips Semiconductors. He has authored and co-authored several publications and holds over 20 patents in the area of design for test. His recent publications have been on the topic of defect-oriented cell-aware testing at the International Symposium on VLSI Design Automation and Test, 2011, in Taiwan, at the Design Automation and Test in Europe, 2012, in Dresden, Germany, at the European Test Symposium, 2012 in Annecy, France, at the International Test Conference, 2012, in Anaheim, CA, USA, at the European Test Symposium 2013, in Avignon, France, and at the International Symposium for Testing and Failure Analysis, 2013, in San Jose, CA, USA.

Wilfried Redemund


Wilfried Redemund (M’10) received the Diploma in technical informatics from the University of Applied Sciences, Hamburg, Germany, in 1986.

He is a Principal Engineer and a Software Architect with Mentor Graphics Silicon Test products. Prior to joining Mentor Graphics, he was an independent consultant in the DFT and test area. He has co-authored several professional publications about cell-aware test methods and holds a U.S. patent in the same area.

Andreas Glowatz


Andreas Glowatz (M’03) received the Diploma in electronics and computer science from the University of Applied Sciences, Hamburg, Germany, in 1988.

He is a Principal Engineer and a Software Architect with Mentor Graphics Development, Hamburg, Germany. He has over 25 years of experience in research and development of DFT tools, mainly focused on ATPG, test compression, and cell-aware test. He holds several patents and has co-authored several publications in this area.

Janusz Rajski


Janusz Rajski (A’87-SM’10-F’11) received the Ph.D. degree in electrical engineering from the Poznań University of Technology, Poznań, Poland, in 1982.

He is a Chief Scientist and the Director of Engineering with the Silicon Test Solutions Division at Mentor Graphics, Wilsonville, OR, USA. He has published over 220 research papers and is a co-inventor of 81 U.S. and 27 international patents. He is also the principal inventor of the embedded deterministic test (EDT) technology used in the first commercial test compression product, TestKompress. He co-authored the book Arithmetic Built-In Self-Test for Embedded Systems, published by Prentice Hall in 1997.

Dr. Rajski was the co-recipient of the 1993 Best Paper Award for the paper on logic synthesis published in the IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, co-recipient of the 1995 and 1998 Best Paper Awards at the IEEE VLSI Test Symposium, co-recipient of the 1999 and 2003 Honorable Mention Awards at the IEEE International Test Conference, as well as the co-recipient of the 2006 IEEE Circuits and Systems Society Donald O. Pederson Outstanding Paper Award recognizing the paper on EDT published in the IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS. He has served on the technical program committees of various conferences, and has also served as a Program Chair of the IEEE International Test Conference.

Michael Reese


Michael Reese (M’01) received the B.S. degree in chemical engineering from the University of Texas at Austin, Austin, TX, USA, and the M.S. degree in electrical engineering from Walden University, Minneapolis, MN, USA.

He is a Senior Member of Technical Staff with Advanced Micro Devices, Austin, TX, USA, with over 25 years of industry experience, half of which he spent in semiconductor manufacturing and the balance in semiconductor design focused on design for test. His current research interests include scan compression hardware for AMD's CPU core team. He has also published several papers on the emerging technology known as cell-aware ATPG.

Marek Hustava


Marek Hustava received the Diploma in computer science from the Slovak University of Technology in Bratislava, Bratislava, Slovakia, in 2002.

He is a Principal Digital Engineer and a Digital Group Leader with the ON Semiconductor Design Center, Brno, Czech Republic. He has worked for several years on the design of complex automotive mixed-signal ICs and is responsible for the digital design methodology, including the DFT methodology.

Martin Keim


Martin Keim (M’06) received the Ph.D. degree in informatics from the Albert-Ludwigs University of Freiburg, Freiburg im Breisgau, Germany.

He joined the Silicon Test Solutions Group of Mentor Graphics, Wilsonville, OR, USA, in 2001, where he is currently an Engineering Manager of the memory built-in self-test team. He is an active member of the IEEE P1687 Working Group and was an Editor of the sixth edition of the Microelectronics Failure Analysis Desk Reference Manual, responsible for the test and diagnosis chapters. He also holds several national and international patents and has authored several technical publications.

Dr. Keim has served for several years on the organizing committee of the International Symposium for Testing and Failure Analysis, for which he will be the General Chair in 2016.

Juergen Schloeffel


Juergen Schloeffel (M’04) received the Diploma in physics from the University of Goettingen, Goettingen, Germany.

He is a Program Manager in the area of EDA and DFT with Mentor Graphics Development, Hamburg, Germany. His current research interests include advanced testing techniques, IJTAG, design automation for DSM technologies, and 3-D test. He is a member of VDE, and a Board Member of the German ITG Working Group for test and reliability. He holds several patents and has authored and co-authored over 60 conference papers and journals. He has served on program committees of several conferences and workshops.

Anja Fast


Anja Fast initially completed an apprenticeship as a hotel manageress in Hamburg, Germany, in 1992. She is now a Technical Assistant with Mentor Graphics Development, Hamburg, Germany. Before joining Mentor Graphics in 2012, she was self-employed in technical documentation and worked as an independent consultant for Mentor Graphics in the area of DFT. She has co-authored several professional publications about cell-aware test methods.
