
Basics of DFT in VLSI Scan Design and DFMA


Let us talk about DFT in VLSI, that is, Design for Testability (scan design for testing), and not the Discrete Fourier Transform from mathematics. The main purpose of DFT engineers in VLSI is to incorporate some extra logic structures into the design to make testing easy and cost-effective, and to make the design efficient for manufacturing and assembly (DFMA). In short, we are going to add testability features to the design of a hardware product.

After going through this post completely, we should be able to answer all of the following questions, and these same questions can be asked in a VLSI interview.

What is DFT in VLSI?
Why is DFT used in VLSI?
Why is DFT required?
What is DFT architecture?
What is DFT verification?
What is a DFT engineer?
What is DFT in DSP?
What is DFT and DFM?
What is DFT and its properties?
Why is DFT important in the ASIC flow?
What are the types of BIST?
What is BIST mode?
What does testability mean?
What are the DFT goals and test plans?
How do I become a DFT engineer?

Why is chip testing needed?
What is the difference between FFT and DFT?
What does DFT stand for?
What is CTFT in DSP?
What is the main purpose of DFMA?
What is DFT in PCB?
What are DFM issues?
What is the difference between functional verification and DFT?

Before proceeding further, let us note the full forms of all the acronyms used in this article.

DFT = Design for Testability

VLSI = Very Large-Scale Integration

CMOS = Complementary Metal Oxide Semiconductor

DSP = Digital Signal Processing

PCB = Printed Circuit Board

BIST = Built-In Self-Test (MBIST = Memory BIST, LBIST = Logic BIST)

DFM/DFMA = Design for Manufacturing and Assembly

ASIC = Application-Specific Integrated Circuit

ATPG = Automatic Test Pattern Generation

Basics of Testing

What is controllability in DFT?

Controllability is a measure of how easy (or difficult) it is to set a signal line to a required logic value, 0 or 1, from the input side of the design (from the primary inputs).

What is observability in DFT?

Observability in DFT is a measure of how easy (or difficult) it is to propagate the logic value (0 or 1) on a signal line to the output side of the design (to a primary output), where it can be observed.

What is the difference between a fault, failure, and an error?

A fault is a physical damage or defect, compared to a good system, which may or may not cause a system failure. An error is an erroneous system state caused by a fault. Finally, a failure occurs when the system no longer provides its expected service.
In short: a fault causes an error, which can lead to a system failure.

What are the different Fault Models?

Fault modeling plays a very important role in reducing the burden of testing, because many physical defects map to a single fault at a higher level of abstraction, independent of the technology.
We have behavioral fault models at the highest level of abstraction, such as Verilog HDL models.

Then there are functional (truth-table-based) fault models, used for RTL-level testing, such as testing a microprocessor through its instructions. Third is the structural fault model, where we deal with logic gates; this includes the stuck-at faults, stuck-at-1 and stuck-at-0. Fourth is the switch-level fault model, where we deal with transistors (NMOS and PMOS in CMOS); this includes stuck-open and stuck-on faults.

Combinational logic circuits and their fault models in DFT.

Basic Testing Principle

The basic test principle for a combinational logic circuit is as follows: the combinational circuit to be tested is treated as the circuit/unit under test (CUT). Input patterns are applied at the primary inputs of the CUT, which produces an output response. The output response is then compared with the golden response (for example, the correct truth table) to decide whether the circuit is working correctly or not.

Here the logic circuit is analyzed as a five-valued system: 0 (logic zero), 1 (logic one), D = 1/0 (value 1 in the good circuit, 0 in the faulty circuit), D' = 0/1 (value 0 in the good circuit, 1 in the faulty circuit) and X (don't care).

Below are the truth tables for basic logic gates for these five valued systems (0,1,D,D’,X).
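The five-valued algebra behind those truth tables can be sketched in code by treating each value as a (good-circuit, faulty-circuit) pair; the encoding and function names here are illustrative:

```python
# Minimal sketch of the five-valued system (0, 1, D, D', X) used in ATPG.
# Each value is a (good-circuit, faulty-circuit) pair; D = 1/0, D' = 0/1.
# X (don't care / unknown) is modelled as None.
ZERO, ONE, D, DBAR, X = (0, 0), (1, 1), (1, 0), (0, 1), None

def v_and(a, b):
    if a is None or b is None:
        # 0 AND anything is 0, even when the other input is unknown
        return ZERO if a == ZERO or b == ZERO else X
    return (a[0] & b[0], a[1] & b[1])  # AND applied component-wise

def v_or(a, b):
    if a is None or b is None:
        # 1 OR anything is 1, even when the other input is unknown
        return ONE if a == ONE or b == ONE else X
    return (a[0] | b[0], a[1] | b[1])  # OR applied component-wise

def v_not(a):
    return X if a is None else (1 - a[0], 1 - a[1])

# D propagates through an AND gate when the side input is 1...
print(v_and(D, ONE) == D)      # True
# ...and is blocked when the side input is 0:
print(v_and(D, ZERO) == ZERO)  # True
# Inversion turns D into D':
print(v_not(D) == DBAR)        # True
```

These three checks are exactly the propagation rules used later in sensitized-path testing: set AND/NAND side inputs to 1 and OR/NOR side inputs to 0 so that D reaches an output.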




Sensitized path-based testing.

Sensitized path-based testing follows a three-step procedure: step 1, manifestation; step 2, propagation; and step 3, justification.

In step 1, manifestation, the fault site is driven to the value opposite to the assumed stuck-at fault (logic 1 for a stuck-at-0 fault, logic 0 for a stuck-at-1 fault), so that the good and faulty circuits differ at that line.

In step 2, propagation, the main purpose is to propagate the fault effect to a primary output by setting the side inputs of the gates along the path: AND/NAND side inputs to logic 1, and OR/NOR side inputs to logic 0.

Step 3, justification, is done by tracing back from the gate inputs set in step 2 to the primary inputs, finding primary-input values that produce those settings.
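As an illustration, here is the three-step reasoning worked out for a small example circuit, y = (a AND b) OR c, with a stuck-at-0 fault on input a. The circuit is an assumed example (not from the article), and the reasoning is hard-coded; real ATPG algorithms such as the D-algorithm search these choices automatically:

```python
# Sensitized-path test generation, step by step, for a stuck-at-0 fault on
# input 'a' of the example circuit y = (a AND b) OR c.

# Step 1 - manifestation: drive the faulty line to the opposite value.
# The fault is stuck-at-0, so set a = 1 (the fault site now carries D = 1/0).
a = 1

# Step 2 - propagation: make every gate on the path pass the fault along.
b = 1  # the AND gate passes D only if its side input is 1
c = 0  # the OR gate passes D only if its side input is 0

# Step 3 - justification: trace the required gate-input values back to the
# primary inputs. Here b and c already are primary inputs, so we are done.
test_vector = (a, b, c)

good = (a & b) | c    # fault-free response
faulty = (0 & b) | c  # response with 'a' stuck-at-0
print(test_vector, good, faulty)  # (1, 1, 0) 1 0 -> the fault is detected
```

Because the good and faulty responses differ (1 versus 0), applying the vector (1, 1, 0) at the primary inputs detects this fault at the primary output.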

DFT Basics and associated techniques (Ad-hoc and Structured)

Scan-based design technique in DFT

It is a technique for achieving good testability in sequential circuits. A typical sequential circuit has a large number of internal states, so controlling these states would require many input events. To make this simple, dedicated test logic must be inserted at the initial design stage itself. This is known as scan-based design, in which the storage elements are connected in series like a shift register.

DFT Ad-hoc Approach

Ad-hoc DFT techniques are used to improve testability. The typical ad-hoc approaches are:

1. Inserting test points (TPIs) at internal nodes of the design to improve its controllability and observability.

2. Using sequential elements without asynchronous set/reset as scan elements, i.e., avoiding asynchronous flops. Also, avoiding feedback loops in the design.

3. Avoiding redundant logic, and instead partitioning a large logic circuit into small segments or blocks.

The drawback of ad-hoc DFT techniques is that they are neither reusable nor standardized. Every new design must be approached differently, and these techniques can give unpredictable results on new designs.

DFT Structured Approach

The structured approach provides a methodical process to improve testability where the ad-hoc techniques do not. A structured DFT technique is easy to budget and easy to deploy at the initial design stage, and it can be automated using sophisticated DFT tools. It gives predictable results and is more of a test-oriented design technique.

It is the most widely used DFT technique for improving the controllability and observability of a design. In this technique the sequential design operates in three different modes, normal mode (functional mode), shift mode (test mode) and capture mode, each with its associated clock network.

The storage elements (flops) are converted into scan cells (scan elements) to form scan chains. A scan design typically looks like the figure below.

DFT Structured Approach

Typically, there are many ways to design a scan cell, such as the muxed-D scan cell, the clocked scan cell and LSSD (level-sensitive scan design).

DFT muxed-D scan cell

Here, DI = Data Input, SI = Scan Input, SE = Scan Enable, Q/SO = Data Output/Scan Output.
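A muxed-D scan cell is simply a D flop with a 2-to-1 mux on its data input: when SE is high the cell captures SI instead of DI, so a chain of such cells behaves like a shift register. A minimal behavioural sketch in Python (the class and function names are illustrative):

```python
class MuxedDScanCell:
    """Behavioural model of a muxed-D scan cell (illustrative)."""

    def __init__(self):
        self.q = 0  # flop state, visible as Q (functional) / SO (scan out)

    def clock(self, di, si, se):
        # The input mux selects SI when SE = 1 (shift mode),
        # otherwise DI (normal/capture mode).
        self.q = si if se else di
        return self.q


def shift_in(chain, pattern):
    """Shift a test pattern into the chain, one bit per clock, with SE = 1."""
    for bit in pattern:
        si = bit
        for cell in chain:
            prev_q = cell.q                # the old Q feeds the next cell's SI
            cell.clock(di=0, si=si, se=1)
            si = prev_q
    return [cell.q for cell in chain]


chain = [MuxedDScanCell() for _ in range(4)]
# The first bit shifted in ends up in the last cell, so the pattern
# appears reversed along the chain:
print(shift_in(chain, [1, 0, 1, 1]))  # [1, 1, 0, 1]
```

In shift mode the chain loads a test pattern serially; in capture mode (SE = 0) the same flops capture the functional response through DI, which can then be shifted back out and compared with the golden response.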

For example, a partial scan design replaces only a subset of the storage elements with scan cells to form a scan chain.

DFT Partial Scan Design

Design for Debug – debugging using the DFT features.

Design for testability is useful not only for concurrent and non-concurrent testing; the scan chains added for DFT can also be used for debugging the VLSI IC design. A VLSI chip normally works in functional mode (normal mode). A functionally working VLSI chip can be reconfigured into test mode by stopping its clock signal.

During test mode, the DFT scan chains give full control of the VLSI chip: signal lines can be set to any desired value for debugging the IC. Alternatively, an initial state can be scanned into all the available memory elements, and system debug can then continue by going back to normal (functional) mode. By doing so, the system can be brought to a known state in fewer clock cycles.

The ability of scan chains (scan design) to work together with clock-gating circuits in debugging a system is called design for debug, or design for debuggability.

DFMA – Design for Manufacturing and Assembly

Let us briefly understand DFMA, Design for Manufacturing and Assembly. DFMA mainly focuses on two factors: the first is reducing the time to market, and the second is reducing the total product cost. To achieve these two goals, it is very important to consider the ease of manufacturing the product's parts, and also the ease of assembling those parts into the final product. A good design methodology considers these factors early in the product design life cycle.

Earlier, DFMA was treated as two separate design methodologies: DFM (Design for Manufacturing) and DFA (Design for Assembly).

DFM – Design for Manufacturing is mainly involved in time- and cost-reduction processes like the ones below:
1. Choosing cost-effective raw materials for the product.
2. Reducing the complexity of manufacturing operations early in the product design life cycle.

Important factors to consider for DFM are proper planning, selecting materials that are easy to manufacture, knowing the cost-effective manufacturing processes and, finally, using standard components to design the main product. Using standard components reduces new-design costs and improves time to market.

DFA – Design for Assembly is more concerned with minimizing the assembly time, cost and complexity of the product. This can be done by reducing the number of individual parts and avoiding assembly steps for them.

Summary on DFT

Testing and debugging a VLSI chip is a huge task and an expensive effort, so plan for test and debug at an early stage of the design by using scan cells, BIST techniques and ATPG. Leave some extra gates and room for additional logic, to be used later if needed during testing and debugging.

Finding a defect at an early stage of the design is crucial, because finding a defect after the chip has been manufactured is a very big loss: one must go through all the steps again, which doubles the effort and cost of designing the product.
