Design Technology Challenges in the Sub-100 Nanometer Era

(Published in the periodical of the VLSI Society of India, VSI VISION, Vol. 1, Issue 1, 2005)

V. Vishvanathan, C.P. Ravikumar, and Vinod Menezes
Texas Instruments, Bagmane Tech Park, CV Raman Nagar, Bangalore 560093
{vish,ravikumar,inod}@india.ti.com

Abstract

The much-debated Moore's law is expected to hold for another decade, and we have already seen the commercialization of 90 nm and 65 nm technologies. Designing chips in these sub-100 nanometer technologies has proven to be a challenge. Since the cost of manufacturing in these technologies is so high, only major semiconductor vendors appear to be geared to face the technological challenge. The smaller players in the field are looking for alternate solutions such as reconfigurable computing platforms. To push the technological limits and yet be economically viable, it is important to get the chips right the first time. This article explores the challenges of semiconductor design technology that occupy today's design engineers and will continue to do so for some years to come.

I Introduction

Ever since Jack Kilby made the first integrated circuit (IC) in 1958, nothing has remained the same except for the incredible rate at which the IC is shrinking in size. Today's engineers are designing ICs targeted for manufacture in 90 nanometer (nm) and 65 nm technologies, and work is already ongoing on the 45 nm node. There were prophecies about the end of scaling at the turn of the century, when it was believed that the wavelength of light was a limit on the feature size. Yet, submicron and sub-100 nm technologies are now realities. As a consequence, it is now possible to build circuits that occupy less than one square centimeter of silicon and contain more than 100 million transistors. With such huge capacity, the ICs that we design today are not component chips but systems-on-chip (SoC), where the complete functionality of a system is packed into a small piece of silicon.

While the raw power of semiconductor manufacturing technology is impressive, it is only half the story. In today's IC business, the key to success is to be able to rapidly design a differentiated product and quickly bring it to the marketplace. However, this cannot be done without a sophisticated infrastructure of design components and software to support an efficient design process that ensures we manufacture silicon that is right the first time. This infrastructure that supports the design process is called design technology. As we elaborate in greater detail below, the progress of manufacturing technology into the sub-100 nanometer regime has introduced many new complexities into the design process, thereby creating significant challenges in the field of design technology. Solving these design technology challenges is critical to achieving market success with sub-100 nanometer integrated circuits.

Gordon Moore, who cofounded Intel Corporation, is well known for his observation, made in 1965, that the number of devices that can be placed on a chip doubles every year. He later revised this law and predicted that the number of devices will double every two years. He received a B.S. degree in Chemistry from the University of California, Berkeley in 1950 and a Ph.D. in Chemistry and Physics from the California Institute of Technology in 1954. In 1956 he joined Shockley Semiconductor, founded by William Shockley, a co-inventor of the transistor.
In 1957, he left the company along with seven others (Julius Blank, Victor Grinich, Jean Hoerni, Gene Kleiner, Jay Last, Robert Noyce, and Sheldon Roberts) to form Fairchild Semiconductor. In 1968, Moore and Robert Noyce left Fairchild and founded Intel. In 2001, Moore and his wife Betty made an educational donation of $600 million to Caltech. Moore is currently the Chairman Emeritus of Intel.

II Semiconductor Manufacturing Technology Trends

It would be an understatement to say that progress in semiconductor manufacturing technology has been phenomenal. A combination of evolutionary and revolutionary ideas has ensured that Moore's law stays on track, as it has for the past three decades. Every time we thought fundamental limits had been reached, scientists and engineers made a breakthrough. As we progress towards very deep sub-micron feature sizes, at the dawn of the nanometer era, we are once again faced with new challenges, be it in process, circuit design, or design technology.

Business needs drive technology. Reducing die size reduces cost. It also enables us to integrate multiple features in a single die more economically, leading to super-chips or what is known as the System-on-Chip (SoC). Shrinking, however, does not come for free and brings both positives and negatives. Approximately every two years, device dimensions shrink by 30%, doubling the number of transistors that fit in the same area. This has historically been accompanied by a doubling of clock frequency and corresponding gains in performance and area. However, physical realities on silicon paint a different picture: shrinking dimensions bring reliability and functional issues with them.

The manufacturing process can be subdivided into two parts: the front end of line (FEOL) and the back end of line (BEOL). FEOL deals with the manufacture of the Metal Oxide Semiconductor (MOS) transistors, which are basically switches. Shrinking the transistor leads to electrical stresses that cause transistors to leak, such that they may never completely turn off. Reduced gate oxide thickness causes gate leakage, which can be as significant as sub-threshold leakage. Ion implants are required to alter the threshold voltage (VT) of the transistor to ensure that the gate retains control of the channel.

BEOL deals with the manufacture of the interconnect. While transistor scaling ensures that we can pack more devices into the same area, an efficient metal stack is required to wire all these transistors together; if it did not scale as well, we would not get our 50% area reduction every technology node. To keep pace, metal width and pitch are shrinking and additional layers of metal are being added. The reduction in metal width increases resistance, which is compensated by vertical scaling; this in turn increases coupling capacitance, which results in more crosstalk between signal lines. Yield is a metric of manufacturability, but it is not limited to process defects. Today, yield can be impacted by crosstalk, which can cause glitches on data or clock signals, resulting in intermittent failures that can be nearly impossible to debug if not caught early in the design. The role of design technology, therefore, is to provide SoC designs with the right methodology and flow so that they can maximally leverage the gains of each technology shrink, while insulating them from the electrical challenges caused by scaling.

III Design Technology

At a high level, design technology can be viewed as having two separate but interacting parts, namely, Design Components and Design Flow. Design Components consist of the various building blocks that are necessary to create the design. At a minimum this includes a library of logic cells (also known as a standard cell library), input/output (I/O) cells that interface the chip to the outside world, and memory blocks. Increasingly, however, as we move into the SoC era, design components are becoming varied and complex and could even be a full-fledged processor.
These complex design components are usually referred to as intellectual property (IP), and the ready availability of the right IP often differentiates winners from losers in the race to the marketplace.

Building a large complex chip is similar to constructing a building from a standard set of pre-fabricated bricks, doors, or perhaps even pre-fabricated walls. In the chip-building business these blocks are made available in a standard cell library. As customers' tastes differ, one needs to offer variants of the library to please everyone. Traditionally, the offerings have been high-density or high-performance libraries. Today's libraries are very sophisticated because of the demands made on them. A library needs to meet customer requirements such as performance (MHz), active power dissipation (mW/MHz), density (Kgates/mm^2) and standby leakage (nA/gate). These objectives are met through careful design of the building blocks so that they are both individually efficient and collectively efficient when used on the chip.

In the nanometer era, timing accuracy has moved into the picosecond range, which requires greater accuracy in circuit simulation and modeling. Since a transistor-level circuit simulator would be very inefficient, or perhaps impossible, to run at the chip level, timing models of the library cells are created. These timing models are pre-characterized tables or equations that capture the delay and output slew of a logic gate as a function of the input slew and output load it sees (a minimal sketch of such a table lookup appears below). Selection of the load and slew characterization points is an art; making this selection without knowledge of chip-level conditions can lead to significant inaccuracies in timing. High-speed clocking requires special care in the design of clock buffers so that insertion delays are reduced and the cells produce balanced delays for both low-to-high and high-to-low edges. Double data rate (DDR) applications put further constraints on the duty cycle and, correspondingly, on the jitter budget.
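To make the idea of a pre-characterized timing model concrete, the fragment below is a minimal, hypothetical sketch of a lookup-table delay model: gate delay is tabulated against a few input-slew and output-load points, and bilinear interpolation is used between them. This is not any vendor's actual library format, and the slew, load, and delay values are invented purely for illustration.

# Minimal sketch of a lookup-table style cell delay model.
# All table values are invented for illustration only.

import bisect

slew_points = [0.05, 0.20, 0.80]   # input slew (ns), characterization points
load_points = [0.01, 0.05, 0.20]   # output load (pF), characterization points

# delay_table[i][j] = delay (ns) at slew_points[i], load_points[j]
delay_table = [
    [0.030, 0.055, 0.120],
    [0.045, 0.070, 0.140],
    [0.090, 0.115, 0.190],
]

def _bracket(points, x):
    """Return indices (lo, hi) of the table points surrounding x (clamped)."""
    i = bisect.bisect_left(points, x)
    i = min(max(i, 1), len(points) - 1)
    return i - 1, i

def cell_delay(slew, load):
    """Bilinear interpolation of the delay table at (slew, load)."""
    i0, i1 = _bracket(slew_points, slew)
    j0, j1 = _bracket(load_points, load)
    ts = (slew - slew_points[i0]) / (slew_points[i1] - slew_points[i0])
    tl = (load - load_points[j0]) / (load_points[j1] - load_points[j0])
    d00, d01 = delay_table[i0][j0], delay_table[i0][j1]
    d10, d11 = delay_table[i1][j0], delay_table[i1][j1]
    return (1 - ts) * ((1 - tl) * d00 + tl * d01) + ts * ((1 - tl) * d10 + tl * d11)

print(round(cell_delay(0.10, 0.03), 4))  # delay at an intermediate slew/load point

The point of the sketch is only that chip-level timing is computed from such interpolated models rather than from transistor-level simulation; how well the characterization points cover real chip conditions determines the accuracy.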

Complex library cells require detailed analysis through statistical simulation, using statistical models of the transistors and passive components. Understanding the impact of process variation is needed to guarantee functionality under all process, voltage, and temperature variations. At the chip level this is handled by accounting for on-chip variation (OCV) during the timing analysis phase. The reduced size of sequential elements such as latches and flip-flops makes them prone to soft errors: alpha particles from the environment, including the package and the lead in solder balls, can upset the charge being held at a node, which can flip the state of a storage element. Robust circuit and layout styles are required to keep the soft error rate (SER) low; such soft errors are not physically damaging but can corrupt data inside a chip. Embedded memory designers also need to trade off static noise margin (SNM) against bit-cell density. As the threshold voltage drops to accommodate supply voltage scaling of roughly 30% per node, the leakage component of power increases. Transistor leakage, once confined to picoamperes per micron of gate width, is now in the upper nanoamperes per micron, resulting in amperes of static leakage current if power management techniques are not employed (a rough worked estimate appears below). Innovative process, circuit, and chip design technology is required to address such issues.

The previous paragraphs have described the many challenges in ensuring that the right components are in place for the timely design of an SoC. A Design Flow provides the software infrastructure that makes it possible to assemble the SoC while meeting the speed, power, and area targets for the design. The flow consists of many steps that designers need to use correctly in order to meet the particular performance, area, and power goals of the chip, while ensuring that they deliver the final layout to manufacturing on schedule. In many modern IC design companies the design team is distributed all over the world, and the design task is split into parts that are handled by geographically separate teams. Managing the correct transfer and assembly of data while the parts of the chip are being designed concurrently is a major challenge that must be addressed by the flow, and it requires sophisticated software techniques. Flow support for such concurrent design is not a luxury but a necessity if we are to leverage the best available talent anywhere in the world to get the chip designed right and designed fast. As previously mentioned, the design flow consists of many steps, each of which needs complex engineering. Since it is not possible in one article to do justice to the technical challenges in each of these areas, we have picked two major sub-flows, namely, Physical Design Closure and Test.

IV Physical Design Closure

In the nanometer era, complex electrical effects are making the timely design of functional, reliable chips a major challenge. These electrical effects are problems of the small, since they relate to phenomena that happen at the nanometer scale. The same technology also creates for us problems of the large, since it makes possible the design of chips with hundreds of millions of transistors and complex interconnections. To further compound the challenge, these effects must be controlled with a methodology and flow that delivers high-performance, low-power, low-cost chips under aggressive time-to-market constraints.
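As a brief aside before going deeper into physical design closure, the leakage figures quoted in the previous section can be turned into a rough chip-level estimate. The transistor count, average gate width, and off-current below are round assumed numbers, not measured data:

\[
I_{\mathrm{leak}} \approx N \, W_{\mathrm{avg}} \, i_{\mathrm{off}} \approx 10^{8} \times 1\ \mu\mathrm{m} \times 10\ \mathrm{nA}/\mu\mathrm{m} = 1\ \mathrm{A}
\]

Even with these crude assumptions, the static current comes out on the order of an ampere, which is why the power management techniques mentioned above become unavoidable at these nodes.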
In order to verify the integrity of the design in the presence of the electrical effects described above, design flows until recently relied primarily on checkers that are run once the physical design is complete. While such checks are important and necessary to highlight the existence of a design integrity problem, the drawback of this approach is that it gives the designer little help in quickly solving the problem by modifying the chip layout, i.e., in quickly achieving physical design closure. This challenge is currently being very actively addressed by design technology specialists.

The basic requirement of physical design closure is well understood: a large percentage of manufactured chips should be functionally correct at the required clock speed. However, the manufacturing technology trends described previously have created two physical effects that complicate this problem: the coupling capacitance between signal wires, and the voltage drop (IR drop) on the power grid that distributes the power supply to all parts of the chip.

In nanometer technologies, coupling capacitance dominates the total capacitance of a wire, resulting in significant crosstalk between two signal wires that are in close proximity. As a consequence, two complications occur. First, the delay on a path varies significantly depending on the relative switching patterns of wires that are close to each other (the sketch below illustrates the effect). Second, substantial glitching can occur for the same reason, resulting in a significant reduction in noise margin and a possible erroneous state in the flip-flop at the tail of the path. A proper analysis of the effect of coupling capacitances on delay and noise requires detailed parasitic information, which is best extracted after routing is complete.
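The following fragment is a toy illustration, with invented numbers and not a sign-off model, of why neighbor activity matters: the capacitance a driver effectively sees on a victim net is often approximated as its ground capacitance plus the coupling capacitance scaled by a switching-dependent factor, commonly taken as 0 when the aggressor switches in the same direction, 1 when it is quiet, and 2 when it switches in the opposite direction.

# Toy illustration of crosstalk-dependent delay (all values invented).
# Effective capacitance model: C_eff = C_ground + k * C_couple, where the
# Miller-style factor k depends on what the neighboring (aggressor) wire does.

R_driver = 2000.0    # driver resistance, ohms (assumed)
C_ground = 50e-15    # victim wire capacitance to ground, farads (assumed)
C_couple = 80e-15    # coupling capacitance to the aggressor, farads (assumed)

miller_factor = {
    "same_direction": 0.0,      # aggressor switches with the victim
    "quiet": 1.0,               # aggressor holds its value
    "opposite_direction": 2.0,  # aggressor switches against the victim
}

for activity, k in miller_factor.items():
    c_eff = C_ground + k * C_couple
    delay_ps = 0.69 * R_driver * c_eff * 1e12   # simple RC (Elmore-style) delay
    print(f"{activity:>18}: {delay_ps:6.1f} ps")

With these assumed values the same stage delay ranges from roughly 69 ps to about 290 ps depending purely on what the neighboring wire does, which is the delay variation the text refers to.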

At this post-route stage the analysis is accurate, but the disadvantage is that the corresponding fix may not be easy, since the layout is completely done and committed. In this scenario, the required routing changes may not be possible due to congestion, while buffer insertion and other netlist changes may trigger a complete re-layout of the chip, thereby delaying its release to manufacturing. Thus, the approach currently pursued in state-of-the-art design flows is one of avoidance. Starting early in the physical design process, the layout software attempts to automatically avoid situations that can result in significant crosstalk. Additionally, during the physical design process the effect of crosstalk is estimated and the layout is modified to eliminate any potential problems. This is an active area of research and development, with the goal of developing a design flow in which the layout is correct by construction and guaranteed to be free of crosstalk problems.

The Power of VLSI Circuits

There is no second word on the power of VLSI circuits: they enable us to build powerful systems on a single chip. There is another way in which VLSI chips are becoming powerful. As device density and operating frequency increase, the power consumption of the chips is also increasing dramatically. In addition to dynamic power, the leakage power of CMOS circuits has become a big concern in nanometer technologies. The power density of nano-chips has already reached that of an electric hot plate, is heading towards that of a nuclear reactor and a rocket nozzle, and is projected to reach the power density of the core of the Sun!

IR drop is a phenomenon driven by current and metal resistance. The voltage applied at the pads drops as current flows along the power rails, due to the rail resistance. The cells therefore do not see the full supply voltage, which directly translates to higher cell delay. A badly designed power grid can cause a large voltage drop in parts of the chip; if this coincides with a critical path, it results in a functional failure at the required speed. The safe way of avoiding IR drop problems is to make sure that the power routes are sufficiently wide and have adequate decoupling capacitance. However, such an approach cannot be applied indiscriminately, as it may use up too much routing space on the chip. Thus, an aggressive designer might want to minimize the IR drop only for those portions of the chip that are timing critical. A number of issues remain to be solved before such an approach can be used routinely. First, the standard timing analysis tools available today do not adequately account for the effect of varying IR drop on timing. Second, such an approach would ideally require a dynamic (time-dependent) IR drop calculation, which is a major computational and capacity challenge for today's design flows.

Manufacturing in the nanometer era presents its own set of challenges. The features of the layout do not get transferred to silicon in an identical fashion; thus, what you see (on the screen of the workstation) is not what you get (in silicon). For example, a metal line may actually turn out jagged. Size parameters such as the width and length of transistors and interconnect are therefore reduced to statistical quantities. Similarly, the threshold voltage and other parameters associated with the transistors may vary from one transistor to another due to the vagaries of the manufacturing process. While this variation is not in itself a new phenomenon, its impact is very significant in nanometer technologies.
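As a purely illustrative sketch of how such parameter variation is translated into a timing spread, one can sample the threshold voltage of each gate on a path and observe the resulting distribution of path delay. The distribution, sensitivity, and nominal values below are assumptions for illustration, not foundry data.

# Illustrative Monte Carlo sketch: per-transistor threshold-voltage variation
# turned into a spread of path delay. All numbers are assumptions.

import random
import statistics

NOMINAL_VT = 0.35          # volts (assumed)
SIGMA_VT = 0.02            # standard deviation of Vt variation, volts (assumed)
NOMINAL_STAGE_DELAY = 20.0 # picoseconds per gate at nominal Vt (assumed)
DELAY_SENSITIVITY = 150.0  # extra delay in ps per volt of Vt increase (assumed)
STAGES = 20                # gates on the path
TRIALS = 10000

def path_delay():
    total = 0.0
    for _ in range(STAGES):
        vt = random.gauss(NOMINAL_VT, SIGMA_VT)
        total += NOMINAL_STAGE_DELAY + DELAY_SENSITIVITY * (vt - NOMINAL_VT)
    return total

samples = [path_delay() for _ in range(TRIALS)]
mean = statistics.mean(samples)
sigma = statistics.stdev(samples)
print(f"mean path delay : {mean:.1f} ps")
print(f"std deviation   : {sigma:.1f} ps")
print(f"mean + 3 sigma  : {mean + 3 * sigma:.1f} ps")  # a pessimistic corner

Statistical characterization and on-chip-variation margins in timing analysis are, in essence, disciplined ways of capturing the spread that such a simulation exhibits.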
Techniques such as optical proximity correction (OPC) are used to counter the inaccuracy in transferring feature sizes from layout to silicon. It is also necessary to perform statistical characterization of timing to take into account the variation of parameters from one transistor to another.

V Test

Even though one can verify a design and be confident about its correctness, things can and do go wrong during manufacturing. A dust particle, a surface imperfection in the silicon wafer, or impurities in the silicon can cause a manufactured circuit to fail. A resistive contact can result in a timing failure. Therefore, every piece of silicon that is manufactured must be tested to ensure that quality products reach the customer. Testing is performed by applying pre-calculated input patterns to the manufactured circuit and measuring the outputs. Since the expected values are known from the simulation of a good circuit, these measurements can reveal faults in the manufactured circuit. Functional tests are applied to check whether the behavior of the manufactured circuit matches its intended functionality. However, far too many functional tests would be needed to get 100% confidence in the correctness of the circuit; therefore, test engineers generate structural tests for the design. Such a test targets a possible fault and provides an input pattern that can expose this fault (a toy illustration appears below). Generating the smallest number of structural tests that uncover the highest percentage of faults is a big challenge. At one end, the fault universe is expanding (stuck-at faults, delay faults, bridge faults, reliability defects); at the other end, the test pattern volume must be kept low to avoid escalation of test cost. When the volume of test data increases, the application of the tests becomes slow, and the test cost is directly proportional to the test application time.
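The fragment below is a deliberately naive illustration of what a structural test is: a pattern that makes a good circuit and a circuit containing a particular stuck-at fault produce different outputs. The three-gate circuit is made up for this example, and the brute-force search stands in for a real test pattern generation algorithm.

# Naive illustration of structural (stuck-at) test generation.
# Circuit (made up for this example): out = (a AND b) OR (NOT c)

from itertools import product

def good_circuit(a, b, c):
    return (a & b) | (1 - c)

def faulty_circuit(a, b, c, stuck_node, stuck_value):
    """Same circuit, with one internal node forced to a constant (stuck-at fault)."""
    nodes = {"n_and": a & b, "n_not": 1 - c}
    nodes[stuck_node] = stuck_value
    return nodes["n_and"] | nodes["n_not"]

def find_test(stuck_node, stuck_value):
    """Brute-force search for an input pattern that exposes the fault."""
    for a, b, c in product([0, 1], repeat=3):
        if good_circuit(a, b, c) != faulty_circuit(a, b, c, stuck_node, stuck_value):
            return (a, b, c)
    return None  # fault is undetectable with this simple model

# A test for "output of the AND gate stuck at 0":
print(find_test("n_and", 0))   # (1, 1, 1): good output is 1, faulty output is 0

On a real design the fault list runs into the millions, which is why generating a compact pattern set with high fault coverage is the hard problem the text describes.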

The IC testers used to test modern VLSI designs cost several million dollars and are expensive to maintain, and in full-volume production several such testers are required. The cost of testing is quoted in cents per chip per second of tester engagement time, and this value is going up due to the escalation in tester costs. In fact, the test cost today is a significant percentage of the total cost of a chip, comparable to the design and manufacturing costs. One of the tough challenges test engineers face is to bring down test cost. The two ways to keep test cost under control are to bring down tester cost and to reduce test data volume. In either case, the design must be modified to make it testable. For example, the scan test approach eases structural testing by allowing full access to every storage cell in the design; this approach is also known as Design for Test (DFT). Similarly, built-in self-test (BIST) is a technique whereby test patterns are generated on-chip and the comparison of responses is also done on-chip. Since modern SoC devices contain a large number of memories, BIST is the customary test methodology for memories. Logic BIST is used for testing the digital logic in the circuit; in this technique, random test patterns are generated on-chip and the circuit responses are compressed into a signature that is compared with the signature of the good circuit (a minimal sketch appears below).

Insertion of test circuitry such as scan chains, logic BIST, and memory BIST is known as Test Synthesis. A major challenge in an SoC design flow is Test Closure, which aims to minimize the impact of test synthesis on the original design. The timing, area, and power dissipation of the original circuit can be altered by the insertion of DFT and BIST logic. In fact, since several electronic design automation (EDA) tools are used to achieve test synthesis, even the functional behavior of the original design must be verified again after test synthesis; formal verification is useful here to establish that the behavior of the circuit remains the same before and after test synthesis.

Test pattern generation tools assume that gates and interconnects have zero delay and that input signals arrive with no delay. This, of course, is far from reality. A tester that applies the patterns cannot ensure that all the inputs change in sync with the clock. Test application involves measuring output voltages after applying a test pattern, and the time taken for the output signal to settle is important in ensuring the correctness of the measurement; the measurement must be made within a time window for a decision based on it to be trustworthy. The delays of gates and interconnects can be extracted after the layout is complete, and the circuit is then simulated with the test patterns, taking these delays into account. Patterns that cannot guarantee a safe measurement are dropped, and the resulting loss in fault coverage is made up with additional test patterns. Since chip-level timing simulations are very expensive, innovative methods are required to reduce the computational effort; distributed computing is one way of cutting down the wall-clock time required for these simulations. In sub-100 nm technologies, the test engineer may have to deal with several unexpected roadblocks in arriving at a test plan. For example, at what frequency should the tests be run?
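As a minimal sketch of the logic BIST idea mentioned above, an LFSR generates pseudo-random patterns on-chip and a MISR compacts the circuit responses into a signature that can be compared against the signature of a known-good circuit. The polynomials, widths, and the toy combinational function below are arbitrary choices for illustration, not a description of any particular BIST implementation.

# Minimal sketch of logic BIST: LFSR pattern generation + MISR signature
# compaction. Polynomials, widths, and the toy logic are arbitrary choices.

def lfsr_patterns(seed, taps, width, count):
    """Generate pseudo-random test patterns from a simple Fibonacci LFSR."""
    state = seed
    for _ in range(count):
        yield state
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)

def misr_update(signature, response, taps, width):
    """Fold one circuit response into the running MISR signature."""
    feedback = 0
    for t in taps:
        feedback ^= (signature >> t) & 1
    return (((signature << 1) | feedback) ^ response) & ((1 << width) - 1)

def circuit_under_test(pattern):
    """Toy 8-bit combinational 'circuit' standing in for the real logic."""
    return (pattern ^ (pattern >> 3)) & 0xFF

signature = 0
for pattern in lfsr_patterns(seed=0xE1, taps=(7, 5, 4, 3), width=8, count=1000):
    signature = misr_update(signature, circuit_under_test(pattern), taps=(7, 5, 4, 3), width=8)

print(hex(signature))  # compare against the signature of a known-good circuit

A single mismatching response anywhere in the sequence changes the final signature, which is how on-chip comparison against the good-circuit signature detects defects without shipping every response off-chip.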
Returning to the question of test frequency: test patterns exercise a chip in entirely different ways than functional patterns, so the switching activity in the circuit can go up by several orders of magnitude in test mode, causing dynamic power dissipation to rise. Similarly, it is important to keep the leakage current under control so that IDDQ testing can be meaningfully applied. IDDQ testing refers to current testing, where a test pattern is applied to a CMOS circuit and the power supply current is measured after all the switching activity has subsided and the circuit has entered a quiescent mode. In a CMOS circuit the quiescent current is due only to leakage, so if the measured current is unusually large, the circuit under test is very likely to be defective. IDDQ testing can reveal bridge faults and reliability problems in the circuit. However, in modern system-on-chip designs with over 100 million transistors, the quiescent current of even a good circuit can be quite high, making it a challenge to apply IDDQ tests.

Signal integrity issues can also become a cause for concern during test pattern generation and validation. Test pattern generation must address signal crosstalk faults and IR drop problems. Similarly, since test patterns can cause excessive switching activity and the resultant IR drop, there is a need to ensure that timing failures in test mode are due to genuine defects in the circuit and not due to the test patterns themselves. When the term Design for Test was first coined, the underlying thought was that logic design cannot be done independently of testability considerations. In the nanometer era, test cannot be separated from either logic design or physical design, making it a major challenge.

VI Conclusions

In this article we have outlined the progress of semiconductor manufacturing technology into the nanometer era. Business needs drive this continuous shrinking of feature sizes, since it enables the fabrication of complex systems-on-chip (SoC). However, as we have described in some detail, it is not easy for designers to harness this technology and bring complex SoCs to market in a timely manner unless they use sophisticated design technology to create the SoC. We have highlighted some of the many challenges that are currently being addressed in order to provide SoC designers with state-of-the-art design components and design flows. With no immediate end in sight for CMOS technology scaling, the field of design technology will continue to be a challenging one for years to come.