SYSTEM-ON-A-CHIP (SOC) VERIFICATION METHODS December 6th, 2003


Morgan Chen
E-mail: mjchen@ece.ucdavis.edu
Department of Electrical and Computer Engineering, University of California at Davis, CA 95616 USA

Abstract

The advent of system-on-a-chip (SoC) technology is a result of ever-increasing transistor density. Unfortunately, this means that verification will pose the greatest problem in design, because difficulties in verification scale faster than transistor technology. This paper provides evidence of this effect by citing industry trends, and discusses the potential pitfalls in SoC verification. Various SoC verification methods are offered by industry groups such as Cadence, Synopsys, Mentor Graphics, and Motorola. These solutions generally rest on divide-and-conquer and abstraction techniques. Specifically, Cadence offers the Unified Verification Methodology, which uses abstraction to check systems as the design progresses instead of after the entire design is complete. Synopsys strongly encourages intellectual-property (IP) reuse to allow for quick verification and gives guidelines for creating effective macros. Mentor Graphics joins Synopsys in supporting reusable IP, but is unique in also holding that divide-and-conquer methods and specialized hardware will be important in overcoming SoC verification. Motorola provides a practical viewpoint, demonstrating successful SoC designs built with its own abstraction and divide-and-conquer techniques. In addition, notable insights are presented from the University of Tennessee, Knoxville, which models SoC verification difficulties, and the TIMA Laboratory in Grenoble, France, which provides algorithms that make SoC verification faster.

Introduction

In keeping with Moore's Law, transistor technology continues to scale down, so that the creation of entire systems-on-a-chip (SoCs) is now possible.
Unfortunately, while design teams try to keep up with the exponential growth in complexity known as Moore's Law, verification teams are chasing a curve that rises exponentially faster [1]. G. Peterson from the University of Tennessee, Knoxville states that, because of shrinking time to market, high fabrication costs, and increasing circuit complexity, the verification task now takes 50-80% of the overall design effort, which indicates how important it is to have efficient verification processes [2]. P. Bricaud, of Mentor Graphics, refers to a table from EDA Today stating that verification consumes 40% of the design effort, as shown in Table 1 [3]. Regardless, it is clear that verification is significant for a timely product.

Table 1: Industry Ratios [3]

In order to deal with the SoC verification problem, engineers will have to apply their existing skill sets to a much larger problem. The two fundamental methods available to any verification engineer are divide-and-conquer and abstraction, and it will be seen how the different groups implement these ideas to form their own methods [4].

While shrinking transistor technology is fantastic for gate capacity on silicon, it negatively impacts the verification process, as mentioned above. The problems with SoC verification include the silicon white space problem and the increased range of issues that can occur when dealing with an entire stand-alone system rather than a single component in a larger scheme. First, in what is referred to as the silicon white space problem, capacity increases at a faster rate than

new circuits can be created, and as a result, many components must be recycled to fill the void [3]. To reinforce this concept, Bricaud states that an SoC design will have over 10 million gates, but an HDL designer can only create about 120,000 gates per year of written code. Clearly, it is impractical for a design team to create the design from scratch given the enormous effort needed, and this is why intellectual property (IP) must be reused [3]. Graphical data supporting this claim is shown in Figure 1, where more gates can be placed on the die as process technology improves.

Figure 1: Silicon gate capacity by process technology [1]

In the interest of efficient verification, intellectual property must be reused; this fact is widely recognized and has been advanced by the Reuse Methodology Manual for System-on-a-Chip Designs (RMM) [3]. Second, the result of increased capacity is an industry trend to add more functionality on chip. This has been seen in the areas of embedded software and analog circuitry, as shown in Figures 2 and 3. The consequence is further complexity in the verification process.

Figure 2: Relative development costs by process technology [1]

Figure 3: SoCs with critical analog circuitry by process technology [1]

Specifically, this paper will detail the issues mentioned in the articles and examine how various groups approach SoC design verification. The major groups discussed are Cadence, Mentor Graphics, Synopsys, and Motorola. Insight is also included from the University of Tennessee, Knoxville and the TIMA Laboratory in Grenoble, France. To give the reader a sense of the completeness of this paper, the author has referenced all relevant papers that appeared from an INSPEC search on SoC verification at the time of research, as well as other articles.
Governing Ideas

The fundamental tools any verification engineer has at their disposal are the concepts of divide-and-conquer and abstraction. Most verification engineers naturally apply divide-and-conquer techniques, but many rarely take advantage of abstraction [4]. In fact, most verification engineers are inclined to test the entire system at the register-transfer level, where there is the least amount of abstraction. This may be due to poor models that force the verification engineer to look at the detailed circuit construction.

Various Commercial Viewpoints

In SoC verification, Cadence, Synopsys, Mentor Graphics, and Motorola have taken leading roles, at least as far as publications are concerned. Cadence offers a solution it calls the Unified Verification Methodology. Synopsys and Mentor Graphics push for IP reuse; in fact, M. Keating of Synopsys and P. Bricaud of Mentor Graphics co-authored the Reuse Methodology Manual for System-On-a-Chip Designs. Mentor Graphics stands out because, in addition to IP reuse, it believes it is necessary to employ divide-and-conquer and to implement specialized hardware for testing. Lastly, a number of practical papers have come out of Motorola, which stands to benefit greatly from SoC design as a way to lower product costs.

Cadence

Cadence provides the Unified Verification Methodology as its solution to SoC verification. The methodology involves the verification engineers from the very start, so that verification occurs in parallel with design. It is apparent that this would be faster than the serial method of design, then verification. When Cadence devised its solution, it considered a number of issues, shown in Table 2.

Table 2: Critical Functional Issues [1]

Having identified the problem, Cadence considered the list of goals its solution should address. The key points Cadence noted in practical SoC verification are given in Table 3.

Table 3: Principles in Verification Efficiency

In addition to the above issues, Cadence examined current verification processes and considered ways to improve on the general method. Most importantly, Cadence identified fragmented areas in verification between the design stages. Design fragmentation refers to the practice in which each component or aspect is tested entirely from scratch at each step of the design process. Once a particular engineer is done with the design and verification of a block, the design is passed to the next engineer to integrate into a larger design. From there, the design is retested from scratch, without any knowledge of the previous tests. The fragmentation is illustrated in Figure 4.

Figure 4: Typical SoC verification flow with Xs marking fragmented areas [5]

Embracing abstraction, Cadence provides the Unified Verification Methodology as its answer to the question of how to handle SoC verification in nanometer-scale integrated circuits [1, 5]. At the heart of the methodology is the Functional Virtual Prototype (FVP), illustrated in Figure 5. The FVP is a model that consists of well-defined abstraction levels that describe behavior.
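As a rough illustration of the FVP idea, the following Python sketch (class names are hypothetical; a real FVP would be built from SystemC/HDL models, not Python) drives a transaction-level reference model and a more detailed implementation with the same transactions and compares the results. As the design progresses, reference blocks are swapped for implementations without changing the testbench.

```python
class TxnAdder:
    """Transaction-level reference: one 'add' transaction per call."""
    def process(self, a: int, b: int) -> int:
        return (a + b) & 0xFF          # 8-bit result, per specification

class RippleAdder:
    """More detailed stand-in for an RTL block: ripple-carry, bit by bit."""
    def process(self, a: int, b: int) -> int:
        result, carry = 0, 0
        for i in range(8):
            s = ((a >> i) & 1) + ((b >> i) & 1) + carry
            result |= (s & 1) << i
            carry = s >> 1
        return result

def check_against_fvp(ref, impl, stimuli):
    """Replay the same transactions through both models; collect mismatches."""
    return [(a, b) for a, b in stimuli if ref.process(a, b) != impl.process(a, b)]

mismatches = check_against_fvp(TxnAdder(), RippleAdder(),
                               [(5, 7), (200, 100), (255, 255)])
# an empty list means the detailed model matches its abstract reference
```

The point of the exercise is that the testbench talks only in transactions, so either model can sit behind the interface.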
Because engineers have a natural affinity for dividing a system into smaller subsystems, each subsystem may be abstracted and verified individually as the design progresses. As the design progresses, the FVP is gradually replaced, in parts, with physical designs. This methodology suggests a top-down approach in which verification engineers plan the design in cooperation with the design engineers, so that verification may occur in parallel with the design rather than serially, as is common in practice. Thus, by taking advantage of abstraction, the design may progress from behavioral, to

transaction, to RTL, to gate level, and each change may be re-tested within the FVP.

Figure 5: Unified Methodology FVP: FVP decomposition and recomposition [1]

In fact, the top-down approach is made feasible by the addition of an interfacial transaction layer between functional groups in the system. Transactions are defined between levels of abstraction so that tests can be created faster and understood easily. An additional benefit is that any part of the system may be tested regardless of progress elsewhere, because the part under test interacts with abstracted units that are assured correct by specification. Using this model allows problem areas to be identified easily. A summary of the transaction-level FVP roles is given in Table 4.

Table 4: Transaction-level FVP roles [1]

Synopsys

M. Keating, Vice President of Engineering for the Design Reuse Group of Synopsys, Inc. in Mountain View, CA, represents Synopsys's view of the SoC verification problem in the book he co-authored, the Reuse Methodology Manual for System-On-a-Chip Designs [6]. Synopsys recognizes that the only way to fill the silicon white space is to reuse IP, whereas currently the majority of design is done from scratch. To that end, Synopsys offers guidelines to make IP reuse practical.

To create reusable IP, specifications must be very clear, so that future engineers understand what a particular macro (hereafter referred to as a block) can do. The issues in designing reusable IP are timing (the most challenging), the verification strategy, system interconnect and on-chip buses, low-power goals, and manufacturing test strategies.

Timing

To address possible timing problems, there are a few rules/guidelines to follow in designing reusable IP:
- Registering inputs and outputs
- Using synchronous, register-based design
- Documenting clocking
- Documenting the reset
- Floorplanning to reduce wire delay
- Documenting the timing budget
- Clock distribution

The timing-closure problem can be made completely local if the inputs and outputs are registered. This allows two reusable IP blocks to be designed separately but used together. If necessary, buffers may be inserted between blocks affected by long wire delay.

Design should be synchronous and register-based to facilitate timing, because latch timing is inherently ambiguous and may be easily understood only by the designer at the time the block is created. The exception is in designing small memories or FIFOs, but even there, the interfaces to these blocks should be synchronous and registered. While this may increase block area, the benefit is that complexity is minimized, which is important in SoC designs.

For a block to be reused, the clocking and reset must be properly documented. Particular items to document include the required clock frequencies and phase-locked loops (PLLs), and the setup, hold, and output timing requirements. The design should try to minimize the number of clocks used and allow the PLLs to be disabled for testing purposes. In addition, the reset documentation should note the following: whether a

synchronous or asynchronous reset is used, whether there is an internal or external power-on reset, how multiple resets interact, and whether a specific block may be reset individually. The advantage of a synchronous reset is that it is simple to synthesize, but it requires a free-running clock to reset at power-up. The advantage of an asynchronous reset is that it does not require a free-running clock; unfortunately, it is more difficult to implement, because the reset must be de-asserted synchronously so that all flops exit reset on the same clock, and static timing analysis becomes more complicated.

Once the blocks are created, they must be arranged on the physical chip. The arrangement can be important to timing because wire delay can become significant, in excess of an entire clock cycle. Connected blocks should be placed as close to each other as possible to reduce delay while meeting clock-distribution requirements.

When selecting an appropriate block to reuse, the designer must have a clear idea of the timing, area, and power goals. Using a bottom-up approach, the block designer should document the timing budget so that it is clear what technology limits its use.

Clock distribution today is mostly done with a balanced clock tree distributed with low skew to prevent hold-time violations. However, this can require high-power buffers that consume a large silicon area. To achieve low power, the clock may instead be passed as a signal into and out of each block along the bus; each block then synchronizes its required clock to the bus clock.

Verification Strategy

Since verification can consume such a significant share of project time, it is important to start the SoC design with a particular verification strategy. To that end, it is important to pick reusable IP that is compatible with the verification scheme, meaning that models are already in place to be inserted into the test software.
Successful verification is noted to generally occur bottom-up, which reinforces the importance of having appropriate models for high-level verification.

System Interconnect and On-Chip Buses

The issues to keep in mind for system interconnect and on-chip buses include whether to use a tristate or multiplexer-based bus, which bus architecture is appropriate, how to interface different reusable IP blocks, and what debugging structures will be needed to test the bus.

A multiplexer-based bus is recommended over a tristate bus whenever possible because it is less technology-dependent and so can be reused more often. Multiplexer-based buses are also simpler and easier to implement. Tristate buses are more complicated because they require a bus-keeper to ensure that exactly one driver is controlling the bus, which can be difficult to establish during power-on.

In terms of actual bus architecture, the Virtual Socket Interface Alliance (VSIA) has established a number of VSI standard interfaces. A possible solution is to design each block with one of the standard interfaces and then add bus adapters between blocks as needed. The potential problems with this approach are the multiple interface layers, which may degrade performance, and the larger circuit size on silicon. Companies are now running internal projects to determine whether this is a viable solution.

To interface different IP blocks, blocks should be chosen to be compatible with each other whenever possible. Not doing so requires additional interface hardware to be inserted, which may degrade performance, as noted above. From the start of the project, the design team must also plan a strategy to debug the SoC; each block should be able to be turned off separately for debugging control.

Low-Power Goals

Low power is something to strive for as the portable-device market grows.
Clock gating, where the clock is turned off during some operations, allows for lower power at the expense of higher complexity. In general, reusable blocks should not contain clock gating, because the complexity will limit their reusability. A work-around is to replace the clock-gating circuit in each block with a separate clock-generation block. The separate block then controls clock gating for all the pertinent blocks, which allows clock gating to take place without inserting the gating circuitry into each block. In turn, this scheme allows the blocks to remain reusable while retaining the benefits of lower power.

Manufacturing Test Strategies

Manufacturing test strategies must be planned from the start of the project to reduce verification time. Special test structures are needed because it is infeasible to test millions of gates on a silicon chip exhaustively. For RAMs, it is recommended that there be Built-In Self-Test (BIST) structures for controllable verification. Microprocessors usually include a test controller to provide a scan-chain controller and boundary scan. For the remaining blocks, a full-scan technique should be used, which provides high coverage for little design effort.
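The full-scan idea above can be caricatured with a small Python model (purely illustrative; the names are hypothetical and no real DFT flow works in Python): in test mode, the flip-flops form one shift register, so any internal state can be shifted in and any captured response shifted out.

```python
class ScanChain:
    """Flip-flops chained into a shift register in test mode, so internal
    state is fully controllable (shift in) and observable (shift out)."""
    def __init__(self, n_flops):
        self.state = [0] * n_flops
    def shift_in(self, bits):
        for b in bits:                       # serial load, one bit per clock
            self.state = [b] + self.state[:-1]
    def capture(self, comb_logic):
        self.state = comb_logic(self.state)  # one functional clock: capture
    def shift_out(self):
        out, self.state = self.state[:], [0] * len(self.state)
        return out                           # serial unload for comparison

chain = ScanChain(4)
chain.shift_in([1, 0, 1, 1])                 # state becomes [1, 1, 0, 1]
chain.capture(lambda s: [b ^ 1 for b in s])  # toy DUT logic: invert each bit
response = chain.shift_out()                 # [0, 0, 1, 0]
```

This is why full scan yields high coverage cheaply: the tester needs only shift, clock, and compare, regardless of the logic between the flops.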

Mentor Graphics

Mentor Graphics strongly supports IP reuse, as outlined by P. Bricaud, Director of the R&D Center for the IP Factory of Mentor Graphics Corporation in Sophia Antipolis, France. However, Mentor Graphics also believes that the SoC verification problem requires divide-and-conquer methods, and furthermore that specialized hardware is needed for efficient verification. The divide-and-conquer concept appears in the prototype model that Mentor Graphics outlines; their verification strategy is quoted verbatim below:

1. Basic IP blocks need to be 100% correct as stand-alone elements.
2. Verify that the interfaces between blocks are functionally correct, first in terms of the transaction types and then in terms of data content.
3. Run a set of increasingly complex applications on the full chip.
4. Prototype the full chip and run a full set of application software for final verification.
5. Continue verification runs until you are told to stop. [7]

This divide-and-conquer method is simply stated as verifying the basic blocks, followed by verification of the system. This is the general method that much verification follows, regardless of complexity, because it comes naturally to most engineers.

Mentor Graphics also believes that specialized verification hardware will allow timely simulation of the approach above. The three hardware options to consider are standard-part FPGAs, emulation, and real silicon. Mentor Graphics believes it is best to model as close to real silicon as possible to obtain reliable simulation results. They have demonstrated verification of a synchronous 2K x 8 SRAM with transistor models using a Mentor Graphics Accelerated Verification System with the Celaro emulator. For this 20K-transistor design, abstraction took 2 minutes, compilation took 10 minutes, and generation of the test environment took 4 minutes.
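As a rough illustration, the five-step strategy quoted above can be sketched as a bottom-up harness in Python (the structure and names here are hypothetical, not Mentor Graphics tooling):

```python
def run_divide_and_conquer(blocks, interfaces, system_tests):
    """blocks: {name: unit_test_fn}; interfaces: {(a, b): iface_test_fn};
    system_tests: full-chip test fns ordered from simplest to most complex."""
    failures = []
    for name, test in blocks.items():        # step 1: blocks stand alone
        if not test():
            failures.append(("block", name))
    for pair, test in interfaces.items():    # step 2: block-to-block interfaces
        if not test():
            failures.append(("interface", pair))
    for i, test in enumerate(system_tests):  # steps 3-4: full-chip applications
        if not test():
            failures.append(("system", i))
    return failures                          # step 5: repeat until told to stop

# Toy run: a failing unit block is caught before any system-level effort.
failures = run_divide_and_conquer(
    blocks={"uart": lambda: True, "dma": lambda: False},
    interfaces={("uart", "dma"): lambda: True},
    system_tests=[lambda: True],
)
```

The ordering is the point: unit failures are cheapest to localize, so they are flushed out before interface and system runs.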
This is a very reasonable turnaround for a decently sized design, and the approach may plausibly scale to timely SoC verification.

Motorola

Motorola has been able to successfully create and verify practical SoC designs. Embracing the divide-and-conquer concept, the company assumes that the only way for an SoC to reach market in reasonable time is to check each component design individually. This means that verification is bounded by the verification of each block, as well as the verification of interacting blocks. While Motorola also mentions that abstraction may be used to verify SoC designs aggressively, it is noted that most of the work takes place at a very low level. From experience, Motorola has found that processor verification should be given the most scrutiny, and thus more time, because a processor is complex and has a wide variety of uses. In addition to this real-world observation, they remark on the effectiveness of the design team.

Skills That Make a Good Verification Engineer

The designers at Motorola generally verify their own unit blocks. However, processor design teams work closely with dedicated verification teams because of the complexity and importance of processor verification. When it comes to detecting errors, engineers vary in their ability to detect errors across abstraction levels; often, an error at the integration layer must be mapped back to unit-level behavior. Successful verification engineers generally have a background in hardware and have expanded into software, so that they are competent with test software while understanding the physical representation. It is rare for computer science graduates to be comfortable with hardware designs. Verification engineers must be able to organize large amounts of data in order to separate the signal from the noise. They must also have good communication skills, because many bugs are not technical issues but rather a misunderstanding or a poor explanation of expectations.
Finally, the verification engineer must be motivated from within, because of existing negative views toward verification. This view generally stems from the simplicity of verifying smaller designs, but times have changed: in truth, there is nothing simple about verification, especially for SoCs [4].

Use of Abstraction to Increase Productivity

Abstraction can be used to efficiently verify SoC designs composed of many unit-level blocks. Similar to the abstraction solutions above, Motorola offers Flexbench [8]. Flexbench provides a stimulus layer that sends test vectors to the device under verification (DUV), an integration layer that contains the drivers and monitors, and a service layer to

interface the stimulus layer to the integration layer. Figure 6 illustrates the abstraction layers.

Figure 6: Flexbench architecture to raise abstraction [8]

When Verification is Complete

Since formal verification is not feasible for SoC designs, it becomes a judgment call when the testing process is complete. At Motorola, a few practical criteria are used, as listed in Table 5 [4]; additional criteria are added based on the specific design.

Table 5: Example Criteria
- 40 billion random cycles without finding a bug
- Directed tests in the verification plan completed
- Source and/or functional coverage goals met
- Diminishing bug rate
- A certain calendar date reached

Successful SoC verification has been demonstrated with an SoC baseband chip for a 3G wireless phone [9]. 3G phones support higher data rates, multiple wireless protocols, and multimedia features. 80% to 90% of the implemented blocks are reused from previous designs. The design comprises:
- 50-100 million transistors, including 10-20 million for logic
- 30-50 digital and analog peripherals
- 400-500 pads with complex muxing schemes
- 3000-4000 pages of IC specifications

Verification is abstracted to a system layer and employs a divide-and-conquer technique to test, for instance, the RF subsystem and RF front-end separately. The final regression took more than 1200 hours of CPU time per run when distributed over Motorola's worldwide computing network. As teams are distributed across the globe, problems are sometimes detected in Asia, nailed down hours later by the European team, and resolved by the US team as Asia wakes up for the next day [9]. Clearly, verification is an extremely time-consuming process, as indicated by the worldwide effort at Motorola. After six months of verification, fewer than 30 defects and very few functional errors were detected.
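The layered Flexbench structure described earlier can be caricatured in Python (class names and the trivial DUV are hypothetical; the real Flexbench is verification IP, not Python): a stimulus layer produces vectors, an integration layer holds the drivers and monitors around the DUV, and a service layer connects the two so that neither depends on the other.

```python
class StimulusLayer:
    """Produces test vectors for the device under verification (DUV)."""
    def vectors(self):
        return [0x00, 0xFF, 0x5A]

class IntegrationLayer:
    """Holds the drivers and monitors wrapped around the DUV."""
    def __init__(self, duv):
        self.duv, self.log = duv, []
    def drive_and_monitor(self, vec):
        out = self.duv(vec)
        self.log.append((vec, out))      # monitor records every transaction
        return out

class ServiceLayer:
    """Routes stimulus to integration; neither layer knows the other."""
    def __init__(self, stim, integ):
        self.stim, self.integ = stim, integ
    def run(self):
        return [self.integ.drive_and_monitor(v) for v in self.stim.vectors()]

duv = lambda x: x ^ 0xFF                  # stand-in DUV: inverts its input
results = ServiceLayer(StimulusLayer(), IntegrationLayer(duv)).run()
# results: [0xFF, 0x00, 0xA5]
```

Because the layers only meet at narrow interfaces, the stimulus and monitors can be reused unchanged when the DUV is swapped, which is the productivity claim made for Flexbench.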
Other Notable Works in SoC Verification

The accomplishment at Motorola in creating an SoC design is of particular interest because it transcends theory to a practical end. To give an idea of the difficulty of SoC design, this paper refers to models by G. Peterson from the University of Tennessee, Knoxville [2]. Consider a 5-million-gate SoC composed of 100K-gate reusable blocks (50 blocks in all). With a 98% chance of implementing each block correctly, the probability of implementing the SoC design correctly is given by Bernoulli trials as:

Prob[SoC is correct] = (0.98)^50 ≈ 0.364

This probability does not even consider errors that may arise at the interfaces between blocks. In practice, the probability that the system will work is about half the calculated value above.

An equation predicting the time to complete verification is as follows:

T_verif = T_COMP + N_C / f + N_E × (T_RECOMP + T_DEBUG)

where
T_COMP = compile time
N_C = number of clock cycles executed
f = clock frequency
N_E = number of errors
T_RECOMP = recompile time
T_DEBUG = debug time

Note that the number of errors will increase as systems continue to grow; as the equation dictates, the needed verification time increases in proportion to the number of errors and the system size.

E. Dumitrescu and D. Borrione from the TIMA Laboratory in Grenoble, France, work on simplifying simulations to functionally verify SoC designs [10].
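Returning to Peterson's model, the figures above can be checked directly. In this Python sketch, the closed-form expression for verification time is a reconstruction from the variable legend in the text, not necessarily Peterson's exact formula:

```python
def prob_soc_correct(p_block: float, n_blocks: int) -> float:
    """Bernoulli trials: every block must be right for the SoC to be right
    (interface errors between blocks are ignored, as in the text)."""
    return p_block ** n_blocks

def verification_time(t_comp, n_c, f, n_e, t_recomp, t_debug):
    """Reconstructed form: compile once, simulate N_C cycles at frequency f,
    then pay a recompile + debug cost for each of the N_E errors found."""
    return t_comp + n_c / f + n_e * (t_recomp + t_debug)

# 5M-gate SoC from 100K-gate blocks -> 50 blocks at 98% each
p = prob_soc_correct(0.98, 5_000_000 // 100_000)   # ~0.364, as in the text
```

The sketch makes the scaling argument concrete: the N_E term grows with system size, so the error-handling cost comes to dominate total verification time.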

Their theory involves simplifying the abstract model by generating tests on the fly that are reachable from one state to another. Since a system may have reachable states that are not connected to each other by any path, this approach requires the designer to start the tests with specific states in mind.

Conclusions

As the industry matures to the point where SoC design is commonplace, it will become clear which SoC verification solutions work in practice. Cadence offers its Unified Verification Methodology, which employs abstraction to verify a system in parallel with design and recognizes that most systems have natural boundaries where abstraction may occur. Synopsys and Mentor Graphics are both heavy proponents of reusing IP and outline guidelines to follow so that reuse is practical. Mentor Graphics is also noteworthy for its role in supporting divide-and-conquer methods and employing specialized hardware to reduce verification time. From a practical standpoint, Motorola demonstrates successful SoC creation by employing reusable IP, verified through abstraction and divide-and-conquer techniques. Additional notable work comes from the University of Tennessee, Knoxville, which models the difficulties of SoC verification, and from the TIMA Laboratory in Grenoble, France, which offers algorithms for faster SoC verification in the future.

REFERENCES

[1] L. Lev, R. Razden, C. Tice, "It's About Time: Requirements for the Functional Verification of Nanometer-scale ICs," Cadence Design Systems, Inc., 2003.
[2] G. Peterson, "Predicting the Performance of SoC Verification Technologies," IEEE, 2000, pp. 17-24.
[3] P. Bricaud, "IP Reuse Creation for System-on-a-Chip Design," IEEE Custom Integrated Circuits Conference, 1999, pp. 395-401.
[4] K. Albin, "Nuts and Bolts of Core and SoC Verification," ACM, 2001, pp. 249-252.
[5] Cadence Design Systems, Inc., "The Unified Verification Methodology," whitepaper, 2003.
[6] M. Keating, P. Bricaud, Reuse Methodology Manual: For System-on-a-Chip Designs, 2nd Edition, Kluwer Academic Publishers, 1999.
[7] P. Bricaud, T. Delaye, F. Poirot, A. Gorun, "Do Standardized Embedded IP Transistor Views Exist for SoC IP Integration," www.mentor.com, April 2002.
[8] B. Stohr, M. Simmons, J. Geishauser, "FlexBench: Reuse of Verification IP to Increase Productivity," Design, Automation and Test in Europe Conference and Exhibition, 2002.
[9] Y. Mathys, A. Chatelain, "Verification Strategy for Integration 3G Baseband SoC," Design Automation Conference, 2003.
[10] E. Dumitrescu, D. Borrione, "Symbolic Simulation as a Simplifying Strategy for SoC Verification with Symbolic Model Checking," The 3rd International Workshop on System-on-Chip for Real-Time Applications, 2003.

Author Biography

Morgan Chen was born in Indianapolis, IN on January 9, 1981. He received his BS in Electrical Engineering and Computer Science from UC Berkeley in 2003 and is currently pursuing his Ph.D. at UC Davis as a graduate student researcher in the Microwave Microsystems Laboratory. His research interests include low-cost RFIC/MMIC circuit implementation using liquid-crystal polymer (LCP) structures. Mr. Chen is a member of Eta Kappa Nu, a student member of the IEEE, and an Accel fellow.

Figures and Tables Reprinted for Ease of Viewing:

Figures:
Figure 1: Silicon gate capacity by process technology [1]
Figure 2: Relative Development Costs by Process Technology [1]
Figure 3: SoCs with critical analog circuitry by process technology [1]
Figure 4: Typical SoC verification flow with Xs marking fragmented areas [5]
Figure 5: Unified Methodology FVP: FVP decomposition and recomposition [1]
Figure 6: Flexbench architecture to raise abstraction [8]

Tables:
Table 1: Industry Ratios [3]
Table 2: Critical Functional Issues [1]
Table 3: Principles in Verification Efficiency
Table 4: Transaction level FVP roles [1]
Table 5: Example Criteria
- 40 billion random cycles without finding a bug
- Directed tests in verification plan completed
- Source and/or functional coverage goals met
- Diminishing bug rate
- A certain date on the calendar reached