Pipelined Processor Design


Pipelined Processor Design
COE 308 Computer Architecture
Prof. Muhamed Mudawar
Computer Engineering Department, King Fahd University of Petroleum and Minerals

Presentation Outline
Pipelining versus Serial Execution
Pipelined Datapath and Control
Pipeline Hazards
Data Hazards and Forwarding
Load Delay, Hazard Detection, and Stall
Control Hazards
Delayed Branch and Dynamic Branch Prediction

Pipelining Example: Laundry
Three stages: 1. Wash a dirty load of clothes. 2. Dry the wet clothes. 3. Fold and put the clothes into drawers. Each stage takes 30 minutes to complete, and there are four loads of clothes (A, B, C, D) to wash, dry, and fold.

Sequential Laundry
Starting at 6 PM, sequential laundry takes 6 hours for 4 loads (12 stages of 30 minutes each). Intuitively, we can use pipelining to speed up laundry.

Pipelined Laundry: Start Each Load ASAP
Pipelined laundry takes 3 hours for 4 loads, so the speedup factor is 2 for 4 loads. The time to wash, dry, and fold one load is still the same (90 minutes).

Serial Execution versus Pipelining
Consider a task that can be divided into k subtasks. The k subtasks are executed on k different stages, and each subtask requires one time unit, so the total execution time of one task is k time units. Pipelining overlaps the execution: the k stages work in parallel on k different tasks, and tasks enter and leave the pipeline at the rate of one task per time unit. Without pipelining: one completion every k time units. With pipelining: one completion every time unit.

Synchronous Pipeline
Clocked registers are placed between stages. Upon arrival of a clock edge, all registers capture the results of their stages simultaneously. The pipeline stages themselves are combinational logic circuits. It is desirable to have balanced stages (approximately equal delay in all stages), because the clock period is determined by the maximum stage delay.

Pipeline Performance
Let τi = time delay of stage Si. The clock cycle τ = max(τi) is the maximum stage delay, and the clock frequency f = 1/τ = 1/max(τi). A pipeline can process n tasks in k + n − 1 cycles: k cycles are needed to complete the first task, and n − 1 more cycles complete the remaining n − 1 tasks. The ideal speedup of a k-stage pipeline over serial execution is therefore Sk = (serial cycles)/(pipelined cycles) = nk / (k + n − 1), which approaches k for large n.
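The speedup formula can be checked numerically. A throwaway sketch (not part of the course material), evaluated for the laundry example and for large n:

```python
def pipeline_cycles(k, n):
    """Cycles for n tasks on a k-stage pipeline: k cycles to fill, then one per task."""
    return k + n - 1

def speedup(k, n):
    """Speedup over serial execution, which needs n*k cycles."""
    return (n * k) / pipeline_cycles(k, n)

print(speedup(3, 4))                    # laundry: 3 stages, 4 loads -> 2.0
print(round(speedup(5, 1_000_000), 3))  # approaches k = 5 for large n
```

For n = 4 loads on k = 3 stages this reproduces the laundry speedup of 2, well below the ideal factor of 3, because the fill time still dominates for short runs.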

MIPS Processor Pipeline
Five stages, one cycle per stage:
1. IF: Instruction fetch from instruction memory
2. ID: Instruction decode, register read, and J/Br address
3. EX: Execute operation or calculate load/store address
4. MEM: Memory access for load and store
5. WB: Write back result to register

Single-Cycle vs Pipelined Performance
Consider a 5-stage instruction execution in which instruction fetch = ALU operation = data memory access = 200 ps, and register read = register write = 150 ps. What is the clock cycle of the single-cycle processor? What is the clock cycle of the pipelined processor? What is the speedup factor of pipelined execution? Solution: single-cycle clock = 200 + 150 + 200 + 200 + 150 = 900 ps.

Single-Cycle versus Pipelined (cont'd)
Pipelined clock cycle = max(200, 150) = 200 ps. CPI for pipelined execution = 1: one instruction completes each cycle (ignoring pipeline fill). Speedup of pipelined execution = 900 ps / 200 ps = 4.5. Instruction count and CPI are equal in both cases. The speedup factor is less than 5 (the number of pipeline stages) because the pipeline stages are not balanced.

Pipeline Performance Summary
Pipelining doesn't improve the latency of a single instruction; it improves the throughput of the entire workload, since instructions are initiated and completed at a higher rate. In a k-stage pipeline, k instructions operate in parallel (overlapped execution using multiple hardware resources), so the potential speedup = number of pipeline stages = k. The pipeline rate is limited by the slowest pipeline stage, so unbalanced stage lengths reduce the speedup. The time to fill and drain the pipeline also reduces the speedup.
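The exercise above works out as follows (a throwaway calculation; the stage delays are exactly the ones given in the exercise):

```python
# Stage delays in picoseconds, from the exercise above
stage_ps = {"IF": 200, "ID": 150, "EX": 200, "MEM": 200, "WB": 150}

single_cycle_clock = sum(stage_ps.values())  # one long cycle pays every stage
pipelined_clock = max(stage_ps.values())     # clock set by the slowest stage

print(single_cycle_clock)                    # 900
print(pipelined_clock)                       # 200
print(single_cycle_clock / pipelined_clock)  # 4.5
```

The 4.5 speedup falls short of the 5-stage ideal precisely because the 150 ps stages sit idle for 50 ps every cycle while waiting on the 200 ps stages.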

Next: Pipelined Datapath and Control

Single-Cycle Datapath
Shown below is the single-cycle datapath. How do we pipeline this single-cycle datapath? Answer: introduce a pipeline register at the end of each stage. (Figure: the single-cycle datapath divided into IF = instruction fetch, ID = instruction decode & register read, EX = execute, MEM = memory access, and WB = write back, with the PC, instruction memory, register file, ALU, and data memory, and control signals PCSrc, RegDst, RegWrite, ExtOp, ALUSrc, ALUCtrl, J, Beq, Bne, MemRead, MemWrite, MemtoReg.)

Pipelined Datapath
Pipeline registers (shown in green in the figure, and including the PC) separate the stages. The same clock edge updates all pipeline registers, the register file, and the data memory (for store instructions).

Problem with the Register Destination
Is there a problem with the register destination address? The instruction in the ID stage is different from the one in the WB stage, so the instruction in the WB stage is not writing to its own destination register but to the destination of the different instruction currently in the ID stage.

Pipelining the Destination Register
The destination register number should also be pipelined: it is passed from the ID stage to the WB stage through the pipeline registers (Rd2, Rd3, Rd4). The WB stage then writes back the result knowing the correct destination register.

Graphically Representing Pipelines
Multiple instructions execute over multiple clock cycles. Instructions are listed in execution order from top to bottom, and clock cycles move from left to right. The figure shows the use of resources at each stage in each cycle for the sequence: lw $t6, 8($s5); add $s1, $s2, $s3; ori $s4, $t3, 7; sub $t5, $s2, $t3; sw $s2, 0($t3).

Instruction-Time Diagram
The instruction-time diagram shows which instruction occupies which stage at each clock cycle; instruction flow is pipelined over the 5 stages. Up to five instructions can be in the pipeline during the same cycle: instruction-level parallelism (ILP). Example: lw $t7, 8($s3); lw $t6, 8($s5); ori $t4, $s3, 7; sub $s5, $s2, $t3; sw $s2, 0($s3). ALU instructions skip the MEM stage, and store instructions skip the WB stage.

Control Signals
The pipelined datapath uses the same control signals as the single-cycle datapath: RegDst, RegWrite, ExtOp, ALUSrc, ALUCtrl, J, Beq, Bne, MemRead, MemWrite, and MemtoReg.
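An ideal (no-stall) instruction-time diagram can be generated mechanically, since instruction i simply occupies stage s during cycle i + s + 1. A throwaway sketch; the helper name timing_rows is an assumption, not course code:

```python
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def timing_rows(instructions):
    """One row per instruction; row[c] is the stage occupied in cycle c+1
    (empty before issue and after completion). Assumes no stalls."""
    n_cycles = len(instructions) + len(STAGES) - 1
    rows = []
    for i in range(len(instructions)):
        row = [""] * n_cycles
        for s, stage in enumerate(STAGES):
            row[i + s] = stage
        rows.append(row)
    return rows

program = ["lw $t7, 8($s3)", "lw $t6, 8($s5)", "ori $t4, $s3, 7"]
for name, row in zip(program, timing_rows(program)):
    print(f"{name:18}" + "".join(f"{stage:5}" for stage in row))
```

Running it prints a staircase of IF/ID/EX/MEM/WB cells, one column per clock cycle, mirroring the diagram on the slide.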

Pipelined Control
Pass the control signals along the pipeline just like the data. (Figure: the main and ALU control in the ID stage feeds the EX, MEM, and WB control fields of the pipeline registers.)

Pipelined Control (cont'd)
The ID stage generates all the control signals. The control signals are pipelined as the instruction moves by extending the pipeline registers to include them, and each stage uses some of the signals:
Instruction decode and register read: the control signals are generated; RegDst is used in this stage.
Execution stage: ExtOp, ALUSrc, and ALUCtrl; the Next PC logic uses J, Beq, Bne, and the zero signal for branch control.
Memory stage: MemRead, MemWrite, and MemtoReg.
Write-back stage: RegWrite is used in this stage.

Control Signals Summary

Op      RegDst  ALUSrc  ExtOp  J  Beq  Bne  ALUCtrl  MemRd  MemWr  MemtoReg  RegWrite
R-Type  Rd      Reg     x      0  0    0    func     0      0      0         1
addi    Rt      Imm     sign   0  0    0    ADD      0      0      0         1
slti    Rt      Imm     sign   0  0    0    SLT      0      0      0         1
andi    Rt      Imm     zero   0  0    0    AND      0      0      0         1
ori     Rt      Imm     zero   0  0    0    OR       0      0      0         1
lw      Rt      Imm     sign   0  0    0    ADD      1      0      1         1
sw      x       Imm     sign   0  0    0    ADD      0      1      x         0
beq     x       Reg     x      0  1    0    SUB      0      0      x         0
bne     x       Reg     x      0  0    1    SUB      0      0      x         0
j       x       x       x      1  0    0    x        0      0      x         0

Next: Pipeline Hazards

Pipeline Hazards
Hazards are situations that would cause incorrect execution if the next instruction were launched during its designated clock cycle.
1. Structural hazards: caused by resource contention, i.e., two instructions using the same resource during the same cycle.
2. Data hazards: an instruction may compute a result needed by the next instruction; hardware can detect such dependencies between instructions.
3. Control hazards: caused by instructions that change the control flow (branches and jumps), which delay changing the flow of control.
Hazards complicate pipeline control and limit performance.

Structural Hazards
Problem: two different instructions attempt to use the same hardware resource during the same cycle. Example: an ALU instruction writing back its result in stage 4 conflicts with a load writing its data in stage 5, so two instructions attempt to write the register file during the same cycle.

Resolving Structural Hazards
This is a serious hazard that cannot be ignored.
Solution 1: delay access to the resource. A mechanism must delay the instruction's access to the resource: delay all write-backs to the register file to stage 5, and let ALU instructions bypass stage 4 (memory) without doing anything.
Solution 2: add more hardware resources (more costly). Redesign the register file to have two write ports: the first write port writes back ALU results in stage 4, and the second write port writes back load data in stage 5.

Next: Data Hazards and Forwarding

Data Hazards
A dependency between instructions that are close to each other causes a data hazard, because pipelined execution might change the order of operand access.
Read After Write (RAW) hazard: given two instructions I and J, where I comes before J, J should read an operand after it is written by I. This is called a data dependence in compiler terminology.
I: add $s1, $s2, $s3 # $s1 is written
J: sub $s4, $s1, $s3 # $s1 is read
The hazard occurs when J reads the operand before I writes it.

Example of a RAW Data Hazard
sub $s2, $t1, $t3; add $s4, $s2, $t5; or $s6, $t3, $s2; and $s7, $t4, $s2; sw $t8, 0($s2). The result of sub is needed by the add, or, and, and sw instructions. Instructions add and or read the old value of $s2 from the register file: during CC5, $s2 is written at the end of the cycle, while the old value is read.

Solution 1: Stalling the Pipeline
Three stall cycles during CC3 through CC5 waste 3 cycles. The stall cycles delay the execution of add and the fetching of the or instruction. The add instruction cannot read $s2 until the beginning of CC6: add remains in the instruction register until CC6, and the PC register is not modified until the beginning of CC6.

Solution 2: Forwarding the ALU Result
The ALU result is forwarded (fed back) to the ALU input, so no bubbles are inserted into the pipeline and no cycles are wasted. The ALU result is forwarded from the EX, MEM, and WB stages.

Implementing Forwarding
Two multiplexers are added at the inputs of the A and B registers. Data from the EX stage, MEM stage, and WB stage is fed back to these multiplexers, and two signals, ForwardA and ForwardB, control the forwarding.

Forwarding Control Signals
ForwardA = 0: first ALU operand comes from the register file = value of (Rs)
ForwardA = 1: forward the result of the previous instruction to A (from the EX stage)
ForwardA = 2: forward the result of the 2nd previous instruction to A (from the MEM stage)
ForwardA = 3: forward the result of the 3rd previous instruction to A (from the WB stage)
ForwardB = 0: second ALU operand comes from the register file = value of (Rt)
ForwardB = 1: forward the result of the previous instruction to B (from the EX stage)
ForwardB = 2: forward the result of the 2nd previous instruction to B (from the MEM stage)
ForwardB = 3: forward the result of the 3rd previous instruction to B (from the WB stage)

Forwarding Example
Instruction sequence: lw $t4, 4($t0); ori $t7, $t1, 2; sub $t3, $t4, $t7. When the sub instruction is fetched, ori is in the EX stage and lw is in the MEM stage, so ForwardA = 2 (lw's data from the MEM stage) and ForwardB = 1 (ori's result from the EX stage).

RAW Hazard Detection
The current instruction being decoded is in the ID stage, the previous instruction is in the EX stage, the second previous instruction is in the MEM stage, and the third previous instruction is in the WB stage.
If ((Rs != 0) and (Rs == Rd2) and (EX.RegWrite)) ForwardA = 1
Else if ((Rs != 0) and (Rs == Rd3) and (MEM.RegWrite)) ForwardA = 2
Else if ((Rs != 0) and (Rs == Rd4) and (WB.RegWrite)) ForwardA = 3
Else ForwardA = 0
If ((Rt != 0) and (Rt == Rd2) and (EX.RegWrite)) ForwardB = 1
Else if ((Rt != 0) and (Rt == Rd3) and (MEM.RegWrite)) ForwardB = 2
Else if ((Rt != 0) and (Rt == Rd4) and (WB.RegWrite)) ForwardB = 3
Else ForwardB = 0
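The priority logic above can be expressed directly as a function. This is an illustrative sketch: the function and argument names (forward_select, ex_regwrite, and so on) are assumptions, not the slide's signal names:

```python
def forward_select(reg, rd2, rd3, rd4, ex_regwrite, mem_regwrite, wb_regwrite):
    """Mux select for one ALU operand, following the slide's priority:
    EX-stage result first (1), then MEM (2), then WB (3), else the
    register file (0). Register $0 is never forwarded."""
    if reg != 0 and reg == rd2 and ex_regwrite:
        return 1
    if reg != 0 and reg == rd3 and mem_regwrite:
        return 2
    if reg != 0 and reg == rd4 and wb_regwrite:
        return 3
    return 0

# Forwarding example above: sub in ID reads $t4 (loaded by lw, now in MEM)
# and $t7 (written by ori, now in EX). MIPS register numbers: $t4=12, $t7=15.
print(forward_select(12, 15, 12, 0, True, True, False))  # ForwardA = 2
print(forward_select(15, 15, 12, 0, True, True, False))  # ForwardB = 1
```

The ordering of the three comparisons matters: if the same register is written by two in-flight instructions, the youngest (closest) producer must win, which is why the EX stage is checked first.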

Hazard Detect and Forward Logic
(Figure: the hazard detect and forward unit compares Rs and Rt of the instruction in ID against the pipelined destination registers Rd2, Rd3, and Rd4 and the pipelined RegWrite signals, and drives the ForwardA and ForwardB mux selects.)

Next: Load Delay, Hazard Detection, and Pipeline Stall

Load Delay
Unfortunately, not all data hazards can be forwarded: a load has a delay that cannot be eliminated by forwarding. In the example below, the LW instruction does not read its data until the end of CC4, so the data cannot be forwarded to ADD at the end of CC3; that is not possible. Example: lw $s2, 20($t1); add $s4, $s2, $t5; or $t6, $t3, $s2; and $t7, $s2, $t4. However, a load can forward its data to the 2nd next and later instructions.

Detecting a RAW Hazard after a Load
The load instruction is in the EX stage while the instruction that depends on the load data is in the decode stage. Condition for stalling the pipeline:
if ((EX.MemRead == 1) and (ForwardA == 1 or ForwardB == 1)) Stall // RAW hazard
A bubble is inserted into the EX stage after a load instruction. The bubble is a no-op that wastes one clock cycle; it delays the dependent instruction after the load by one cycle, because of the RAW hazard.
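The stall condition above is tiny and worth writing out. A sketch with assumed names (load_use_stall is not a slide signal):

```python
def load_use_stall(ex_memread, forward_a, forward_b):
    """Slide's stall condition: a load is in the EX stage and the
    instruction in ID would forward from EX (ForwardA or ForwardB == 1)."""
    return bool(ex_memread) and (forward_a == 1 or forward_b == 1)

# lw in EX, dependent add in ID reading the loaded register -> stall
print(load_use_stall(True, 1, 0))   # True
# lw in EX but the instruction in ID is independent -> no stall
print(load_use_stall(True, 0, 0))   # False
```

Note that ForwardA/ForwardB values of 2 or 3 do not stall: by the time the consumer reaches EX, the load data is already available from the MEM or WB stage and plain forwarding suffices.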

Stall the Pipeline for One Cycle
The ADD instruction depends on LW: stall at CC3. Allow the load instruction in the EX stage to proceed, freeze the PC and the IF/ID registers (no instruction is fetched), and introduce a bubble into the EX stage (the bubble is a NO-OP). The load can then forward its data to the next instruction after delaying it by one cycle.

Showing Stall Cycles
Stall cycles can be shown on the instruction-time diagram. The hazard is detected in the decode stage; a stall indicates that the instruction is delayed, and instruction fetching is also delayed after a stall. Example (data forwarding is shown in the figure with green arrows): lw $s1, 0($t5); lw $s2, 8($s1); add $v0, $s2, $t3; sub $v1, $s2, $v0.

Hazard Detect, Forward, and Stall
(Figure: the hazard unit additionally drives a Disable PC signal and a Stall signal; on a stall, the PC and the IF/ID register are frozen, and the control signals entering the EX stage are cleared to zero to insert a bubble.)

Code Scheduling to Avoid Stalls
Compilers reorder code in a way that avoids load stalls. Consider the translation of the following statements: A = B + C; D = E - F; // A through F are in memory

Slow code:
lw $t0, 4($s0) # &B = 4($s0)
lw $t1, 8($s0) # &C = 8($s0)
add $t2, $t0, $t1 # stall cycle
sw $t2, 0($s0) # &A = 0($s0)
lw $t3, 16($s0) # &E = 16($s0)
lw $t4, 20($s0) # &F = 20($s0)
sub $t5, $t3, $t4 # stall cycle
sw $t5, 12($s0) # &D = 12($s0)

Fast code (no stalls):
lw $t0, 4($s0)
lw $t1, 8($s0)
lw $t3, 16($s0)
lw $t4, 20($s0)
add $t2, $t0, $t1
sw $t2, 0($s0)
sub $t5, $t3, $t4
sw $t5, 12($s0)
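The benefit of the reordering can be counted mechanically. A simplified sketch (assumed helper names; it models only the one-cycle load-use stall, i.e., a stall occurs exactly when an instruction reads the destination of the immediately preceding load):

```python
def load_use_stalls(program):
    """program: list of (op, dest, sources) tuples in issue order.
    Counts one stall for each instruction that consumes the result of
    the load directly before it; later consumers are covered by forwarding."""
    stalls = 0
    for prev, curr in zip(program, program[1:]):
        op, dest, _sources = prev
        if op == "lw" and dest in curr[2]:
            stalls += 1
    return stalls

slow = [("lw", "$t0", []), ("lw", "$t1", []),
        ("add", "$t2", ["$t0", "$t1"]), ("sw", None, ["$t2"]),
        ("lw", "$t3", []), ("lw", "$t4", []),
        ("sub", "$t5", ["$t3", "$t4"]), ("sw", None, ["$t5"])]

fast = [("lw", "$t0", []), ("lw", "$t1", []),
        ("lw", "$t3", []), ("lw", "$t4", []),
        ("add", "$t2", ["$t0", "$t1"]), ("sw", None, ["$t2"]),
        ("sub", "$t5", ["$t3", "$t4"]), ("sw", None, ["$t5"])]

print(load_use_stalls(slow))  # 2
print(load_use_stalls(fast))  # 0
```

Moving the two independent loads up gives every load a one-instruction cushion before its consumer, which is exactly the slide's point.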

Name Dependence: Write After Read (WAR)
J writes its result after it is read by I; compiler writers call this an anti-dependence.
I: sub $t4, $t1, $t3 # $t1 is read
J: add $t1, $t2, $t3 # $t1 is written
It results from the reuse of the name $t1. It is NOT a data hazard in the 5-stage pipeline because reads are always in stage 2, writes are always in stage 5, and instructions are processed in order. An anti-dependence can be eliminated by renaming: use a different destination register for add (e.g., $t5).

Name Dependence: Write After Write (WAW)
The same destination register is written by two instructions; this is called an output dependence in compiler terminology.
I: sub $t1, $t4, $t3 # $t1 is written
J: add $t1, $t2, $t3 # $t1 is written again
It is not a data hazard in the 5-stage pipeline because all writes are ordered and always take place in stage 5. However, it can be a hazard in more complex pipelines: if instructions are allowed to complete out of order, J may complete and write $t1 before instruction I. An output dependence can be eliminated by renaming $t1. Read After Read is NOT a name dependence.

Next: Control Hazards

Control Hazards
Jumps and branches can cause great performance loss. A jump instruction needs only the jump target address, but a branch instruction needs two things:
Branch result: taken or not taken.
Branch target: PC + 4 if the branch is NOT taken; PC + 4 + 4 × immediate if the branch is taken.
Jump and branch targets are computed in the ID stage, at which point a new instruction is already being fetched. Jump: 1-cycle delay. Branch: 2-cycle delay for the branch result (taken or not taken).

2-Cycle Branch Delay
The control logic detects a branch instruction in the 2nd stage and computes the branch outcome in the 3rd stage. The Next1 and Next2 instructions are fetched anyway; they are converted into bubbles if the branch is taken, and fetching resumes at the branch target. Example: beq $t1, $t2, L.

Implementing Jump and Branch
(Figure: the branch target and outcome are computed in the EX stage; the Next PC logic uses J, Beq, Bne, and the ALU zero flag, and a taken branch or jump clears the control signals of the following instructions to bubbles.) Branch delay = 2 cycles.

Predict Branch NOT Taken
Branches can be predicted to be NOT taken. If the branch outcome is NOT taken, then the Next1 and Next2 instructions can be executed: they are not converted into bubbles, and no cycles are wasted.

Reducing the Delay of Branches
The branch delay can be reduced from 2 cycles to just 1 cycle by determining branches earlier, in the decode stage. A comparator in the decode stage determines the branch decision, whether the branch is taken or not. Because of forwarding, the delay in the second stage increases, and this also increases the clock cycle. Only one instruction that follows the branch is fetched, and if the branch is taken, only that one instruction is flushed: we insert a bubble after a jump or taken branch, which converts the next instruction into a NOP.

Reducing the Branch Delay to 1 Cycle
(Figure: the Next PC logic, with J, Beq, Bne, and a register comparator, moves into the decode stage; data is forwarded and then compared, resulting in a longer cycle. A Reset signal converts the next instruction after a jump or taken branch into a bubble.)

Next: Delayed Branch and Dynamic Branch Prediction

Branch Hazard Alternatives
Predict branch not taken (previously discussed): the successor instruction is already fetched; do NOT flush the instruction after the branch if the branch is NOT taken; flush only instructions appearing after a jump or a taken branch.
Delayed branch: define the branch to take place AFTER the next instruction; the compiler/assembler fills the branch delay slot (for a 1-cycle delay).
Dynamic branch prediction: loop branches are taken most of the time; the branch delay must be reduced to 0, but how? How can branch behavior be predicted at runtime?

Delayed Branch
Define the branch to take place after the next instruction. For a 1-cycle branch delay, we have one delay slot: branch instruction, then branch delay slot (next instruction), then branch target (if the branch is taken). The compiler fills the branch delay slot by selecting an independent instruction from before the branch; if no independent instruction is found, it fills the delay slot with a NO-OP. Example: in the sequence "add $t2, $t3, $t4; beq $s1, $s0, label", the compiler moves the add into the delay slot after the beq.

Drawback of Delayed Branching

- New meaning for the branch instruction: branching takes place after the next instruction (not immediately!).
- Impacts software and the compiler: the compiler is responsible for filling the branch delay slot.
- For a 1-cycle branch delay, there is one branch delay slot.
- However, modern processors are deeply pipelined: the branch penalty is multiple cycles in deeper pipelines, and multiple delay slots are difficult to fill with useful instructions.
- MIPS used delayed branching in its earlier pipelines; however, delayed branching is not useful in recent processors.

Zero-Delayed Branching

- How can we achieve zero delay for a jump or a taken branch?
- The jump or branch target address is computed in the ID stage, but the next instruction has already been fetched in the IF stage.
- Solution: introduce a Branch Target Buffer (BTB) in the IF stage.
- Store the target addresses of recent branch and jump instructions.
- Use the lower bits of the PC to index the BTB; each BTB entry stores a branch/jump address and its target address.
- Check the PC to see if the instruction being fetched is a branch; if so, update the PC using the target address stored in the BTB.

Branch Target Buffer

- The branch target buffer is implemented as a small cache that stores the target addresses of recent branches and jumps.
- We must also have prediction bits to predict whether branches are taken or not taken.
- The prediction bits are dynamically determined by the hardware.
- [Diagram: Branch Target & Prediction Buffer holding the addresses of recent branches, their target addresses, and predict bits; the PC's low-order bits are used as the index; an address match (=) asserts predict_taken, which selects the stored target via a mux.]

Dynamic Branch Prediction

- Branches are predicted at runtime using prediction bits associated with each entry in the BTB.
- The prediction bits reflect the recent history of a branch instruction; typically only a few prediction bits (1 or 2) are used per entry.
- We don't know whether the prediction is correct or not.
- If the prediction is correct: continue normal execution, no wasted cycles.
- If the prediction is incorrect (misprediction): flush the instructions that were incorrectly fetched (wasted cycles), and update the prediction bits and target address for future use.
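The BTB organization above can be sketched as a small direct-mapped table. This is a hedged illustration of the indexing and tag-match idea, not a hardware description; the class and method names are made up.

```python
# Sketch of a direct-mapped Branch Target Buffer.
# Each entry stores (branch address, target address, predict-taken bit);
# the PC's low-order bits select the entry, and the stored branch
# address acts as the tag that must match the full PC.
class BTB:
    def __init__(self, entries=16):
        self.entries = entries
        self.table = [None] * entries

    def _index(self, pc):
        return (pc >> 2) % self.entries   # low-order PC bits (word-aligned)

    def lookup(self, pc):
        e = self.table[self._index(pc)]
        if e is not None and e[0] == pc:  # tag match: PC is a known branch
            return e[1], e[2]             # (target address, predict_taken)
        return None, False                # miss: fall through to PC + 4

    def update(self, pc, target, taken):
        # A new branch evicts whatever previously mapped to this index.
        self.table[self._index(pc)] = (pc, target, taken)

btb = BTB()
btb.update(0x40, 0x100, taken=True)
print(btb.lookup(0x40))   # hit: target 0x100, predicted taken
print(btb.lookup(0x44))   # miss: not a recently seen branch
```

The eviction behavior in update mirrors why the BTB only tracks *recent* branches: two branches whose PCs share the same low-order index bits displace each other.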

Dynamic Branch Prediction (cont'd)

- [Flowchart:] In the fetch stage, use the PC to address the Branch Target Buffer. If a BTB entry is found with predict-taken, set PC = target address; otherwise increment the PC.
- In the ID stage, determine whether the instruction really is a jump or taken branch.
- Correct prediction: normal execution, no stall cycles.
- Mispredicted jump/branch (not predicted taken, but actually taken): enter the jump/branch address and target address into a BTB entry and set its prediction; flush the fetched instructions; restart the PC at the target address.
- Mispredicted branch (predicted taken, but actually not taken): update the prediction bits; flush the fetched instructions; restart the PC after the branch.

1-Bit Prediction Scheme

- The prediction is just a hint that is assumed to be correct; if it is incorrect, the fetched instructions are flushed.
- The 1-bit prediction scheme is the simplest to implement: one bit per branch instruction (associated with the BTB entry).
- Record the last outcome of a branch instruction (taken / not taken), and use that last outcome to predict the branch's future behavior.
- [State diagram: two states, Predict Not Taken and Predict Taken; a Taken outcome moves to Predict Taken, a Not Taken outcome moves to Predict Not Taken.]
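The two-state diagram above amounts to "predict whatever the branch did last time." A minimal sketch of that behavior (illustrative code, not a hardware model):

```python
# 1-bit prediction: the single bit records the branch's last outcome,
# and the next prediction simply repeats that outcome.
def predict_1bit(outcomes, bit=False):
    """bit=False starts in the Predict Not Taken state.
    Returns (prediction, was_correct) for each actual outcome."""
    trace = []
    for taken in outcomes:
        trace.append(("T" if bit else "NT", bit == taken))
        bit = taken                     # state follows the last outcome
    return trace

# Example outcome sequence: taken, taken, not taken, taken
print(predict_1bit([True, True, False, True]))
```

Note that every change in branch direction costs a misprediction, since the bit only catches up one outcome later — which leads directly to the loop-branch shortcoming discussed next.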

1-Bit Predictor: Shortcoming

- The inner loop branch is mispredicted twice per pass!
- It is mispredicted as taken on the last iteration of the inner loop.
- It is then mispredicted as not taken on the first iteration of the inner loop the next time around the outer loop.

outer:  ...
inner:  ...
        bne ..., ..., inner
        ...
        bne ..., ..., outer

2-Bit Prediction Scheme

- The 1-bit prediction scheme has a performance shortcoming; the 2-bit prediction scheme works better and is often used.
- Four states: strong and weak predict taken, strong and weak predict not taken.
- Implemented as a saturating counter: the counter is incremented (to a max of 3) when the branch outcome is taken, and decremented (to a min of 0) when the branch is not taken.
- [State diagram: Strong Predict Not Taken ↔ Weak Predict Not Taken ↔ Weak Predict Taken ↔ Strong Predict Taken; Taken outcomes move toward Strong Predict Taken, Not Taken outcomes move toward Strong Predict Not Taken.]
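The nested-loop shortcoming and the 2-bit fix can be demonstrated side by side. The sketch below models the inner-loop branch as "taken three times, then not taken" across two outer-loop passes; the loop lengths are assumed for illustration.

```python
# 1-bit predictor: predict the branch's last outcome.
def one_bit_accuracy(outcomes, bit=True):
    correct = 0
    for taken in outcomes:
        correct += (bit == taken)
        bit = taken
    return correct / len(outcomes)

# 2-bit saturating counter (0..3): states 2 and 3 predict taken.
# Taken increments toward 3; not taken decrements toward 0.
def two_bit_accuracy(outcomes, counter=3):
    correct = 0
    for taken in outcomes:
        correct += ((counter >= 2) == taken)
        counter = min(counter + 1, 3) if taken else max(counter - 1, 0)
    return correct / len(outcomes)

# Inner-loop branch: taken 3 times, then falls through; two outer passes.
pattern = [True, True, True, False] * 2
print(one_bit_accuracy(pattern), two_bit_accuracy(pattern))
```

On this pattern the 1-bit predictor mispredicts twice per outer-loop pass (the loop exit, then the re-entry), while the 2-bit counter only drops from strong to weak predict-taken on exit and so mispredicts once per pass.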

Fallacies and Pitfalls

- Pipelining is easy! The basic idea is easy, but the devil is in the details, e.g., detecting data hazards and stalling the pipeline.
- Poor ISA design can make pipelining harder: complex instruction sets (Intel IA-32) require significant overhead to make pipelining work (the IA-32 micro-op approach), and complex addressing modes bring register-update side effects and memory indirection.

Pipeline Hazards Summary

- Three types of pipeline hazards: structural hazards (conflicts using a resource during the same cycle), data hazards (due to data dependencies between instructions), and control hazards (due to branch and jump instructions).
- Hazards limit performance and complicate the design.
- Structural hazards are eliminated by careful design or more hardware.
- Data hazards are eliminated by forwarding; however, the load delay cannot be eliminated and stalls the pipeline.
- Delayed branching can be a solution when the branch delay is 1 cycle.
- A BTB with branch prediction can reduce the branch delay to zero.
- A branch misprediction should flush the wrongly fetched instructions.