
CSE 30321 - Computer Architecture I - Fall 2011
Homework 06: Pipelined Processors (75 points)
Assigned: November 1, 2011
Due: November 8, 2011

PLEASE DO THE ASSIGNMENT ON THIS HANDOUT!!!

Problem 1: (15 points)

In this exercise, we examine how pipelining affects the clock cycle time of the processor. Problems in this exercise assume that the individual stages of the datapath take the following amounts of time to complete:

    IF        ID        EX        MEM       WB
    400 ps    225 ps    350 ps    450 ps    300 ps

Questions:

Part A: (5 points)
- How long will it take to completely process a single sw instruction in a pipelined processor?
- What about in a non-pipelined, multi-cycle processor?
- Comment on the difference in execution times.

Part B: (5 points)
- How long will it take to completely process 100 add instructions in a pipelined processor? (The add instructions may be dependent on one another, but you can assume that any dependencies can be resolved by forwarding, so no stalls are needed.)
- What about in a non-pipelined, multi-cycle processor?
- Comment on the difference in execution times.
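Not part of the assignment, but the timing arithmetic behind Parts A and B can be sketched as below. It assumes the usual conventions (the pipelined clock period is set by the slowest stage; the multi-cycle machine uses the same clock but spends one cycle per stage an instruction actually uses); check these against the conventions used in class before relying on the numbers.

```python
# Sketch of the pipeline timing arithmetic for Problem 1 (illustrative only;
# conventions for the non-pipelined machine vary, so verify against lecture).
stage_ps = {"IF": 400, "ID": 225, "EX": 350, "MEM": 450, "WB": 300}

# Pipelined: every stage gets one clock, so the slowest stage sets the period.
clock_ps = max(stage_ps.values())                     # 450 ps

def pipelined_time(n_instructions, depth=5):
    """Fill time (depth cycles for the first instruction) plus one cycle each."""
    return (depth + n_instructions - 1) * clock_ps

# One instruction occupies all five stages: 5 cycles of 450 ps.
one_instr_ps = pipelined_time(1)

# 100 fully forwarded adds: (5 + 99) cycles of 450 ps.
hundred_adds_ps = pipelined_time(100)

# One common multi-cycle convention: same 450 ps clock, one cycle per stage
# the instruction actually uses (e.g., sw skips WB, so 4 cycles).
def multicycle_time(n_instructions, stages_used=5):
    return n_instructions * stages_used * clock_ps
```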

Part C: (5 points)
To try to improve performance further, you have decided to break up 2 of the above stages into 2 shorter stages each. (In other words, you are going to increase the depth of your pipeline.)
- What current stages would you recommend breaking up?
- How would this affect your answer to Part B?
(Note: your decision about the second stage to break up should be based on the pipeline after you break up the first stage.)
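One quick way to experiment with Part C's pipeline-deepening idea is to greedily halve the current slowest stage, twice. This is only a sketch under a strong assumption (that a stage can be split into two equal halves, which real designs rarely allow), not the recommended answer.

```python
# Experiment for Part C: greedily split the slowest stage in half, twice.
# Assumes an even split is possible -- an idealization, not a real design rule.
stages = {"IF": 400, "ID": 225, "EX": 350, "MEM": 450, "WB": 300}

for _ in range(2):
    slowest = max(stages, key=stages.get)     # stage that sets the clock now
    half = stages.pop(slowest) / 2
    stages[slowest + "1"] = half
    stages[slowest + "2"] = half

clock_ps = max(stages.values())               # new clock period
depth = len(stages)                           # new pipeline depth
time_100_ps = (depth + 100 - 1) * clock_ps    # fill time + one cycle per add
```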

Problem 2: (25 points)

In this question, we examine how data dependencies affect execution in the basic 5-stage pipeline discussed in class. Problems in this exercise refer to the following sequence of instructions:

A: sub $1, $2, $3
B: add $4, $1, $1
C: lw  $5, 0($1)
D: add $6, $5, $2
E: add $7, $5, $3
F: sub $8, $6, $7
G: sw  $7, 0($8)

Questions:

Part A: (10 points)
Assume there is full forwarding. Show how the above sequence of instructions would flow through the pipeline. Identify data dependencies and note when/if they become data hazards.

Cycle:              1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17
A: sub $1, $2, $3
B: add $4, $1, $1
C: lw  $5, 0($1)
D: add $6, $5, $2
E: add $7, $5, $3
F: sub $8, $6, $7
G: sw  $7, 0($8)

List dependencies/hazards here:
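The dependency-listing half of Part A can be cross-checked mechanically. The sketch below finds every read-after-write dependency in the sequence; deciding which of them actually become hazards still requires the pipe trace, so this is a checking aid, not the answer.

```python
# List the RAW (read-after-write) dependencies in the Problem 2 sequence.
# Each entry is (label, opcode, destination register or None, source registers).
program = [
    ("A", "sub", "$1", ["$2", "$3"]),
    ("B", "add", "$4", ["$1", "$1"]),
    ("C", "lw",  "$5", ["$1"]),
    ("D", "add", "$6", ["$5", "$2"]),
    ("E", "add", "$7", ["$5", "$3"]),
    ("F", "sub", "$8", ["$6", "$7"]),
    ("G", "sw",  None, ["$7", "$8"]),   # sw reads $7 (data) and $8 (base)
]

deps = []
for i, (writer, _, dest, _) in enumerate(program):
    if dest is None:
        continue
    for reader, _, _, srcs in program[i + 1:]:
        if dest in srcs:
            deps.append((writer, reader, dest))

for writer, reader, reg in deps:
    print(f"{writer} writes {reg}, later read by {reader}")
```

(No register in this sequence is written twice, so the simple "every later reader" scan above does not need to stop at a redefinition.)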

Note: refer to the following clock cycle times to answer the remaining 2 questions.

    Without forwarding    With full forwarding
    350 ps                400 ps

Part B: (10 points)
What is the execution time of this instruction sequence without forwarding and with full forwarding? (A register may be read and written in the same clock cycle.) What is the speedup achieved by adding full forwarding to a pipeline that had no forwarding? Complete the trace below to help you answer this question.

Cycle:              1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17 18 19 20
A: sub $1, $2, $3
B: add $4, $1, $1
C: lw  $5, 0($1)
D: add $6, $5, $2
E: add $7, $5, $3
F: sub $8, $6, $7
G: sw  $7, 0($8)

Part C: (5 points)
Is there anything that you or the compiler could do to further improve the performance of the design with full forwarding?
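Once the two traces give cycle counts, Part B's speedup is a one-line ratio. In the sketch below the clock periods come from the table above, but the cycle counts (20 and 11) are made-up placeholders, not the answers; substitute the counts from your completed traces.

```python
# Speedup arithmetic for Part B. Clock periods are from the handout's table;
# the cycle counts passed in must come from your own pipe traces.
def exec_time_ps(cycles, clock_ps):
    return cycles * clock_ps

def speedup(cycles_no_fwd, cycles_fwd, clk_no_fwd=350, clk_fwd=400):
    """Speedup of the full-forwarding design over the no-forwarding design."""
    return exec_time_ps(cycles_no_fwd, clk_no_fwd) / exec_time_ps(cycles_fwd, clk_fwd)

# Example with placeholder cycle counts only:
example = speedup(20, 11)
```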

Problem 3: (20 points)

This problem will help you understand how certain code sequences can degrade the performance of pipelined code, and how hardware can help avoid performance degradation.

Part A: (10 points)
Identify all of the data dependencies in the following code. Which dependencies are data hazards that can be resolved via forwarding? Which dependencies are data hazards that will cause a stall? (Hint: always consider in what clock cycle a piece of data is produced, and in what clock cycle that data is ultimately needed by another instruction.)

sub $6, $8, $10
add $7, $6, $8
sw  $9, 0($7)

Cycle:             1  2  3  4  5  6  7  8  9  10 11
sub $6, $8, $10
add $7, $6, $8
sw  $9, 0($7)

List dependencies/hazards here:

Part B: (10 points)
Identify all of the data dependencies in the following code. Which dependencies are data hazards that can be resolved via forwarding? Which dependencies are data hazards that will cause a stall? (Hint: always consider in what clock cycle a piece of data is produced, and in what clock cycle that data is ultimately needed by another instruction.)

sub $7, $8, $9
sub $10, $8, $6
sw  $10, 0($8)

Cycle:             1  2  3  4  5  6  7  8  9  10 11
sub $7, $8, $9
sub $10, $8, $6
sw  $10, 0($8)

List dependencies/hazards here:
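As a sanity check for Problems 2 and 3, the classic 5-stage rule with full forwarding fits in a few lines: an ALU result can be forwarded in time for the very next instruction's EX stage, but a load's value is only available after MEM, so a load followed immediately by a consumer stalls one cycle. The sketch below encodes that rule, assuming same-cycle register write/read as stated in the problems.

```python
# Toy stall rule for the classic 5-stage pipeline with full forwarding and
# same-cycle register write/read (the assumptions stated in the problems).
def stall_cycles(producer_is_load, distance):
    """distance = 1 means the consumer is the very next instruction."""
    if producer_is_load and distance == 1:
        return 1      # load-use hazard: one bubble even with forwarding
    return 0          # ALU results, and consumers farther away, need no stall

# e.g. lw $5, 0($1) immediately followed by add $6, $5, $2 stalls one cycle.
```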

Problem 4: (15 points)

The two parts of this question will help you see how branch instructions affect how a sequence of instructions moves through a pipelined datapath.

Part A: (10 points)
- Assume that forwarding has been implemented.
- Assume that you can read and write a register in the same clock cycle.
- Assume that a given register contains its own number:
  e.g., $1 = 1, $2 = 2, $3 = 3, $4 = 4, $5 = 5, $6 = 6, etc.
- You may assume that each branch is predicted correctly.
- Show the instruction trace.

Instruction          Cycle: 1  2  3  4  5  6  7  8  9  10 11 12 13 14    Comment (if needed)
Y: add $5, $1, $1
   beq $5, $2, X
   add $5, $5, $5
   add $6, $2, $2
X: beq $4, $6, Y
   sll $7, $1, 4

Part B: (5 points)
What if branch prediction was not perfect and you instead simply predicted that every branch was not taken? What would happen given the instruction sequence above? (You don't have to show a pipe trace; just briefly explain.)
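To fill in the Part A trace it helps to know which way each branch actually resolves. The sketch below is a purely functional (no timing) walk through the sequence under the "register $i holds i" assumption, with the labels Y and X modeled as instruction indices 0 and 4.

```python
# Functional (no timing) walk through the Part A sequence, assuming register
# $i initially holds the value i. Label Y = index 0, label X = index 4.
regs = {f"${i}": i for i in range(8)}
trace, pc = [], 0

while pc < 6:
    if pc == 0:                            # Y: add $5, $1, $1
        regs["$5"] = regs["$1"] + regs["$1"]; trace.append("Y: add $5,$1,$1"); pc = 1
    elif pc == 1:                          # beq $5, $2, X
        trace.append("beq $5,$2,X"); pc = 4 if regs["$5"] == regs["$2"] else 2
    elif pc == 2:                          # add $5, $5, $5
        regs["$5"] += regs["$5"]; trace.append("add $5,$5,$5"); pc = 3
    elif pc == 3:                          # add $6, $2, $2
        regs["$6"] = regs["$2"] + regs["$2"]; trace.append("add $6,$2,$2"); pc = 4
    elif pc == 4:                          # X: beq $4, $6, Y
        trace.append("X: beq $4,$6,Y"); pc = 0 if regs["$4"] == regs["$6"] else 5
    else:                                  # sll $7, $1, 4
        regs["$7"] = regs["$1"] << 4; trace.append("sll $7,$1,4"); pc = 6
```

Running it shows only four instructions execute architecturally: the first beq is taken (skipping the two middle adds) and the second falls through. The pipelined timing of those four instructions is still what Part A asks you to show.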