Chapter 16 - Instruction-Level Parallelism and Superscalar Processors

Chapter 16 - Instruction-Level Parallelism and Superscalar Processors Luis Tarrataca luis.tarrataca@gmail.com CEFET-RJ L. Tarrataca Chapter 16 - Superscalar Processors 1 / 78

Table of Contents I 1 Overview Scalar Processor Superscalar Processor Superscalar vs. Superpipelined Constraints 2 Design Issues Machine Parallelism Instruction Issue Policy In-order issue with in-order completion In-order issue with out-of-order completion Out-of-Order issue with Out-Of-Order Completion L. Tarrataca Chapter 16 - Superscalar Processors 2 / 78

Table of Contents II Register Renaming L. Tarrataca Chapter 16 - Superscalar Processors 3 / 78

Table of Contents I 3 Superscalar Execution Overview 4 References L. Tarrataca Chapter 16 - Superscalar Processors 4 / 78

Overview Scalar Processor The first processors were known as scalar: What is a scalar processor? Any ideas? L. Tarrataca Chapter 16 - Superscalar Processors 5 / 78

Overview Scalar Processor Scalar Processor The first processors were known as scalar: What is a scalar processor? Any ideas? In a scalar organization, a single pipelined functional unit exists for: integer operations; and one for floating-point operations; L. Tarrataca Chapter 16 - Superscalar Processors 6 / 78

Overview Scalar Processor Scalar Processor In a scalar organization, a single pipelined functional unit exists for: integer operations; and one for floating-point operations; Figure: Scalar Organization (Source: [Stallings, 2015]) Parallelism is achieved by: enabling multiple instructions to be at different stages of the pipeline L. Tarrataca Chapter 16 - Superscalar Processors 7 / 78

Overview Superscalar Processor Superscalar Processor The term superscalar refers to a processor that is designed to: Improve the performance of the execution of scalar instructions; Represents the next evolution step; L. Tarrataca Chapter 16 - Superscalar Processors 8 / 78

Overview Superscalar Processor Superscalar Processor The term superscalar refers to a processor that is designed to: Improve the performance of the execution of scalar instructions; Represents the next evolution step; How do you think this next evolution step is obtained? Any ideas? L. Tarrataca Chapter 16 - Superscalar Processors 9 / 78

Overview Superscalar Processor Superscalar Processor The term superscalar refers to a processor that is designed to: Improve the performance of the execution of scalar instructions; Represents the next evolution step; How do you think this next evolution step is obtained? Any ideas? Simple idea: increase number of pipelines ;) L. Tarrataca Chapter 16 - Superscalar Processors 10 / 78

Overview Superscalar Processor Superscalar processor ability to execute instructions in different pipelines: independently and concurrently; Figure: Superscalar Organization (Source: [Stallings, 2015]) L. Tarrataca Chapter 16 - Superscalar Processors 11 / 78

Overview Superscalar Processor However... Pipeline concept already introduced some problems. Remember which? L. Tarrataca Chapter 16 - Superscalar Processors 12 / 78

Overview Superscalar Processor However... Pipeline concept already introduced some problems. Remember which? Resource Hazards; Data Hazards: RAW WAR WAW Control Hazards; L. Tarrataca Chapter 16 - Superscalar Processors 13 / 78
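The three data-hazard classes above can be sketched as a small classifier. This is an illustrative sketch, not from the text: each instruction is assumed to be a (destination register, source registers) pair, and the register names are made up.

```python
# Illustrative sketch: classify the data hazards between an earlier and a
# later instruction, each given as (destination, list of sources).

def classify_hazards(early, late):
    """Return the set of data hazards from `early` to `late`."""
    early_dst, early_srcs = early
    late_dst, late_srcs = late
    hazards = set()
    if early_dst in late_srcs:      # later reads what earlier writes
        hazards.add("RAW")
    if late_dst in early_srcs:      # later writes what earlier reads
        hazards.add("WAR")
    if early_dst == late_dst:       # both write the same register
        hazards.add("WAW")
    return hazards

# add r3, r3, r5  followed by  add r4, r3, r1  ->  RAW on r3
print(classify_hazards(("r3", ["r3", "r5"]), ("r4", ["r3", "r1"])))
```

RAW is the only "true" dependency of the three; WAR and WAW are name dependencies that renaming (discussed later in the chapter) can remove.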

Overview Superscalar Processor Accordingly, how do we avoid some of the known pipeline issues? L. Tarrataca Chapter 16 - Superscalar Processors 14 / 78

Overview Superscalar Processor But how do we avoid some of the known pipeline issues? Responsibility of the hardware and the compiler to: Assure that parallel execution does not violate program intent; Tradeoff between performance and complexity; L. Tarrataca Chapter 16 - Superscalar Processors 15 / 78

Overview Superscalar vs. Superpipelined Superscalar vs. Superpipelined Superpipelining is an alternative performance method to superscalar: Many pipeline stages require less than half a clock cycle; A pipeline clock is used instead of the overall system clock: To advance between the different pipeline stages; L. Tarrataca Chapter 16 - Superscalar Processors 16 / 78

Overview Superscalar vs. Superpipelined Figure: Comparison of superscalar and superpipeline approaches (Source: [Stallings, 2015]) L. Tarrataca Chapter 16 - Superscalar Processors 17 / 78

Overview Superscalar vs. Superpipelined From the previous figure, the base pipeline: Issues one instruction per clock cycle; Can perform one pipeline stage per clock cycle; Although several instructions are executing concurrently: Only one instruction is in its execution stage at any one time. Total time to execute 6 instructions: 9 cycles. L. Tarrataca Chapter 16 - Superscalar Processors 18 / 78

Overview Superscalar vs. Superpipelined From the previous figure, the superpipelined implementation: Capable of performing two pipeline stages per clock cycle; Each stage can be split into two nonoverlapping parts: With each executing in half a clock cycle; Total time to execute 6 instructions: 6.5 cycles. Theoretical speedup: 9/6.5 ≈ 1.38, i.e. 38% L. Tarrataca Chapter 16 - Superscalar Processors 19 / 78

Overview Superscalar vs. Superpipelined From the previous figure, the superscalar implementation: capable of executing two instances of each stage in parallel; Total time to execute 6 instructions: 6 cycles Theoretical speedup: 9/6 = 1.5, i.e. 50% L. Tarrataca Chapter 16 - Superscalar Processors 20 / 78
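The two speedup figures quoted above reduce to simple arithmetic over the cycle counts from the figure: 6 instructions take 9 cycles on the base pipeline, 6.5 cycles superpipelined, and 6 cycles superscalar.

```python
# Cycle counts for 6 instructions, taken from the text's comparison figure.
base, superpipelined, superscalar = 9, 6.5, 6

speedup_superpipelined = base / superpipelined   # ~1.38, i.e. ~38% faster
speedup_superscalar = base / superscalar         # 1.5, i.e. 50% faster

print(round(speedup_superpipelined, 2), speedup_superscalar)
```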

Overview Superscalar vs. Superpipelined From the previous figure: Both the superpipeline and the superscalar implementations: Have the same number of instructions executing at the same time; However, superpipelined processor falls behind the superscalar processor: Parallelism empowers greater performance; L. Tarrataca Chapter 16 - Superscalar Processors 21 / 78

Overview Constraints Constraints Superscalar approach depends on: Ability to execute multiple instructions in parallel; True instruction-level parallelism. However, parallelism creates additional issues: Fundamental limitations to parallelism. L. Tarrataca Chapter 16 - Superscalar Processors 22 / 78

Overview Constraints What are some of the limitations to parallelism? Any ideas? L. Tarrataca Chapter 16 - Superscalar Processors 23 / 78

Overview Constraints What are some of the limitations to parallelism? Any ideas? Data dependency; Procedural dependency; Resource conflicts; Let's have a look at these. L. Tarrataca Chapter 16 - Superscalar Processors 24 / 78

Overview Constraints Data dependency Consider the following sequence: Figure: True Data Dependency (Source: [Stallings, 2015]) Can you see any problems with the code above? L. Tarrataca Chapter 16 - Superscalar Processors 25 / 78

Overview Constraints Consider the following sequence: Figure: True Data Dependency (Source: [Stallings, 2015]) Can you see any problems with the code above? Second instruction can be fetched and decoded but cannot be executed until the first instruction executes; Second instruction needs data produced by the first instruction; A.k.a. read-after-write (RAW) dependency; L. Tarrataca Chapter 16 - Superscalar Processors 26 / 78

Overview Constraints Example with a superscalar machine of degree 2: Figure: Effect of dependencies (Source: [Stallings, 2015]) L. Tarrataca Chapter 16 - Superscalar Processors 27 / 78

Overview Constraints From the previous figure: With no dependency: two instructions can be fetched and executed in parallel; Data dependency between the 1st and 2nd instructions: In general: 2nd instruction is delayed as many clock cycles as required to remove the dependency; An instruction must be delayed until its input values have been produced. L. Tarrataca Chapter 16 - Superscalar Processors 28 / 78

Overview Constraints Procedural Dependencies Presence of branches complicates pipeline operation: Instructions following a branch: Depend on whether the branch was taken or not taken; This cannot be determined until the branch is executed; This type of procedural dependency also affects a scalar pipeline: It is more severe for a superscalar pipeline, because a greater amount of opportunity is lost with each stall; L. Tarrataca Chapter 16 - Superscalar Processors 29 / 78

Overview Constraints Figure: Effect of dependencies (Source: [Stallings, 2015]) L. Tarrataca Chapter 16 - Superscalar Processors 30 / 78

Overview Constraints Resource Conflict Instruction competition for the same resource at the same time: Resource examples: Bus; Memory; Registers; ALU; Resource conflict exhibits similar behavior to a data dependency: Resource conflicts can be overcome by duplication of resources: whereas a true data dependency cannot be eliminated L. Tarrataca Chapter 16 - Superscalar Processors 31 / 78

Overview Constraints Figure: Effect of dependencies (Source: [Stallings, 2015]) L. Tarrataca Chapter 16 - Superscalar Processors 32 / 78

Design Issues Design Issues Next, let's have a look at the different design issues to consider: Instruction-Level Parallelism and Machine Parallelism; Instruction Issue Policy; Register Renaming; Machine Parallelism; Branch Prediction Superscalar Execution Superscalar Implementation L. Tarrataca Chapter 16 - Superscalar Processors 33 / 78

Design Issues Instruction-level parallelism Instruction-level parallelism exists when instructions in a sequence: are independent and thus can be executed in parallel; As an example consider the following two code fragments: Figure: Instruction level parallelism (Source: [Stallings, 2015]) Instructions on the: Left are independent, and could be executed in parallel. Right cannot be executed in parallel due to data dependency; L. Tarrataca Chapter 16 - Superscalar Processors 34 / 78

Design Issues Degree of instruction-level parallelism is determined by the Frequency of true data dependencies; Procedural dependencies in the code; These depend on the instruction set and the application. L. Tarrataca Chapter 16 - Superscalar Processors 35 / 78

Design Issues Machine Parallelism Machine Parallelism Machine parallelism is a measure of the ability of the processor to: Take advantage of instruction-level parallelism; Determined by: number of instructions that can be fetched at the same time; number of instructions that can be executed at the same time; ability to find independent instructions. L. Tarrataca Chapter 16 - Superscalar Processors 36 / 78

Design Issues Instruction Issue Policy Instruction Issue Policy Processor must also be able to identify instruction-level parallelism: This is required in order to orchestrate: fetching, decoding, and execution of instructions in parallel; In essence: Processor needs to locate instructions that can be pipelined and executed Goal: optimize pipeline usage; L. Tarrataca Chapter 16 - Superscalar Processors 37 / 78

Design Issues Instruction Issue Policy In essence: Processor needs to locate instructions that can be pipelined and executed Goal: optimize pipeline usage; What factors influence this ability to locate these instructions? Any ideas? L. Tarrataca Chapter 16 - Superscalar Processors 38 / 78

Design Issues Instruction Issue Policy In essence: Processor needs to locate instructions that can be pipelined and executed Goal: optimize pipeline usage; What factors influence this ability to locate these instructions? Any ideas? Hint: Do we always need to execute instructions in the original sequential order? L. Tarrataca Chapter 16 - Superscalar Processors 39 / 78

Design Issues Instruction Issue Policy In essence: Processor needs to locate instructions that can be pipelined and executed Goal: optimize pipeline usage; What factors influence this ability to locate these instructions? Any ideas? Hint: Do we always need to execute instructions in the original sequential order? No! As long as the final result is correct! L. Tarrataca Chapter 16 - Superscalar Processors 40 / 78

Design Issues Instruction Issue Policy Three types of orderings are important in this regard: Order in which instructions are fetched; Order in which instructions are executed; Order in which instructions update register/memory contents; L. Tarrataca Chapter 16 - Superscalar Processors 41 / 78

Design Issues Instruction Issue Policy To optimize utilization of the various pipeline elements: processor may need to alter one or more of these orderings: regarding the original sequential execution. This can be done as long as the final result is correct; L. Tarrataca Chapter 16 - Superscalar Processors 42 / 78

Design Issues Instruction Issue Policy In general terms, instruction issue policies fall into the following categories: In-order issue with in-order completion; In-order issue with out-of-order completion; Out-of-order issue with out-of-order completion; Let's have a look at these. L. Tarrataca Chapter 16 - Superscalar Processors 43 / 78

Design Issues Instruction Issue Policy In-order issue with in-order completion Simplest instruction issue policy: Issue instructions respecting original sequential execution: A.k.a. in-order issue And write the results in the same order: A.k.a. in-order completion This instruction policy can be used as a baseline: for comparing more sophisticated approaches. L. Tarrataca Chapter 16 - Superscalar Processors 44 / 78

Design Issues Instruction Issue Policy Consider the following example: Figure: In-order issue with in-order completion (Source: [Stallings, 2015]) Assume a superscalar pipeline capable of: Fetching and decoding two instructions at a time; Having three separate functional units: E.g.: two integer arithmetic and one floating-point arithmetic; Having two instances of the write-back pipeline stage; L. Tarrataca Chapter 16 - Superscalar Processors 45 / 78

Design Issues Instruction Issue Policy Example assumes the following constraints on a six-instruction code: I1 requires two cycles to execute. I3 and I4 conflict for a functional unit. I5 depends on the value produced by I4. I5 and I6 conflict for a functional unit. L. Tarrataca Chapter 16 - Superscalar Processors 46 / 78

Design Issues Instruction Issue Policy From the previous example: Instructions are fetched two at a time and passed to the decode unit; Because instructions are fetched in pairs: The next two instructions wait until the pair of decode stages has cleared. To guarantee in-order completion: when there is a conflict for a functional unit: issuing of instructions temporarily stalls. Total time required is eight cycles. L. Tarrataca Chapter 16 - Superscalar Processors 47 / 78

Design Issues Instruction Issue Policy In-order issue with out-of-order completion Figure: In-order issue with out-of-order completion (Source: [Stallings, 2015]) Instruction I2 is allowed to run to completion prior to I1; allows I3 to be completed earlier, saving one cycle. Total time required is seven cycles. L. Tarrataca Chapter 16 - Superscalar Processors 48 / 78

Design Issues Instruction Issue Policy With out-of-order completion: Any number of instructions may be in the execution stage at any one time: Up to the maximum degree of machine parallelism across all functional units. Instruction issuing is stalled by: resource conflict; data dependency; procedural dependency. L. Tarrataca Chapter 16 - Superscalar Processors 49 / 78

Design Issues Instruction Issue Policy Out-of-Order issue with Out-Of-Order Completion With in-order issue: Processor will only decode instructions up to a dependency or conflict; No additional instructions are decoded until the conflict is resolved; As a result: processor cannot look ahead of the point of conflict; subsequent independent instructions that: could be useful will not be introduced into the pipeline. L. Tarrataca Chapter 16 - Superscalar Processors 50 / 78

Design Issues Instruction Issue Policy To allow out-of-order issue: Necessary to decouple the decode and execute stages of the pipeline; This is done with a buffer referred to as an instruction window; L. Tarrataca Chapter 16 - Superscalar Processors 51 / 78

Design Issues Instruction Issue Policy With this organization: Processor places instruction in window after decoding it; As long as the window is not full: processor will continue to fetch and decode new instructions; When a functional unit becomes available in the execute stage: Instruction from the window may be issued to the execute stage; Any instruction may be issued, provided that: it needs the particular functional unit that is available; no conflicts or dependencies block this instruction; L. Tarrataca Chapter 16 - Superscalar Processors 52 / 78

Design Issues Instruction Issue Policy The result of this organization is that: Processor has a lookahead capability: Independent instructions can be brought into the execute stage. Instructions are issued from the window with little regard for the original order: provided that no conflicts or dependencies exist, program execution will behave correctly; L. Tarrataca Chapter 16 - Superscalar Processors 53 / 78
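The window-based issue logic just described can be sketched as a greedy scheduler: each cycle, any instruction whose producers have finished and whose functional unit is free may issue, regardless of program order. This is a simplified illustration; the instruction names, dependencies, and single-cycle execution model are assumptions, not from a real machine.

```python
# Hypothetical sketch of out-of-order issue from an instruction window.
# Each cycle, any instruction whose dependencies are done and whose
# functional unit has a free copy is issued; completion takes one cycle.

def issue_order(instrs, deps, units):
    """instrs: list of (name, unit); deps: name -> set of producer names;
    units: unit -> number of copies. Returns names in issue order."""
    done, order = set(), []
    pending = list(instrs)
    while pending:
        free = dict(units)                  # functional units free this cycle
        issued_this_cycle = []
        for name, unit in pending:
            if deps.get(name, set()) <= done and free.get(unit, 0) > 0:
                free[unit] -= 1
                issued_this_cycle.append((name, unit))
        for item in issued_this_cycle:
            pending.remove(item)
            order.append(item[0])
        done.update(n for n, _ in issued_this_cycle)
    return order

# I5 depends on I4; I6 does not, so I6 can issue ahead of I5.
instrs = [("I4", "alu"), ("I5", "alu"), ("I6", "alu")]
print(issue_order(instrs, {"I5": {"I4"}}, {"alu": 2}))
```

With two ALU copies, I4 and I6 issue in the first cycle and I5 in the second, matching the text's observation that I6 can be issued ahead of I5.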

Design Issues Instruction Issue Policy Let's consider the following example: Figure: Out-of-Order issue with Out-Of-Order Completion (Source: [Stallings, 2015]) L. Tarrataca Chapter 16 - Superscalar Processors 54 / 78

Design Issues Instruction Issue Policy From the previous figure: During each of the first three cycles: two instructions are fetched into the decode stage; subject to the constraint of the buffer size: two instructions move from the decode stage to the instruction window. In this example: possible to issue instruction I6 ahead of I5: Recall that I5 depends on I4, but I6 does not Total execution time: 6 cycles! L. Tarrataca Chapter 16 - Superscalar Processors 55 / 78

Design Issues Instruction Issue Policy Out-of-order issue with out-of-order completion needs to respect constraints: An instruction cannot be issued if it violates a dependency or conflict; The difference is that more instructions are available for issuing: Reducing the probability that a pipeline stage will have to stall; L. Tarrataca Chapter 16 - Superscalar Processors 56 / 78

Design Issues Instruction Issue Policy In addition, a new dependency arises (write after read): Figure: Write after read (Source: [Stallings, 2015]) Instruction I3 cannot complete execution before: Instruction I2 begins execution; and Instruction I2 has fetched its operands; This is so because I3 updates R3, which is a source operand for I2. L. Tarrataca Chapter 16 - Superscalar Processors 57 / 78

Design Issues Register Renaming When instructions are issued in sequence and complete in sequence: The contents of each register are known at each point in the execution; When out-of-order techniques are used: Values in registers cannot be fully known at each point in time; This causes WAR, WAW, and RAW problems... L. Tarrataca Chapter 16 - Superscalar Processors 58 / 78

Design Issues Register Renaming What would be an obvious method for dealing with this problem? L. Tarrataca Chapter 16 - Superscalar Processors 59 / 78

Design Issues Register Renaming What would be an obvious method for dealing with this problem? We could try to rename the registers ;) Essentially we are trying to resolve the issue by duplicating the resource; L. Tarrataca Chapter 16 - Superscalar Processors 60 / 78

Design Issues Register Renaming Register Renaming In essence, processor registers are: allocated dynamically by the processor hardware; associated with the values needed by instructions at points in time; When an instruction that has a register as a destination executes: A new register is allocated for that value; For subsequent instructions that access that value as a source operand: Register references are revised to refer to the allocated register; L. Tarrataca Chapter 16 - Superscalar Processors 61 / 78

Design Issues Register Renaming Example Figure: Register Renaming (Source: [Stallings, 2015]) Register reference without the subscript refers to the original register; Register reference with the subscript refers to an allocated register; Subsequent instructions reference the most recently allocated register; L. Tarrataca Chapter 16 - Superscalar Processors 62 / 78

Design Issues Register Renaming Example Figure: Register Renaming (Source: [Stallings, 2015]) Creation of register R3c in instruction I3 avoids: The WAR dependency on I2; The WAW dependency on I1; Interfering with the correct value being accessed by I4; As a result, I3 can be issued immediately; Without renaming, I3 cannot be issued until I1 is complete and I2 is issued. L. Tarrataca Chapter 16 - Superscalar Processors 63 / 78
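The renaming scheme above can be sketched with a rename table: every write allocates a fresh subscripted version of the destination register, and later reads are redirected to the most recent version. This is an illustrative sketch assuming the instruction sequence from the text's figure (unwritten registers start as version "a").

```python
# Minimal register-renaming sketch: (dst, srcs) architectural tuples in,
# subscripted renamed versions out. Not a real microarchitecture.

def rename(instrs):
    table = {}   # architectural register -> most recent renamed version
    tags = {}    # architectural register -> number of versions allocated
    renamed = []
    for dst, srcs in instrs:
        # Reads use the latest version; an unwritten register is version 'a'.
        srcs = [table.get(r, r + "a") for r in srcs]
        tags[dst] = tags.get(dst, 1) + 1             # first write -> 'b'
        table[dst] = dst + chr(ord('a') + tags[dst] - 1)
        renamed.append((table[dst], srcs))
    return renamed

# The text's example: I1 writes R3, I2 reads R3, I3 rewrites R3, I4 reads both.
prog = [("R3", ["R3", "R5"]), ("R4", ["R3"]),
        ("R3", ["R5"]), ("R7", ["R3", "R4"])]
for dst, srcs in rename(prog):
    print(dst, "<-", srcs)
```

I3's write becomes R3c, so it no longer collides with I1's R3b or I2's read of R3b, while I4 correctly reads the newest version R3c.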

Design Issues Register Renaming But how can we gain a sense of how much performance is gained with such strategies? L. Tarrataca Chapter 16 - Superscalar Processors 64 / 78

Design Issues Register Renaming But how can we gain a sense of how much performance is gained with such strategies? Use one scalar processor devoid of these strategies as a base system; Start adding various superscalar features; Comparison need to be performed against different programs. L. Tarrataca Chapter 16 - Superscalar Processors 65 / 78

Design Issues Register Renaming Figure: Speedups of various machine organizations without procedural dependencies (Source: [Stallings, 2015]) L. Tarrataca Chapter 16 - Superscalar Processors 66 / 78

Design Issues Register Renaming From the previous Figure: Y-axis is the mean speedup of the superscalar over the scalar machine; X-axis shows the results for four alternative processor organizations: 1st: no duplication of functional units, can issue instructions out of order; 2nd: duplicates the load/store functional unit that accesses a data cache; 3rd: duplicates the ALU; 4th: duplicates both load/store and ALU; Window sizes of 8, 16, 32 instructions are shown. In the 1st graph no register renaming is allowed, whilst in the 2nd graph it is; L. Tarrataca Chapter 16 - Superscalar Processors 67 / 78

Design Issues Register Renaming What conclusions can you derive from the previous picture? L. Tarrataca Chapter 16 - Superscalar Processors 68 / 78

Design Issues Register Renaming Some conclusions (1/2): Probably not worthwhile to add functional units without register renaming. Performance improvement at the cost of hardware complexity. Register renaming gains are achieved by adding more functional units. L. Tarrataca Chapter 16 - Superscalar Processors 69 / 78

Design Issues Register Renaming Some conclusions (2/2): Significant difference in the amount of gain regarding instruction window: Small window will prevent effective utilization of the extra functional units; Processor needs to look far ahead to find independent instructions capable of using the hardware more fully. L. Tarrataca Chapter 16 - Superscalar Processors 70 / 78

Superscalar Execution Overview Superscalar Execution Overview (1/7) Let's review how all these concepts work together: Figure: Conceptual depiction of superscalar processing (Source: [Stallings, 2015]) L. Tarrataca Chapter 16 - Superscalar Processors 71 / 78

Superscalar Execution Overview Superscalar Execution Overview (2/7) Figure: Conceptual depiction of superscalar processing (Source: [Stallings, 2015]) 1 Program to be executed consists of a linear sequence of instructions; 2 This is the original sequential program generated by the compiler; L. Tarrataca Chapter 16 - Superscalar Processors 72 / 78

Superscalar Execution Overview Superscalar Execution Overview (3/7) Figure: Conceptual depiction of superscalar processing (Source: [Stallings, 2015]) 3 Instruction fetch stage generates a dynamic stream of instructions; 4 Processor attempts to remove dependencies from stream; L. Tarrataca Chapter 16 - Superscalar Processors 73 / 78

Superscalar Execution Overview Superscalar Execution Overview (4/7) Figure: Conceptual depiction of superscalar processing (Source: [Stallings, 2015]) 5 Processor dispatches instructions into an execution window; 6 In this window: Instructions no longer form a sequential stream; Instead instructions are structured according to data dependencies; L. Tarrataca Chapter 16 - Superscalar Processors 74 / 78

Superscalar Execution Overview Superscalar Execution Overview (5/7) Figure: Conceptual depiction of superscalar processing (Source: [Stallings, 2015]) 7 Processor executes each instruction in an order determined by: data dependencies; hardware resource availability; L. Tarrataca Chapter 16 - Superscalar Processors 75 / 78

Superscalar Execution Overview Superscalar Execution Overview (6/7) Figure: Conceptual depiction of superscalar processing (Source: [Stallings, 2015]) 8 Instructions are put back into sequential order and their results recorded. L. Tarrataca Chapter 16 - Superscalar Processors 76 / 78

Superscalar Execution Overview Superscalar Execution Overview (7/7) With superscalar architecture: Instructions may complete in a different order from the one specified in the program. Branch prediction and speculative execution mean that: some instructions may complete execution and then must be abandoned; Therefore memory and registers: Cannot be updated immediately when instructions complete execution; Results must be held in temporary storage that is made permanent when: it is determined that the instruction would have executed in the sequential model; L. Tarrataca Chapter 16 - Superscalar Processors 77 / 78
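The "temporary storage made permanent" idea is typically realized with a reorder buffer: results retire to the register file only in program order, and only for instructions that were not squashed by a mispredicted branch. The sketch below is a simplified illustration under those assumptions; the register names and values are made up.

```python
# Hedged sketch of in-order retirement from a reorder buffer (ROB).
# Entries are (dst, value, completed, squashed) in program order;
# retirement stops at the first entry that has not finished executing.
from collections import deque

def retire(rob, regfile):
    """Retire completed entries from the head of the ROB into regfile."""
    while rob and rob[0][2]:                 # head has finished executing
        dst, value, _, squashed = rob.popleft()
        if not squashed:
            regfile[dst] = value             # commit: becomes permanent
    return regfile

rob = deque([("r1", 10, True, False),
             ("r2", 20, True, True),    # squashed: wrong speculative path
             ("r3", 30, False, False)]) # not finished: blocks retirement
print(retire(rob, {}))
```

Only r1 is committed: r2 completed but is discarded as mis-speculated, and r3 has not completed, so nothing behind it may retire, preserving the sequential model.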

References References I Stallings, W. (2015). Computer Organization and Architecture. Pearson Education. L. Tarrataca Chapter 16 - Superscalar Processors 78 / 78