Trends in Microprocessor Architecture
R05, Robert Mullins

Overview
- Computer architecture
- Scaling performance and CMOS
- Where have performance gains come from?
- Modern superscalar processors
- The limits of superscalar processors
- This course

Computer architecture
- "Computer architecture is the interface between what technology can provide and what the marketplace demands."
- "Computer architecture is the science and art of selecting and interconnecting hardware components to create computers that meet functional, performance and cost goals." (Mark Hill)
- "Computer architecture is a science of trade-offs." (Yale Patt)
- "Computer architecture forms the bridge between application need and the capabilities of the underlying technology." (Tilak Agerwala and Siddhartha Chatterjee)
Computer architecture
- We cannot architect a new computer without defining performance, power and cost goals; the design process is all about understanding and making trade-offs.
- What is our target market and what applications will we be running?
- The best architecture is a moving target:
  - The needs of the marketplace change
  - Fabrication technology characteristics shift
  - New technologies emerge: memory, packaging, compilers, languages, ...

- "Computer architects often err by preparing for yesterday's computations." (Bill Dally)
  (It is easy to make the same error during a PhD!)
- Tomorrow's applications and technologies are not easy to predict!
- Burger's "end of the road" paper suggested performance growth would be limited to 12.5%/annum:
  - Predicted (1997-2014): 7.4x
  - Actual: ~36x
  - If at the historical rate: 1720x
(Reproduced from Computer Architecture: A Quantitative Approach, Hennessy/Patterson)
Microprocessor trends
- Microprocessor performance increased at a rate of ~52%/year between 1986 and 2002, an ~800X improvement over 16 years.
- How was such an improvement in performance achieved?
- Is this a reasonable rate of performance growth given the advances in fabrication technology?
- Exe. time = Instr. count x CPI x Clock period
(Data: https://github.com/karlrupp/microprocessor-trend-data)

Technology scaling
- Transistor speed: scaling provides ~1.4X transistor performance improvement per generation, or ~10.5X over 7 process generations.
  (Careful: this doesn't automatically translate directly into performance gains.)
- Gates per clock: less logic between pipeline registers, a reduction from ~100 to ~10 gate delays (~10X). How?
  - Pipelining: 5 to 20 stages (~4X)
  - Circuit-level advances, e.g. new logic families (~2.5X)
(Reproduced with kind permission of Mark Horowitz)
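The growth figures above compound, so they can be sanity-checked directly; the sketch below uses only the rates quoted on the slides, and the machine parameters in the final comparison are made-up illustrative numbers, not measurements.

```python
# Sanity-check the compound growth figures quoted above.

# ~52%/year performance growth over the 16 years from 1986 to 2002:
perf_growth = 1.52 ** 16
print(f"1.52^16 = {perf_growth:.0f}x")   # ~800x, as stated

# ~1.4x transistor performance per process generation, over 7 generations:
speed_growth = 1.4 ** 7
print(f"1.4^7 = {speed_growth:.1f}x")    # ~10.5x

# The "iron law": execution time = instruction count * CPI * clock period,
# so any speedup must come from one (or more) of those three terms.
def exe_time(instr_count, cpi, clock_period_s):
    return instr_count * cpi * clock_period_s

baseline = exe_time(1e9, 1.5, 1 / 100e6)   # hypothetical 100 MHz machine
improved = exe_time(1e9, 0.8, 1 / 1e9)     # hypothetical 1 GHz machine
print(f"speedup = {baseline / improved:.1f}x")
```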
IPC & instruction count
- ~5-8X improvement in SPECint/MHz, despite the ~105X improvement in clock frequency.
- Includes advances in compiler technology and the impact of increased bus widths.
- (Figure: improvement in SPECint95/MHz over time)
(Reproduced with kind permission of Mark Horowitz; reproduced from CMOS VLSI Design, Weste/Harris (2005))

How was it possible to maintain and even decrease CPI (improve IPC)?
- Moore's law! How were the additional transistors exploited?
- Intel 386 to Pentium 4:
  - 386: 275K transistors (die size = 43mm2)
  - P4: 42M transistors (die size = 217mm2)
  - 5X from the increased die size, 27X from technology scaling
- Today's (2017) largest chips contain >10 billion transistors.
(Reproduced from CMOS VLSI Design, Weste and Harris (2005))
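The 386-to-Pentium-4 transistor budget above can be cross-checked with a couple of lines of arithmetic; the counts and die areas are the ones quoted on the slide, and the small mismatch with the quoted 27X is noted in a comment.

```python
# Where did the 386 -> Pentium 4 transistor budget come from?
T_386, T_P4 = 275e3, 42e6          # transistor counts from the slide
A_386, A_P4 = 43.0, 217.0          # die sizes in mm^2, from the slide

total = T_P4 / T_386               # ~153x more transistors overall
from_area = A_P4 / A_386           # ~5x from the larger die
from_scaling = total / from_area   # remainder from technology scaling
# from_scaling comes out at ~30x here vs the ~27x quoted on the slide;
# the difference is within the rounding of the slide's figures.

print(f"total {total:.0f}x = area {from_area:.1f}x * scaling {from_scaling:.0f}x")
```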
The future of Moore's Law: 2D to 3D
- Beyond 2021 it won't be economically desirable to shrink transistor dimensions.
- Vertical transistors (e.g. dual-gate and tri-gate) have recently been introduced.
- Monolithic 3D is predicted by 2024.
- The roadmap will consider applications in future (more of an end-to-end view vs. bottom-up).
- The latest ITRS roadmap (2015) predicts that physical gate length will not shrink beyond 2021; earlier predictions (2013) were more optimistic.

Modern superscalar processors
- Revision (see Hennessy/Patterson)
- Significant hardware support for instruction-level parallelism (ILP) in most commercial microprocessors:
  - Multiple-issue architectures
  - Deep pipelines, branch prediction, speculative execution
  - Large on-chip caches (L1/L2/L3)
  - Out-of-order execution, register renaming
  - Dynamic memory address disambiguation
  - SIMD instructions, ...
Cost and complexity of extracting ILP
- Diminishing returns:
  - Pipeline depth limits: interruptions to the pipeline (branches), performance of the memory system, clocking overheads (registers/clock skew), and the need to balance stages and maintain the atomicity of some operations
  - Limited ILP
  - Power cost
- Increased complexity limits our ability to optimise the design, increases verification complexity and time, and increases time-to-market.
- The underlying fabrication technology characteristics are becoming more challenging too.
(See also the "Optimal Pipeline Depth" link on the Seminar 1 wiki page)

Interconnect versus transistor scaling
- Smaller transistors = faster/lower power, but wires don't scale in the same way.
- Centralised structures don't scale well, so there is pressure to decentralise:
  - Consider the bypass network between FUs
  - Clustered implementations
("Coming challenges in microarchitecture and architecture", Ronen et al., 2001)
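The pipeline-depth limit discussed above can be illustrated with a toy model: deeper pipelines shrink the cycle time, but each stage adds latch/skew overhead and lengthens the flush penalty on a branch misprediction. This is only a sketch; all of the constants below are assumed illustrative values, not figures from the slides or the "Optimal Pipeline Depth" paper.

```python
# Toy model of the optimal pipeline depth trade-off (illustrative numbers only).

T_LOGIC = 20.0      # total logic delay per instruction, in gate delays (assumed)
T_LATCH = 1.5       # latch + clock-skew overhead per stage (assumed)
FLUSH_RATE = 0.04   # pipeline flushes per instruction (assumed)

def time_per_instruction(stages: int) -> float:
    cycle = T_LOGIC / stages + T_LATCH   # cycle time shrinks with more stages
    cpi = 1.0 + FLUSH_RATE * stages      # flush penalty grows with depth
    return cycle * cpi                   # gate delays per instruction

best = min(range(1, 41), key=time_per_instruction)
for n in (1, 5, best, 40):
    print(f"{n:2d} stages: {time_per_instruction(n):.2f} gate delays/instr")
```

With these (made-up) constants the optimum sits at a moderate depth; pushing well beyond it makes performance worse, which is the diminishing-returns point the slide is making.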
Voltage scaling and power limits
- Voltage scaling has slowed:
  - 5V to 1V gave us 25X power savings
  - 1V to 0.7V (the limit at the end of CMOS, around 2020): only 2X power savings left from voltage scaling!
- Sensible power limits have already been reached; there is pressure to reduce power consumption.
- Process variation complications
- Accept that we can make little progress with single-thread performance; look towards thread-level parallelism instead.
- Achieve our performance gains in a new way:
  - Rapidly increase the number of cores (2X-3X per generation)
  - Don't scale the clock frequency
  - Create simpler, more power-efficient cores instead
- Fault-tolerance requirements in the longer term
(Pawlowski (Intel), 2007)

It is now 2018...
- The number of cores has scaled less aggressively than this.
- In 2017, at 14nm, what is "simple"?
- Replicate existing processor designs to ease the design process.
- Many applications already exist where thread-level parallelism is plentiful.
- We've had 30+ years of experience writing parallel programs.
- High-end server part: 28-core Xeon (Skylake)
  - 56 threads
  - Clock frequency 2.5GHz (max turbo freq. 3.8GHz)
  - TDP (power) = 205W
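The voltage-scaling arithmetic above follows from the dynamic CMOS power relation P = a * C * V^2 * f: at fixed capacitance and frequency, the saving from a supply-voltage reduction goes as the square of the voltage ratio. A minimal sketch, using only the voltages quoted on the slide:

```python
# Dynamic power scales as V^2, so the saving from dropping the supply
# voltage is the square of the ratio of old to new VDD.

def power_saving(v_old: float, v_new: float) -> float:
    return (v_old / v_new) ** 2

print(f"5V  -> 1V  : {power_saving(5.0, 1.0):.0f}x")   # 25x, as on the slide
print(f"1V  -> 0.7V: {power_saving(1.0, 0.7):.1f}x")   # only ~2x remaining
```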
Many new challenges
- Power is a first-order design constraint; power-constrained design
- On-chip and off-chip communication
- Simpler cores and Amdahl's law
- Support for the shared-memory paradigm?
- Synchronization and thread-scheduling support?
- Everyone must now write scalable and correct parallel programs!

Power savings?
- Power consumption is already at a sensible limit (for many applications we would like to reduce it), yet we are going to increase the number of cores by 2-3X per generation. Where do the power savings come from?
  - Core shrink (<1.4X)
  - Simpler cores (1.4-2X?)
  - Some VDD savings
  - (We need to add uncore logic too!)
  - Techniques for adaptive EPI?

Future of multicore? Beyond homogeneous multicore
- Power consumption is a limiting factor in the design of multicore processors.
- For many designs this has prompted the integration of many specialised accelerators: an ASIC implementation of an algorithm may be 10-1000X more energy efficient than a software implementation.
- e.g. Apple A8 SoC: ~50% custom accelerators, ~25% CPUs (2), ~25% GPU
- NAVIGO [Hempstead, Wei and Brooks, 2011]: examined a throughput-oriented workload; suggests gains are limited to 35% per year due to power constraints.
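The "simpler cores and Amdahl's law" point above is worth making concrete: if any fraction of the work stays serial, adding cores quickly stops helping. The sketch below uses an assumed 95% parallel fraction purely for illustration.

```python
# Amdahl's law: with a parallel fraction p spread over n cores,
# speedup(n) = 1 / ((1 - p) + p / n).

def amdahl(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 8, 64, 1024):
    print(f"{n:4d} cores: {amdahl(0.95, n):.1f}x")
# Even with 95% of the work parallel, speedup is capped at 1/0.05 = 20x,
# no matter how many (simple) cores we add.
```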
Future gains
- Need for applications to be approximation/fault tolerant
- New applications
- Power and energy constraints
- Parallel programming for the masses
- IRDS roadmap (2016): 2nm/1.5nm node, vertical Gate-All-Around (GAA) devices, monolithic 3D (stacking of devices), VDD = 0.4V

This course
- An introduction to the challenges of building and programming chip multiprocessors.
- Lots to learn from traditional parallel computers, but many problems and trade-offs are new; the trade-offs on-chip are very different to those when designing physically larger parallel machines.
1. Trends in microprocessor architecture
2. Introduction to parallel computing
3. Parallel algorithms
4. Chip multiprocessors (I)
5. Chip multiprocessors (II)
6. Transactional memory
7. On-chip interconnection networks
8. Manycore research issues

Guest speakers in previous years:
- 2017: Rune Holm, ARM (Machine Learning Group)
- 2016: Gavin Stark, Netronome (CTO)
- 2014: David Moloney, Movidius (CTO)
- 2012: Matt Horsnell, ARM
- 2011: Eben Upton, Broadcom
- ...