Architectural Core Salvaging in a Multi-Core Processor for Hard-Error Tolerance

Michael D. Powell, Arijit Biswas, Shantanu Gupta, and Shubu Mukherjee
SPEARS Group, Intel Massachusetts
EECS, University of Michigan

SPEARS-FACT
Motivation

- Hard errors in logic are an increasing risk
- Errors manifest at manufacture time or in the field:
  - Manufacture: more cores, bigger die -> lower yield
  - Field: wearout failure
- Large SRAMs (caches) are regular and easily protected
  - Manufacture: spare lines; field: line disable
- The remainder of the die (the cores) is not as easily protected
- Focus here is on manufacture-time defects, but the same applies in the field
- How do we tolerate core defects?
Tolerating defective cores

- Options for a defective core: disable it or salvage it
  - Disabling wastes the entire core, even for a minor defect
  - Salvaging uses redundancy to maintain correctness
- Salvage by using redundancy to tolerate the defect:
  - µarchitectural: use another resource within the core
  - Architectural: use another core
- Architectural salvaging covers more defects
µarchitectural salvaging

- The natural method of defect tolerance; studied both by others and by us
- Protects only resources with intra-core redundancy
  - Small-array entry: other entries
  - Execution logic: other logic with the same function

[Figure: core floorplan (fetch, d-code, issue, rob, ld/st, fp) contrasting the perceived µarch salvaging area coverage with its actual limit: ~10% of core area covered, ~90% not]

- Difficult to cover much more than 10% of core area
- Coverage is not as high as might be expected
Architectural core salvaging

- Key observation: in a CMP, the die must support all instructions, but individual cores need not support all instructions
- Architectural salvaging:
  - Threads can be dynamically migrated (swapped) between cores to guarantee progress
  - On-demand context switch (CS)
  - Cores with critical defects in execution units can still be used
  - Assumes uncore hardware for context-state transfer

[Figure: 8-core CMP (cores 0-7); a thread on a defective core swaps with a thread on a fault-free core when it needs the defective resource]

- Low overhead if the defective units are used infrequently
Architectural core salvaging: contributions

- Better performance than core disabling
  - Most workloads get useful work from the defective core
- Exploits architectural redundancy to exceed the limitations of µarch redundancy
  - Covers a larger portion of the core with a less invasive technique
  - µarch salvaging: 16% maximum exec-unit coverage
  - Arch salvaging: 46% demonstrated exec-unit coverage
Outline

- Introduction
- Limitations of µarchitectural salvaging
- Architectural salvaging
- Methodology and performance results
- Conclusion
µarch salvaging: small arrays

- Small RAMs and CAMs occupy substantial core area
  - Buffers, queues, regfiles: too small for spare arrays
  - May protect using spare entries or by reducing effective size
- Spares cover only the memory cells; not decoders, muxes, or sense amps
  - Memory-cell fraction decreases with array size

  Structure                  Size                 Ports    Memory cells (redundant)   Support logic/wires
  Decode queue               32 64-bit entries    4 r, 4 w 13%                        87%
  Reorder buffer             96 72-bit entries    8 r, 4 w 17%                        83%
  Cache array (reference)    64KB, 2-way          1 r/w    60%                        40%

- The area truly covered can be deceptively low
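Why the truly covered area is deceptively low can be seen with a back-of-envelope calculation: spare entries protect only the memory cells, so an array's real coverage is its area times its memory-cell fraction. In the sketch below, the memory-cell fractions come from the table above, but the array-area fractions are hypothetical placeholders chosen only for illustration.

```python
# Back-of-envelope: effective defect coverage of spare-entry schemes.
# Spare entries protect only memory cells, not decoders/muxes/sense amps.
arrays = {
    # name: (fraction of non-cache core area [ASSUMED], memory-cell fraction)
    "decode queue":   (0.05, 0.13),
    "reorder buffer": (0.08, 0.17),
}

total_array_area = sum(area for area, _ in arrays.values())
truly_covered = sum(area * cells for area, cells in arrays.values())

print(f"arrays occupy {total_array_area:.1%} of core area, "
      f"but only {truly_covered:.2%} is truly covered")
```

With these placeholder numbers, arrays holding 13% of the core area yield only about 2% true coverage, which is the slide's point.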
µarch salvaging: execution units

- Many instruction classes are replicated
  - The canonical redundancy example; a superscalar hallmark
- But there is less redundancy than might be expected in IA
  - Non-replicated instructions may share structures
  - Instruction replication != structure redundancy

[Figure: exec-unit area breakdown; only 16% of exec-unit area is µarch redundant, 84% is not]

- Most exec area serves non-replicated instructions
µarch salvaging: summary

- Small-array + exec coverage: ~10% of the non-cache core
- Not enough µarch redundancy for high coverage
- Each structure requires its own salvaging hardware
- Other redundancy is needed to obtain high coverage
Outline

- Introduction
- Limitations of µarchitectural salvaging
- Architectural salvaging
- Methodology and performance results
- Conclusion
Architectural salvaging

- Other cores provide redundancy and cover defects
  - Each core needs to know its own defects
- If a thread needs a defective resource: trap and migrate to another core
  - O/S- and user-transparent: the APIC ID is swapped between cores along with the thread
  - Migration occurs using the h/w C6 power-state array (a few KB)
- Overhead and performance
  - If the defective resource is used frequently by all threads, fall back to core disabling to avoid migration thrashing
  - This places an upper bound on the performance loss
- What is the design space (# of cores) for architectural salvaging?
Core salvaging: simple performance model

[Chart: relative throughput vs. # of cores (0-24) for three cases: 1 core disabled (-100%), 1 core loses 25% of its throughput, 1 core loses 10%; the salvaged cases show a throughput loss of 5% or less beyond about 5 cores]

- Salvaging makes sense for CMPs with more than ~5 cores
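The chart's model can be sketched in a few lines; the function names below are ours. With n equal cores, disabling one leaves (n-1)/n of the die throughput, while salvaging a core that loses a fraction `loss` of its own throughput leaves (n-1+(1-loss))/n, so the die-level loss is loss/n and drops below 5% for a 25%-degraded core once n >= 5.

```python
def rel_throughput_disable(n):
    """Relative die throughput when one of n equal cores is disabled."""
    return (n - 1) / n

def rel_throughput_salvage(n, loss):
    """One core keeps running but loses fraction `loss` of its throughput."""
    return (n - 1 + (1 - loss)) / n

# Die-level loss from salvaging is loss/n; compare against disabling.
for n in (2, 4, 8, 16, 24):
    print(n,
          round(rel_throughput_disable(n), 3),
          round(rel_throughput_salvage(n, 0.25), 3),
          round(rel_throughput_salvage(n, 0.10), 3))
```

Salvaging always dominates disabling in this model; the crossover the slide cares about is where the salvaging loss becomes negligible (<= 5%), not where it beats disabling.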
Arch. salvaging: targeted instructions

- Target infrequent instructions
  - Those used by only some applications, or used in most applications but only a few times
  - E.g., certain floating-point and SIMD instructions
- Disallow salvaging of critical instructions
  - Loads, stores, simple integer ALU, branches
  - A defect in logic executing a critical instruction -> disable the core
- Structures used only by infrequent instructions are a large fraction of execution-unit area
- Are there enough infrequent instructions?
Instruction occurrence

[Chart: fraction of non-overlapping 100K-instruction windows that do not contain a given instruction class, for five workload suites (i: SPEC int 2K, f: SPEC fp 2K, 6: SPEC 2006, s: server, m: multimedia) and the classes fp div, fp mul, fp rom, i mul, i div, i shuf, si shift, i slow]

- Many (large-area) instruction classes are quite infrequent
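The metric behind the chart can be sketched as a simple trace analysis; `absent_window_fraction`, the class names, and the toy trace below are our own illustration, not the paper's tooling.

```python
def absent_window_fraction(trace, target_class, window=100_000):
    """Fraction of non-overlapping `window`-sized chunks of an
    instruction trace that contain NO instruction of `target_class`."""
    chunks = [trace[i:i + window]
              for i in range(0, len(trace) - window + 1, window)]
    if not chunks:
        return 0.0
    absent = sum(1 for chunk in chunks if target_class not in chunk)
    return absent / len(chunks)

# Toy trace: 20 instructions; a single fp divide falls in the second
# 10-instruction window, so half the windows could run on a core with
# a defective fp divider without migrating.
trace = ["int_alu"] * 10 + ["fp_div"] + ["int_alu"] * 9
```

A high absent fraction for a class means threads rarely need that unit within a scheduling quantum, so a core with a defect there can still do useful work between (rare) migrations.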
Outline

- Introduction
- Limitations of µarchitectural salvaging
- Architectural salvaging
- Methodology and performance results
- Conclusion
Methodology

- Modeled architecture: Intel Core 2-like
  - 8 cores; 8 MB shared last-level cache
  - 4-issue out-of-order
  - Each core: 64KB i-cache, 64KB d-cache, 1MB L2
- Assume 1000-cycle thread-migration overhead
- Fall back to disabling for 150K cycles if > 2 migrations occur within 40K cycles
- Workloads: SPEC 2000, SPEC 2006, server, multimedia
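The thrash-avoidance fallback can be sketched behaviorally; `MigrationGovernor` and its method names are hypothetical, modeling the slide's thresholds in software rather than the actual hardware mechanism.

```python
class MigrationGovernor:
    """Fall back to core disabling when migrations thrash.

    Thresholds match the slide: more than 2 migrations within any
    40K-cycle window temporarily disables the defective core for
    150K cycles (threads then run only on the healthy cores).
    """

    def __init__(self, max_migrations=2, window=40_000, disable_for=150_000):
        self.max_migrations = max_migrations
        self.window = window
        self.disable_for = disable_for
        self.history = []          # cycle stamps of recent migrations
        self.disabled_until = -1   # cycle at which the core re-enables

    def on_defective_resource(self, cycle):
        """Decide what happens when a thread traps on a defective
        resource at `cycle`: returns 'migrate' or 'disabled'."""
        if cycle < self.disabled_until:
            return "disabled"
        # Keep only migrations inside the sliding window, then record this one.
        self.history = [c for c in self.history if cycle - c < self.window]
        self.history.append(cycle)
        if len(self.history) > self.max_migrations:
            self.disabled_until = cycle + self.disable_for
            self.history.clear()
            return "disabled"
        return "migrate"
```

The point of the design is the upper bound on overhead: once migrations exceed the threshold, performance degrades no further than plain core disabling would.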
Core salvaging performance (8-core die)

[Chart: mean relative throughput per workload suite (i: SPEC int 2K, f: SPEC fp 2K, 6: SPEC 2006, s: server, m: multimedia), shown both relative to a defect-free die and relative to disabling one core]

  Defective unit   % of execution-unit area covered
  fp div           1.5
  fp mul           5.1
  fp rom           2.1
  i mul            2.5
  i div            0.0
  si-shift         2.2
  i slow           0.3
  i,si-shuf        6.2
  alldiv           6.9
  allfp            34.7

- Average 5-7% better throughput than disabling
Architectural salvaging coverage

- Execution-unit case study:
  - µarch covered at most 16% of execution-unit area
  - We show proofs-of-concept for arch salvaging covering 46%
  - That accounts for 9% of vulnerable core area, vs. 3% for µarch
- Core level: µarch covered at most 10.6% of the core
  - Arch salvaging covers nearly that much in the exec units alone
  - Combining exec coverage with hybrid h/w salvaging (shown in the paper) covers 21% of vulnerable core area

[Figure: core floorplan (fetch, d-code, issue, rob, ld/st, fp) highlighting the 9% of core area covered architecturally and the further 12% covered by the hybrid scheme]
Outline

- Introduction
- Limitations of µarchitectural salvaging
- Architectural salvaging
- Methodology and performance results
- Conclusion
Conclusions

- Hard errors in logic are an increasing risk
- Architectural vs. µarch core salvaging:
  - Covers a larger portion of the core with a less invasive technique
  - Covers 46% of execution units vs. 16% for µarch
  - Covered exec units are 9% of vulnerable core area
- Salvaging applies at manufacture time or in the field
- Better performance than core disabling
  - A core with a minor defect delivers nearly full performance