Pipelining in computer architecture

Pipelining is a technique used in modern processors to improve performance. It allows multiple instructions to be processed simultaneously using different processor components. This increases throughput compared to sequential processing. However, pipeline stalls can occur due to data hazards when instructions depend on each other, instruction hazards from branches or cache misses, or structural hazards when resources are needed simultaneously. Various techniques like forwarding, reordering, and branch prediction aim to reduce the impact of hazards on pipeline performance.

In this document

Introduction by Ramakrishna Reddy on Chapter 8 of Pipelining in computer architecture.

Pipelining enhances performance and throughput, needing sophisticated compilation techniques.

Introduction to basic concepts related to pipelining operations.

Improving execution speed through faster circuits and concurrent operations in processing.

Pipelining decomposes processes into sub-operations, akin to an assembly line in manufacturing.

Laundry analogy depicting sequential vs. pipelined processes with time calculations.

Pipelining improves throughput, not latency, with focus on instruction fetch and execution stages.

Describes hardware for fetching and executing instructions, including buffer usage.

Detailed operations of fetching, decoding, executing, and writing in four-stage pipeline.

Fast cache memory mitigates main memory latency impacts on pipeline performance.

Growth potential and factors influencing pipeline execution efficiency, such as stalls and hazards.

Interactive quiz question on calculating cycles for a 4-stage pipeline scenario.

Definition of data hazards, their examples, and implications in pipelined processors.

Concept of instruction hazards including branch interruptions causing pipeline stalls.

Effects of unconditional branches on pipeline execution and strategies to minimize penalties.

Utilization of instruction queues and the role of dispatch units in reducing stalls.

Challenges of conditional branches in pipelining and using delays for instruction efficiency.

Predictions to manage branch instructions and maintain instruction flow without stalls.

The influence of addressing modes on pipelining, including advantages and trade-offs.

Effects of condition codes on instruction reordering and maintaining computational outcomes.

Alterations in the datapath for effective pipelined execution and hardware organization.

Superscalar processing enhancing concurrency with multiple instruction executions.

Strategies for managing out-of-order execution and handling exceptions to preserve order.

Factors affecting pipeline execution time and the limits of throughput enhancement.

Chapter 8. Pipelining
Ramakrishna Reddy Bijjam, Asst. Professor, Avanthi Group of Colleges
Overview Pipelining is widely used in modernprocessors. Pipelining improves system performance interms of throughput.[No of work done at agiven time] Pipelined organization requires sophisticatedcompilation techniques.
Basic Concepts
Making the Execution of Programs Faster
- Use faster circuit technology to build the processor and the main memory.
- Arrange the hardware so that more than one operation can be performed at the same time.
In the latter approach, the number of operations performed per second is increased, even though the elapsed time needed to perform any one operation is not changed.
pipeline It is technique of decomposing a sequentialprocess into suboperation, with eachsuboperation completed in dedicatedsegment. Pipeline is commonly known as an assemblyline operation. It is similar like assembly line of carmanufacturing. First station in an assembly line set up achasis, next station is installing the engine,another group of workers fitting the body.
Traditional Pipeline ConceptLaundry ExampleAnn, Brian, Cathy, Daveeach have one load of clothesto wash, dry, and foldWasher takes 30 minutesDryer takes 40 minutes“Folder” takes 20 minutesA B C D
Traditional Pipeline Concept Sequential laundry takes 6hours for 4 loads If they learned pipelining,how long would laundrytake?ABCD30 40 20 30 40 20 30 40 20 30 40 206 PM 7 8 9 10 11 MidnightTime
Traditional Pipeline ConceptPipelined laundry takes3.5 hours for 4 loadsABCD6 PM 7 8 9 10 11 MidnightTaskOrderTime30 40 40 40 40 20
Traditional Pipeline Concept Pipelining doesn’t helplatency of single task, ithelps throughput of entireworkload Pipeline rate limited byslowest pipeline stage Multiple tasks operatingsimultaneously usingdifferent resources Potential speedup = Numberpipe stages Unbalanced lengths of pipestages reduces speedup Time to “fill” pipeline andtime to “drain” it reducesspeedup Stall for DependencesABCD6 PM 7 8 9TaskOrderTime30 40 40 40 40 20
Idea of Pipelining in a Computer
The processor executes a program by fetching and executing instructions, one after the other. Let Fi and Ei refer to the fetch and execute steps for instruction Ii.
Use the Idea of Pipelining in a Computer
Figure 8.1. Basic idea of instruction pipelining: (a) sequential execution of I1, I2, I3; (b) hardware organization with an instruction fetch unit, an execution unit, and interstage buffer B1; (c) pipelined execution, with one fetch and one execution overlapping in each clock cycle.
Contd., Computer that has two separate hardwareunits, one for fetching and another forexecuting them. the instruction fetched by the fetch unit isdeposited in an intermediate buffer B1. This buffer needed to enable the executionunit while fetch unit fetching the nextinstruction.
In Figure 8.1(c), the computer is controlled by a clock, and each instruction fetch or execute step is completed in one clock cycle.
Figure 8.2. A 4-stage pipeline (textbook page 457): (a) instruction execution divided into four steps; (b) hardware organization with interstage buffers B1, B2, B3. The stages are F (fetch instruction), D (decode instruction and fetch operands), E (execute operation), and W (write results).
- Fetch (F): read the instruction from memory.
- Decode (D): decode the instruction and fetch the source operands.
- Execute (E): perform the operation specified by the instruction.
- Write (W): store the result in the destination location.
Role of Cache Memory
Each pipeline stage is expected to complete in one clock cycle, so the clock period must be long enough for the slowest pipeline stage to complete; faster stages can only wait for the slowest one. Since main memory is very slow compared to execution (an access can take about ten times longer than one pipeline stage), the pipeline would be almost useless if every instruction had to be fetched from main memory. Fortunately, we have cache memory.
Pipeline Performance The potential increase in performanceresulting from pipelining is proportional to thenumber of pipeline stages. However, this increase would be achievedonly if all pipeline stages require the sametime to complete, and there is no interruptionthroughout program execution. Unfortunately, this is not true. Floating point may involve many clock cycle
Figure 8.3. Effect of an execution operation taking more than one clock cycle: instruction I2's Execute step occupies several cycles, stalling the instructions behind it.
Pipeline Performance The previous pipeline is said to have been stalled for two clockcycles. Any condition that causes a pipeline to stall is called a hazard. Data hazard – any condition in which either the source or thedestination operands of an instruction are not available at thetime expected in the pipeline. So some operation has to bedelayed, and the pipeline stalls. Instruction (control) hazard – a delay in the availability of aninstruction causes the pipeline to stall.[cache miss] Structural hazard – the situation when two instructions requirethe use of a given hardware resource at the same time.
Figure 8.4. Pipeline stall caused by a cache miss in F2: (a) instruction execution steps in successive clock cycles; (b) function performed by each processor stage in successive clock cycles. The Decode unit is idle in cycles 3 through 5, the Execute unit in cycles 4 through 6, and the Write unit in cycles 5 through 7. Such idle periods are called stalls (bubbles); this is an instruction hazard.
Figure 8.5. Effect of a Load instruction on pipeline timing (Load X(R1), R2): the memory address X + [R1] is computed in step E2 in cycle 4, the memory access takes place in cycle 5, and the operand read from memory is written into register R2 in cycle 6. Because execution takes two cycles, the pipeline stalls for one cycle: otherwise both I2 and I3 would require access to the register file in cycle 6 (a structural hazard).
Pipeline Performance Again, pipelining does not result in individualinstructions being executed faster; rather, it is thethroughput that increases. Throughput is measured by the rate at whichinstruction execution is completed. Pipeline stall causes degradation in pipelineperformance. We need to identify all hazards that may cause thepipeline to stall and to find ways to minimize theirimpact.
Quiz Four instructions, the I2 takes two clockcycles for execution. Pls draw the figure for 4-stage pipeline, and figure out the total cyclesneeded for the four instructions to complete.
Data Hazards
Data Hazards We must ensure that the results obtained when instructions areexecuted in a pipelined processor are identical to those obtainedwhen the same instructions are executed sequentially. Hazard occursA ← 3 + AB ← 4 × A No hazardA ← 5 × CB ← 20 + C When two operations depend on each other, they must beexecuted sequentially in the correct order. Another example:Mul R2, R3, R4Add R5, R4, R6
Figure 8.6. Pipeline stalled by the data dependency between D2 and W1: the Add cannot complete its decode (operand fetch of R4) until the Mul's write W1 is done, so the pipeline stalls.
Operand Forwarding
Instead of reading from the register file, the second instruction can get the data directly from the output of the ALU after the previous instruction completes. A special arrangement is needed to "forward" the output of the ALU back to its input.
Figure 8.7. Operand forwarding in a pipelined processor: (a) the datapath, with the register file feeding source registers SRC1 and SRC2, the ALU, and the result register RSLT; (b) the position of the source and result registers in the processor pipeline, with a forwarding path from the output of the E (Execute/ALU) stage back to its input, bypassing the W (Write/register file) stage.
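The bypass can be illustrated with a toy model (my own sketch; the tuple format, dest last as in "Mul R2,R3,R4", is an assumption): when a source register matches the previous instruction's destination, the ALU reads the forwarded value instead of the stale register file.

```python
def execute(inst, regs, forward_reg=None, forward_val=None):
    """inst = (op, src1, src2, dest), e.g. ('Mul', 'R2', 'R3', 'R4')
    means R4 <- [R2] * [R3]. If a source matches forward_reg, the
    bypassed ALU output is used instead of the register file."""
    op, s1, s2, dest = inst

    def read(reg):
        # The forwarding path: take the previous ALU output directly.
        return forward_val if reg == forward_reg else regs[reg]

    a, b = read(s1), read(s2)
    return dest, (a * b if op == "Mul" else a + b)

regs = {"R2": 3, "R3": 4, "R4": 0, "R5": 10, "R6": 0}
d1, v1 = execute(("Mul", "R2", "R3", "R4"), regs)          # ALU output: 12
d2, v2 = execute(("Add", "R5", "R4", "R6"), regs, d1, v1)  # reads forwarded 12
print(v2)  # 22, even though regs['R4'] was never updated
```

The Add sees the Mul's result one cycle earlier than the register-file write would allow, which is exactly what the forwarding path in Figure 8.7 buys.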
Handling Data Hazards in Software
Let the compiler detect and handle the hazard:

I1: Mul R2, R3, R4
    NOP
    NOP
I2: Add R5, R4, R6

The compiler can reorder the instructions to perform some useful work in the NOP slots.
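A minimal sketch of such a compiler pass (my own illustration; the tuple format with the destination last, as in "Mul R2,R3,R4", is an assumption): it scans each instruction's sources against recent destinations and pads with NOPs until the producer is far enough behind.

```python
def insert_nops(program, delay=2):
    """program: list of (op, src1, src2, dest) tuples, destination last.
    Inserts NOPs so no instruction reads a register written fewer than
    `delay` + 1 slots earlier."""
    out = []
    for inst in program:
        srcs, dest = inst[1:-1], inst[-1]
        # Look back over the last `delay` emitted instructions.
        for back, prev in enumerate(reversed(out[-delay:]), start=1):
            if prev[0] != "NOP" and prev[-1] in srcs:
                # Pad until the producer is more than `delay` slots behind.
                out.extend([("NOP",)] * (delay - back + 1))
                break
        out.append(inst)
    return out

prog = [("Mul", "R2", "R3", "R4"),
        ("Add", "R5", "R4", "R6")]
for inst in insert_nops(prog):
    print(inst)  # Mul, NOP, NOP, Add
```

On the slide's example this emits exactly the two NOPs shown; a real compiler would then try to fill those slots with independent useful instructions instead.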
Side Effects The previous example is explicit and easily detected. Sometimes an instruction changes the contents of a registerother than the one named as the destination. When a location other than one explicitly named in an instructionas a destination operand is affected, the instruction is said tohave a side effect. (Example?) Example: conditional code flags:Add R1, R3AddWithCarry R2, R4 Instructions designed for execution on pipelined hardware shouldhave few side effects.
Instruction Hazards
Overview Whenever the stream of instructions suppliedby the instruction fetch unit is interrupted, thepipeline stalls. Cache miss Branch
Figure 8.8. An idle cycle caused by a branch instruction: I3 is fetched after the branch I2 and then discarded, leaving the execution unit idle for one cycle before Ik, the branch target, enters the pipeline.
Unconditional Branches The time lost as a result of a branchinstruction is referred to as the branchpenalty. The previous example instruction I3 iswrongly fetched and branch target address kwill discard the i3. Reducing the branch penalty requires thebranch address to be computed earlier in thepipeline. Typically the Fetch unit has dedicated h/wwhich will identify the branch target addressas quick as possible after an instruction isfetched.
Figure 8.9. Branch timing: (a) branch address computed in the Execute stage, giving a two-cycle branch penalty; (b) branch address computed in the Decode stage, reducing the penalty to one cycle.
Instruction Queue and Prefetching Either cache (or) branch instruction stalls thepipeline. Many processor employs dedicated fetch unitwhich will fetch the instruction and put theminto a queue. It can store several instruction at a time. A separate unit called dispatch unit, takesinstructions from the front of the queue andsend them to the execution unit.
Figure 8.10. Use of an instruction queue in the hardware organization of Figure 8.2b: the instruction fetch unit (F) feeds an instruction queue, from which the dispatch/decode unit (D) issues instructions to the execute (E) and write (W) stages.
Conditional Branches
A conditional branch instruction introduces the added hazard caused by the dependency of the branch condition on the result of a preceding instruction: the decision to branch cannot be made until the execution of that instruction has been completed. Branch instructions represent about 20% of the dynamic instruction count of most programs.
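That 20% figure makes it easy to estimate how much branches cost on average. A back-of-the-envelope sketch (the 60% taken rate and 2-cycle penalty are my illustrative assumptions, not from the slides):

```python
def effective_cpi(base_cpi, branch_frac, taken_frac, penalty):
    """Average cycles per instruction when a fraction branch_frac of
    instructions are branches, of which taken_frac pay `penalty`
    stall cycles each."""
    return base_cpi + branch_frac * taken_frac * penalty

# Ideal pipeline CPI of 1.0, 20% branches, 60% of them taken, 2-cycle penalty.
print(round(effective_cpi(1.0, 0.20, 0.60, 2), 2))  # 1.24
```

Even modest branch penalties inflate CPI noticeably, which is why the techniques that follow (delay slots, prediction) matter.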
Delayed Branch The instructions in the delay slots are alwaysfetched. Therefore, we would like to arrangefor them to be fully executed whether or notthe branch is taken. The objective is to place useful instructions inthese slots. The effectiveness of the delayed branchapproach depends on how often it is possibleto reorder instructions.
Figure 8.12. Reordering of instructions for a delayed branch.

(a) Original program loop:
LOOP  Shift_left R1
      Decrement  R2
      Branch=0   LOOP
NEXT  Add        R1,R3

(b) Reordered instructions:
LOOP  Decrement  R2
      Branch=0   LOOP
      Shift_left R1
NEXT  Add        R1,R3
Figure 8.13. Execution timing showing the delay slot being filled during the last two passes through the loop in Figure 8.12: Decrement, Branch, and Shift (delay slot) repeat while the branch is taken; once the branch is not taken, the Add instruction follows the final Shift.
Branch Prediction To predict whether or not a particular branch will be taken. Simplest form: assume branch will not take place and continue tofetch instructions in sequential address order. Until the branch is evaluated, instruction execution along thepredicted path must be done on a speculative basis. Speculative execution: instructions are executed before theprocessor is certain that they are in the correct executionsequence. Need to be careful so that no processor registers or memorylocations are updated until it is confirmed that these instructionsshould indeed be executed.
Figure 8.14. Timing when a branch decision has been incorrectly predicted as not taken: I3 and I4 are fetched speculatively after the Compare and Branch>0 instructions, then discarded once the branch is resolved, and fetching resumes at Ik.
Branch Prediction Better performance can be achieved if we arrangefor some branch instructions to be predicted astaken and others as not taken. Use hardware to observe whether the targetaddress is lower or higher than that of the branchinstruction. Let compiler include a branch prediction bit. So far the branch prediction decision is always thesame every time a given instruction is executed –static branch prediction.
Influence on Instruction Sets
Overview Some instructions are much better suited topipeline execution than others. Addressing modes Conditional code flags
Addressing Modes Addressing modes include simple ones andcomplex ones. In choosing the addressing modes to beimplemented in a pipelined processor, wemust consider the effect of each addressingmode on instruction flow in the pipeline: Side effects The extent to which complex addressing modes causethe pipeline to stall Whether a given mode is likely to be used by compilers
Recall Figure 8.5, the effect of a Load instruction on pipeline timing: Load X(R1), R2 versus Load (R1), R2.
Complex addressing mode: Load (X(R1)), R2 requires three steps after decode, computing X + [R1], reading [X + [R1]], and finally reading [[X + [R1]]], with the result forwarded to the next instruction while the pipeline behind it waits.
Simple addressing mode: the same effect achieved with simple modes, one step per instruction:
Add  #X, R1, R2
Load (R2), R2
Load (R2), R2
The Add computes X + [R1], the first Load reads [X + [R1]], and the second Load reads [[X + [R1]]].
Addressing Modes In a pipelined processor, complex addressingmodes do not necessarily lead to faster execution. Advantage: reducing the number of instructions /program space Disadvantage: cause pipeline to stall / morehardware to decode / not convenient for compiler towork with Conclusion: complex addressing modes are notsuitable for pipelined execution.
Addressing Modes Good addressing modes should have: Access to an operand does not require more than oneaccess to the memory Only load and store instruction access memory operands The addressing modes used do not have side effects Register, register indirect, index
Conditional Codes If an optimizing compiler attempts to reorderinstruction to avoid stalling the pipeline whenbranches or data dependencies betweensuccessive instructions occur, it must ensurethat reordering does not cause a change inthe outcome of a computation. The dependency introduced by the condition-code flags reduces the flexibility available forthe compiler to reorder instructions.
Figure 8.17. Instruction reordering.

(a) A program fragment:
Add      R1,R2
Compare  R3,R4
Branch=0 . . .

(b) Instructions reordered:
Compare  R3,R4
Add      R1,R2
Branch=0 . . .
Conditional Codes Two conclusion: To provide flexibility in reordering instructions, thecondition-code flags should be affected by as fewinstruction as possible. The compiler should be able to specify in whichinstructions of a program the condition codes areaffected and in which they are not.
Datapath and Control Considerations
Original Design
Figure 7.8. Three-bus organization of the datapath: buses A, B, and C connect the PC, register file, constant 4, ALU, MDR, MAR, IR, instruction decoder, multiplexer, and incrementer to the memory bus data and address lines.
Pipelined Design
Figure 8.18. Datapath modified for pipelined execution, with interstage buffers at the input and output of the ALU. Key changes:
- Separate instruction and data caches
- PC connected to IMAR (for instruction fetches) and a separate DMAR (for data accesses)
- Separate MDRs for reading and writing
- Buffers for the ALU
- An instruction queue
- Buffered instruction decoder output
Operations that can proceed in parallel: reading an instruction from the instruction cache, incrementing the PC, decoding an instruction, reading from or writing into the data cache, reading the contents of up to two registers, writing into one register in the register file, and performing an ALU operation.
Superscalar Operation
A pipelined architecture can execute several instructions concurrently: many instructions are present in the pipeline at the same time, but they are in different stages of their execution. While one instruction is being fetched, another is in the decode or execute stage. One instruction completes execution in each clock cycle.
Overview The maximum throughput of a pipelined processoris one instruction per clock cycle. If we equip the processor with multiple processingunits to handle several instructions in parallel ineach processing stage, several instructions startexecution in the same clock cycle – multiple-issue. Processors are capable of achieving an instructionexecution throughput of more than one instructionper cycle – superscalar processors. Multiple-issue requires a wider path to the cacheand multiple execution units to keep the instructionqueue to be filled.
Superscalar operation requires multiple execution units.
Superscalar
Figure 8.19. A processor with two execution units: the instruction fetch unit (F) fills an instruction queue, from which the dispatch unit issues instructions to a floating-point unit and an integer unit; a write unit (W) writes the results.
Architecture The above fig. shows the superscalar processorwith two execution unit. The Fetch unit capable of reading two instructionat a time and store it in the queue. The dispatch unit decodes upto two instructionfrom the front of queue(one is integer and anotherone is floating point) dispatched in the same clockcycle. Processor’s program control unit – capable offetching and decoding several instructionconcurrently. It can issue multiple instructionssimultaneously.
Timing
Figure 8.20. An example of instruction execution flow in the processor of Figure 8.19, assuming no hazards are encountered: I1 (Fadd) passes through the three floating-point execute stages E1A, E1B, E1C, while I2 (Add) completes its single execute stage; I3 (Fsub) and I4 (Sub) follow one cycle behind.
Assume the floating-point unit takes 3 clock cycles and is organized as a 3-stage pipeline, so it can accept a new instruction for execution in each clock cycle. During cycle 4, the execution of I1 is still in progress, but since I1 has moved on to a later stage inside the execution pipeline, the unit can accept I3 for execution. Likewise, the integer unit can accept a new instruction because I2 has entered its write stage.
Out-of-Order Execution
Issues to handle: hazards and exceptions (imprecise versus precise exceptions).
Figure (a): delayed write. I1 (Fadd) occupies execute stages E1A, E1B, E1C; I2 (Add) finishes executing early, but its write is delayed until after W1 so that results are written in program order.
Execution Completion It is desirable to used out-of-order execution, so that anexecution unit is freed to execute other instructions as soon aspossible. At the same time, instructions must be completed in programorder to allow precise exceptions. The use of temporary registers Commitment unitI1 (Fadd) D1D2D3D4E1A E1B E1CE2E3A E3B E3CE4W1W2W3W4I2 (Add)I3 (Fsub)I4 (Sub)1 2 3 4 5 6Clock cycleTime(b) Using temporary registersTW2TW47F1F2F3F4
If I2 depends on the result of I1, the execution of I2 will be delayed. Such dependencies do not cause incorrect results, but they do delay execution. Another event that delays execution is an exception, which has two main causes: a bus error, or an illegal operation such as division by zero.
Two Types of Exception: Imprecise and Precise
- Imprecise: the result of I2 is written into the register file during cycle 4. If I1 then causes an exception, the processor has already allowed a succeeding instruction (I2) to complete; such an exception is said to be imprecise. With imprecise exceptions, a consistent state is not guaranteed when an exception occurs.
- Precise: the processor does not allow a succeeding instruction to write its result before the preceding instruction completes. To keep a consistent state, results must be written in program order (that is, I2 must not be allowed to write until cycle 6).
With precise exceptions, the integer execution unit has to retain the result of I2 until cycle 6 and cannot accept instruction I4 until then. Thus, in this method, partially executed results are either held back or discarded.
If an external interrupt is received, the dispatch unit stops reading new instructions from the instruction queue, and the instructions remaining in the queue are discarded.
Execution Completion In precise exception, the results are temporarily stored into thetemp register and later they are transferred to the permanentregisters in correct program order. Thus, two write operations TW and W are carried out. The step W are called commitment step(temp reg to permregister) TW- write into a temp registers W- Transfer the contents back to the permanent regI1 (Fadd) D1D2E1A E1B E1CE2W1W2I2 (Add)1 2 3 4 5 6Clock cycleTW27F1F2
Register Renaming
A temporary register assumes the role of the permanent register whose data it is holding and is given the same name.
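One common way to realize this is a rename table, sketched below (my own illustration; the representation is an assumption, not from the slides): each architectural register name maps to the physical or temporary register currently playing its role, so a speculative result "becomes" the permanent value simply by updating the mapping.

```python
class RenameTable:
    """Maps architectural register names to the temporary registers
    currently holding (or about to hold) their values."""

    def __init__(self, arch_regs):
        # Initially each architectural register maps to itself.
        self.map = {r: r for r in arch_regs}
        self.next_temp = 0

    def allocate(self, arch_reg):
        """Give arch_reg a fresh temporary; later reads see the temporary."""
        temp = f"T{self.next_temp}"
        self.next_temp += 1
        self.map[arch_reg] = temp
        return temp

    def lookup(self, arch_reg):
        return self.map[arch_reg]

rt = RenameTable(["R1", "R2"])
t = rt.allocate("R2")      # an instruction writes R2 speculatively into T0
print(t, rt.lookup("R2"))  # T0 T0 -- consumers of R2 now read T0
```

Committing the instruction then amounts to making the mapping permanent (or copying T0 back), and squashing it amounts to restoring the old mapping.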
Performance Considerations
Overview The execution time T of a program that has adynamic instruction count N is given by:where S is the average number of clock cycles ittakes to fetch and execute one instruction, andR is the clock rate. Instruction throughput is defined as the numberof instructions executed per second.RSNT×=SRPs =
Overview An n-stage pipeline has the potential to increase thethroughput by n times. However, the only real measure of performance isthe total execution time of a program. Higher instruction throughput will not necessarilylead to higher performance. Two questions regarding pipelining How much of this potential increase in instruction throughput can berealized in practice? What is good value of n?
Number of Pipeline Stages
Since an n-stage pipeline has the potential to increase throughput n times, why not use a 10,000-stage pipeline? Because as the number of stages increases:
- The probability of the pipeline being stalled increases.
- The inherent delay in the basic operations increases.
- Hardware considerations (area, power, complexity, ...) become worse.

Screening and Selecting Studies for Systematic Review Dr Reginald Quansah
UGC NET Paper 1 Syllabus | 10 Units Complete Guide for NTA JRF
1. Doing Academic Research: Problems and Issues, 2. Academic Research Writing...
বাংলাদেশ অর্থনৈতিক সমীক্ষা - ২০২৫ with Bookmark.pdf

Pipelining in computer architecture

  • 1.
    Chapter 8. Pipelining. Ramakrishna Reddy Bijjam (9966484777), Asst. Professor, Avanthi Group of Colleges
  • 2.
    Overview Pipelining iswidely used in modernprocessors. Pipelining improves system performance interms of throughput.[No of work done at agiven time] Pipelined organization requires sophisticatedcompilation techniques.
  • 3.
  • 4.
    Making the Execution of Programs Faster: Use faster circuit technology to build the processor and the main memory. Arrange the hardware so that more than one operation can be performed at the same time. In the latter approach, the number of operations performed per second is increased even though the elapsed time needed to perform any one operation is not changed.
  • 5.
    pipeline It istechnique of decomposing a sequentialprocess into suboperation, with eachsuboperation completed in dedicatedsegment. Pipeline is commonly known as an assemblyline operation. It is similar like assembly line of carmanufacturing. First station in an assembly line set up achasis, next station is installing the engine,another group of workers fitting the body.
  • 6.
    Traditional Pipeline ConceptLaundryExampleAnn, Brian, Cathy, Daveeach have one load of clothesto wash, dry, and foldWasher takes 30 minutesDryer takes 40 minutes“Folder” takes 20 minutesA B C D
  • 7.
    Traditional Pipeline ConceptSequential laundry takes 6hours for 4 loads If they learned pipelining,how long would laundrytake?ABCD30 40 20 30 40 20 30 40 20 30 40 206 PM 7 8 9 10 11 MidnightTime
  • 8.
    Traditional Pipeline ConceptPipelinedlaundry takes3.5 hours for 4 loadsABCD6 PM 7 8 9 10 11 MidnightTaskOrderTime30 40 40 40 40 20
  • 9.
    Traditional Pipeline ConceptPipelining doesn’t helplatency of single task, ithelps throughput of entireworkload Pipeline rate limited byslowest pipeline stage Multiple tasks operatingsimultaneously usingdifferent resources Potential speedup = Numberpipe stages Unbalanced lengths of pipestages reduces speedup Time to “fill” pipeline andtime to “drain” it reducesspeedup Stall for DependencesABCD6 PM 7 8 9TaskOrderTime30 40 40 40 40 20
  • 10.
    Idea of pipelining in a computer: The processor executes a program by fetching and executing instructions, one after the other. Let Fi and Ei refer to the fetch and execute steps of instruction Ii.
  • 11.
    Use the Idea of Pipelining in a Computer. Figure 8.1. Basic idea of instruction pipelining: (a) sequential execution (F1 E1, F2 E2, F3 E3); (b) hardware organization, with an interstage buffer B1 between the instruction fetch unit and the execution unit; (c) pipelined execution, where Fi+1 overlaps Ei in each clock cycle.
  • 12.
    Contd., Computer thathas two separate hardwareunits, one for fetching and another forexecuting them. the instruction fetched by the fetch unit isdeposited in an intermediate buffer B1. This buffer needed to enable the executionunit while fetch unit fetching the nextinstruction.
  • 13.
    8.1(c) The computeris controlled by a clock. Any instruction fetch and execute stepscompleted in one clock cycle.
  • 14.
    Use the Idea of Pipelining in a Computer. Figure 8.2. A 4-stage pipeline: (a) instruction execution divided into four steps, F: Fetch instruction, D: Decode instruction and fetch operands, E: Execute operation, W: Write results; (b) hardware organization with interstage buffers B1, B2, B3. (Textbook page 457.)
  • 15.
    Fetch (F): read the instruction from memory. Decode (D): decode the instruction and fetch the source operands. Execute (E): perform the operation specified by the instruction. Write (W): store the result in the destination location.
  • 16.
    Role of Cache Memory: Each pipeline stage is expected to complete in one clock cycle. The clock period should be long enough to let the slowest pipeline stage complete; faster stages can only wait for the slowest one. Since main memory is very slow compared to execution [roughly ten times the time needed for one pipeline stage], if each instruction had to be fetched from main memory the pipeline would be almost useless. Fortunately, we have cache.
  • 17.
    Pipeline Performance Thepotential increase in performanceresulting from pipelining is proportional to thenumber of pipeline stages. However, this increase would be achievedonly if all pipeline stages require the sametime to complete, and there is no interruptionthroughout program execution. Unfortunately, this is not true. Floating point may involve many clock cycle
  • 18.
    Pipeline Performance. Figure 8.3. Effect of an execution operation taking more than one clock cycle (E2 occupies cycles 4 through 6, delaying the instructions behind it).
  • 19.
    Pipeline Performance Theprevious pipeline is said to have been stalled for two clockcycles. Any condition that causes a pipeline to stall is called a hazard. Data hazard – any condition in which either the source or thedestination operands of an instruction are not available at thetime expected in the pipeline. So some operation has to bedelayed, and the pipeline stalls. Instruction (control) hazard – a delay in the availability of aninstruction causes the pipeline to stall.[cache miss] Structural hazard – the situation when two instructions requirethe use of a given hardware resource at the same time.
  • 20.
    Pipeline Performance. Figure 8.4. Pipeline stall caused by a cache miss in F2: (a) instruction execution steps in successive clock cycles; (b) function performed by each processor stage in successive clock cycles. The decode unit is idle in cycles 3 through 5, the execute unit in cycles 4 through 6, and the write unit in cycles 5 through 7; such idle periods are called stalls (bubbles). This is an instruction hazard (cache miss).
  • 21.
    Pipeline Performance. Figure 8.5. Effect of a Load instruction on pipeline timing. For Load X(R1), R2, the memory address X+[R1] is computed in step E2 in cycle 4, the memory access takes place in cycle 5, and the operand read from memory is written into register R2 in cycle 6 [execution takes 2 cycles]. This stalls the pipeline for one cycle, because both instructions I2 and I3 require access to the register file in cycle 6 (a structural hazard).
  • 22.
    Pipeline Performance Again,pipelining does not result in individualinstructions being executed faster; rather, it is thethroughput that increases. Throughput is measured by the rate at whichinstruction execution is completed. Pipeline stall causes degradation in pipelineperformance. We need to identify all hazards that may cause thepipeline to stall and to find ways to minimize theirimpact.
  • 23.
    Quiz Four instructions,the I2 takes two clockcycles for execution. Pls draw the figure for 4-stage pipeline, and figure out the total cyclesneeded for the four instructions to complete.
  • 24.
  • 25.
    Data Hazards Wemust ensure that the results obtained when instructions areexecuted in a pipelined processor are identical to those obtainedwhen the same instructions are executed sequentially. Hazard occursA ← 3 + AB ← 4 × A No hazardA ← 5 × CB ← 20 + C When two operations depend on each other, they must beexecuted sequentially in the correct order. Another example:Mul R2, R3, R4Add R5, R4, R6
  • 26.
    Data Hazards. Figure 8.6. Pipeline stalled by data dependency between D2 and W1 (the Add must wait until the Mul writes R4 before its operand fetch can complete).
  • 27.
    Operand Forwarding Insteadof from the register file, the secondinstruction can get data directly from theoutput of ALU after the previous instruction iscompleted. A special arrangement needs to be made to“forward” the output of ALU to the input ofALU.
  • 28.
    Figure 8.7. Operand forwarding in a pipelined processor: (a) datapath, with register file, SRC1/SRC2, RSLT, ALU, and a forwarding path from the ALU output back to its input; (b) position of the source (SRC1, SRC2) and result (RSLT) registers in the E (Execute/ALU) and W (Write/register file) pipeline stages.
  • 29.
    Handling Data Hazards in Software: Let the compiler detect and handle the hazard: I1: Mul R2, R3, R4; NOP; NOP; I2: Add R5, R4, R6. The compiler can reorder the instructions to perform some useful work during the NOP slots.
  • 30.
    Side Effects Theprevious example is explicit and easily detected. Sometimes an instruction changes the contents of a registerother than the one named as the destination. When a location other than one explicitly named in an instructionas a destination operand is affected, the instruction is said tohave a side effect. (Example?) Example: conditional code flags:Add R1, R3AddWithCarry R2, R4 Instructions designed for execution on pipelined hardware shouldhave few side effects.
  • 31.
  • 32.
    Overview Whenever thestream of instructions suppliedby the instruction fetch unit is interrupted, thepipeline stalls. Cache miss Branch
  • 33.
    Unconditional Branches. Figure 8.8. An idle cycle caused by a branch instruction: the execution unit is idle for one cycle while the fetch of the branch target Ik replaces the wrongly fetched I3.
  • 34.
    Unconditional Branches Thetime lost as a result of a branchinstruction is referred to as the branchpenalty. The previous example instruction I3 iswrongly fetched and branch target address kwill discard the i3. Reducing the branch penalty requires thebranch address to be computed earlier in thepipeline. Typically the Fetch unit has dedicated h/wwhich will identify the branch target addressas quick as possible after an instruction isfetched.
  • 35.
    Branch Timing. Figure 8.9. Branch timing: (a) branch address computed in the Execute stage (two discarded fetches, larger branch penalty); (b) branch address computed in the Decode stage (one discarded fetch, reduced penalty).
  • 36.
    Instruction Queue and Prefetching: Either a cache miss or a branch instruction stalls the pipeline. Many processors employ a dedicated fetch unit that fetches instructions and puts them into a queue, which can store several instructions at a time. A separate unit, called the dispatch unit, takes instructions from the front of the queue and sends them to the execution unit.
  • 37.
  • 38.
    Conditional Braches Aconditional branch instruction introducesthe added hazard caused by the dependencyof the branch condition on the result of apreceding instruction. The decision to branch cannot be made untilthe execution of that instruction has beencompleted. Branch instructions represent about 20% ofthe dynamic instruction count of mostprograms.
  • 39.
    Delayed Branch Theinstructions in the delay slots are alwaysfetched. Therefore, we would like to arrangefor them to be fully executed whether or notthe branch is taken. The objective is to place useful instructions inthese slots. The effectiveness of the delayed branchapproach depends on how often it is possibleto reorder instructions.
  • 40.
    Delayed Branch. Figure 8.12. Reordering of instructions for a delayed branch. (a) Original program loop: LOOP: Shift_left R1; Decrement R2; Branch=0 LOOP; NEXT: Add R1, R3. (b) Reordered instructions: LOOP: Decrement R2; Branch=0 LOOP; Shift_left R1 (delay slot); NEXT: Add R1, R3.
  • 41.
    Delayed Branch. Figure 8.13. Execution timing showing the delay slot being filled during the last two passes through the loop in Figure 8.12 (Decrement, Branch, Shift in the delay slot; then, when the branch is not taken, Add follows).
  • 42.
    Branch Prediction Topredict whether or not a particular branch will be taken. Simplest form: assume branch will not take place and continue tofetch instructions in sequential address order. Until the branch is evaluated, instruction execution along thepredicted path must be done on a speculative basis. Speculative execution: instructions are executed before theprocessor is certain that they are in the correct executionsequence. Need to be careful so that no processor registers or memorylocations are updated until it is confirmed that these instructionsshould indeed be executed.
  • 43.
    Incorrectly Predicted Branch. Figure 8.14. Timing when a branch decision has been incorrectly predicted as not taken: the speculatively fetched I3 and I4 are discarded once the Compare/Branch pair resolves, and fetching resumes at Ik.
  • 44.
    Branch Prediction Betterperformance can be achieved if we arrangefor some branch instructions to be predicted astaken and others as not taken. Use hardware to observe whether the targetaddress is lower or higher than that of the branchinstruction. Let compiler include a branch prediction bit. So far the branch prediction decision is always thesame every time a given instruction is executed –static branch prediction.
  • 45.
  • 46.
    Overview Some instructionsare much better suited topipeline execution than others. Addressing modes Conditional code flags
  • 47.
    Addressing Modes Addressingmodes include simple ones andcomplex ones. In choosing the addressing modes to beimplemented in a pipelined processor, wemust consider the effect of each addressingmode on instruction flow in the pipeline: Side effects The extent to which complex addressing modes causethe pipeline to stall Whether a given mode is likely to be used by compilers
  • 48.
    Recall Figure 8.5, the effect of a Load instruction on pipeline timing: Load X(R1), R2 (index mode, extra execute cycle) versus Load (R1), R2 (register indirect).
  • 49.
    Complex Addressing Mode: Load (X(R1)), R2. The Execute stage must compute X+[R1], read [X+[R1]], and then read [[X+[R1]]] in successive cycles, forwarding the result to the next instruction; the single Load occupies the pipeline for several cycles. (Figure part (a), complex addressing mode.)
  • 50.
    Simple Addressing Mode: the same operand is fetched with three simple instructions: Add #X, R1, R2 (computes X+[R1]); Load (R2), R2 (reads [X+[R1]]); Load (R2), R2 (reads [[X+[R1]]]). Each occupies one Execute cycle. (Figure part (b), simple addressing mode.)
  • 51.
    Addressing Modes Ina pipelined processor, complex addressingmodes do not necessarily lead to faster execution. Advantage: reducing the number of instructions /program space Disadvantage: cause pipeline to stall / morehardware to decode / not convenient for compiler towork with Conclusion: complex addressing modes are notsuitable for pipelined execution.
  • 52.
    Addressing Modes Goodaddressing modes should have: Access to an operand does not require more than oneaccess to the memory Only load and store instruction access memory operands The addressing modes used do not have side effects Register, register indirect, index
  • 53.
    Conditional Codes Ifan optimizing compiler attempts to reorderinstruction to avoid stalling the pipeline whenbranches or data dependencies betweensuccessive instructions occur, it must ensurethat reordering does not cause a change inthe outcome of a computation. The dependency introduced by the condition-code flags reduces the flexibility available forthe compiler to reorder instructions.
  • 54.
  • 55.
    Conditional Codes Twoconclusion: To provide flexibility in reordering instructions, thecondition-code flags should be affected by as fewinstruction as possible. The compiler should be able to specify in whichinstructions of a program the condition codes areaffected and in which they are not.
  • 56.
  • 57.
    Original Design. Figure 7.8. Three-bus organization of the datapath: buses A, B, and C connect the PC, register file, constant 4, MUX, ALU, instruction decoder, IR, MDR, incrementer, and MAR to the memory bus address and data lines.
  • 58.
    Pipelined Design. Figure 8.18. Datapath modified for pipelined execution, with interstage buffers at the input and output of the ALU. Features: separate instruction and data caches; the PC is connected to IMAR; a DMAR for data accesses; separate MDR/Read and MDR/Write registers; buffers for the ALU; an instruction queue; instruction decoder output into a control signal pipeline. Operations that can proceed concurrently: reading an instruction from the instruction cache; incrementing the PC; decoding an instruction; reading from or writing into the data cache; reading the contents of up to two registers; writing into one register of the register file; performing an ALU operation.
  • 59.
  • 60.
    A pipelined architecture can execute several instructions concurrently. Many instructions are present in the pipeline at the same time, but they are in different stages of their execution. While one instruction is being fetched, another is being decoded or executed. One instruction completes execution in each clock cycle.
  • 61.
    Overview The maximumthroughput of a pipelined processoris one instruction per clock cycle. If we equip the processor with multiple processingunits to handle several instructions in parallel ineach processing stage, several instructions startexecution in the same clock cycle – multiple-issue. Processors are capable of achieving an instructionexecution throughput of more than one instructionper cycle – superscalar processors. Multiple-issue requires a wider path to the cacheand multiple execution units to keep the instructionqueue to be filled.
  • 62.
    Superscalar operation requires multiple execution units.
  • 63.
  • 64.
    Architecture The abovefig. shows the superscalar processorwith two execution unit. The Fetch unit capable of reading two instructionat a time and store it in the queue. The dispatch unit decodes upto two instructionfrom the front of queue(one is integer and anotherone is floating point) dispatched in the same clockcycle. Processor’s program control unit – capable offetching and decoding several instructionconcurrently. It can issue multiple instructionssimultaneously.
  • 65.
    Timing. Figure 8.20. An example of instruction execution flow in the processor of Figure 8.19, assuming no hazards are encountered: I1 (Fadd) passes through the three-stage floating-point unit (E1A, E1B, E1C) while I2 (Add), I3 (Fsub), and I4 (Sub) proceed in parallel.
  • 66.
    Assume the floating-point unit takes 3 clock cycles and is itself a 3-stage pipeline, so it can accept a new instruction for execution in each clock cycle. During cycle 4, the execution of I1 is still in progress, but it has moved to a later stage inside the execution pipeline, so the unit can accept I3 for execution. Likewise, the integer unit can accept a new instruction because I2 has entered its write stage.
  • 67.
    Out-of-Order Execution HazardsExceptions Imprecise exceptions Precise exceptionsI1 (Fadd) D1D2D3D4E1A E1B E1CE2E3A E3B E3CE4W1W2W3W4I2 (Add)I3 (Fsub)I4 (Sub)1 2 3 4 5 6Clock cycleTime(a) Delayed writeF1F2F3F47
  • 68.
    Execution Completion Itis desirable to used out-of-order execution, so that anexecution unit is freed to execute other instructions as soon aspossible. At the same time, instructions must be completed in programorder to allow precise exceptions. The use of temporary registers Commitment unitI1 (Fadd) D1D2D3D4E1A E1B E1CE2E3A E3B E3CE4W1W2W3W4I2 (Add)I3 (Fsub)I4 (Sub)1 2 3 4 5 6Clock cycleTime(b) Using temporary registersTW2TW47F1F2F3F4
  • 69.
    If I2 depends on the result of I1, the execution of I2 will be delayed. Such dependencies are handled correctly as long as execution is delayed appropriately. Another reason execution may be interrupted is an exception, with two causes: a bus error, or an illegal operation (such as divide by zero).
  • 70.
    Two types of exception: imprecise and precise. Imprecise: the result of I2 is written into the register file during cycle 4; if I1 then causes an exception, the processor has already allowed the succeeding instruction (I2) to complete, and the exception is said to be imprecise. A consistent state is therefore not guaranteed when an imprecise exception occurs. Precise: the processor does not allow the succeeding instruction to write its result out of order; results must be written in program order (i.e., I2 must not be allowed to write until cycle 6).
  • 71.
    The integer execution unit has to retain the result until cycle 6 and cannot accept instruction I4 until then. Thus, in this method, the output of a partially executed instruction is either held back or discarded.
  • 72.
    If an external interrupt is received, the dispatch unit stops reading new instructions from the instruction queue, and the instructions remaining in the queue are discarded.
  • 73.
    Execution Completion Inprecise exception, the results are temporarily stored into thetemp register and later they are transferred to the permanentregisters in correct program order. Thus, two write operations TW and W are carried out. The step W are called commitment step(temp reg to permregister) TW- write into a temp registers W- Transfer the contents back to the permanent regI1 (Fadd) D1D2E1A E1B E1CE2W1W2I2 (Add)1 2 3 4 5 6Clock cycleTW27F1F2
  • 74.
    Register renaming Atemporary register assumes the role of thepermanent register whose data it is holdingand is given the same name.
  • 75.
  • 76.
    Overview The executiontime T of a program that has adynamic instruction count N is given by:where S is the average number of clock cycles ittakes to fetch and execute one instruction, andR is the clock rate. Instruction throughput is defined as the numberof instructions executed per second.RSNT×=SRPs =
  • 77.
    Overview An n-stagepipeline has the potential to increase thethroughput by n times. However, the only real measure of performance isthe total execution time of a program. Higher instruction throughput will not necessarilylead to higher performance. Two questions regarding pipelining How much of this potential increase in instruction throughput can berealized in practice? What is good value of n?
  • 78.
    Number of Pipeline Stages: Since an n-stage pipeline has the potential to increase the throughput by n times, why not use a 10,000-stage pipeline? Because as the number of stages increases, the probability of the pipeline being stalled increases, the inherent delay in the basic operations grows (each stage adds latch overhead), and hardware considerations (area, power, complexity) become limiting.
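The trade-off in the stage count can be illustrated with a toy model. This is an assumed model, not from the slides: an n-stage pipeline cuts the per-cycle work to 1/n of a unit plus a fixed latch overhead per stage, while an instruction that stalls (with some probability) loses roughly the pipeline depth in cycles:

```python
def effective_speedup(n, stall_prob, latch_overhead=0.05):
    """Relative throughput of an n-stage pipeline under an assumed model:
    cycle time = work/n + latch overhead, and a stalling instruction
    (probability stall_prob) pays about n - 1 extra cycles."""
    cycle_time = 1.0 / n + latch_overhead   # shorter cycles, fixed overhead
    cpi = 1.0 + stall_prob * (n - 1)        # deeper pipelines pay bigger stalls
    return 1.0 / (cycle_time * cpi)
```

With no stalls, deeper pipelines always help in this model; with a 10% stall probability, a 100-stage pipeline is actually slower than a 10-stage one, which is the slide's point about very deep pipelines.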
