InOrder ToDo List
Python Configurability
InOrderCPU configuration (uniprocessor options)
* MTApproach - Multi-Threading (MT) Approach
* Resource - Resource Configuration (will eventually be used to autogenerate C++; a parameter sketch follows the MTApproach list below)
* ResourceType - Type of resource (Enum type)
* ResourceParams - Parameters for this type of resource
* Request - List of requests for this type of resource (Enum type)
* Latency - operation latency and issue latency (intra/inter thread)
* Count - Number of resources of this type
* PipelineDesc - Pipeline Description
* InstSchedule - Instruction schedule specified as a vector of InstClassSchedule
* InstClassSchedule - Vector of schedules per instruction class - load/store, Int execute, FP execute, specialized inst, etc. (do we still want a distinction between front-end and back-end schedules?)
* ResourceRequestList - Vector of ResourceRequest (per stage, right?)
* ResourceRequest - Vector of requests for resources
* Other params
MTApproach options
* None (single-threaded)
* Fine-grained (switch context every cycle or every few cycles, like the UltraSPARC T2)
* Coarse-grained (switch context on thread stalls, like 'SwitchOnCacheMiss' currently)
* SMT (all contexts active, like 'SMT' currently)
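As a rough illustration of the configurability goal, here is a minimal sketch of how the uniprocessor options and the MTApproach enum above might be declared in a gem5 SimObject description file. All names here (MTApproach, InOrderCPUOptions, mt_approach, num_stages) are placeholders invented for this sketch and do not exist in the source tree.

 # Sketch only: proposed parameters, not existing gem5 options.
 from m5.SimObject import SimObject
 from m5.params import *
 
 # Proposed MTApproach enum, mirroring the MTApproach options above.
 class MTApproach(Enum):
     vals = ['None', 'FineGrained', 'CoarseGrained', 'SMT']
 
 # Placeholder container for the proposed uniprocessor options.
 class InOrderCPUOptions(SimObject):
     type = 'InOrderCPUOptions'
 
     mt_approach = Param.MTApproach('None', "Multi-Threading (MT) approach")
     # Resource and PipelineDesc would be VectorParams of further
     # SimObjects (see the resource sketch later on this page); a simple
     # stage count stands in for a full pipeline description here.
     num_stages = Param.Int(5, "Number of pipeline stages (placeholder)")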
ResourceType/Request options (a resource declaration sketch follows this list)
* FetchUnit
o AssignNextPC
o UpdateTargetPC
o SelectThread (for fine-grained and coarse-grained MT approaches; may also be done in a separate thread selection unit or pick unit as in the UltraSPARC T2)
o Any other fetch address generation?
* MemPort (DataReadPort, DataWritePort, FetchPort)
o Access
o InitiateAccess
o CompleteAccess
* DecodeUnit
o DecodeInst
* BPredUnit
o Predict
o Update
* UseDefUnit
o ReadReg
o WriteReg
* AgenUnit
o GenerateAddr
* ExecUnit
o Exec
o InitiateExec
o CompleteExec
o GenerateAddr
* GradUnit
o Graduate
* Interstage buffers (only for SMT?!)
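The resource types and requests listed above could be captured by a per-resource SimObject along the lines of the following sketch. InOrderResource, ResourceType, and every parameter name here are placeholders for illustration, not objects in the gem5 source.

 # Placeholder declarations for this sketch only.
 from m5.SimObject import SimObject
 from m5.params import *
 
 class ResourceType(Enum):
     vals = ['FetchUnit', 'MemPort', 'DecodeUnit', 'BPredUnit',
             'UseDefUnit', 'AgenUnit', 'ExecUnit', 'GradUnit']
 
 class InOrderResource(SimObject):
     type = 'InOrderResource'
 
     resource_type = Param.ResourceType('ExecUnit', "Type of resource")
     requests = VectorParam.String([], "Requests handled by this resource")
     op_latency = Param.Int(1, "Operation latency in cycles")
     issue_latency = Param.Int(1, "Issue latency in cycles")
     count = Param.Int(1, "Number of resources of this type")
 
 # A config script could then describe, say, two execute units:
 exec_units = InOrderResource(resource_type='ExecUnit',
                              requests=['InitiateExec', 'CompleteExec'],
                              op_latency=1, issue_latency=1, count=2)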
Simulation Speed
* Instruction Schedule Work
o Use a Vector of Vectors instead of a Priority Queue
o Identify Instruction Schedule Types (via a Tuple)
o Cache Instruction Schedules, Generate On-Demand
o Instructions walk through the schedule by incrementing a pointer instead of popping from a queue (see the sketch after this list)
o If a dynamic schedule is needed, copy the remaining part of the schedule and let the instruction add/remove entries as it pleases
+ Better solution here?
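A conceptual sketch (plain Python, not gem5 code) of the schedule-walking idea above: each instruction class has one cached schedule, an instruction holds only an index into it, and the remaining tail is copied only when a dynamic schedule is actually needed. All names are illustrative.

 # Conceptual sketch only; names are illustrative, not gem5 identifiers.
 
 def build_schedule(inst_class):
     # Placeholder: a real implementation would derive this from the
     # pipeline description.  Each entry is the list of resource requests
     # an instruction of this class makes in one pipeline stage.
     return [['FetchUnit.AssignNextPC'],
             ['DecodeUnit.DecodeInst'],
             ['ExecUnit.Exec'],
             ['GradUnit.Graduate']]
 
 # One cached schedule per instruction class, generated on demand.
 schedule_cache = {}
 
 def get_schedule(inst_class):
     if inst_class not in schedule_cache:
         schedule_cache[inst_class] = build_schedule(inst_class)
     return schedule_cache[inst_class]
 
 class InstScheduleState:
     """Per-instruction view of a shared schedule: an index, not a queue."""
     def __init__(self, inst_class):
         self.schedule = get_schedule(inst_class)  # shared, treated read-only
         self.idx = 0                              # current position
         self.owned = False                        # True once we have a copy
 
     def current_requests(self):
         return self.schedule[self.idx]
 
     def advance(self):
         # Walk the schedule by incrementing the index rather than popping
         # entries off a priority queue.
         self.idx += 1
 
     def make_dynamic(self):
         # Copy only the remaining tail so this instruction can add or
         # remove requests without touching the shared schedule.
         if not self.owned:
             self.schedule = [list(stage) for stage in self.schedule[self.idx:]]
             self.idx = 0
             self.owned = True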
* Event-Sleeping Work
o Sleep instructions waiting for a long-delay event
o Sleep the CPU when there is no activity (partially implemented; a rough sketch follows)
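The CPU-sleeping item could work roughly as sketched below: the tick event is simply not rescheduled when nothing is in flight, and a completing long-delay event calls a wakeup hook. This is an illustrative sketch, not the partially implemented gem5 code.

 # Illustrative sketch only; not the gem5 implementation.
 class SleepableCPU:
     def __init__(self, schedule_tick):
         self.schedule_tick = schedule_tick  # callback: schedule tick() next cycle
         self.in_flight = []                 # instructions currently active
         self.sleeping = False
 
     def tick(self):
         if not self.in_flight:
             # No activity: stop rescheduling the tick event and sleep
             # until wakeup() is called.
             self.sleeping = True
             return
         self.advance_pipeline()
         self.schedule_tick(self.tick)
 
     def wakeup(self):
         # Called from an event handler (e.g. a cache response) when new
         # work arrives for a sleeping CPU.
         if self.sleeping:
             self.sleeping = False
             self.schedule_tick(self.tick)
 
     def advance_pipeline(self):
         # Placeholder for the real per-stage work.
         self.in_flight.pop()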