InOrder ToDo List

From gem5
 

Revision as of 12:56, 12 January 2010

* Move some of these to Flyspray?

== Python Configurability ==

* Resource Configuration - how do we specify which resources are instantiated via the Python config files?
** ResourceType - type of resource (enum type)
*** ResourceParams - parameters for this type of resource
** Request - list of requests for this type of resource (enum type)
*** Latency - operation latency and issue latency (intra-/inter-thread)
** Count - number of resources of this type
* Pipeline Description
** InstSchedule - instruction schedule, specified as a vector of InstClassSchedule
*** InstClassSchedule - vector of schedules per instruction class (load/store, integer execute, FP execute, specialized instructions, etc.); do we still want a distinction between front-end and back-end schedules?
** ResourceRequestList - vector of ResourceRequest (per stage, right?)
*** ResourceRequest - vector of requests for resources
** ResourceType/Request options
* Multithreading Models?
** None (single-threaded)
** Fine-grained (switch contexts every cycle or every few cycles, as in the UltraSPARC T2)
** Coarse-grained (switch contexts on thread stalls, like the current 'SwitchOnCacheMiss' policy)
** SMT (all contexts active, like the current 'SMT' policy)
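The resource-configuration scheme above is still an open question, so as a rough sketch only: a Python-side description might pair an enum of resource types with per-request latencies and an instance count. All names below are hypothetical illustrations, not gem5 APIs.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ResourceType(Enum):
    # Hypothetical resource types, mirroring the list above.
    FETCH_UNIT = auto()
    MEM_PORT = auto()
    DECODE_UNIT = auto()
    EXEC_UNIT = auto()

class Request(Enum):
    # Hypothetical request kinds a resource can service.
    ACCESS = auto()
    INITIATE_ACCESS = auto()
    COMPLETE_ACCESS = auto()

@dataclass
class Latency:
    operation: int  # cycles for the operation itself
    issue: int      # cycles before the next request may issue

@dataclass
class ResourceConfig:
    rtype: ResourceType
    requests: dict          # maps Request -> Latency
    count: int = 1          # number of instances of this resource type

# Example: two data-cache ports, 1-cycle issue, 3-cycle operation latency.
dcache_port = ResourceConfig(
    rtype=ResourceType.MEM_PORT,
    requests={Request.ACCESS: Latency(operation=3, issue=1)},
    count=2,
)
```

Such a description could eventually be used to autogenerate the C++ resource instantiation, as the old revision of this page suggested.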
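To make the fine-grained vs. coarse-grained distinction above concrete, here is a minimal sketch of the two thread-selection policies. This is illustrative pseudocode in Python, not InOrderCPU code; the function names are invented for this example.

```python
def fine_grained(active_threads, last_tid):
    """Round-robin selection: switch contexts every cycle
    (UltraSPARC T2 style)."""
    if not active_threads:
        return None
    if last_tid in active_threads:
        idx = (active_threads.index(last_tid) + 1) % len(active_threads)
    else:
        idx = 0
    return active_threads[idx]

def coarse_grained(active_threads, last_tid, stalled):
    """Switch contexts only when the current thread stalls
    (like the current 'SwitchOnCacheMiss' behavior)."""
    if last_tid in active_threads and not stalled:
        return last_tid  # keep running the same thread
    candidates = [t for t in active_threads if t != last_tid]
    return candidates[0] if candidates else None
```

SMT differs from both: all contexts stay active and compete for resources every cycle, so there is no single "selected" thread.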

== Simulation Speed ==

* Instruction Schedule Work
** Use a vector of vectors instead of a priority queue
** Identify instruction schedule types (via a tuple)
** Cache instruction schedules; generate them on demand
** Let instructions walk through the schedule by incrementing a pointer instead of popping from a queue
*** If a dynamic schedule is needed, copy the remaining part of the schedule and let the instruction add/remove entries as it pleases
*** Can we cache dynamic schedules? Is there a better solution here?
* Event-Sleeping Work
** Sleep instructions waiting on a long-delay event
** Sleep the CPU when there is no activity (partially implemented)
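The "vector of vectors plus pointer" idea above can be sketched as follows. This is a hypothetical illustration, not the InOrderCPU implementation: a schedule is a shared, read-only vector of per-stage request vectors, each instruction keeps only an index into it, and a dynamic schedule copies just the remaining tail.

```python
# Hypothetical stage schedule for an integer ALU instruction; the
# resource/request names echo this page but are illustrative only.
INT_ALU_SCHEDULE = [
    ["FetchUnit.AssignNextPC"],                    # stage 0
    ["DecodeUnit.DecodeInst"],                     # stage 1
    ["UseDefUnit.ReadReg"],                        # stage 2
    ["ExecUnit.Exec"],                             # stage 3
    ["UseDefUnit.WriteReg", "GradUnit.Graduate"],  # stage 4
]

class Inst:
    def __init__(self, schedule):
        self.schedule = schedule  # shared, cached, read-only
        self.stage = 0            # walk by incrementing, never pop

    def current_requests(self):
        return self.schedule[self.stage]

    def advance(self):
        self.stage += 1

    def make_dynamic(self):
        # Copy only the remaining stages so this instruction can edit
        # its schedule without touching the shared cached copy.
        self.schedule = [list(reqs) for reqs in self.schedule[self.stage:]]
        self.stage = 0

inst = Inst(INT_ALU_SCHEDULE)
inst.advance()                               # move to the decode stage
inst.make_dynamic()                          # private copy of stages 1..4
inst.schedule[0].append("BPredUnit.Update")  # cached schedule untouched
```

Because instructions of the same class share one immutable schedule object, generating it once and caching it avoids rebuilding a priority queue per instruction; the copy cost is paid only by instructions that actually need a dynamic schedule.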

== ISA Support ==

* ALPHA - completed
* MIPS - completed
* X86 - not completed
* SPARC - not completed

== Full System Support ==

* TBD

== Checkpointing ==

* TBD