Discussion:
[gem5-dev] Review Request: Ruby: Resurrect Cache Warmup Capability
Nilay Vaish
2011-12-05 10:02:00 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/927/
-----------------------------------------------------------

Review request for Default, Ali Saidi, Gabe Black, Steve Reinhardt, and Nathan Binkert.


Summary
-------

Ruby: Resurrect Cache Warmup Capability
This patch resurrects ruby's cache warmup capability.


Diffs
-----

configs/ruby/MOESI_hammer.py c1ab57ea8805
configs/ruby/Ruby.py c1ab57ea8805
src/mem/SConscript c1ab57ea8805
src/mem/protocol/MOESI_CMP_token-L1cache.sm c1ab57ea8805
src/mem/protocol/MOESI_CMP_token-L2cache.sm c1ab57ea8805
src/mem/protocol/MOESI_CMP_token-dir.sm c1ab57ea8805
src/mem/protocol/MOESI_hammer-cache.sm c1ab57ea8805
src/mem/protocol/MOESI_hammer-dir.sm c1ab57ea8805
src/mem/ruby/buffers/MessageBuffer.cc c1ab57ea8805
src/mem/ruby/common/Global.hh c1ab57ea8805
src/mem/ruby/common/Global.cc c1ab57ea8805
src/mem/ruby/eventqueue/RubyEventQueue.hh c1ab57ea8805
src/mem/ruby/eventqueue/RubyEventQueue.cc c1ab57ea8805
src/mem/ruby/network/Network.hh c1ab57ea8805
src/mem/ruby/network/Network.cc c1ab57ea8805
src/mem/ruby/network/Topology.cc c1ab57ea8805
src/mem/ruby/network/garnet/fixed-pipeline/GarnetNetwork_d.cc c1ab57ea8805
src/mem/ruby/network/garnet/fixed-pipeline/NetworkInterface_d.cc c1ab57ea8805
src/mem/ruby/network/garnet/fixed-pipeline/OutVcState_d.cc c1ab57ea8805
src/mem/ruby/network/garnet/fixed-pipeline/RoutingUnit_d.cc c1ab57ea8805
src/mem/ruby/network/garnet/flexible-pipeline/GarnetNetwork.cc c1ab57ea8805
src/mem/ruby/network/garnet/flexible-pipeline/NetworkInterface.cc c1ab57ea8805
src/mem/ruby/network/garnet/flexible-pipeline/Router.cc c1ab57ea8805
src/mem/ruby/network/simple/PerfectSwitch.cc c1ab57ea8805
src/mem/ruby/network/simple/SimpleNetwork.cc c1ab57ea8805
src/mem/ruby/network/simple/Switch.cc c1ab57ea8805
src/mem/ruby/network/simple/Throttle.cc c1ab57ea8805
src/mem/ruby/recorder/CacheRecorder.hh c1ab57ea8805
src/mem/ruby/recorder/CacheRecorder.cc c1ab57ea8805
src/mem/ruby/recorder/SConscript c1ab57ea8805
src/mem/ruby/recorder/TraceRecord.hh c1ab57ea8805
src/mem/ruby/recorder/TraceRecord.cc c1ab57ea8805
src/mem/ruby/recorder/Tracer.hh c1ab57ea8805
src/mem/ruby/recorder/Tracer.cc c1ab57ea8805
src/mem/ruby/recorder/Tracer.py c1ab57ea8805
src/mem/ruby/slicc_interface/AbstractController.hh c1ab57ea8805
src/mem/ruby/slicc_interface/AbstractController.cc c1ab57ea8805
src/mem/ruby/system/Cache.py c1ab57ea8805
src/mem/ruby/system/CacheMemory.hh c1ab57ea8805
src/mem/ruby/system/CacheMemory.cc c1ab57ea8805
src/mem/ruby/system/DMASequencer.hh c1ab57ea8805
src/mem/ruby/system/MemoryControl.hh c1ab57ea8805
src/mem/ruby/system/MemoryControl.cc c1ab57ea8805
src/mem/ruby/system/RubyPort.hh c1ab57ea8805
src/mem/ruby/system/RubyPort.cc c1ab57ea8805
src/mem/ruby/system/SConscript c1ab57ea8805
src/mem/ruby/system/Sequencer.hh c1ab57ea8805
src/mem/ruby/system/Sequencer.cc c1ab57ea8805
src/mem/ruby/system/System.hh c1ab57ea8805
src/mem/ruby/system/System.cc c1ab57ea8805
src/mem/slicc/symbols/StateMachine.py c1ab57ea8805
src/sim/eventq.hh c1ab57ea8805

Diff: http://reviews.m5sim.org/r/927/diff


Testing
-------


Thanks,

Nilay
Nilay Vaish
2011-12-05 10:02:32 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/927/
-----------------------------------------------------------

(Updated 2011-12-05 02:02:32.672702)


Review request for Default.


Summary
-------

Ruby: Resurrect Cache Warmup Capability
This patch resurrects ruby's cache warmup capability.


Diffs
-----

configs/ruby/MOESI_hammer.py c1ab57ea8805
configs/ruby/Ruby.py c1ab57ea8805
src/mem/SConscript c1ab57ea8805
src/mem/protocol/MOESI_CMP_token-L1cache.sm c1ab57ea8805
src/mem/protocol/MOESI_CMP_token-L2cache.sm c1ab57ea8805
src/mem/protocol/MOESI_CMP_token-dir.sm c1ab57ea8805
src/mem/protocol/MOESI_hammer-cache.sm c1ab57ea8805
src/mem/protocol/MOESI_hammer-dir.sm c1ab57ea8805
src/mem/ruby/buffers/MessageBuffer.cc c1ab57ea8805
src/mem/ruby/common/Global.hh c1ab57ea8805
src/mem/ruby/common/Global.cc c1ab57ea8805
src/mem/ruby/eventqueue/RubyEventQueue.hh c1ab57ea8805
src/mem/ruby/eventqueue/RubyEventQueue.cc c1ab57ea8805
src/mem/ruby/network/Network.hh c1ab57ea8805
src/mem/ruby/network/Network.cc c1ab57ea8805
src/mem/ruby/network/Topology.cc c1ab57ea8805
src/mem/ruby/network/garnet/fixed-pipeline/GarnetNetwork_d.cc c1ab57ea8805
src/mem/ruby/network/garnet/fixed-pipeline/NetworkInterface_d.cc c1ab57ea8805
src/mem/ruby/network/garnet/fixed-pipeline/OutVcState_d.cc c1ab57ea8805
src/mem/ruby/network/garnet/fixed-pipeline/RoutingUnit_d.cc c1ab57ea8805
src/mem/ruby/network/garnet/flexible-pipeline/GarnetNetwork.cc c1ab57ea8805
src/mem/ruby/network/garnet/flexible-pipeline/NetworkInterface.cc c1ab57ea8805
src/mem/ruby/network/garnet/flexible-pipeline/Router.cc c1ab57ea8805
src/mem/ruby/network/simple/PerfectSwitch.cc c1ab57ea8805
src/mem/ruby/network/simple/SimpleNetwork.cc c1ab57ea8805
src/mem/ruby/network/simple/Switch.cc c1ab57ea8805
src/mem/ruby/network/simple/Throttle.cc c1ab57ea8805
src/mem/ruby/recorder/CacheRecorder.hh c1ab57ea8805
src/mem/ruby/recorder/CacheRecorder.cc c1ab57ea8805
src/mem/ruby/recorder/SConscript c1ab57ea8805
src/mem/ruby/recorder/TraceRecord.hh c1ab57ea8805
src/mem/ruby/recorder/TraceRecord.cc c1ab57ea8805
src/mem/ruby/recorder/Tracer.hh c1ab57ea8805
src/mem/ruby/recorder/Tracer.cc c1ab57ea8805
src/mem/ruby/recorder/Tracer.py c1ab57ea8805
src/mem/ruby/slicc_interface/AbstractController.hh c1ab57ea8805
src/mem/ruby/slicc_interface/AbstractController.cc c1ab57ea8805
src/mem/ruby/system/Cache.py c1ab57ea8805
src/mem/ruby/system/CacheMemory.hh c1ab57ea8805
src/mem/ruby/system/CacheMemory.cc c1ab57ea8805
src/mem/ruby/system/DMASequencer.hh c1ab57ea8805
src/mem/ruby/system/MemoryControl.hh c1ab57ea8805
src/mem/ruby/system/MemoryControl.cc c1ab57ea8805
src/mem/ruby/system/RubyPort.hh c1ab57ea8805
src/mem/ruby/system/RubyPort.cc c1ab57ea8805
src/mem/ruby/system/SConscript c1ab57ea8805
src/mem/ruby/system/Sequencer.hh c1ab57ea8805
src/mem/ruby/system/Sequencer.cc c1ab57ea8805
src/mem/ruby/system/System.hh c1ab57ea8805
src/mem/ruby/system/System.cc c1ab57ea8805
src/mem/slicc/symbols/StateMachine.py c1ab57ea8805
src/sim/eventq.hh c1ab57ea8805

Diff: http://reviews.m5sim.org/r/927/diff


Testing
-------


Thanks,

Nilay
Gabe Black
2011-12-05 10:15:49 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/927/#review1725
-----------------------------------------------------------



src/sim/eventq.hh
<http://reviews.m5sim.org/r/927/#comment2212>

One place you're using this is to slip an event to the front of the list, run it, and then put the head event back. That seems like a really bad way to do things. You could just call the process function on the event directly and avoid hacks like this.


- Gabe
Nilay Vaish
2011-12-05 16:07:38 UTC
Permalink
Post by Nilay Vaish
src/sim/eventq.hh, line 411
<http://reviews.m5sim.org/r/927/diff/1/?file=15916#file15916line411>
One place you're using this is to slip an event to the front of the list, run it, and then put the head event back. That seems like a really bad way to do things. You could just call the process function on the event directly and avoid hacks like this.
That can be done, but in this case we want to ensure that no event currently
on the event queue is processed before Ruby is done warming up the caches.
The call to simulate() will invoke the process() function of the just-inserted
event, which in turn will possibly insert many more events into the main event
queue. We want all of those events to be processed before the events that were
already on the queue. This was one of the approaches Steve suggested in a
discussion several months back.
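
For concreteness, here is a rough sketch of the pattern I mean. This is a toy
stand-in with made-up names, not gem5's actual eventq.hh interface; it only
illustrates "set the existing events aside, drain everything the warmup event
spawns, then restore the saved events and the original tick":

#include <cstdint>
#include <functional>
#include <iostream>
#include <map>
#include <utility>

using Tick = uint64_t;

// Toy event queue: callbacks ordered by tick.
struct ToyEventQueue {
    std::multimap<Tick, std::function<void()>> events;

    void schedule(Tick when, std::function<void()> fn) {
        events.emplace(when, std::move(fn));
    }

    // Service events in tick order until the queue is empty.
    void runAll() {
        while (!events.empty()) {
            auto it = events.begin();
            auto fn = std::move(it->second);
            events.erase(it);
            fn();                    // may schedule further events
        }
    }
};

int main() {
    ToyEventQueue q;
    Tick curTick = 1000;

    // A "real" event that must not run while the caches are being warmed up.
    q.schedule(1500, [] { std::cout << "real event at 1500\n"; });

    // 1. Set the existing events aside and remember the current tick.
    auto savedEvents = std::move(q.events);
    q.events.clear();
    Tick savedTick = curTick;

    // 2. Schedule the warmup event; when it runs it spawns more warmup
    //    events, and all of them drain before any saved event is seen.
    q.schedule(curTick, [&q, curTick] {
        q.schedule(curTick + 1, [] { std::cout << "warmup fill A\n"; });
        q.schedule(curTick + 2, [] { std::cout << "warmup fill B\n"; });
    });
    q.runAll();

    // 3. Put the original events back and rewind the clock to where it was.
    q.events = std::move(savedEvents);
    curTick = savedTick;

    q.runAll();                      // now the real event is processed
    return 0;
}

The actual change works on the head of the real event queue, as Gabe describes
above, rather than through a helper class like this; the sketch only shows the
intent.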


- Nilay


mir shan
2011-12-06 12:16:00 UTC
Permalink
***@talpur-OptiPlex-380:~/gem5-stable$ build/ARM_FS/gem5.opt configs/example/fs.py


Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/talpur/gem5-stable/src/python/m5/main.py", line 356, in main
    exec filecode in scope
  File "configs/example/fs.py", line 139, in <module>
    bare_metal=options.bare_metal)
  File "/home/talpur/gem5-stable/configs/common/FSConfig.py", line 218, in makeArmSystem
    if mdesc.disk()[-4:] == ".img":
  File "/home/talpur/gem5-stable/configs/common/Benchmarks.py", line 59, in disk
    return env.get('LINUX_IMAGE', disk('ael-arm.ext2'))
  File "/home/talpur/gem5-stable/configs/common/SysPaths.py", line 37, in disk
    system()
  File "/home/talpur/gem5-stable/configs/common/SysPaths.py", line 59, in system
    raise ImportError, "Can't find a path to system files."
ImportError: Can't find a path to system files.


Thanks
mirshany
Gabe Black
2011-12-07 00:32:52 UTC
Permalink
Post by Nilay Vaish
src/sim/eventq.hh, line 411
<http://reviews.m5sim.org/r/927/diff/1/?file=15916#file15916line411>
One place you're using this is to slip an event to the front of the list, run it, and then put the head event back. That seems like a really bad way to do things. You could just call the process function on the event directly and avoid hacks like this.
That can be done, but in this case we want to ensure that no event currently
on the event queue is processed, before Ruby is done with warming up caches.
The call to simulate() will invoke the process() function of the just inserted
event, which in turn will possibly insert many more events in the main event
queue. We want all those events to get processed before other events already
on the event queue. This was one of the ways suggested by Steve in a discussion
several months back.
I still don't think this is a good idea. You'll process the warm up events, then go back in time, and then process the real events. It all seems like a giant hack. Maybe Steve can talk about why he suggested this approach?


- Gabe


Steve Reinhardt
2011-12-07 01:18:58 UTC
Permalink
Post by Nilay Vaish
src/sim/eventq.hh, line 411
<http://reviews.m5sim.org/r/927/diff/1/?file=15916#file15916line411>
One place you're using this is to slip an event to the front of the list, run it, and then put the head event back. That seems like a really bad way to do things. You could just call the process function on the event directly and avoid hacks like this.
That can be done, but in this case we want to ensure that no event currently
on the event queue is processed, before Ruby is done with warming up caches.
The call to simulate() will invoke the process() function of the just inserted
event, which in turn will possibly insert many more events in the main event
queue. We want all those events to get processed before other events already
on the event queue. This was one of the ways suggested by Steve in a discussion
several months back.
I still don't think this is a good idea. You'll process the warm up events, then go back in time, and then process the real events. It all seems like a giant hack. Maybe Steve can talk about why he suggested this approach?
It is a giant hack. I recall having a discussion on this where we went over several ugly alternatives, and identified one as least ugly. To be honest, it was quite a while ago and I don't remember all the alternatives we came up with or why this one seemed best in the end (or even that this was specifically the one we decided on). IIRC, basically Ruby does cache warmup by replaying a synthetic trace that's saved with the checkpoint, and it needs curTick to move monotonically forward to process that trace, but at the end of replaying the trace you need to make sure that curTick is still what it was when you started. Again, IIRC, the original code had two different event queues, the normal one and one that was only used for warmup, and a flag in each Ruby object telling it which queue to
use. This approach of temporarily hijacking the global queue seemed like an improvement over that, since you just have one somewhat localized hack instead of a big distributed hack.

That said, it definitely needs some comments explaining what this is for.
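
For anyone who never saw the original code, a caricature of the old shape
versus the new one (all names here are invented, purely illustrative, and not
gem5's real classes):

#include <iostream>

// Old, distributed version: two queues exist side by side and every Ruby
// object carries a flag saying which one its events should go to, so every
// scheduling site has to care about warmup.
struct EventQueueStub {
    const char *name;
    void schedule() { std::cout << "scheduled on " << name << "\n"; }
};

EventQueueStub mainQueue{"main queue"};
EventQueueStub warmupQueue{"warmup queue"};

struct RubyObjectOldStyle {
    bool useWarmupQueue = false;     // per-object flag, set during warmup
    void scheduleEvent() {
        (useWarmupQueue ? warmupQueue : mainQueue).schedule();
    }
};

// New version: there is only the one real queue; warmup briefly takes it
// over (saving and later restoring its contents and the clock), and no Ruby
// object needs a flag or even knows that warmup is happening.
struct RubyObjectNewStyle {
    void scheduleEvent() { mainQueue.schedule(); }
};

int main() {
    RubyObjectOldStyle oldObj;
    oldObj.useWarmupQueue = true;
    oldObj.scheduleEvent();          // lands on the warmup queue

    RubyObjectNewStyle newObj;
    newObj.scheduleEvent();          // always the main queue
    return 0;
}

That is the sense in which the new approach is one somewhat localized hack
instead of a big distributed one.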


- Steve


Nilay Vaish
2011-12-07 02:49:58 UTC
Permalink
Post by Nilay Vaish
src/sim/eventq.hh, line 411
<http://reviews.m5sim.org/r/927/diff/1/?file=15916#file15916line411>
One place you're using this is to slip an event to the front of the list, run it, and then put the head event back. That seems like a really bad way to do things. You could just call the process function on the event directly and avoid hacks like this.
That can be done, but in this case we want to ensure that no event currently
on the event queue is processed, before Ruby is done with warming up caches.
The call to simulate() will invoke the process() function of the just inserted
event, which in turn will possibly insert many more events in the main event
queue. We want all those events to get processed before other events already
on the event queue. This was one of the ways suggested by Steve in a discussion
several months back.
I still don't think this is a good idea. You'll process the warm up events, then go back in time, and then process the real events. It all seems like a giant hack. Maybe Steve can talk about why he suggested this approach?
It is a giant hack. I recall having a discussion on this where we went over several ugly alternatives, and identified one as least ugly. To be honest, it was quite a while ago and I don't remember all the alternatives we came up with or why this one seemed best in the end (or even that this was specifically the one we decided on). IIRC, basically Ruby does cache warmup by replaying a synthetic trace that's saved with the checkpoint, and it needs curTick to move monotonically forward to process that trace, but at the end of replaying the trace you need to make sure that curTick is still what it was when you started. Again, IIRC, the original code had two different event queues, the normal one and one that was only used for warmup, and a flag in each Ruby object telling it which queue to
use. This approach of temporarily hijacking the global queue seemed like an improvement over that, since you just have one somewhat localized hack instead of a big distributed hack.
Post by Nilay Vaish
That said, it definitely needs some comments explaining what this is for.
Here is the link to the discussion that took place in June this year.

http://www.mail-archive.com/gem5-dev-1Gs4CP2/***@public.gmane.org/msg00024.html

I will add some comments to the function in question.


- Nilay


Gabe Black
2011-12-05 10:20:27 UTC
Permalink
I notice that you removed the four individual default reviewers from
your review. I wouldn't mind if that was the default since now I get all
review emails twice, once from the list and once to me directly. I
usually just delete the ones in my inbox instead of my m5-dev folder,
but it would be nice if I didn't need to.

Gabe
Nilay Vaish
2011-12-05 16:09:50 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/927/
-----------------------------------------------------------

(Updated 2011-12-05 08:09:50.021072)


Review request for Default, Ali Saidi, Gabe Black, Steve Reinhardt, and Nathan Binkert.


Changes
-------

Restored the default reviewers on Gabe's request.


Summary
-------

Ruby: Resurrect Cache Warmup Capability
This patch resurrects ruby's cache warmup capability.


Diffs
-----

configs/ruby/MOESI_hammer.py c1ab57ea8805
configs/ruby/Ruby.py c1ab57ea8805
src/mem/SConscript c1ab57ea8805
src/mem/protocol/MOESI_CMP_token-L1cache.sm c1ab57ea8805
src/mem/protocol/MOESI_CMP_token-L2cache.sm c1ab57ea8805
src/mem/protocol/MOESI_CMP_token-dir.sm c1ab57ea8805
src/mem/protocol/MOESI_hammer-cache.sm c1ab57ea8805
src/mem/protocol/MOESI_hammer-dir.sm c1ab57ea8805
src/mem/ruby/buffers/MessageBuffer.cc c1ab57ea8805
src/mem/ruby/common/Global.hh c1ab57ea8805
src/mem/ruby/common/Global.cc c1ab57ea8805
src/mem/ruby/eventqueue/RubyEventQueue.hh c1ab57ea8805
src/mem/ruby/eventqueue/RubyEventQueue.cc c1ab57ea8805
src/mem/ruby/network/Network.hh c1ab57ea8805
src/mem/ruby/network/Network.cc c1ab57ea8805
src/mem/ruby/network/Topology.cc c1ab57ea8805
src/mem/ruby/network/garnet/fixed-pipeline/GarnetNetwork_d.cc c1ab57ea8805
src/mem/ruby/network/garnet/fixed-pipeline/NetworkInterface_d.cc c1ab57ea8805
src/mem/ruby/network/garnet/fixed-pipeline/OutVcState_d.cc c1ab57ea8805
src/mem/ruby/network/garnet/fixed-pipeline/RoutingUnit_d.cc c1ab57ea8805
src/mem/ruby/network/garnet/flexible-pipeline/GarnetNetwork.cc c1ab57ea8805
src/mem/ruby/network/garnet/flexible-pipeline/NetworkInterface.cc c1ab57ea8805
src/mem/ruby/network/garnet/flexible-pipeline/Router.cc c1ab57ea8805
src/mem/ruby/network/simple/PerfectSwitch.cc c1ab57ea8805
src/mem/ruby/network/simple/SimpleNetwork.cc c1ab57ea8805
src/mem/ruby/network/simple/Switch.cc c1ab57ea8805
src/mem/ruby/network/simple/Throttle.cc c1ab57ea8805
src/mem/ruby/recorder/CacheRecorder.hh c1ab57ea8805
src/mem/ruby/recorder/CacheRecorder.cc c1ab57ea8805
src/mem/ruby/recorder/SConscript c1ab57ea8805
src/mem/ruby/recorder/TraceRecord.hh c1ab57ea8805
src/mem/ruby/recorder/TraceRecord.cc c1ab57ea8805
src/mem/ruby/recorder/Tracer.hh c1ab57ea8805
src/mem/ruby/recorder/Tracer.cc c1ab57ea8805
src/mem/ruby/recorder/Tracer.py c1ab57ea8805
src/mem/ruby/slicc_interface/AbstractController.hh c1ab57ea8805
src/mem/ruby/slicc_interface/AbstractController.cc c1ab57ea8805
src/mem/ruby/system/Cache.py c1ab57ea8805
src/mem/ruby/system/CacheMemory.hh c1ab57ea8805
src/mem/ruby/system/CacheMemory.cc c1ab57ea8805
src/mem/ruby/system/DMASequencer.hh c1ab57ea8805
src/mem/ruby/system/MemoryControl.hh c1ab57ea8805
src/mem/ruby/system/MemoryControl.cc c1ab57ea8805
src/mem/ruby/system/RubyPort.hh c1ab57ea8805
src/mem/ruby/system/RubyPort.cc c1ab57ea8805
src/mem/ruby/system/SConscript c1ab57ea8805
src/mem/ruby/system/Sequencer.hh c1ab57ea8805
src/mem/ruby/system/Sequencer.cc c1ab57ea8805
src/mem/ruby/system/System.hh c1ab57ea8805
src/mem/ruby/system/System.cc c1ab57ea8805
src/mem/slicc/symbols/StateMachine.py c1ab57ea8805
src/sim/eventq.hh c1ab57ea8805

Diff: http://reviews.m5sim.org/r/927/diff


Testing
-------


Thanks,

Nilay
Gabe Black
2011-12-07 00:36:16 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/927/#review1736
-----------------------------------------------------------


I was actually asking for us as individuals to be removed from the default set of reviewers since the mailing list is already there too.

- Gabe
Brad Beckmann
2011-12-07 23:46:59 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/927/#review1744
-----------------------------------------------------------


Thanks for pushing this forward, Nilay! It will be very good to see this functionality finally checked in. I know people are concerned about the eventq manipulation, but what Nilay has implemented is much better than some of our other suggestions. As a result of this change, we can start the process of removing all the files in src/mem/ruby/eventqueue. That will be very nice.

My two biggest concerns are the global variables and not utilizing the recently added cache flush support to create valid data checkpoints of both the caches and memory. See comments below.


src/mem/protocol/MOESI_CMP_token-L1cache.sm
<http://reviews.m5sim.org/r/927/#comment2235>

Do not comment out this line. Having to remove the clean-data checks, along with having two separate event queues, were the two reasons why I didn't check in this patch a long time ago. Now that we have cache flush support and functional access support in Ruby, we should be able to create Ruby checkpoints with valid data in both main memory and in the cache trace. Therefore, we should not have to worry about cache warmup traces breaking this check. If you're not aware of how those checks broke in the past, please let me know and I'm happy to discuss the details.



src/mem/protocol/MOESI_CMP_token-L2cache.sm
<http://reviews.m5sim.org/r/927/#comment2236>

Same here. Do not comment out.



src/mem/protocol/MOESI_CMP_token-dir.sm
<http://reviews.m5sim.org/r/927/#comment2237>

Here too



src/mem/protocol/MOESI_hammer-cache.sm
<http://reviews.m5sim.org/r/927/#comment2238>

here



src/mem/protocol/MOESI_hammer-dir.sm
<http://reviews.m5sim.org/r/927/#comment2239>

here



src/mem/ruby/common/Global.hh
<http://reviews.m5sim.org/r/927/#comment2240>

Why do these need to be global? Can they be added to the gem5 eventqueue (not the ruby eventqueue) or ruby system object instead? I seem to remember that is where they were before.



src/mem/ruby/system/System.cc
<http://reviews.m5sim.org/r/927/#comment2241>

After recording the cache contents, we should flush the cache contents to memory. Then Ruby's memory image can be checkpointed with valid data.


- Brad
Gabe Black
2011-12-08 00:03:09 UTC
Permalink
Post by Nilay Vaish
Post by Brad Beckmann
Thanks for pushing this forward Nilay! I will feel very good to see this functionality finally checked in. I know people are concerned about the eventq manipulation, but what Nilay has implemented is much better than some of our other suggestions. As a result of this change, we can start the process of removing all the files in src/mem/ruby/eventqueue. That will be very nice.
My two biggest concerns are the global variables and not utilizing the recently added cache flush support to create valid data checkpoints of both the caches and memory. See comments below.
Well, at least we're making the best of a presumably not so great situation. It does sound like a good idea to get rid of a parallel implementation of the event queue, but weren't we trying to have multiple queues for the parallelization work? If so, maybe we can have two instances of the same class and avoid the sneaky tricks. We may not be ready for that today, but it sounds like a decent longer term goal. In any case, I'm glad to see things improve even if we don't/can't go directly to an ideal solution.


- Gabe


Nilay Vaish
2011-12-08 00:50:15 UTC
Permalink
Post by Gabe Black
Post by Brad Beckmann
Thanks for pushing this forward Nilay! I will feel very good to see this functionality finally checked in. I know people are concerned about the eventq manipulation, but what Nilay has implemented is much better than some of our other suggestions. As a result of this change, we can start the process of removing all the files in src/mem/ruby/eventqueue. That will be very nice.
My two biggest concerns are the global variables and not utilizing the recently added cache flush support to create valid data checkpoints of both the caches and memory. See comments below.
Well, at least we're making the best of a presumably not so great situation. It does sound like a good idea to get rid of a parallel implementation of the event queue, but weren't we trying to have multiple queues for the parallelization work? If so, maybe we can have two instances of the same class and avoid the sneaky tricks. We may not be ready for that today, but it sounds like a decent longer term goal. In any case, I'm glad to see things improve even if we don't/can't go directly to an ideal solution.
Brad, thanks for the review. I can take care of all of the things you have pointed
out. I'll add functions for serializing and unserializing the memory image.

But I have other questions. Is flushing the cache necessary? If we are correctly
restoring the data in the caches, I think that we can checkpoint the memory
image even with stale data. Secondly, why were those checks breaking earlier?
I picked those lines directly from the patch you had provided to Somayeh.


- Nilay


Beckmann, Brad
2011-12-08 01:20:07 UTC
Permalink
Switching to email.

The thing to remember is that the cache trace doesn't keep track of whether shared data is dirty or not. It simply marks that address for a load request. We don't want to store dirty state in the trace, since we want to make these traces protocol-agnostic and each protocol can potentially manage dirty data differently. That is why the current patch breaks those checks.

Brad
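
To make the protocol-agnostic point concrete: a warmup-trace entry only needs to name the block and which controller should re-fetch it; dirty/ownership state is deliberately left out. A minimal sketch, with hypothetical field names rather than the actual TraceRecord layout in the patch:

    #include <cstdint>
    #include <vector>

    // One warmup-trace entry. Note there is no dirty or owned bit:
    // replaying the address as a read is enough to warm the cache, and
    // the protocol itself decides what state the line ends up in.
    struct WarmupRecord {
        uint64_t paddr;     // block-aligned physical address
        int      cntrlId;   // sequencer/controller that held the block
        bool     ifetch;    // replay as instruction fetch rather than load
    };

    using WarmupTrace = std::vector<WarmupRecord>;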



Nilay Vaish
2011-12-08 14:55:47 UTC
Permalink
Brad, but flushing the caches would mean that we cannot support periodic
checkpointing.

--
Nilay
Beckmann, Brad
2011-12-08 16:31:23 UTC
Permalink
I'm curious to know why you want to support periodic checkpointing with Ruby. Periodic checkpointing with the Classic memory system is certainly desirable, especially in atomic mode: it makes sense to use Classic+atomic with periodic checkpointing to find the interesting parts of a workload and then run from those interesting checkpoints using more detailed simulation (Ruby, O3, etc.). However, given the slowdown of Ruby, it is not clear to me why one would want to use periodic checkpointing with Ruby itself. Furthermore, as you know, taking a Ruby checkpoint perturbs the system: Ruby requires that all outstanding requests be completed before checkpointing the memory and cache state. I would like to avoid taking a Ruby checkpoint unless absolutely necessary. One may argue that we should checkpoint all the outstanding state in Ruby so that checkpointing doesn't perturb the system, but I strongly believe that it is important to keep Ruby checkpoints protocol and configuration agnostic. Tuning workloads is a tough job, and once you create a good set of checkpoints, you want to leverage that work as much as possible.

Brad
Nilay Vaish
2011-12-08 23:57:40 UTC
Permalink
Brad, you are right. Now that I think of it, it really does not make much
sense to take periodic checkpoints when the simulation is in timing mode
(and not in atomic mode), since checkpointing interferes with the timing.

I was thinking about checkpointing the memory image. I have not been able
to convince myself of a reasonably correct way of doing this. We need to
flush the caches before we can take a checkpoint, and it appears this can
only happen while the system is draining. My understanding of cache
flushing is that it writes the data back to memory and invalidates the
cache line. Since the cache no longer has the line, we cannot have that
line in the cache trace. It seems that only lines whose access permission
is Read_Only can be part of the cache trace. Is my understanding correct?

Thanks
Nilay
Beckmann, Brad
2011-12-09 01:10:26 UTC
Permalink
I was imagining that we flush the caches during the serialization/checkpointing process, not before it. I'm thinking the cache trace creation is the first step of the Ruby serialize function, then we flush the caches, and finally we take the memory checkpoint. Is there a reason we can't do that in that order?

Brad
Nilay Vaish
2011-12-09 02:39:20 UTC
Permalink
I was initially thinking that a flush request for a line would be issued as
the line is added to the trace. But then I realized (I might be wrong
here) that once the system has been drained, it may no longer process any
events. This means that the flush requests would not actually be processed
at all.

--
Nilay
Nilay Vaish
2011-12-09 16:02:12 UTC
Permalink
I thought more about this. It seems we can do it in the same manner as in
unserialize(). In the serialize() function, we can change the event queue
head, issue the cache flush requests, and then call simulate(). Once the
caches are flushed, the event queue head is restored and the memory image
is checkpointed.

--
Nilay
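
For illustration, the control flow being proposed here would look roughly like the sketch below, mirroring what unserialize() already does with the event queue. All of the helper names are made up; this only shows the ordering, not the code in the patch.

    #include <cstdio>

    // Toy stand-in for the simulator's event queue; it exists only to make
    // the proposed serialize() ordering concrete.
    struct EventQueue { const char *name; };

    static EventQueue realQueue   = { "real" };
    static EventQueue warmupQueue = { "warmup" };
    static EventQueue *curQueue   = &realQueue;

    static void recordCacheTrace()     { std::puts("1. record cache trace (addresses only)"); }
    static void issueFlushRequests()   { std::puts("3. enqueue FLUSH for every recorded line"); }
    static void simulateUntilFlushed() { std::printf("4. simulate() on the %s queue until write-backs finish\n", curQueue->name); }
    static void serializeMemoryImage() { std::puts("6. checkpoint the now-consistent memory image"); }

    // Build the trace first, then flush under a temporary event queue (the
    // system is already drained, so the real queue is idle), then dump the
    // memory image once the caches are clean.
    static void serializeRubyState()
    {
        recordCacheTrace();

        EventQueue *saved = curQueue;   // 2. swap in the cooldown queue
        curQueue = &warmupQueue;

        issueFlushRequests();
        simulateUntilFlushed();

        curQueue = saved;               // 5. restore the real event queue
        serializeMemoryImage();
    }

    int main() { serializeRubyState(); }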
Beckmann, Brad
2011-12-09 17:07:34 UTC
Permalink
Ah, I understand the issue now. Yes, your solution of manipulating the event queue seems like it will work.

It would be nice if we could have a regression test that exercises this functionality once you check it in. I could imagine a future check-in accidentally breaking this process.

Brad
Steve Reinhardt
2011-12-09 18:50:47 UTC
Permalink
Have you looked at issues with statistics or other Ruby state that might
make this process awkward? E.g., will the flush operations affect the
stats, or are there timestamps in some of the controllers that will get
confused if time does not increase monotonically?

Steve
Beckmann, Brad
2011-12-09 19:14:54 UTC
Permalink
That could be an issue, but I would imagine that the stats aren't that important to the user if they're taking a checkpoint at the end of their run. I agree that it would be nice to keep the flush from perturbing the results, but I don't think it is a "show stopper".

Brad


Nilay Vaish
2012-01-04 00:43:40 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/927/
-----------------------------------------------------------

(Updated 2012-01-03 16:43:40.577821)


Review request for Default, Ali Saidi, Gabe Black, Steve Reinhardt, and Nathan Binkert.


Summary (updated)
-------

Ruby: Resurrect Cache Warmup Capability
This patch resurrects ruby's cache warmup capability. It essentially
makes use of all the infrastructure that was added to the controllers,
memories and the cache recorder.


Diffs (updated)
-----

src/mem/ruby/buffers/MessageBuffer.cc 09b482ee9ae0
src/mem/ruby/system/DMASequencer.hh 09b482ee9ae0
src/mem/ruby/system/DirectoryMemory.cc 09b482ee9ae0
src/mem/ruby/system/RubyPort.hh 09b482ee9ae0
src/mem/ruby/system/RubyPort.cc 09b482ee9ae0
src/mem/ruby/system/Sequencer.hh 09b482ee9ae0
src/mem/ruby/system/Sequencer.cc 09b482ee9ae0
src/mem/ruby/system/System.hh 09b482ee9ae0
src/mem/ruby/system/System.cc 09b482ee9ae0

Diff: http://reviews.m5sim.org/r/927/diff


Testing
-------


Thanks,

Nilay
Brad Beckmann
2012-01-07 01:49:56 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/927/#review1863
-----------------------------------------------------------


I have a few questions below. Also, FYI... it appears that some parts of the patch don't apply cleanly to the main repo.


src/mem/ruby/system/Sequencer.cc
<http://reviews.m5sim.org/r/927/#comment2370>

I like the term cooldown, but I think it is a little bit confusing in this situation. If I understand this code correctly, you're not trying to avoid data reads and writes before taking a checkpoint. Rather, you don't want the hit callbacks of Flush requests trying to modify memory. Is that correct? If so, I thought we already dealt with that problem by simply not setting the data pointer of the flush packet. Maybe I'm missing something.



src/mem/ruby/system/Sequencer.cc
<http://reviews.m5sim.org/r/927/#comment2369>

Did you mean to set this value to false? I'm not sure why it was ever set to true before, but it is unclear how this patch would impact this value.



src/mem/ruby/system/Sequencer.cc
<http://reviews.m5sim.org/r/927/#comment2371>

Am I reading this correctly? Is this issuing the next fetch or flush request inside the previous request's callback? Is that how the patch used to work?

If so, that seems very odd and would create a very deep call stack. I must be missing something here. Possibly the next request is being scheduled for the next cycle. If that is the case, then these functions should be renamed to something like "notifyFetchCompletion" or "notifyFlushCompletion".


- Brad
Nilay Vaish
2012-01-07 02:56:50 UTC
Permalink
Post by Nilay Vaish
Post by Brad Beckmann
I have a few questions below. Also, FYI... it appears that some parts of the patch don't apply cleanly to the main repo.
I think that was happening because I had committed some changes to the repo
after I updated the review request. I have updated the review request again,
and everything seems to be normal now.
Post by Nilay Vaish
Post by Brad Beckmann
src/mem/ruby/system/Sequencer.cc, line 564
<http://reviews.m5sim.org/r/927/diff/2/?file=16851#file16851line564>
Am I reading this correctly? Is this issuing the next fetch or flush request inside the previous request's callback? Is that how the patch used to work?
If so, that seems very odd and would create a very deep call stack. I must be missing something here. Possibly the next request is being scheduled for the next cycle. If that is the case, then these functions should be renamed to something like "notifyFetchCompletion" or "notifyFlushCompletion".
The flush/fetch request will get enqueued in the mandatory queue, which
will be serviced the next time the controller wakes up. This will not
create a deep call stack. Actually, the Cache Recorder is issuing these
fetch and flush requests. From that point of view, I feel issueFlushRequest()
sounds better.

But I have a question here. Can a flush request be generated for any
cache line? Or does that cache line need to be in the L1 cache? I am expecting
that flush requests can be issued for any cache line, or else I am not sure how
data from the L2 cache will get flushed to the memory.
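
For what the shallow-stack replay described above looks like, here is a self-contained sketch; the CacheRecorder and mandatory-queue names are stand-ins, not the patch's actual interfaces:

    #include <cstdio>
    #include <cstdint>
    #include <deque>
    #include <vector>

    // Toy mandatory queue: requests placed here are serviced the next time
    // the "controller" wakes up, which is what keeps the call stack shallow.
    static std::deque<uint64_t> mandatoryQueue;

    // Hypothetical recorder replaying a flush (cooldown) trace one line at a time.
    struct CacheRecorder {
        std::vector<uint64_t> trace;   // block addresses recorded at checkpoint time
        size_t next = 0;

        // Called from the completion callback: enqueue the next flush
        // instead of issuing it recursively.
        void enqueueNextFlushRequest() {
            if (next < trace.size())
                mandatoryQueue.push_back(trace[next++]);
        }
    };

    int main()
    {
        CacheRecorder rec;
        rec.trace = { 0x1000, 0x1040, 0x1080 };
        rec.enqueueNextFlushRequest();          // prime the first request

        // Stand-in for repeated controller wakeups inside simulate().
        while (!mandatoryQueue.empty()) {
            uint64_t addr = mandatoryQueue.front();
            mandatoryQueue.pop_front();
            std::printf("flush completed for 0x%llx\n",
                        (unsigned long long)addr);
            rec.enqueueNextFlushRequest();      // the callback chains the next one
        }
    }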
Post by Nilay Vaish
Post by Brad Beckmann
src/mem/ruby/system/Sequencer.cc, line 534
<http://reviews.m5sim.org/r/927/diff/2/?file=16851#file16851line534>
Did you mean to set this value to false? I'm not sure why it was ever set to true before, but it is unclear how this patch would impact this value.
I don't remember why I made this change either. Since we already
check for the data pointer not being NULL, I think this is
unnecessary. I'll remove it from the patch.
Post by Nilay Vaish
Post by Brad Beckmann
src/mem/ruby/system/Sequencer.cc, line 526
<http://reviews.m5sim.org/r/927/diff/2/?file=16851#file16851line526>
I like the term cooldown, but I think it is a little bit confusing in this situation. If I understand this code correctly, you're not trying to avoid data reads and writes before taking a checkpoint. Rather you don't want the hitcallbacks of Flush requests trying to modify memory. Is that correct? If so, I thought we already dealt with that problem by simply not setting the data pointer of flush packet. Maybe I'm missing something.
You are right that the data pointer of the flush packet is NULL.
But if this 'else if' clause is removed, then the 'else' clause
will be executed, and if the MemoryAccess debug flag is enabled,
we will get some prints.

As such, I am ready to remove this 'else if' clause.


- Nilay


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/927/#review1863
-----------------------------------------------------------
Brad Beckmann
2012-01-09 19:54:33 UTC
Permalink
Post by Nilay Vaish
Post by Brad Beckmann
src/mem/ruby/system/Sequencer.cc, line 526
<http://reviews.m5sim.org/r/927/diff/2/?file=16851#file16851line526>
I like the term cooldown, but I think it is a little bit confusing in this situation. If I understand this code correctly, you're not trying to avoid data reads and writes before taking a checkpoint. Rather you don't want the hitcallbacks of Flush requests trying to modify memory. Is that correct? If so, I thought we already dealt with that problem by simply not setting the data pointer of flush packet. Maybe I'm missing something.
You are right that the data pointer of the flush packet is NULL.
But if this 'else if' clause is removed, then the 'else' clause
will be executed, and if debug flag MemoryAccess is enabled,
we will get some prints.
As such I am ready to remove this 'else if' clause.
I think seeing the prints when the MemoryAccess debug flag is enabled is fine, but if you would rather not see them for flush requests, I would add an explicit "not a Flush request" condition to make it clearer what you're trying to avoid here. Adding the cooldown flag is really confusing.
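
Something along these lines is presumably what is meant; the request-type enum and the callback below are made up for illustration, only the shape of the check matters:

    #include <cstdio>
    #include <cstdint>

    // Illustrative request types; the real enum in the patch is assumed here.
    enum class ReqType { LD, ST, FLUSH };

    // Guard the memory access (and its MemoryAccess-style debug print) with
    // an explicit "not a Flush request" test rather than a cooldown flag.
    static void hitCallback(ReqType type, uint8_t *data, uint64_t addr)
    {
        if (type != ReqType::FLUSH && data != nullptr) {
            std::printf("MemoryAccess: touching block 0x%llx\n",
                        (unsigned long long)addr);
            // ... read or write the functional memory image here ...
        }
        // Flush requests fall through: there is nothing to copy.
    }

    int main()
    {
        uint8_t block[64] = {};
        hitCallback(ReqType::LD,    block,   0x2000);  // prints
        hitCallback(ReqType::FLUSH, nullptr, 0x2040);  // silently skipped
    }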
Post by Nilay Vaish
Post by Brad Beckmann
src/mem/ruby/system/Sequencer.cc, line 564
<http://reviews.m5sim.org/r/927/diff/2/?file=16851#file16851line564>
Am I reading this correctly? Is this issuing the next fetch or flush request inside the previous request's callback? Is that how the patch used to work?
If so, that seems very odd and would create a very deep call stack. I must be missing something here. Possibly the next request is being scheduled for the next cycle. If that is the case, then these functions should be renamed to something like "notifyFetchCompletion" or "notifyFlushCompletion".
The flush/fetch request will get enqueued in the mandatory queue, which
will be serviced the next time the controller wakes up. This will not
create a deep call stack. Actually, the Cache Recorder is issuing these
fetch and flush requests. From that point of view, I feel issueFlushRequest()
sounds better.
But I have a question here. Can a flush request be generated for any
cache line? Or does that cache line need to be in the L1 cache? I am expecting
that flush requests can be issued for any cache line, or else I am not sure how
data from the L2 cache will get flushed to the memory.
I still don't think the functions issueFlush/FetchRequest are named correctly. They definitely imply a deep call stack. How about enqueueNextFlush/FetchRequest instead?

To answer your question, flush requests can be issued for any address in physical memory. You don't have to move data to the L1 to flush the L2, though underneath, the protocol's transitions may implement the flush in that way.


- Brad


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/927/#review1863
-----------------------------------------------------------
Nilay Vaish
2012-01-07 02:24:19 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/927/
-----------------------------------------------------------

(Updated 2012-01-06 18:24:19.752960)


Review request for Default, Ali Saidi, Gabe Black, Steve Reinhardt, and Nathan Binkert.


Summary
-------

Ruby: Resurrect Cache Warmup Capability
This patch resurrects ruby's cache warmup capability. It essentially
makes use of all the infrastructure that was added to the controllers,
memories and the cache recorder.


Diffs (updated)
-----

src/mem/ruby/buffers/MessageBuffer.cc c3d878fbdaea
src/mem/ruby/system/DMASequencer.hh c3d878fbdaea
src/mem/ruby/system/DirectoryMemory.cc c3d878fbdaea
src/mem/ruby/system/RubyPort.hh c3d878fbdaea
src/mem/ruby/system/RubyPort.cc c3d878fbdaea
src/mem/ruby/system/Sequencer.hh c3d878fbdaea
src/mem/ruby/system/Sequencer.cc c3d878fbdaea
src/mem/ruby/system/System.hh c3d878fbdaea
src/mem/ruby/system/System.cc c3d878fbdaea

Diff: http://reviews.m5sim.org/r/927/diff


Testing
-------


Thanks,

Nilay
Nilay Vaish
2012-01-07 13:15:42 UTC
Permalink
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/927/
-----------------------------------------------------------

(Updated 2012-01-07 05:15:42.888984)


Review request for Default, Ali Saidi, Gabe Black, Steve Reinhardt, and Nathan Binkert.


Summary
-------

Ruby: Resurrect Cache Warmup Capability
This patch resurrects ruby's cache warmup capability. It essentially
makes use of all the infrastructure that was added to the controllers,
memories and the cache recorder.


Diffs (updated)
-----

src/mem/ruby/buffers/MessageBuffer.cc c3d878fbdaea
src/mem/ruby/system/DMASequencer.hh c3d878fbdaea
src/mem/ruby/system/DirectoryMemory.cc c3d878fbdaea
src/mem/ruby/system/RubyPort.hh c3d878fbdaea
src/mem/ruby/system/RubyPort.cc c3d878fbdaea
src/mem/ruby/system/Sequencer.hh c3d878fbdaea
src/mem/ruby/system/Sequencer.cc c3d878fbdaea
src/mem/ruby/system/System.hh c3d878fbdaea
src/mem/ruby/system/System.cc c3d878fbdaea

Diff: http://reviews.m5sim.org/r/927/diff


Testing
-------


Thanks,

Nilay
Nilay Vaish
2012-01-07 13:31:49 UTC
Permalink
Brad, I have updated the patch. There is one thing that I have not talked
about, which probably will not be clear from just reading the patch. The
warmup and cooldown happen differently for Sparse Memory than for Memory
Vector, and hence there is a difference in timing between the two
memories.

Sparse Memory makes use of blocks allocated by the directory controller,
so while warming up the Sparse Memory, timing requests are issued for all
the blocks that were in the Sparse Memory at the time of cooldown,
followed by timing requests for all the blocks that were in the caches.

In the case of Memory Vector, the memory pages are maintained independently
of the directory controller, so those pages can be recreated without
issuing any timing requests. This is what causes the difference in timing.

One can, in theory, issue requests for all the blocks that make up the
pages of the Memory Vector, but many more requests would likely be issued
than necessary (not all blocks of a page may have been touched), and the
timing would still be unlikely to match that obtained while warming up
Sparse Memory.

--
Nilay
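
The two warmup paths can be summarized with a short sketch; the names below are purely illustrative (the real code drives the controllers through the cache recorder), but it shows where the timing difference comes from:

    #include <cstdio>
    #include <cstdint>
    #include <vector>

    static void issueTimingFetch(uint64_t addr)
    {
        std::printf("timing fetch for block 0x%llx\n", (unsigned long long)addr);
    }

    static void warmup(const std::vector<uint64_t> &directoryBlocks, // blocks held by the directory
                       const std::vector<uint64_t> &cacheBlocks,     // blocks held by the caches
                       bool usingSparseMemory)
    {
        if (usingSparseMemory) {
            // Sparse Memory: directory entries only exist once the directory
            // controller allocates them, so every block it held at cooldown
            // must be replayed as a timing request.
            for (uint64_t a : directoryBlocks)
                issueTimingFetch(a);
        }
        // Memory Vector: pages live outside the directory controller, so the
        // checkpointed image is simply copied back functionally -- no timing
        // requests are needed for blocks that were only in memory.

        // In both cases, blocks that were resident in the caches are replayed
        // as timing requests to warm the cache hierarchy itself.
        for (uint64_t a : cacheBlocks)
            issueTimingFetch(a);
    }

    int main()
    {
        warmup({ 0x1000, 0x1040 }, { 0x2000 }, /* usingSparseMemory = */ true);
    }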
Nilay Vaish
2012-01-07 14:02:39 UTC
Permalink
This also means that a checkpoint created with Memory Vector cannot be
used to bring up a system with Sparse Memory and vice-versa.

--
Nilay