Procrastination has been a powerful technique for enabling scalable data-structure access. RCU allows read-only data-structure operations to avoid explicit synchronization, and thus to scale well with an ever-increasing number of cores. However, updates often require complicated synchronization between readers and updaters that does not scale well with more cores and introduces undesirable latency spikes: during updates, memory reclamation must be delayed until a future point at which it is safe.
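To make that deferral concrete, the following is a minimal sketch of an RCU reader and updater, assuming the userspace RCU library (liburcu, built with -lurcu); the struct and function names are illustrative and not taken from any particular system.

/*
 * Minimal sketch of RCU-style deferred reclamation, assuming liburcu.
 * struct config, read_value, and update_value are illustrative names.
 */
#include <urcu.h>
#include <stdio.h>
#include <stdlib.h>

struct config {
    int value;
};

static struct config *global_cfg;   /* RCU-protected pointer */

/* Reader: no locks and no atomic read-modify-write, just a delimited
 * read-side critical section. */
static int read_value(void)
{
    int v;

    rcu_read_lock();
    v = rcu_dereference(global_cfg)->value;
    rcu_read_unlock();
    return v;
}

/* Updater: publish a new version, then procrastinate; the old version
 * is freed only after all pre-existing readers have finished. */
static void update_value(int v)
{
    struct config *newc = malloc(sizeof(*newc));
    struct config *oldc = global_cfg;

    newc->value = v;
    rcu_assign_pointer(global_cfg, newc);
    synchronize_rcu();              /* wait for a grace period */
    free(oldc);                     /* reclamation deferred until safe */
}

int main(void)
{
    rcu_register_thread();
    global_cfg = calloc(1, sizeof(*global_cfg));
    update_value(42);
    printf("%d\n", read_value());
    rcu_unregister_thread();
    return 0;
}

The cost here falls on the updater: synchronize_rcu() must somehow determine that all pre-existing readers are gone, and it is that reader/updater coordination which scales poorly and causes latency spikes.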
In this talk, we'll introduce a technique that uses local access to time as a global ordering on events, making updates cheap even on preemptive systems. We'll discuss the multiple ways we've used time to synchronize between cores, thereby minimizing shared-memory synchronization.
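As a rough illustration of the idea (a hypothetical sketch, not the talk's actual interface), each thread below publishes in its own slot the local clock value at which it entered its current read-side section; an updater may reclaim an unlinked object once every slot shows either a later time than the removal or the not-reading sentinel. The names (enter_time, read_enter, wait_for_quiescence, MAX_THREADS) and the choice of CLOCK_MONOTONIC are assumptions made for illustration.

/*
 * Hypothetical sketch of time-based quiescence: readers publish only
 * per-thread timestamps taken from a locally readable clock.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <time.h>

#define MAX_THREADS 64
#define NOT_READING UINT64_MAX  /* sentinel: thread is outside any read section */

/* Per-thread timestamp of the current read-side section's start. */
static _Atomic uint64_t enter_time[MAX_THREADS];

static void quiescence_init(void)
{
    for (int t = 0; t < MAX_THREADS; t++)
        atomic_store_explicit(&enter_time[t], NOT_READING, memory_order_relaxed);
}

/* Local access to time: a clock read on the local core, no shared writes. */
static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* Readers write only their own slot; no cross-core communication. */
static void read_enter(int tid)
{
    atomic_store_explicit(&enter_time[tid], now_ns(), memory_order_release);
}

static void read_exit(int tid)
{
    atomic_store_explicit(&enter_time[tid], NOT_READING, memory_order_release);
}

/*
 * An updater unlinks an object, records the removal time, and may free
 * the object once every thread either is not reading or began reading
 * after the removal, since such a thread can no longer reach the object.
 */
static void wait_for_quiescence(uint64_t removed_at)
{
    for (int t = 0; t < MAX_THREADS; t++) {
        uint64_t entered;
        do {
            entered = atomic_load_explicit(&enter_time[t], memory_order_acquire);
        } while (entered <= removed_at);
    }
}

The appeal of such a scheme is that readers touch only their own cache line and otherwise consult a local clock; the sketch glosses over the memory-ordering and cross-core clock-agreement details that a real implementation must handle.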