Of the technical sessions I attended, two presentations really stood out. Both were invited talks from "Special Sessions", and what I usually look for in such talks is ideas for future research directions (sadly, the "Future of EDA" panel didn't really deliver on that front yesterday).
First, Keith Bowman from Intel had a really nice presentation on micro-architectural techniques to deal with increasing process variations. The key idea is to instrument a design with two additional features:
- Run-time error detection, using, for example, double sampling (a shadow latch captures the same signal slightly after the main flip-flop, so a mismatch flags a timing error)
- A mechanism to recover from errors when they occur
Together, these features allow the fabricated die to ignore the built-in design margins and scale the voltage down to the point where timing constraints are just met (a toy simulation of such a scheme appears after the list below). Of course, these ideas are not entirely new, but I thought that some of the future research directions that the speaker alluded to were pretty interesting:
- Characterizing energy versus probability-of-error trade-offs: Obviously, as the voltage is reduced, more errors begin to manifest in the circuit, and at some point the overhead of fault recovery starts significantly hurting throughput. From an EDA perspective, can we come up with tools that characterize this trade-off curve in advance, instead of waiting for silicon test data after fabrication? (A first cut at such a curve is sketched after this list.)
- What if we are willing to tolerate a small probability of error, instead of invoking a fault-recovery mechanism every time? Can we come up with EDA tools that exploit this tolerance for further timing optimization, or maybe even resynthesize the logic?
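To make the mechanism concrete, here's a toy Monte Carlo sketch of a double-sampling (Razor-style) stage: the main flip-flop samples at the clock edge, a shadow latch samples a little later, and a mismatch triggers a replay. The delay model and every constant below are my own placeholders for illustration, not numbers from the talk.

```python
import random

T_CLK = 1.0           # clock period (normalized units)
SHADOW_SKEW = 0.3     # detection window: shadow latch samples this much later
REPLAY_PENALTY = 5    # extra cycles to recover from a detected timing error

def path_delay(vdd):
    # Hypothetical delay model: delay rises as Vdd drops, with Gaussian
    # variation standing in for process/environmental effects.
    return random.gauss(0.7 / vdd, 0.05)

def throughput(vdd, n=200_000):
    ops, cycles, silent = 0, 0, 0
    for _ in range(n):
        d = path_delay(vdd)
        if d <= T_CLK:
            ops, cycles = ops + 1, cycles + 1                   # clean cycle
        elif d <= T_CLK + SHADOW_SKEW:
            ops, cycles = ops + 1, cycles + 1 + REPLAY_PENALTY  # caught, replayed
        else:
            silent, cycles = silent + 1, cycles + 1  # beyond the detection
                                                     # window: undetected error
    return ops / cycles, silent

for vdd in (1.0, 0.9, 0.8, 0.7, 0.6):
    tput, silent = throughput(vdd)
    print(f"Vdd={vdd:.1f}  effective throughput={tput:.3f}  silent errors={silent}")
```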
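And here's a first cut at the energy versus probability-of-error curve from the first bullet, with an assumed logistic error model standing in for real silicon characterization; the open EDA question is whether a tool could predict this curve pre-silicon.

```python
import math

REPLAY_PENALTY = 5    # cycles lost per detected error (assumed)

def p_error(vdd):
    # Placeholder error model: errors are negligible at high Vdd and rise
    # sharply near an assumed critical voltage of 0.7 (normalized).
    return 1.0 / (1.0 + math.exp(40.0 * (vdd - 0.7)))

def energy_per_useful_op(vdd):
    e_cycle = vdd ** 2                                   # dynamic energy ~ C * Vdd^2
    expected_cycles = 1 + p_error(vdd) * REPLAY_PENALTY  # replay overhead
    return e_cycle * expected_cycles

# Sweep Vdd from 0.5 to 1.0 and find the energy-optimal operating point.
energy, best_vdd = min((energy_per_useful_op(0.5 + 0.01 * i), 0.5 + 0.01 * i)
                       for i in range(51))
print(f"minimum energy/op {energy:.3f} at Vdd={best_vdd:.2f}")
```

With these made-up numbers, the minimum lands well below nominal voltage: the point where cheaper cycles stop paying for the extra replays.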
In the afternoon, Shekhar Borkar, also from Intel (I'm not plugging Intel, I promise!), spoke about design challenges at the 22 nm node. While the talk had a fairly broad scope and covered plenty of stuff I'd heard in other venues before, some of the insights were pretty novel:
- As the number of transistors on a chip increases, it becomes increasingly important to use those transistors judiciously and power-efficiently. According to the speaker, the best way to do this is to operate logic in the near-threshold voltage regime, where its energy efficiency is greatest, and to perform fine-grained dynamic power management to deal with workload and environmental variations. (A back-of-the-envelope energy model appears after this list.)
- The communication network between the components on a die must be scalable, but also power efficient. Scaling standard mesh-based packet-switched NoCs to the 22 nm node may not be the most power-efficient solution, and competing approaches like circuit-switched networks should also be evaluated (a rough cost comparison is sketched below).
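The near-threshold argument in the first bullet is easy to see with a back-of-the-envelope model: dynamic energy per operation falls as Vdd², but delay blows up as Vdd approaches the threshold voltage, so leakage energy (leakage power times the longer cycle time) takes over and total energy per operation bottoms out somewhere just above Vth. All of the parameters below are assumptions chosen only to show the shape of the curve.

```python
VTH = 0.35            # assumed threshold voltage (V)

def energy_per_op(vdd):
    # Only meaningful for vdd comfortably above VTH.
    e_dyn = vdd ** 2                          # dynamic energy ~ C * Vdd^2
    delay = vdd / (vdd - VTH) ** 1.3          # alpha-power-law-style delay
    p_leak = 0.02 * vdd                       # crude leakage-power model
    return e_dyn + p_leak * delay             # dynamic + leakage energy/op

for vdd in (0.40, 0.45, 0.50, 0.60, 0.80, 1.00):
    print(f"Vdd={vdd:.2f}  E/op={energy_per_op(vdd):.3f}")
# With these numbers, the minimum lands near Vdd ~ 0.45, i.e. just above Vth.
```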
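On the interconnect bullet, the packet-versus-circuit trade-off largely comes down to amortization: a packet-switched NoC pays buffering, arbitration, and crossbar energy at every hop for every flit, while a circuit-switched one pays a one-time path-setup cost and then streams flits through comparatively cheap switches. The per-component energies below are invented placeholders, not 22 nm numbers.

```python
E_ROUTER = 1.0     # packet switching: buffer + arbitrate + crossbar, per flit/hop
E_LINK   = 0.4     # wire traversal energy, per flit per hop (both schemes)
E_SETUP  = 20.0    # circuit switching: one-time path-setup cost per hop
E_STREAM = 0.1     # circuit switching: per-flit switch cost once the path is up

def packet_switched(hops, flits):
    return flits * hops * (E_ROUTER + E_LINK)

def circuit_switched(hops, flits):
    return E_SETUP * hops + flits * hops * (E_STREAM + E_LINK)

for flits in (4, 16, 64, 256):
    ps, cs = packet_switched(8, flits), circuit_switched(8, flits)
    print(f"{flits:4d}-flit message, 8 hops: packet={ps:7.1f}  circuit={cs:7.1f}")
# Short messages favor packet switching; long streams amortize the setup cost.
```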