Thursday, July 30, 2009

Update on the Special Panel on Green Technologies

Before attending the special panel, my opinion was that the connection between EDA and green technology is tenuous at best. I was hoping that the panel would prove me wrong. After attending the special panel, my opinion is that the connection between EDA and green technology is tenuous at best. To be fair, the panelists did outline a number of areas where folks who work on semiconductor technology, sensor and actuator networks, control systems and wireless communications could help increase the efficiency of energy production and reduce greenhouse emissions, but really, nothing specific to EDA.

Now, there is no reason why EDA folks shouldn't also look at these issues, but we do need to make a strong case that we bring something new to the table; that EDA experts would do a better job than experts who have been working on, for example, control systems for ages. Or maybe I'm just being too myopic?

In any case, here are some of the technology challenges that came up during the discussion -
  • Smart electronics for metering electrical grids
  • Sensor networks for monitoring buildings and efficient wireless communication networks to transmit the collected data
  • Reducing data center power consumption
  • Control system modeling for fuel efficient electric cars

To conclude, I really didn't think the panel was very appropriate for DAC. The panelists were clearly struggling, through no fault of their own, to present opportunities relevant to the EDA community and mostly ended up speaking at length about their own endeavours.

What's Hot at DAC on Thursday

There are two big events slated for the day, and I'll mention them in the order of what I'm looking forward to more.

Wild and Crazy Idea (WACI) Session (9:00 AM, Room 131) This has been one of the most popular sessions at DAC since it was first held two years back. The idea is to have people present "wacky" ideas that aren't necessarily perfectly fleshed out, but are revolutionary rather than evolutionary. Here are two interesting papers from this session:

  • "Human Computing for EDA" DeOrio and Bertacco, U. Michigan
  • "A Learning Digital Computer" Barr, Basu, Brink, Hasler, GaTech

SPECIAL PLENARY PANEL: How Green Is My Silicon Valley (12:00 noon, Gateway Ballroom) The Keynote Session yesterday was preceded by a video plugging this session, so there's certainly been an attempt to drum up publicity for this. Given the impressive list of invited panelists, it should be interesting; I just hope the panelists emphasize the bridge to the EDA community. There's a related session on green data centers in the evening today that promises to be more immediately relevant to EDA folks.

I'm at the speaker breakfast right now where they just announced the best paper award winners, though I presume it's not going to be made public till the formal award ceremony. There was an excellent selection of best paper award nominees this year, but without taking anything away from the nominees, my personal pick for best paper, which wasn't even nominated :( , was the paper titled "Physically Justifiable Die-Level Modeling of Spatial Variation in View of Systematic Across Wafer Variability." If any of the authors are reading this, great job guys!

Wednesday, July 29, 2009

Update from the Wednesday Keynote

Today's keynote by Bill Dally was, in many ways, distinctly different from yesterday's in that it got down and dirty with technical specifics from the very first slide and stayed that way through to the end. I'm not sure if everyone in the audience liked it, but I, for one, had a great time.

Bill Dally's argument basically came down to the following - current general-purpose processor architects are in a state of denial - denial about the fact that (1) in the absence of frequency scaling, performance can only be extracted by explicitly parallel architectures; and (2) power efficiency can only be obtained by locality-aware distributed on-chip memories, as opposed to flat cache hierarchies.

Both arguments were well motivated. From a parallelism perspective, Prof. Dally had a nice plot showing that improvements in processor throughput have traditionally been driven by three factors - improvement in gate delay, improvement in clock frequency through deeper pipelining, and architectural innovations that make use of the additional transistors available in every technology generation. Till 2001, this led to a 52% performance improvement every generation.

Here's the problem though - increasing clock frequency is no longer possible, and extracting parallelism using single-core superscalar out-of-order processors is running out of steam. In effect, all we're left with is the improvement in gate delay from technology scaling, which gives us only a 20% improvement in performance every technology generation!! Explicit parallelism is the only way to get us back on the 52% performance gain per generation.
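
To see how quickly those two trends diverge, here's a quick back-of-the-envelope script. The 52% and 20% rates are the ones quoted in the talk; everything else, including the ten-generation horizon, is my own choice for illustration:

```python
# Compounding 52%/generation (the historical trend) vs. 20%/generation
# (gate-delay scaling alone). The two rates are from the talk; the
# ten-generation horizon is an arbitrary choice for illustration.

def compound(rate, generations):
    """Relative throughput after compounding `rate` per generation."""
    return (1.0 + rate) ** generations

for gen in (2, 4, 6, 8, 10):
    historical = compound(0.52, gen)
    scaling_only = compound(0.20, gen)
    print(f"after {gen:2d} generations: {historical:6.1f}x vs. "
          f"{scaling_only:4.1f}x (gap: {historical / scaling_only:4.1f}x)")
```

After ten generations the gap is already an order of magnitude, which I take to be the crux of the "denial" argument.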

From a power efficiency standpoint, moving data even 1 mm across a chip apparently requires many times more energy than the floating point operation that produced the data. Spatial locality is therefore the key to energy efficiency, making it imperative to store data close to where it is produced. This provides the motivation for distributed caches across a chip, as opposed to a single big one.
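
To get a feel for the locality argument, here's a toy calculation. I should stress that every constant below is a placeholder I made up (the talk didn't hand out numbers, and the real figures depend heavily on the process node), but the shape of the trade-off - wire energy grows linearly with distance while compute energy stays fixed - is the point:

```python
# Toy comparison of compute energy vs. on-chip data movement energy.
# ALL constants are made-up placeholders, not figures from the keynote;
# real values depend strongly on the process node and circuit design.

FP_OP_PJ = 10.0           # assumed energy of one 64-bit FP op (pJ)
WIRE_PJ_PER_BIT_MM = 0.5  # assumed wire energy per bit per mm (pJ)
WORD_BITS = 64

def move_energy_pj(distance_mm):
    """Energy to ship one 64-bit word `distance_mm` across the die."""
    return WIRE_PJ_PER_BIT_MM * WORD_BITS * distance_mm

for mm in (1, 5, 10, 20):
    ratio = move_energy_pj(mm) / FP_OP_PJ
    print(f"move 64 bits {mm:2d} mm: {move_energy_pj(mm):6.1f} pJ "
          f"= {ratio:5.1f}x one FP op")
```

With these (made-up) numbers, a result that travels across a 20 mm die costs over sixty times the operation that produced it, which is exactly why a flat cache hierarchy starts to look wasteful.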

All of this points the way to the kind of massively parallel GPU systems that NVIDIA produces as the platform of choice for the future - I would have been surprised if the VP of NVIDIA came up with any other conclusion (not that I disagree entirely with the conclusion)! Now I have a few quibbles with the argument that Prof. Dally presented, chiefly that he really only compared large unicore systems (from way back when) with massively parallel GPUs, but not with the 4- or 8-core systems that Intel or AMD sell. Also, exposing parallelism to the programmer as opposed to having the h/w extract it brings to the fore questions about how it will affect programmer productivity and the time required to debug and verify parallel programs. I'm not a s/w guy, but I imagine parallel programs are significantly more complicated to debug/verify than single-threaded ones.

What was really nice is that the talk also focused on the synergistic relationship between EDA and the types of massively parallel platforms that were described, i.e., how EDA benefits from such architectures and what EDA tools/vendors can do to aid the design of such systems.

A number of companies exhibiting at DAC have already embraced the former by offering EDA tools that have been architected for parallel execution. With regard to what EDA tools can do for NVIDIA GPUs, Prof. Dally pointed towards the need for tools that provide accurate power estimates at early stages in the design process and, in general, the need for more sophisticated low power design methodologies.

All in all, it was an hour well spent!

What to Watch out for at DAC Today

Today's one of those days where I wish I could be at two places at one time, since there are so many really interesting talks to attend. I have a feeling I'm going to be doing a lot of shuttling back and forth between sessions today :).

The big attraction today is the Keynote presentation by Bill Dally from NVIDIA/Stanford (Gateway Ballroom, 11:15 AM).

Nonetheless, I'm hoping that, as opposed to the last keynote talk, Prof. Dally's will have more technical details and less "let's-all-work-together" soft content. There's been a lot of talk recently about how massively parallel GPUs will be the general-purpose compute platform of the future, and the synopsis of the talk seems to be pushing, not surprisingly, the same theme. What I like is the focus on how the EDA industry can contribute to the design of such massively parallel platforms.

UPDATE:

I made a huge mistake. Apparently, the Special Session on green technologies is NOT today as I previously blogged, but tomorrow. So there is, in fact, no scheduling conflict between the Keynote and the Special Session.

Tuesday, July 28, 2009

Tuesday Wrap-up

The day kicked off with the keynote speech by Dr. Fu-Chieh Hsu from TSMC. It's always interesting to hear from folks who don't necessarily come directly from the EDA industry, or as in this case, even from immediate customers of EDA tools (although I suppose a case could be made that TSMC is an EDA player also). One of the things I've already heard repeated over and over again at DAC this year is how increasing design complexity is becoming a huge issue, and really the only way out is moving to higher levels of abstraction, wherever possible. So it was a bit surprising to hear Dr. Hsu push the ASIC route, which admittedly comes with its own advantages, without mentioning the tremendous increase in design effort of a custom ASIC solution over more general purpose fabrics. In fact, one of the keynote speakers at ISQED earlier this year had a graph showing that man-hours (is it politically correct to say woman-hours now?) are an increasing fraction of the cost of a semiconductor product.

From the technical sessions that I attended, there were two presentations that really stood out. Both were invited talks from "Special Sessions", and usually what I look for in such talks is ideas for future research directions (sadly, the "Future of EDA" panel didn't really deliver on that front yesterday).

First, Keith Bowman from Intel had a really nice presentation on micro-architectural techniques to deal with increasing process variations. The key idea is to instrument a design with two additional features:
  • Run-time error detection, using, for example, double sampling
  • A mechanism to recover from errors when they occur

Together, these features allow the fabricated die to ignore the built-in design margins and to scale the voltage down to the point where timing constraints are just met. Of course, these ideas are not entirely new, but I thought that some of the future research directions that the speaker alluded to were pretty interesting:

  • Characterizing energy versus probability-of-error trade-offs: Obviously, as the voltage is reduced, more errors will begin to manifest in the circuit, and at some point the overhead of fault recovery starts significantly hurting throughput. From an EDA perspective, can we come up with tools that help characterize this trade-off curve in advance, instead of getting the data from silicon tests after fabrication? (I've sketched a toy version of this trade-off right after this list.)
  • What if we are ready to live with a small probability of error, instead of invoking the fault-recovery mechanism every time? Given this, can we come up with EDA tools that use this information for further timing optimization, or maybe even resynthesize the logic?
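
On the first point, here's a toy model of the trade-off. Everything in it - the error-rate curve, the replay penalty, the voltage range - is my own made-up illustration of the detect-and-recover scheme described above, not data from the talk:

```python
import math

# Toy model of the energy vs. probability-of-error trade-off under
# voltage scaling with error detection and replay-based recovery.
# Every constant here is a made-up illustrative value, not data from
# the talk or from silicon.

V_NOM = 1.0          # nominal, fully margined supply voltage (V)
REPLAY_CYCLES = 10   # assumed recovery penalty per detected error

def error_rate(v):
    """Assumed per-operation error probability: negligible near V_NOM,
    rising steeply as the supply nears the critical point (~0.7 V)."""
    return min(1.0, math.exp(-(v - 0.7) / 0.02))

def energy_per_useful_op(v):
    e_dyn = (v / V_NOM) ** 2                    # dynamic energy ~ V^2
    expected_work = 1.0 + error_rate(v) * REPLAY_CYCLES
    return e_dyn * expected_work                # energy per useful op

candidates = [0.60 + 0.005 * i for i in range(81)]  # 0.60 V .. 1.00 V
best_v = min(candidates, key=energy_per_useful_op)
print(f"minimum energy/op = {energy_per_useful_op(best_v):.2f} "
      f"(normalized to V_NOM) at V = {best_v:.2f} V")
```

An EDA tool that could predict the error_rate curve before tape-out would let designers find that sweet spot at design time, which I think is exactly what the speaker was asking for.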

In the afternoon, Shekhar Borkar, also from Intel (I'm not plugging Intel, I promise!), spoke about design challenges at the 22 nm node. While the talk had a fairly broad scope, and there was plenty of stuff that I'd heard in other venues before, some of the insights were pretty novel:

  • As the number of transistors on a chip increases, it becomes increasingly important to utilize these transistors judiciously and power efficiently. According to the speaker, the best way to do this is to operate logic in the near-threshold voltage regime, where its energy efficiency is greatest, and to perform fine-grained dynamic power management to deal with workload/environmental variations. (See the little energy model after this list for why efficiency peaks near threshold.)
  • The communication network between the components on a die must be scalable, but also power efficient. Scaling standard mesh-based packet-switching NoCs to the 22 nm node may not necessarily be the most power efficient solution, and competing solutions like circuit-switched networks should also be evaluated.
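
For anyone who hasn't run into the near-threshold argument before, here's a toy model of why energy per operation bottoms out a bit above the threshold voltage: dynamic energy falls as V^2, but delay blows up as V approaches Vt, so leakage energy charged over the (longer) operation grows. The constants are illustrative values I picked, not real 22 nm data:

```python
# Toy model of near-threshold energy efficiency. Constants are
# illustrative, not 22 nm silicon numbers: dynamic energy ~ C*V^2,
# delay from a simple alpha-power-law model, and leakage energy
# accumulated over the (longer) operation latency at low voltage.

VT = 0.3  # assumed threshold voltage (V)

def energy_per_op(v, c=1.0, leak=0.05, alpha=1.5):
    dyn = c * v * v                    # dynamic energy per op
    delay = v / (v - VT) ** alpha      # relative operation latency
    static = leak * v * delay          # leakage energy per op
    return dyn + static

voltages = [0.35 + 0.01 * i for i in range(86)]  # 0.35 V .. 1.20 V
best_v = min(voltages, key=energy_per_op)
print(f"most energy-efficient supply ~ {best_v:.2f} V, "
      f"vs. threshold Vt = {VT:.2f} V")
```

With these numbers the minimum lands roughly 150 mV above Vt, where the circuit runs much slower than at nominal voltage - which is presumably why the speaker paired near-threshold operation with fine-grained dynamic power management.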

What's Hot at DAC Today

Glancing through the technical program, here's what I'm particularly excited about today.

1) Special Session on Mechanisms for Surviving Uncertainty in Semiconductor Design (10:30-12:00, Room 133) http://www.dac.com/events/eventdetails.aspx?id=95-2

Dealing with variations and uncertainty in design has been a hot topic at DAC for the last few years, but I think that the focus on micro-architectural and system-level techniques to mitigate process variation, which seems to be the theme of this session, is relatively new. I had a chance to read the papers yesterday night and have lots of questions, so I'm really looking forward to this!

2) Special Session on Dawn of the 22nm Design Era (2:00-4:00, Room 133) http://www.dac.com/events/eventdetails.aspx?id=95-8

Features a bunch of interesting talks. I've attended some of Shekhar Borkar's talks before and they're always provocative and, importantly, provide pointers to future research issues, so I don't plan to miss this one. The other talks are from Andrzej Strojwas (CMU), Kaushik Roy (Purdue) and Carl Anderson (IBM).


3) Technical Session on Design Flexibility (4:30-6:00, Room 124) http://www.dac.com/events/eventdetails.aspx?id=95-18

Features one of the best paper award candidates, intriguingly titled "A Design Origami: Folding Streams in FPGAs." Didn't get a chance to glance over the paper, but hopefully the presentation will be informative.

4) Last but certainly not the least, the SIGDA Ph.D. Forum (http://www.sigda.org/daforum/) will be held between 6:00-7:30 in Room 134. This is an opportunity for senior or recently graduated Ph.D. students to talk about their research and get feedback from other students, faculty and industry. I was exhibiting my research at this event last year and got some really great feedback, so I hope to return the favor this year :).

Monday, July 27, 2009

Updates from the DAC CEO Panel

The CEO Panel on the "Future of EDA" drew a full house tonight, featuring the CEOs of the three leading EDA companies - Synopsys, Cadence and Mentor Graphics. As a disclaimer, I have to confess that, being an academic, I attended the panel hoping to hear more technical details than economics, and though there was more of the latter, there was enough of the former to whet my appetite.

The "Future of EDA" would have been a perfectly relevant topic for a panel discussion even without the looming elephant in the room, but of course, the Recession ended up being the focus of a lot of questions today. One of the key themes that emerged from the discussion, and one that was repeated multiple times, was that economic recessions also happen to be times when groundbreaking innovation tends to happen; apparently the iPhone (or the iPod?) was developed during the 2001 dotcom bust!

The other important theme, also relating to the R word, was that in tough economic conditions, EDA customers are looking to reduce design efforts and time-to-market and that this provides an opportunity for EDA vendors, particularly start-ups, to offer new solutions that help meet these needs.

For me, the two most interesting questions of the evening were the ones that more directly addressed the CEOs' visions of how they see EDA evolving in the future.

The first and the more technical of the two was about the feasibility of 3D integration technology and what it means for the EDA industry. While all three panelists sounded cautiously optimistic about the prospects of vertical integration, Walden Rhines from Mentor Graphics suggested that even though upgrading EDA tools to support 3D integration is inevitable, EDA companies are unlikely to see additional profits from this upgrade, since customers might be unwilling to pay extra for the additional feature. Now I don't know if I completely agree with this - it seems to me that 3D integration offers a number of opportunities that are unique to the technology, and EDA solutions that address them can't really be thought of as "upgrades", though I presume that at some point this becomes more of a marketing issue than a technical one!

The other interesting question was about the possibility of EDA vendors venturing outside the semiconductor industry and offering solutions for a more diverse range of customers. This is something that has been a fairly popular theme of discussion in academia as well, and a number of EDA researchers have branched out into allied areas (check out this guest editorial from DAC last year: http://www.dac.com/newsletter/shownewsletter.aspx?newsid=10). Again, I think that all three panelists agreed that there were opportunities outside of the semiconductor business, though there seemed to be some disagreement about how far out EDA should venture - the suggestions ranged from providing solutions for the biomedical industry to the electric power industry (which is arguably slightly closer to home). I really do think that there is tremendous potential here, but then again, I wonder how many other communities (applied mathematics, operations research, ...) are equally convinced that their bread-and-butter techniques would solve the world's problems?

DAC Exhibits

Had the chance to do a very quick tour of the DAC exhibits area during lunch. There are clearly fewer booths than last year, which also means fewer freebies :(, but there seemed to be a lot of interesting products and tools on display that I'm looking forward to checking out later during the conference.

Sunday, July 26, 2009

Just checked into the hotel and very excited about my first day at DAC tomorrow. For the most part, I will be attending the Young Faculty Workshop which, if I'm not mistaken, is being held for the first time this year. The basic idea is to get senior Ph.D. students and industry researchers looking to get into academia acquainted with both the application process and what to expect at the beginning of an academic career.

What I like is that the workshop promises to cover a fairly broad range of topics that include not only the job application process, but also information and tips about the tenure process, getting funding and balancing research with teaching! For more information check out http://kona.ee.pitt.edu/yfws/Young_Faculty_Workshop-DAC_2009.pdf

Also on the cards tomorrow is the CEO panel that features talks on the "Future of EDA" from Synopsys, Cadence and Mentor Graphics. I've heard a lot of talk at other EDA conferences and also last year at DAC about the grim future for the EDA industry, so it will certainly be enlightening to hear from these guys what they think!

For now, I'm going to retire to the bar at the top floor of the hotel, which I hear has a great view of the city :).

Thursday, July 23, 2009

Introductions

Welcome to the new DAC 2009 Blog! Let me begin by introducing myself - I'm a final year Ph.D. student in the ECE Department over at Carnegie Mellon University in Pittsburgh, PA, and I'll be keeping you posted on the latest and greatest at the conference all through next week as part of DAC's new cyberpublicity thrust.

In the meantime, check out DAC 2009 on Twitter at http://twitter.com/46DAC