Thursday, August 18, 2011

Taking a Disruptive Approach to Exascale

Early in August, the U.S. Department of Energy’s Office of Science and its Office of Advanced Scientific Computing Research (ASCR) held a workshop called “Exascale and Beyond: Gaps in Research, Gaps in our Thinking” that brought together luminaries from the world of high performance computing to discuss research and practical challenges at exascale.

Given the breadth of discussion packed into the short event, we wanted to highlight a few noteworthy presentations to lend a view into how researchers perceive the coming challenges of exascale computing. While all of the speakers addressed known exascale challenges, most brought their own research and practical experience from large HPC centers to bear.

For instance, Anant Agarwal, MIT professor of Electrical Engineering and Computer Science and director of the university’s Computer Science and Artificial Intelligence Laboratory (CSAIL), asked attendees whether the current approach to exascale computing is radical enough.

Agarwal focused on the targets set by DARPA’s Ubiquitous High Performance Computing (UHPC) program, claiming that the debate has centered on increasing performance while reducing energy, but that the challenges extend well beyond energy alone. Agarwal argues that the other great hurdles lie in programmability and resiliency, and that arriving at solutions for these problems will require “disruptive research.” Such research must confront the fact that getting two of the three big problems right (performance, efficiency and programmability) is relatively “easy,” while getting all three right presents significant challenges.

NVIDIA’s Bill Dally echoed some of Agarwal’s assertions in his presentation, “Power and Programmability: The Challenges of Exascale Computing,” in which he proclaimed the end of historic levels of scaling, citing challenges related to power and code.

In his presentation, Dally claimed that it’s not about the FLOPs any longer; it’s about data movement. And further, it’s not simply a matter of power efficiency as we traditionally think about it; it’s about locality.

Dally argues that “algorithms should be designed to perform more work per unit data movement” and that “programming systems should further optimize this data movement.” He went on to note that architectures need to facilitate data movement by providing an exposed hierarchy and efficient communication.
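To make that idea concrete, here is a minimal sketch (our illustration, not an example from Dally’s talk) of loop blocking in C. The tiled version reuses each block of the operands many times while it is still cache-resident, so the same arithmetic is performed with far less traffic to memory; the tile size B is an assumed value that would be tuned to the cache in practice.

    #include <stddef.h>

    #define B 64  /* assumed tile size; tuned to the cache in practice */

    /* Naive n x n matrix multiply: streams operands through memory with
       little reuse, so data movement grows much faster than useful work. */
    void matmul_naive(size_t n, const double *A, const double *Bm, double *C)
    {
        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; j < n; j++)
                for (size_t k = 0; k < n; k++)
                    C[i*n + j] += A[i*n + k] * Bm[k*n + j];
    }

    /* Tiled multiply: each B x B block is reused roughly B times while it
       sits in cache, performing more FLOPs per byte moved from DRAM. */
    void matmul_tiled(size_t n, const double *A, const double *Bm, double *C)
    {
        for (size_t ii = 0; ii < n; ii += B)
            for (size_t jj = 0; jj < n; jj += B)
                for (size_t kk = 0; kk < n; kk += B)
                    for (size_t i = ii; i < ii + B && i < n; i++)
                        for (size_t j = jj; j < jj + B && j < n; j++)
                            for (size_t k = kk; k < kk + B && k < n; k++)
                                C[i*n + j] += A[i*n + k] * Bm[k*n + j];
    }

Blocking raises the arithmetic intensity from roughly O(1) to O(B) operations per element loaded, which is precisely the “more work per unit data movement” Dally describes.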

In some ways, Dally’s presentation offered exactly the kind of “disruptive” ideas Agarwal called for, rethinking the limits of exascale. Dally’s focus on locality (optimizing data movement rather than FLOPs; optimizing subdivision and fetching paradigms; offering an exposed storage hierarchy with more efficient communication and bulk transfer) is a break from the norm in terms of offering solutions for exascale challenges, and one that generated rich fodder for the presentation, which you can find in detail here.
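One way to picture an exposed storage hierarchy (a hedged analogy of ours, not an architecture from the talk) is a software-managed scratchpad: rather than trusting a transparent cache, the program stages data explicitly with a bulk transfer, computes on it locally, and writes it back in bulk. On real hardware the memcpy calls below would be DMA or bulk-transfer primitives; here they simply stand in for those operations.

    #include <stddef.h>
    #include <string.h>

    #define SCRATCH_DOUBLES 4096  /* assumed scratchpad capacity, in doubles */

    static double scratch[SCRATCH_DOUBLES];  /* stand-in for fast local storage */

    /* Stage a chunk in with one bulk transfer, compute on it locally, and
       write the results back in bulk, one chunk at a time. */
    void scale_in_chunks(double *data, size_t n, double s)
    {
        for (size_t off = 0; off < n; off += SCRATCH_DOUBLES) {
            size_t len = (n - off < SCRATCH_DOUBLES) ? n - off : SCRATCH_DOUBLES;
            memcpy(scratch, data + off, len * sizeof(double));  /* bulk load */
            for (size_t i = 0; i < len; i++)
                scratch[i] *= s;                                /* local compute */
            memcpy(data + off, scratch, len * sizeof(double));  /* bulk store */
        }
    }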

Locality was a hot-button issue at this workshop, drawing a detailed, solution-rich presentation from Allan Snavely, associate director of the San Diego Supercomputer Center and adjunct professor in UCSD’s Department of Computer Science and Engineering.

In his presentation, “Whose Job is it to Find Locality?” Snavely dug deeper into some of the initial concepts Dally put forth. Snavely recognized that people seem to be waiting on “magic” compilers and programming languages to come along, for application programmers to suddenly be rendered flawless, or for machines to simply let users choose how to burn up resources.

He claims that the attitude of “LINPACK has lots of locality, so what’s the problem?” is at the root of the problem, as everyone waits for answers to locality questions to fall out of the sky. In his presentation, Snavely proposes a few solutions, including a new approach to the software stack, found here.

In addition to moving the conversation out of the theoretical and into the realm of actual solutions, Snavely discussed how his UCSD team is currently developing tools and methodologies that can identify locality in applications so that processor frequency can be reduced for effective power savings, and, further, how the team is working on tools that automate the process of inserting “frequency throttling calls” into large-scale applications.
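As a rough illustration of what such an inserted call might look like (a sketch under assumptions, not the UCSD tooling itself), a program on Linux running under the “userspace” cpufreq governor can lower the clock before a memory-bound phase, where extra frequency buys little performance, and restore it afterward. The sysfs path, the frequencies, and the stand-in memory-bound loop are all assumptions for illustration.

    #include <stdio.h>
    #include <stddef.h>

    /* Sketch only: assumes the Linux "userspace" cpufreq governor is active
       and the process may write the sysfs file; tools like the ones Snavely
       describes would insert calls like this automatically. */
    static void set_cpu_khz(long khz)
    {
        FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed", "w");
        if (!f) { perror("cpufreq"); return; }
        fprintf(f, "%ld\n", khz);
        fclose(f);
    }

    /* Stand-in memory-bound phase: a streaming pass over a large array,
       limited by DRAM bandwidth rather than by core frequency. */
    static void stream_pass(double *a, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            a[i] = a[i] * 0.5 + 1.0;
    }

    void run_phase(double *a, size_t n)
    {
        set_cpu_khz(1200000);  /* assumed low speed: core mostly waits on DRAM */
        stream_pass(a, n);     /* memory-bound work loses little performance */
        set_cpu_khz(2400000);  /* assumed nominal speed for compute-bound code */
    }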
