When viewing the Technical Program schedule, the column on the far
right-hand side is labeled "PLANNER." Use this planner to build your own
schedule: once you find an event you want to attend, click the calendar
icon of your choice (Outlook, iCal, or Google Calendar) and the event
will be saved to that calendar. As you select events in this manner, you
will build a personal schedule to guide you through the week.
You can also create your personal schedule with the SC11 app (Boopsie) on your smartphone: select a session you want to attend and "add" it to your plan, repeating until your schedule is complete. All your events will appear under "My Event Planner" on your smartphone.
Prospects for scalable 3D FFTs on heterogeneous exascale systems
SESSION: Research Poster Reception
EVENT TYPE: ACM Student Research Competition Poster, Poster, Electronic Poster
TIME: 5:15PM - 7:00PM
SESSION CHAIR: Bernd Mohr
AUTHOR(S): Kenneth Czechowski, Casey Battaglino, Chris McClanahan, Richard Vuduc
ROOM: WSCC North Galleria 2nd/3rd Floors
ABSTRACT: We consider the problem of implementing scalable three-dimensional fast Fourier transforms with an eye toward future exascale systems composed of graphics co-processors (GPUs) or other similarly high-density compute units. We describe a new software implementation; derive and calibrate a suitable analytical performance model; and use this model to make predictions about potential outcomes at exascale, based on current and likely technology trends. We evaluate the scalability of our software and instantiate models on real systems, including 64 nodes (192 NVIDIA “Fermi” GPUs) of the Keeneland system at Oak Ridge National Laboratory. We use our analytical model to quantify the impact of both inter- and intra-node communication that impedes further scalability. Among various observations, a key prediction is that although inter-node all-to-all communication is expected to be the bottleneck of distributed FFTs, it is actually intra-node communication that may play an even more critical role.
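For readers unfamiliar with why distributed FFTs hinge on all-to-all communication, the following is a minimal single-process sketch of the slab-decomposition pattern such implementations typically use (the poster's actual software is not shown here). Each hypothetical rank FFTs the two axes local to its slab, a global transpose stands in for the MPI all-to-all repartitioning, and the remaining axis is then transformed locally. The function name and `nprocs` parameter are illustrative, not from the poster.

```python
import numpy as np

def distributed_fft3d(x, nprocs):
    """Sketch of a slab-decomposed 3D FFT.

    Each of `nprocs` hypothetical ranks owns a contiguous slab of x
    along axis 0. The repartitioning between the two FFT phases is
    what an MPI all-to-all would perform on a real cluster.
    """
    n = x.shape[0]
    assert n % nprocs == 0, "grid must divide evenly among ranks"
    # Phase 1: each rank FFTs its slab along the two locally
    # contiguous axes (y and z); no communication is needed.
    slabs = [np.fft.fftn(s, axes=(1, 2)) for s in np.split(x, nprocs, axis=0)]
    # "All-to-all": regather and re-split along axis 1 so that axis 0
    # becomes fully local to each rank. This global data exchange is
    # the communication step the abstract identifies as the bottleneck.
    y = np.concatenate(slabs, axis=0)
    slabs = [np.fft.fft(s, axis=0) for s in np.split(y, nprocs, axis=1)]
    # Phase 2 output, reassembled into the full transformed grid.
    return np.concatenate(slabs, axis=1)
```

Because the 1D FFTs along y and z never mix slab rows, and the axis-0 FFT never mixes the axis-1 columns, the result matches `np.fft.fftn(x)` exactly; only the transpose moves data between ranks, which is why its cost dominates at scale.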
Bernd Mohr (Chair) - Juelich Supercomputing Centre