Ideas Wanted For Processing Paradigms That Accelerate Computer Simulations

DARPA invites input on how to speed up computation of the complex mathematics that characterize scientific computing 

Whether designed to predict the spread of an epidemic, understand the potential impacts of climate change, or model the acoustical signature of a newly designed ship hull, computer simulations are an essential tool of scientific discovery. By using mathematical models that capture the complex physical phenomena of the real world, scientists and engineers can validate theories and explore system dynamics that are too costly to test experimentally and too complicated to analyze theoretically. Over the past half century, as supercomputers grew faster and more powerful, such simulations became ever more accurate and useful. But in recent years even the best computer architectures have been unable to keep up with the demand for the kind of simulation processing power needed to handle exceedingly complex design optimization and related problems.

DARPA’s ACCESS RFI seeks new processing paradigms that have the potential to overcome current barriers in computing performance. “Old fashioned” analog approaches may be part of the solution.

“The standard computer cluster equipped with multiple central processing units (CPUs), each programmed to tackle a particular piece of a problem, is just not designed to solve the kinds of equations at the core of large-scale simulations, such as those describing complex fluid dynamics and plasmas,” said Vincent Tang, program manager in DARPA’s Defense Sciences Office. These critical equations, known as partial differential equations, describe fundamental physical principles like motion, diffusion, and equilibrium. But because they involve continuous rates of change across a wide range of physical parameters, and in many cases long-distance interactions as well, they do not lend themselves to being broken up and solved in discrete pieces by individual CPUs. A processor specially designed for such equations could enable revolutionary new simulation capabilities for design, prediction, and discovery. But what might that processor look like?
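As a simple illustration (not drawn from the RFI itself), the diffusion of heat through a material is governed by a partial differential equation of the form

\[ \frac{\partial u}{\partial t} = D\,\nabla^2 u \]

where u might be the temperature at each point and D a diffusion coefficient. Because the rate of change at every point depends continuously on the values at its neighbors, the entire solution evolves together, which is precisely what makes such problems awkward to carve into independent chunks for separate CPUs.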

DARPA is interested in pursuing the somewhat counterintuitive premise that “old fashioned” analog approaches may be part of the solution. Analog computers, which solve equations by manipulating continuously changing values instead of discrete measurements, have been around for more than a century. In the 1930s, for example, Vannevar Bush—who a decade later would help initiate and administer the Manhattan Project—created an analog “differential analyzer” that computed complex integrations through the use of a novel wheel-and-disc mechanism. And in the 1940s, the Norden bombsight made its way into U.S. warplanes, where it used analog methods to calculate bomb trajectories. But in the 1950s and 1960s, as transistor-based digital computers proved more efficient for most kinds of problems, analog methods fell into disuse.
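To give a rough sense of how such a machine computes (a simplified sketch, not a detail from the announcement): the wheel-and-disc integrator at the heart of the differential analyzer mechanically accumulates the relation

\[ z = \int y \, dx \]

with the disc’s rotation tracking the independent variable x, the wheel’s radial position on the disc set by the input y, and the wheel’s accumulated rotation giving the integral z. Chaining several such integrators together, with outputs fed back as inputs, lets the machine solve ordinary differential equations directly in hardware rather than by stepping through discrete arithmetic.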

They haven’t been forgotten, however. And their potential to excel at dynamical problems too challenging for today’s digital processors may now be bolstered by other recent breakthroughs, including advances in microelectromechanical systems, optical engineering, microfluidics, metamaterials, and even approaches that use DNA as a computational platform. It is conceivable, Tang said, that novel computational substrates could exceed the performance of modern CPUs for certain specialized problems, if they can be scaled and integrated into modern computer architectures.

To help it consider how best to support the discovery of such approaches, DARPA has released a Request for Information (RFI) titled Analog and Continuous-variable Co-processors for Efficient Scientific Simulation (ACCESS), available here: http://go.usa.gov/3CV43. The RFI seeks new processing paradigms that have the potential to overcome current barriers in computing performance.

“In general, we’re interested in information on all approaches, analog, digital, or hybrid ones, that have the potential to revolutionize how we perform scientific simulations,” Tang said.

The RFI invites short responses that address the following needs, either singly or in combination:

  • Scalable, controllable, and measurable processes that can be physically instantiated in co-processors for acceleration of computational tasks frequently encountered in scientific simulation
  • Algorithms that use analog, non-linear, non-serial, or continuous-variable computational primitives to reduce the time, space, and communication complexity relative to von Neumann/CPU/GPU processing architectures
  • System architectures, schedulers, hybrid and specialized integrated circuits, compute languages, programming models, controller designs, and other elements for efficient problem decomposition, memory access, and task allocation across multi-hybrid co-processors
  • Methods for modeling and simulation via direct physical analogy (a brief illustration of this idea follows the list)
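A classic example of direct physical analogy, offered here purely as an illustration and not taken from the RFI, is using an electrical network to simulate heat flow. The voltages along a ladder of resistor-capacitor stages obey

\[ C\,\frac{dV_i}{dt} = \frac{V_{i-1} - 2V_i + V_{i+1}}{R} \]

which has the same form as a spatially discretized diffusion equation, so observing the voltages as the circuit evolves amounts to solving the corresponding thermal problem by analogy rather than by numerical computation.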

Technology development beyond these areas will be considered so long as it supports the RFI’s goals. DARPA is particularly interested in engaging nontraditional contributors to help develop leap-ahead technologies in the focus areas above, as well as other technologies that could potentially improve the computational tractability of complex nonlinear systems.

DARPA’s Request for Information, with more details on the Analog and Continuous-variable Co-processors for Efficient Scientific Simulation (ACCESS) effort, is available here: http://go.usa.gov/3CV43. Responses are due by 4:00 p.m. Eastern on April 14, 2015.
