Taming the data deluge by enriching AI algorithms with new processors


The vision of A3D3 is to establish a tightly coupled organization of domain scientists, computer scientists and engineers who unite the three components essential to realizing real-time AI to transform science: AI algorithms, computer hardware and scientific applications. Credit: A3D3

An impending data tsunami threatens to overwhelm huge data-rich research projects in fields ranging from tiny neutrinos to supernova blasts, as well as mysteries deep within the brain.

When LIGO picks up a gravitational-wave signal from a distant collision of black holes and neutron stars, a clock starts ticking on capturing the earliest possible light that may accompany it: time is of the essence in this race. Data collected from electrical sensors monitoring brain activity are outpacing computing capacity. Information from the Large Hadron Collider's (LHC's) smashed particle beams will soon exceed 1 petabit per second.

To tackle this approaching data bottleneck in real time, a team of researchers from nine institutions led by the University of Washington, including MIT, received $15 million in funding to establish the Accelerated AI Algorithms for Data-Driven Discovery (A3D3) Institute. From MIT, the research team includes Philip Harris, assistant professor of physics, who will serve as deputy director of the A3D3 Institute; Song Han, assistant professor of electrical engineering and computer science, who will serve as a co-PI of A3D3; and Erik Katsavounidis, a principal investigator at the MIT Kavli Institute for Astrophysics and Space Research.

Infused with this five-year Harnessing the Data Revolution Big Idea grant, jointly funded by the Office of Advanced Cyberinfrastructure, A3D3 will focus on three data-rich fields: multi-messenger astrophysics, high-energy particle physics and brain neuroscience. By enriching AI algorithms with new processors, A3D3 seeks to accelerate AI algorithms enough to solve fundamental problems in collider physics, neutrino physics, astronomy, gravitational-wave physics, computer science and neuroscience.

“I am very excited about the new institute’s research possibilities in nuclear and particle physics,” said Boleslaw Wyslouch, director of MIT’s Laboratory for Nuclear Science. “Modern particle detectors produce an enormous amount of data, and we are looking for extraordinarily rare signatures. Applying extremely fast processors to sift through these mountains of data will make a huge difference in what we measure and discover.”

The seeds for A3D3 were planted in 2017, when Harris and his colleagues at Fermilab and CERN decided to integrate real-time AI algorithms to process the incredible data rates at the LHC. Through an email correspondence with Han, Harris’s team built a compiler, HLS4ML, capable of running an AI algorithm in nanoseconds.

“Before the development of HLS4ML, the fastest AI inference we knew of was about a millisecond, maybe a little faster,” says Harris. “We realized that all AI algorithms had been designed to solve much slower problems, such as image and voice recognition, so we took an approach that was vastly different from what others were doing.”
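For readers who want a concrete picture of that workflow, the sketch below shows how the open-source hls4ml package converts a small neural network into a firmware project for a field-programmable gate array (FPGA). The tiny model, the FPGA part number and the output directory are illustrative assumptions for this sketch, not details drawn from the A3D3 or LHC deployments.

```python
# Minimal sketch of the hls4ml flow: compile a small neural network into an
# FPGA firmware project for low-latency inference. The model architecture,
# FPGA part and directory names are illustrative assumptions, not A3D3 specifics.
import hls4ml
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# A tiny fully connected classifier standing in for a real trigger model.
model = Sequential([
    Dense(32, activation='relu', input_shape=(16,)),
    Dense(5, activation='softmax'),
])

# Derive an hls4ml configuration (fixed-point precision, parallelism) from the model.
config = hls4ml.utils.config_from_keras_model(model, granularity='model')

# Convert the Keras model into a high-level-synthesis project targeting an FPGA.
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir='hls4ml_prj',        # where the generated HLS project is written
    part='xcu250-figd2104-2L-e',    # example FPGA part; substitute your own target
)

# Emulate the fixed-point firmware in software before running synthesis.
hls_model.compile()
# hls_model.build(csim=False)  # runs HLS synthesis; requires the vendor tools installed
```

From there, the generated project can be synthesized with the FPGA vendor toolchain, which is the step that ultimately determines whether the design meets a nanosecond-scale latency budget.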

A few months later, Harris presented his research at a physics faculty meeting, where Katsavounidis became intrigued. Over coffee in Building 7, they discussed combining Harris’s FPGA work with Katsavounidis’ use of machine learning to find gravitational waves. FPGAs and other new classes of processors, such as graphics processing units (GPUs), speed up AI algorithms so they can analyze huge amounts of data faster.

“I had worked with the first FPGAs that came on the market in the early ’90s and witnessed how they revolutionized front-end electronics and data acquisition in the large high-energy physics experiments I was working on at the time,” recalls Katsavounidis. “The possibility of having them analyze gravitational-wave data has been in the back of my mind since I joined LIGO over 20 years ago.”

Two years ago, they received their first grant, and Shih-Chieh Hsu of the University of Washington joined them. The team started the Fast Machine Learning Lab, published around 40 papers on the topic, grew the group to around 50 researchers, and showed the wider community “how to explore a region of AI that hasn’t been explored in the past,” says Harris. “We basically started this without any funding. We have received small grants for various projects over the years. A3D3 represents our first major grant to support this effort.”

“What makes A3D3 so special and suited to MIT is its exploration of a technical frontier, where AI is implemented not in high-level software but in lower-level firmware, reconfiguring individual gates to address the scientific question at hand,” says Rob Simcoe, director of the MIT Kavli Institute for Astrophysics and Space Research and Francis Friedman Professor of Physics. “We are in an era when experiments generate torrents of data. The acceleration gained by tailoring reprogrammable, bespoke computers at the processor level can take real-time analysis of that data to new levels of speed and sophistication.”

Huge data from the Large Hadron Collider

With data rates already exceeding 500 terabits per second, the LHC processes more data than any other scientific instrument on Earth. Its aggregate data rate will soon exceed 1 petabit per second, the highest in the world.

“Through the use of AI, A3D3 aims to perform advanced analyses, such as anomaly detection and particle reconstruction, on all collisions occurring 40 million times per second,” says Harris.

The goal is to find in all this data a way to identify the few collisions out of the 3.2 billion collisions per second that could reveal new forces, explain the formation of dark matter and complete the picture of how fundamental forces interact with matter. Processing all of this information requires a custom computer system capable of interpreting the collider information with ultra-low latencies.

“The challenge of running it across the hundreds of terabits per second in real time is daunting and requires a complete overhaul of the way we design and implement AI algorithms,” says Harris. “With large increases in detector resolution leading to even greater data rates, the challenge of finding just one collision among many will become even more daunting.”
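As a rough sense of scale, the short sketch below turns the rates quoted in this article into per-collision numbers using simple arithmetic; the derived figures are back-of-the-envelope illustrations, not official LHC or A3D3 design parameters.

```python
# Back-of-the-envelope arithmetic using the rates quoted in this article.
# Derived numbers are illustrative only, not official A3D3 or LHC design figures.

bunch_crossing_rate_hz = 40e6    # collisions occur 40 million times per second
collisions_per_second = 3.2e9    # individual collisions per second cited above
current_data_rate_bps = 500e12   # ~500 terabits per second today
future_data_rate_bps = 1e15      # ~1 petabit per second expected soon

# Time available per bunch crossing before the next one arrives.
time_per_crossing_ns = 1e9 / bunch_crossing_rate_hz
print(f"Time between crossings: {time_per_crossing_ns:.0f} ns")      # 25 ns

# Average number of overlapping collisions per crossing.
pileup = collisions_per_second / bunch_crossing_rate_hz
print(f"Average collisions per crossing: {pileup:.0f}")              # ~80

# Data arriving during a single crossing, now and in the near future.
bits_now = current_data_rate_bps / bunch_crossing_rate_hz
bits_future = future_data_rate_bps / bunch_crossing_rate_hz
print(f"Data per crossing today: ~{bits_now / 8e6:.1f} MB")
print(f"Data per crossing soon:  ~{bits_future / 8e6:.1f} MB")
```

In other words, under these quoted rates, every 25 nanoseconds a new batch of overlapping collisions arrives along with megabytes of detector data, which is why the analysis has to live in custom, ultra-low-latency hardware rather than conventional software.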

The brain and the universe

Thanks to advances in techniques such as medical imaging and electrical recordings from implanted electrodes, neuroscience is also collecting ever-larger amounts of data on how neural networks in the brain process responses to stimuli and carry out motor functions. A3D3 plans to develop and implement high-throughput, low-latency AI algorithms to process, organize and analyze massive neural data sets in real time, probing brain function in order to enable new experiments and therapies.

With multi-messenger astrophysics (MMA), A3D3 aims to rapidly identify astronomical events by efficiently processing data from gravitational waves, gamma-ray bursts and neutrinos picked up by telescopes and detectors.

A3D3 also includes a multidisciplinary group of 15 other researchers, including the project lead at the University of Washington, along with Caltech, Duke University, Purdue University, UC San Diego, the University of Illinois Urbana-Champaign, the University of Minnesota, and the University of Wisconsin-Madison. The institute will support neutrino research at IceCube and DUNE and visible-light astronomy at the Zwicky Transient Facility, and it will host deep-learning workshops and boot camps to train students and researchers to contribute to the framework and expand the use of fast AI strategies.

“We have reached a point where the growth of the detector network will be transformative, both in terms of event rates and in terms of astrophysical reach and, ultimately, discoveries,” Katsavounidis said. “‘Fast’ and ‘efficient’ is the only way to fight the ‘weak’ and ‘hazy’ that is out there in the universe, and the way to get the most out of our detectors. On one hand, A3D3 will bring production-scale AI to gravitational-wave physics and multi-messenger astronomy; on the other hand, we aspire to go beyond our immediate fields and become the go-to place across the country for accelerated AI applications in data-driven disciplines.”


More information:
Alec Gunny et al., Hardware accelerated inference for real-time gravitational wave astronomy, arXiv:2108.12430v1 [gr-qc], arxiv.org/abs/2108.12430

Provided by the Massachusetts Institute of Technology


This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site covering current research, innovation and education at MIT.

Citation: Taming the data deluge by enriching AI algorithms with new processors (2021, November 1) retrieved November 1, 2021 from https://techxplore.com/news/2021-11-deluge-enriching-ai-algorithms-processors.html

This document is subject to copyright. Other than fair use for private study or research purposes, no part may be reproduced without written permission. The content is provided for information only.
