
World's Fastest AI Supercomputer Built from 6,159 NVIDIA A100 Tensor Core GPUs
Published on May 31, 2021 at 05:04PM
Slashdot reader 4wdloop shared this report from NVIDIA's blog, joking that maybe this is where all of NVIDIA's chips are going:

It will help piece together a 3D map of the universe, probe subatomic interactions for green energy sources and much more. Perlmutter, officially dedicated Thursday at the National Energy Research Scientific Computing Center (NERSC), is a supercomputer that will deliver nearly four exaflops of AI performance for more than 7,000 researchers. That makes Perlmutter the fastest system on the planet at the 16- and 32-bit mixed-precision math AI uses. And that performance doesn't even include a second phase coming later this year to the system, which is based at Lawrence Berkeley National Lab.

More than two dozen applications are getting ready to be among the first to ride the 6,159 NVIDIA A100 Tensor Core GPUs in Perlmutter, the largest A100-powered system in the world. They aim to advance science in astrophysics, climate science and more.

In one project, the supercomputer will help assemble the largest 3D map of the visible universe to date. It will process data from the Dark Energy Spectroscopic Instrument (DESI), a kind of cosmic camera that can capture as many as 5,000 galaxies in a single exposure. Researchers need the speed of Perlmutter's GPUs to process the dozens of exposures from one night in time to know where to point DESI the next night. Preparing a year's worth of the data for publication would take weeks or months on prior systems, but Perlmutter should help them accomplish the task in as little as a few days. "I'm really happy with the 20x speedups we've gotten on GPUs in our preparatory work," said Rollin Thomas, a data architect at NERSC who's helping researchers get their code ready for Perlmutter.

DESI's map aims to shed light on dark energy, the mysterious physics behind the accelerating expansion of the universe. A similar spirit fuels many projects that will run on NERSC's new supercomputer. For example, work in materials science aims to discover atomic interactions that could point the way to better batteries and biofuels. Traditional supercomputers can barely handle the math required to generate simulations of a few atoms over a few nanoseconds with programs such as Quantum ESPRESSO. But by combining their highly accurate simulations with machine learning, scientists can study more atoms over longer stretches of time. "In the past it was impossible to do fully atomistic simulations of big systems like battery interfaces, but now scientists plan to use Perlmutter to do just that," said Brandon Cook, an applications performance specialist at NERSC who's helping researchers launch such projects.

That's where the Tensor Cores in the A100 play a unique role. They accelerate both the double-precision floating-point math for simulations and the mixed-precision calculations required for deep learning.
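The headline figure of nearly four exaflops refers specifically to that 16-/32-bit mixed-precision arithmetic. For readers curious what such math looks like in practice, below is a minimal sketch of mixed-precision training in PyTorch, a common way this kind of work gets dispatched to A100 Tensor Cores. This is illustrative only, not anything from Perlmutter's actual workloads; the model, tensor sizes and training loop are placeholders.

```python
# A minimal sketch of 16-/32-bit mixed-precision training in PyTorch,
# assuming a CUDA-capable GPU such as an A100 is available.
# Model, sizes and data here are illustrative placeholders.
import torch

device = "cuda"
model = torch.nn.Linear(4096, 4096).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # rescales the loss so FP16 gradients don't underflow

inputs = torch.randn(64, 4096, device=device)
targets = torch.randn(64, 4096, device=device)

for step in range(10):
    optimizer.zero_grad()
    # Ops inside autocast run in 16-bit where it is numerically safe
    # (e.g., the matrix multiplies that map onto Tensor Cores) and
    # stay in 32-bit where it isn't.
    with torch.cuda.amp.autocast():
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)  # unscales gradients, then applies the update
    scaler.update()
```

The mix is the point: the fast 16-bit matrix math runs on the Tensor Cores while 32-bit copies of the weights and loss scaling keep training numerically stable, which is exactly the 16-/32-bit workload the benchmark figure above describes.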

Read more of this story at Slashdot.

