ORNL Snags Part of DOE’s $67M for AI Science Research
The Department of Energy (DOE) has made artificial intelligence a focal point of its upcoming scientific agenda, announcing $67 million in funding for AI research projects spanning government and academic institutions under its AI for Science program. Among the recipients, Oak Ridge National Laboratory (ORNL) has taken a leading role, reflecting its depth of expertise in AI.
The funding aims to drive foundational advances across scientific domains, including scientific machine learning, large language models for high-performance computing, and the automation of laboratory workflows.
Six projects led or co-led by ORNL secured funding:
- ENGAGE: Energy-Efficient Novel Algorithms and Architectures for Graph Learning.
- DyGenAI: Dynamic Generative Artificial Intelligence for Prediction and Control of High-Dimensional Nonlinear Complex Systems.
- SciGPT: Scalable Foundational Model for Scientific Machine Learning.
- Productive AI-Assisted HPC Software Ecosystem.
- Privacy-Preserving Federated Learning for Science: Building Sustainable and Trustworthy Foundation Models.
- Durban: Enhancing Performance Portability in HPC (high-performance computing) Software with Artificial Intelligence.
The projects were selected through a competitive peer review process under the DOE’s Funding Opportunity Announcement (FOA) for Advancements in Artificial Intelligence for Science. Each project is funded for up to three years, allowing ample time for significant development and innovation.
“This announcement is crucial for the lab as we’ve been hearing about AI’s progression for years now,” explained William Godoy, a senior computer scientist at ORNL. “We were constantly pondering on what AI essentially means for high-performance computing (HPC), given the intricate nature of HPC systems.”
For Godoy and his team, the funding supports an exploration of how best to use large language models (LLMs) on systems like Frontier, the first supercomputer to break the exascale barrier.
Godoy observed that after the rise of ChatGPT, an AI-powered chatbot, the national laboratory community began examining how LLMs could serve DOE’s broader mission.
Godoy will use the new funding to collaborate closely with AI and HPC specialists from Lawrence Livermore National Laboratory, the University of Maryland, and Northeastern University. The goal: pinpoint the most effective strategies for building LLMs tailored specifically to HPC.
Pedro Valero Lara, a senior computer scientist at ORNL working alongside Godoy, outlined the potential of these LLMs for programming language translation. Translating legacy HPC Fortran codes into more efficient, modern C++ can improve performance by an order of magnitude. “Simply by translating the code, you can achieve performance improvements,” said Valero Lara. The team’s effort is part of a broader strategy to strengthen support for HPC.
Godoy also emphasized that the work aims to strengthen AI-powered collaboration across the national laboratory ecosystem and to nurture the future HPC workforce, for whom LLMs are an increasingly common learning tool.
“We aim to cultivate synergies across large, multidisciplinary, and complex projects, enhancing our impact collectively,” remarked Godoy. “We’re currently collaborating with the ORNL-led Durban project to capitalize on AI’s immense value for our HPC mission.”
Other projects funded in this DOE round are likewise advancing AI development, reflecting the importance of these investments.
Olivera Kotevska, a research scientist in ORNL’s Computer Science and Mathematics Division who leads the Privacy-Preserving Federated Learning for Science project, highlighted the importance of the support. “This support empowers our team in forging paths in privacy-preserving AI, protecting sensitive scientific data while promoting institutional collaboration,” Kotevska said. By developing sustainable and trustworthy AI solutions, ORNL is advancing both scientific discovery and national security.
Kotevska emphasized that broad-reaching implications are possible through ORNL’s burgeoning leadership in trustworthy AI systems—“benefiting both the lab and the broader scientific community.”
With its long history of AI research, ORNL is well positioned in the rapidly growing AI landscape. Prasanna Balaprakash, ORNL’s director of AI programs and leader of the lab’s AI Initiative, expressed pride that ORNL’s funded projects span all five research areas identified in the FOA.
“These awards stand testament to ORNL’s AI prowess and reinforce its prominent role in AI for science,” Balaprakash stated. Several of the projects, originally nurtured by ORNL’s AI Initiative, aim to deliver secure, trustworthy, and energy-efficient AI solutions to problems of national importance.
ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. That support is central to addressing some of the most pressing challenges of our time.
For more information, visit energy.gov/science.
— Mark Alewine
Note: This content has been slightly modified for clarity and brevity. Mirage.News remains neutral, and all views and information conveyed are solely those of the original authors.