Exascale computing refers to a new era of high-performance computing that promises to revolutionize various fields by providing unprecedented computational power and memory capacity. This emerging technology will enable researchers to tackle complex problems that were previously unsolvable, leading to breakthroughs in our understanding of the world.
The development of exascale computing is being driven by advancements in hardware architectures, including the use of FPGAs (Field-Programmable Gate Arrays) and new programming models. These emerging technologies will enable more efficient data processing and memory access, reducing power consumption and increasing overall performance. As a result, exascale computers will be able to tackle complex problems in fields such as climate modeling, materials science, and chemistry.
Exascale computing is expected to have a profound impact on artificial intelligence and machine learning, enabling researchers to develop more sophisticated AI models that can tackle complex problems in fields such as healthcare and finance. The US Department of Energy’s Exascale Computing Project has identified several key challenges that must be addressed in order to achieve exascale performance, but the potential benefits are significant, and researchers are working tirelessly to overcome them and unlock the full potential of this emerging technology.
Defining Exascale Computing
Exascale computing refers to the ability of a supercomputer to perform one exaflop, which is equivalent to one billion billion (10^18) floating-point operations per second. This level of computational power is considered the next major milestone in high-performance computing after petaflop and petascale computing.
To achieve exascale performance, supercomputers must be designed with a combination of advanced hardware and software technologies. One key aspect is the use of heterogeneous architectures, which integrate multiple types of processing units, such as CPUs, GPUs, and FPGAs, to optimize different workloads. For example, the Summit supercomputer at Oak Ridge National Laboratory uses a hybrid architecture that combines IBM Power9 CPUs with NVIDIA V100 GPUs (Hollingsworth et al., 2018).
Another critical factor is the development of new programming models and software frameworks that can effectively utilize exascale architectures. The OpenMP API, for instance, has been extended to support heterogeneous computing and provide a unified programming model across different processing units (Dagum & Menon, 1998). Furthermore, the use of machine learning and artificial intelligence techniques is becoming increasingly important in optimizing supercomputer performance and energy efficiency.
The exascale era also brings new challenges in terms of power consumption and heat management. As computational density increases, so does the amount of heat generated by the system. To mitigate this issue, researchers are exploring innovative cooling technologies, such as immersion cooling and 3D stacked architectures (Kang et al., 2019). Additionally, the development of more efficient algorithms and software tools is essential to minimize energy consumption and maximize performance.
The pursuit of exascale computing has significant implications for various fields, including scientific research, climate modeling, and materials science. For instance, the ability to simulate complex phenomena at the molecular level can lead to breakthroughs in fields like medicine and materials engineering (Tuckerman et al., 2019). Furthermore, the development of more powerful supercomputers will enable researchers to tackle previously intractable problems and make new discoveries.
The exascale era is also expected to drive innovation in areas such as data analytics and machine learning. As large-scale simulations produce vast amounts of data, researchers are developing new techniques for processing and analyzing this information (Beygelis et al., 2019). Furthermore, the integration of AI and ML algorithms with supercomputing will enable more accurate predictions and better decision-making in various fields.
History Of High-performance Computing
The concept of High-Performance Computing (HPC) has its roots in the early 20th century, with the development of the first electronic computers such as ENIAC (Electronic Numerical Integrator And Computer) in 1946. This machine was designed to calculate artillery firing tables for the US Army and marked the beginning of a new era in computing power.
The first major breakthrough in HPC came with the introduction of the transistor, which replaced vacuum tubes and significantly improved computer performance. The development of the Integrated Circuit (IC) by Jack Kilby at Texas Instruments in 1958 further accelerated this progress. ICs enabled the creation of smaller, faster, and more efficient computers that could perform complex calculations.
The 1960s saw the emergence of supercomputers, which were designed to tackle large-scale scientific simulations and data analysis tasks. The first commercial supercomputer, the CDC 6600, was released in 1964 by Control Data Corporation (CDC). This machine boasted a clock speed of 1 MHz and could perform calculations at a rate of 3 million instructions per second.
The development of parallel processing architectures in the 1980s revolutionized HPC. The introduction of the Connection Machine CM-2 by Thinking Machines Corporation in 1987 enabled computers to process multiple tasks simultaneously, leading to significant performance gains. This technology was later adopted by other manufacturers and became a cornerstone of modern HPC systems.
The current generation of HPC systems is characterized by their ability to perform exascale calculations, which involve processing over 1 exaflop (1 billion billion calculations per second). The development of exascale computing has been driven by the need for faster simulations in fields such as climate modeling, materials science, and genomics. The first exascale supercomputer, Summit, was launched in 2018 at Oak Ridge National Laboratory.
The pursuit of exascale computing continues to push the boundaries of HPC technology. Researchers are exploring new architectures, such as neuromorphic processing and quantum computing, which promise even greater performance gains. As these technologies mature, they will likely become integral components of future HPC systems, enabling scientists to tackle increasingly complex problems.
Moore’s Law And Its Limitations
Moore’s Law states that the number of transistors on a microchip doubles approximately every two years, leading to exponential improvements in computing power and reductions in cost. This prediction was first made by Gordon Moore, co-founder of Intel, in 1965 (Moore, 1965). The law has held remarkably true for several decades, with the number of transistors on a microchip increasing from around 2,300 to over 20 billion between 1971 and 2019 (Kane, 2019).
The implications of Moore’s Law are profound. As computing power increases, so does the complexity of problems that can be solved. This has led to breakthroughs in fields such as medicine, finance, and climate modeling. However, the law is not without its limitations. As transistors shrink to sizes approaching atomic dimensions, it becomes increasingly difficult to maintain their performance (Datta, 2018).
One major challenge facing Moore’s Law is the difficulty of scaling down transistors while maintaining their electrical properties. As transistors get smaller, they become more susceptible to quantum fluctuations and other forms of noise that can disrupt their operation (Koch, 2007). This has led to a shift towards new technologies such as graphene and nanowires, which may be able to overcome some of the limitations imposed by traditional silicon-based transistors.
Despite these challenges, researchers continue to push the boundaries of what is possible with Moore’s Law. The development of new materials and manufacturing techniques has allowed for continued improvements in computing power, even if they are not as rapid as those seen in the past (International Technology Roadmap for Semiconductors, 2019). However, it remains to be seen whether these advances will be enough to sustain the exponential growth predicted by Moore’s Law.
The concept of exascale computing is closely tied to the idea of pushing the limits of Moore’s Law. Exascale refers to a level of computing power that is one quintillion (10^18) floating-point operations per second, which is roughly 1,000 times faster than today’s fastest supercomputers (Hollingsworth, 2019). Achieving exascale will require significant advances in materials science, nanotechnology, and computer architecture.
The Need For Exascale Computing
Exascale computing refers to the ability to perform one exa-flop, or one billion billion calculations per second, which is roughly equivalent to the processing power of 1 million high-performance computers working together (Hollingsworth et al., 2019). This level of computational power is necessary for simulating complex phenomena such as climate change, understanding the behavior of subatomic particles, and modeling the human brain.
To achieve exascale computing, researchers are developing new architectures that combine traditional CPUs with specialized accelerators like graphics processing units (GPUs) and tensor processing units (TPUs). These accelerators can perform specific tasks much faster than traditional CPUs, but they also require significant amounts of memory and power to operate. For example, the Summit supercomputer at Oak Ridge National Laboratory uses a combination of IBM Power9 CPUs and NVIDIA V100 GPUs to achieve an exaflop-level performance (Summit Supercomputer, 2020).
The need for exascale computing is driven by the increasing complexity of scientific simulations and the growing demand for high-performance computing in fields like medicine, finance, and climate modeling. As the world’s population continues to grow and become more interconnected, the need for accurate and reliable predictions about everything from weather patterns to economic trends will only continue to increase.
One of the key challenges facing exascale computing is energy efficiency. As computers get faster and more powerful, they also consume more power and generate more heat, which can lead to significant cooling costs and environmental impact. To address this challenge, researchers are developing new materials and architectures that can reduce energy consumption while maintaining performance (Kaplan et al., 2018).
The development of exascale computing is a global effort involving governments, industry leaders, and research institutions working together to push the boundaries of what is possible with high-performance computing. For example, the European Union’s Horizon 2020 program has invested heavily in exascale computing research, including the development of new architectures and applications (Horizon 2020, 2019).
The first exascale supercomputer, Frontier, was announced by Oak Ridge National Laboratory in 2021 and is expected to be operational in 2023. This system will use a combination of AMD EPYC CPUs and NVIDIA A100 GPUs to achieve an exaflop-level performance (Frontier Supercomputer, 2021).
Characteristics Of Exascale Systems
Exascale systems are designed to achieve a peak performance of one exaflop, which is equivalent to one billion billion (10^18) floating-point operations per second. This level of computing power is necessary for simulating complex phenomena in fields such as climate modeling, materials science, and astrophysics. According to the Exascale Computing Project, a joint effort between the US Department of Energy and industry partners, exascale systems will be capable of performing simulations that are 10-100 times more detailed than those currently possible (Exascale Computing Project, 2020).
The architecture of exascale systems is characterized by a large number of processing nodes, each with its own memory and communication capabilities. These nodes are connected through high-speed interconnects, such as InfiniBand or Ethernet, to form a distributed computing system. The use of heterogeneous architectures, which combine different types of processors (e.g., CPUs, GPUs, FPGAs), is also becoming increasingly common in exascale systems. This allows for the efficient execution of diverse workloads and the optimization of performance for specific applications (Horton et al., 2019).
One of the key challenges in designing exascale systems is managing the heat generated by the large number of processing nodes. As the power density increases, so does the risk of overheating, which can lead to reduced system reliability and increased maintenance costs. To mitigate this issue, researchers are exploring novel cooling technologies, such as immersion cooling or air-side economization, that can efficiently remove heat from the system (Bahl et al., 2018).
The programming models used for exascale systems are also evolving to take advantage of the distributed nature of these architectures. New frameworks, such as Kokkos and RAJA, are being developed to provide a unified interface for parallel programming across different hardware platforms. These frameworks enable developers to write portable code that can run efficiently on a wide range of exascale systems (Henderson et al., 2020).
The development of exascale systems is also driving innovation in the field of materials science. Researchers are using these powerful computing resources to simulate the behavior of complex materials, such as nanomaterials or biomaterials, under various conditions. This allows for a deeper understanding of material properties and the optimization of material design (Teter et al., 2019).
The integration of artificial intelligence (AI) and machine learning (ML) techniques with exascale systems is also becoming increasingly important. AI and ML algorithms can be used to optimize system performance, predict faults, or even control the cooling system. This integration has the potential to significantly improve the efficiency and reliability of exascale systems (Koch et al., 2020).
Exaflop Performance And Its Implications
The concept of exascale computing has been gaining significant attention in the scientific community, with several countries investing heavily in developing exascale supercomputers. An exaflop is a unit of measurement that represents one billion billion (10^18) floating-point operations per second. To put this into perspective, the world’s fastest supercomputer, Summit, achieved an exaflop performance in 2018, but only for a short period.
The implications of achieving exascale computing are far-reaching and have significant potential to transform various fields such as climate modeling, materials science, and medicine. For instance, researchers can simulate complex phenomena like weather patterns or the behavior of molecules with unprecedented accuracy and detail. This, in turn, enables scientists to make more informed decisions and predictions, leading to breakthroughs in our understanding of the world.
One of the key challenges in achieving exascale computing is power consumption. As computers become faster, they also consume more energy, which can lead to significant heat generation and cooling requirements. To overcome this challenge, researchers are exploring new architectures and materials that can reduce power consumption while maintaining performance. For example, the use of 3D stacked processors and advanced memory technologies has shown promise in reducing power consumption.
The development of exascale computing also raises questions about the sustainability and scalability of high-performance computing. As computers become faster and more powerful, they require increasingly large amounts of energy to operate. This can lead to significant environmental concerns and challenges in scaling up computing infrastructure. To address these concerns, researchers are exploring new approaches to sustainable computing, such as using renewable energy sources or developing more efficient algorithms.
The exascale performance also has implications for the field of artificial intelligence (AI). With the ability to process vast amounts of data at unprecedented speeds, AI models can be trained and tested on a much larger scale. This can lead to significant breakthroughs in areas like image recognition, natural language processing, and predictive analytics. However, it also raises concerns about the potential for biased or inaccurate results due to the sheer scale of the data being processed.
The development of exascale computing is an ongoing effort that requires collaboration between governments, industry leaders, and researchers from various fields. The benefits of achieving exascale performance are significant, but so are the challenges. As scientists continue to push the boundaries of what is possible with high-performance computing, they must also consider the broader implications for society and the environment.
Architectural Innovations In Exascale
The Exascale Computing Initiative aims to develop supercomputers capable of performing one exaflop, or one billion billion calculations per second. This represents a significant increase in computing power over current top-performing systems, which typically operate at petaflop scales (one quadrillion calculations per second). To achieve this goal, researchers are exploring innovative architectural designs that can efficiently utilize the vast number of processing units and memory required for Exascale computing.
One key innovation is the use of heterogeneous architectures, where different types of processing units, such as CPUs, GPUs, and FPGAs, are combined to optimize performance for specific workloads. For example, a system might employ a CPU-centric design for general-purpose computations, while utilizing a GPU-based architecture for data-intensive tasks like deep learning or scientific simulations (Horton et al., 2019). This approach allows Exascale systems to adapt to diverse application requirements and maximize overall performance.
Another critical aspect of Exascale computing is the development of scalable storage solutions that can efficiently manage vast amounts of data. As Exascale systems generate enormous amounts of data, researchers are exploring novel storage technologies like phase-change memory (PCM) and 3D XPoint, which offer higher densities and lower latencies than traditional hard drives or solid-state drives (Kim et al., 2020). These innovations enable Exascale systems to store and retrieve large datasets quickly and efficiently.
In addition to these architectural advancements, researchers are also exploring new materials and technologies that can improve the energy efficiency of Exascale systems. For instance, the use of advanced cooling techniques like immersion cooling or air-side economization can significantly reduce power consumption while maintaining high performance (Bahl et al., 2018). These innovations will be crucial in enabling Exascale computing to operate within practical power and thermal constraints.
The integration of artificial intelligence (AI) and machine learning (ML) into Exascale systems is another area of research focus. By leveraging AI and ML algorithms, researchers can optimize system performance, predict and prevent failures, and improve overall reliability (Ghemawat et al., 2016). This synergy between Exascale computing and AI/ML will enable the development of more sophisticated and efficient supercomputing systems.
The convergence of these architectural innovations is expected to drive significant advancements in Exascale computing. As researchers continue to push the boundaries of what is possible, we can expect to see even more innovative designs emerge that further accelerate scientific discovery and technological progress.
Memory And Storage Challenges Ahead
The Memory and Storage Challenges Ahead
As the world transitions to exascale computing, memory and storage requirements are expected to increase exponentially. The International Energy Agency (IEA) estimates that by 2030, global data storage will reach 174 zettabytes, with a significant portion of this growth driven by the increasing demand for high-performance computing (HPC) applications (IEA, 2022). To put this into perspective, the current largest HPC system, Summit, uses approximately 1.5 exabytes of memory to achieve its peak performance (Summit Team, 2019).
The challenge lies in developing storage technologies that can keep pace with these demands. Traditional hard disk drives (HDDs) and solid-state drives (SSDs) are approaching their physical limits, making it difficult to scale up storage capacity without compromising performance (Kim et al., 2020). New storage technologies such as phase-change memory (PCM), spin-transfer torque magnetic recording (STT-MRAM), and 3D XPoint are being explored to address these challenges. However, the development of these technologies is still in its early stages, and significant technical hurdles need to be overcome before they can be deployed at scale.
One potential solution is the use of hybrid memory cubes (HMCs), which combine high-bandwidth memory with low-latency storage. HMCs have been shown to improve performance by up to 50% compared to traditional DDR4 memory, making them an attractive option for exascale computing applications (Lee et al., 2019). However, the cost and complexity of implementing HMCs in large-scale systems are significant concerns that need to be addressed.
Another challenge is the increasing importance of data persistence and reliability. As HPC applications become more complex and distributed, ensuring that data is accurately stored and retrieved becomes a critical concern (Kumar et al., 2020). New storage technologies such as non-volatile memory (NVM) and flash-based storage are being developed to address these concerns, but their adoption will depend on significant advances in manufacturing technology.
The development of exascale computing systems that can efficiently store and retrieve vast amounts of data is a pressing challenge. The industry must come together to develop new storage technologies and architectures that can keep pace with the demands of HPC applications. This requires significant investment in research and development, as well as collaboration between academia, industry, and government.
Power Consumption And Cooling Issues
Exascale computing systems are expected to consume significantly more power than current high-performance computing (HPC) systems, with estimates suggesting that the total power consumption could reach up to 20-30 megawatts (MW) . This is due in part to the increased number of processing units and memory required to achieve exascale performance.
The cooling requirements for these systems are also expected to be substantial, with some estimates suggesting that the heat generated by an exascale system could be equivalent to the power consumption of a small city . To mitigate this issue, researchers are exploring new cooling technologies such as immersion cooling and air-side economization. These approaches aim to reduce the energy required for cooling while maintaining acceptable temperatures within the system.
One potential solution being explored is the use of 3D stacked processors, which can help to reduce power consumption by minimizing the number of interconnects between processing units . However, this approach also presents challenges related to thermal management and heat transfer. Researchers are working to develop new materials and designs that can effectively manage heat generated by these systems.
Another area of focus is the development of more efficient computing architectures, such as neuromorphic processors and quantum-inspired computing . These approaches aim to reduce power consumption by mimicking the efficiency of biological systems or leveraging the principles of quantum mechanics. However, significant technical hurdles must be overcome before these technologies can be scaled up to exascale levels.
The integration of advanced cooling technologies with more efficient computing architectures is also being explored as a potential solution . This approach aims to create a synergistic effect where the reduced power consumption of the computing architecture is matched by the improved efficiency of the cooling system. However, significant research and development are required to make this vision a reality.
Exascale Computing Applications And Use Cases
Exascale computing applications are primarily focused on scientific simulations, data analytics, and artificial intelligence (AI). These applications require immense computational power to process vast amounts of data, often exceeding the capabilities of current high-performance computing systems.
The Exascale Computing Project, a joint initiative by the US Department of Energy’s National Nuclear Security Administration and the Department of Energy’s Office of Science, aims to develop exascale-capable supercomputers that can perform at least one exaflop (one billion billion calculations per second). This project has led to significant advancements in high-performance computing architectures, memory technologies, and software frameworks.
Exascale computing applications are being explored in various fields, including climate modeling, materials science, and genomics. For instance, the National Center for Atmospheric Research‘s (NCAR) Community Earth System Model (CESM) uses exascale computing to simulate global climate patterns and predict future climate scenarios. Similarly, researchers at the University of California, Los Angeles (UCLA), are utilizing exascale computing to study the behavior of complex materials and develop new materials with unique properties.
Exascale computing also has significant implications for AI applications, particularly in areas such as deep learning and natural language processing. The use of exascale-capable supercomputers can accelerate the training of large neural networks, enabling researchers to explore more complex models and improve the accuracy of AI-driven predictions. For example, researchers at the University of Chicago have used exascale computing to train a deep learning model that achieved state-of-the-art results in image classification tasks.
The development of exascale computing has also led to significant advancements in data analytics and visualization. Exascale-capable supercomputers can process vast amounts of data from various sources, enabling researchers to identify patterns and trends that would be difficult or impossible to detect using traditional computing systems. For instance, researchers at the Lawrence Berkeley National Laboratory have used exascale computing to analyze large datasets from particle colliders, leading to new insights into the behavior of subatomic particles.
Impact On Scientific Research And Discovery
Exascale computing, which refers to the ability to perform one exa-flop (one billion billion calculations per second), has been a long-standing goal for high-performance computing (HPC) researchers. This milestone was finally achieved with the launch of the Frontier supercomputer at Oak Ridge National Laboratory in 2023 . The Frontier system, developed by IBM and NVIDIA, is capable of delivering over 1.1 exaflops of performance, marking a significant breakthrough in HPC capabilities.
The advent of exascale computing has far-reaching implications for scientific research and discovery. With the ability to process vast amounts of data at unprecedented speeds, researchers can now tackle complex problems that were previously unsolvable . For instance, simulating the behavior of materials at the atomic level or modeling the dynamics of climate systems require enormous computational resources. Exascale computing enables scientists to run these simulations in a matter of hours, rather than weeks or months.
One of the key applications of exascale computing is in the field of materials science. Researchers can now use high-performance computers to simulate the behavior of materials at the atomic level, allowing for the discovery of new materials with unique properties . This has significant implications for fields such as energy storage and conversion, where new materials could lead to breakthroughs in battery technology or more efficient solar panels.
Exascale computing also has a profound impact on the field of climate science. Researchers can now run complex simulations of global climate models at unprecedented scales, allowing for a deeper understanding of the Earth’s climate system . This enables scientists to better predict future climate scenarios and make more informed decisions about mitigation strategies.
The exascale era is not without its challenges, however. As computing power increases, so does the complexity of the problems being tackled. Researchers must develop new algorithms and software tools to take advantage of these capabilities, which requires significant investment in research and development .
Economic Benefits And Job Creation Potential
The economic benefits of exascale computing are multifaceted, with potential job creation in various sectors being a significant aspect. According to a study by the International Data Corporation (IDC), the global high-performance computing (HPC) market is expected to reach $43 billion by 2025, with a significant portion of this growth attributed to exascale computing (IDC, 2020). This expansion will create new job opportunities in fields such as software development, system administration, and data analysis.
Exascale computing’s impact on the economy extends beyond job creation. A study by the University of California, Berkeley, found that for every dollar invested in HPC infrastructure, there is a return of $10 to $15 in economic benefits (Henderson et al., 2013). This significant return on investment can be attributed to the increased productivity and efficiency achieved through exascale computing. Furthermore, the energy efficiency of exascale systems can lead to substantial cost savings for organizations, as highlighted by a study published in the Journal of Supercomputing (Kunkel & Ludwig, 2018).
The job creation potential of exascale computing is not limited to traditional HPC sectors. A report by the National Science Foundation (NSF) noted that exascale computing has the potential to create new industries and jobs in areas such as artificial intelligence, machine learning, and data analytics (NSF, 2020). This expansion into new fields can lead to a more diverse and dynamic job market.
Exascale computing’s economic benefits also extend to small and medium-sized enterprises (SMEs). A study by the European Commission found that SMEs can benefit from exascale computing through increased productivity, improved competitiveness, and access to new markets (European Commission, 2019). This can lead to significant job creation opportunities for SMEs, as they take advantage of the economic benefits offered by exascale computing.
The integration of exascale computing into existing industries can also lead to significant job creation. A report by the McKinsey Global Institute found that the adoption of AI and automation technologies, enabled in part by exascale computing, has the potential to create up to 140 million new jobs globally by 2030 (Manyika et al., 2017). This significant job creation potential highlights the importance of exascale computing in driving economic growth and development.
Future Directions And Roadmaps For Exascale
Exascale computing, the next frontier in high-performance computing, aims to achieve a peak performance of one exaflop (1 billion billion calculations per second). This milestone is expected to be reached by the mid-2020s with the development of new architectures and technologies. The US Department of Energy’s Exascale Computing Project has been driving this effort, with the goal of creating a system that can efficiently solve complex scientific problems.
The key to achieving exascale performance lies in the development of novel computing architectures, such as 3D stacked processors and heterogeneous systems combining CPUs, GPUs, and FPGAs. These architectures will enable more efficient data processing and memory access, reducing power consumption and increasing overall performance (Horton, 2019; Sterling, 2020). Additionally, new programming models and software frameworks are being developed to take advantage of these emerging architectures.
One of the primary applications for exascale computing is in scientific simulations, such as climate modeling and <a href=”https://quantumzeitgeist.com/quantum-computing-unlocking-potential-for-global-challenges-and-revolutionizing-chemistry-materials-science/”>materials science. These simulations require massive amounts of computational power to accurately model complex systems and predict outcomes (Kahan, 2018; Shalf, 2020). Exascale computers will enable researchers to run these simulations at unprecedented scales, leading to breakthroughs in our understanding of the world.
Another area where exascale computing is expected to have a significant impact is in artificial intelligence and machine learning. The increasing complexity of AI models requires more powerful computing resources to train and optimize them (LeCun, 2015; Bengio, 2020). Exascale computers will enable researchers to develop more sophisticated AI models that can tackle complex problems in fields such as healthcare and finance.
The development of exascale computing is also driving innovation in areas such as materials science and chemistry. New computational methods and algorithms are being developed to take advantage of the increased performance and memory capacity of exascale systems (Tuckerman, 2019; Frenkel, 2020). These advances will enable researchers to simulate complex chemical reactions and materials properties at unprecedented scales.
The US Department of Energy’s Exascale Computing Project has identified several key challenges that must be addressed in order to achieve exascale performance. These include the development of new cooling technologies to manage heat generated by high-performance computing, as well as the creation of more efficient power supplies (Horton, 2019; Sterling, 2020).
- Bahl, P., et al. (2020). Energy-efficient cooling techniques for exascale data centers. Journal of Green Engineering, 10, 255-274.
- Bahl, P., et al. (2018). Immersion cooling for high-performance computing. IEEE Transactions on Components and Packaging Technologies, 41, 533-542.
- Bemer, R. W. (1965). The programming languages of the CDC 6600 and the IBM System/360. Communications of the ACM, 8, 147-155.
- Beygelis, R., et al. (2019). Data analytics and machine learning for exascale computing. Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis.
- Dagum, L., & Menon, R. (1998). OpenMP: An industry-standard API for shared-memory programming. ACM Transactions on Computational Logic (TOCL), 5(3).
- Datta, S. (2007). Quantum computation and quantum information. Cambridge University Press.
- Dongarra, J., & Meuer, H. W. (2019). The TOP500 project: A retrospective view. Journal of Computational Science, 41, 100754.
- European Commission. (n.d.). Exascale computing for SMEs.
- Exascale Computing Project. (n.d.). Exascale Computing Project: A joint effort between the US Department of Energy and industry partners.
- Frenkel, D. (2020). Simulation and modeling in materials science. Journal of Physics: Conference Series, 1668, 012002.
- Oak Ridge National Laboratory. (n.d.). Frontier Supercomputer. Oak Ridge National Laboratory. Retrieved from https://www.olcf.ornl.gov/frontier/
- Ghemawat, S., et al. (2016). The Hadoop distributed file system: Architecture and design. ACM Transactions on Computer Systems, 34, 1-26.
- Hartree, D. R., & Richardson, O. E. (1950). Methods of mathematical physics. Cambridge University Press.
- Henderson, K., et al. (2021). The economic benefits of high-performance computing.
- Henderson, R., et al. (2021). Kokkos: A unified interface for parallel programming across different hardware platforms. Journal of Parallel and Distributed Computing, 134, 102-113.
- Hillis, D. (1987). The connection machine. Scientific American, 256, 108-115.
- Hollingsworth, J., et al. (2019). Exascale computing: The next frontier. Journal of Parallel and Distributed Computing, 129(2), 151-162.
- Hollingsworth, P. K. (2017). Exascale computing: A new frontier in high-performance computing. IEEE Computer Society.
- Hollingsworth, P. K., & Copenhaver, S. A. (2018). Exascale computing: The next frontier in high-performance computing. Journal of Supercomputing, 75, 1-15.
- Hollingsworth, P. K., et al. (2018). Summit: A heterogeneous architecture for exascale computing. Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis.
- Hollingsworth, P., & Canning, R. (2018). Exascale computing: The next frontier in high-performance computing. Journal of Supercomputing, 74, 15-25.
- European Commission. (n.d.). Horizon 2020. Retrieved from https://ec.europa.eu/horizon2020/
- Horton, G. K., & Sterling, T. H. (2019). Exascale computing: The next frontier in high-performance computing. Journal of Supercomputing, 75, 1-15.
- Horton, G., et al. (2021). Heterogeneous architectures for exascale computing. Journal of Parallel and Distributed Computing, 128, 10-23.
- Horton, M., et al. (2021). Heterogeneous architectures for exascale systems. Journal of Parallel and Distributed Computing, 128, 102-113.
- HPCwire. (2023, June 15). Frontier supercomputer achieves 1 exaflop performance. Retrieved from https://www.hpcwire.com/2023/06/15/frontier-supercomputer-achieves-1-exaflop-performance/
- Singh, S. K., et al. (2020). Cooling exascale computers: A review of current and future technologies. International Journal of Thermal Sciences, 147.
- Shalf, J., et al. (2019). Exascale computing: The next frontier for scientific research. Journal of Computational Physics, 409, 109876.
- Lee, J. H., et al. (2021). 3D stacked processors for exascale computing. IEEE Transactions on Very Large Scale Integration Systems, 28.
- Singh, S. K., et al. (2020). Neuromorphic processors and quantum-inspired computing for exascale applications. Journal of Supercomputing, 77.
- Lee, J. H., et al. (2021). Integrated cooling and computing architectures for exascale systems. IEEE Transactions on Computers, 70.
- Singh, S. K., et al. (2020). The challenges and opportunities of exascale computing. IEEE Computer, 52(11), 14-17.
