ParTec

JUPITER
Discover Europe’s pioneering exascale computer as the key to accelerating AI research, advancing new technologies and enhancing technological sovereignty

An exascale computer refers to a high-performance computing system capable of executing a billion billion calculations per second (one exaFLOPS). JUPITER will serve as a foundational infrastructure for researching, simulating, and optimizing future technologies such as AI, quantum and neuromorphic computing. Its substantial computational capabilities will empower researchers to tackle some of humanity's greatest challenges, explore novel applications, and advance the development and integration of these cutting-edge technologies into practical and impactful solutions. As a result, it gives Europe more control over its own technology infrastructure, systems and data, strengthening its technological sovereignty.
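
In concrete numbers (this is only a unit conversion, not a new figure from the project):

    $1\ \text{exaFLOPS} = 10^{18}\ \text{FLOP/s} = 10^{9} \times 10^{9}\ \text{floating-point operations per second}$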

Introducing JUPITER

JUPITER (“Joint Undertaking Pioneer for Innovative and Transformative Exascale Research”) is being installed in a specially designed building on the campus of Forschungszentrum Jülich. JUPITER will have three times the computing capability of Europe’s current most powerful supercomputer and will provide the equivalent power of 10 million modern desktop computers. The overall system will occupy the space of about four tennis courts and will use over 260 km of high-performance cabling, allowing it to move data at over 2,000 terabits per second, the equivalent of 11,800 full copies of Wikipedia every second.
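
As a rough plausibility check, the two figures in that comparison imply a Wikipedia copy of roughly 21 GB (a compressed, text-only snapshot; this size is derived from the numbers above, not stated in the source):

    $2{,}000\ \text{Tbit/s} = 2 \times 10^{15}\ \text{bit/s} = 250\ \text{TB/s}, \qquad 250\ \text{TB/s} \div 11{,}800 \approx 21\ \text{GB per copy}$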

The system is financed half by the European supercomputing initiative EuroHPC JU, founded in 2018 (EUR 250 million), and half in equal parts by the Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MKW NRW).

JUPITER is set to achieve a groundbreaking milestone as the first European supercomputer in the exascale class. Boasting computing power surpassing that of 5 million modern laptops or PCs, JUPITER follows in the footsteps of Jülich’s current supercomputer, JUWELS. Both systems share the foundation of a dynamic Modular Supercomputing Architecture (dMSA) developed collaboratively by Forschungszentrum Jülich along with European and international partners through the EU’s DEEP research projects. ParTec is part of this consortium, supplying the MSA-enabling ParaStation Modulo software suite, which allows the system to achieve an outstanding level of computing power with improved energy efficiency compared to JUWELS.

JUPITER's building blocks

As the first exaFLOP system in Europe, JUPITER will be capable of at least one quintillion (10^18) computing operations per second. But what are the building blocks of this system, provided by the ParTec-Eviden supercomputer consortium?

The system consists of three modules that together form a dynamic ensemble.

For a deep dive into JUPITER’s building blocks, we refer to FZ Jülich’s Technical overview.

Why does JUPITER matter?

JUPITER showcases a truly European technology approach. It realises Europe’s ambition to compete on the global supercomputing stage, to leverage AI, and to explore future technologies. Europe’s first exascale computer will help to advance scientific research, drive innovation and foster economic growth. Here are three reasons why we believe JUPITER matters:

Unleashing AI Potential

Empowering Innovation with Exascale Computing

Modularity

Allowing for integration of future technologies including Quantum and Neuromorphic Computing

Sovereignty

Enabling innovation to achieve technological sovereignty

Accelerating AI research and model development

Europe has both the necessary computing performance and the expertise in software development to be innovative in AI. With JUPITER, we will have perhaps the most powerful AI supercomputer in the world!

Prof. Dr. Dr. Thomas Lippert, Director of the Jülich Supercomputing Centre at Forschungszentrum Jülich

Exascale computing serves as a powerful enabler for AI research and model development by offering the computational capabilities necessary to train large models, process big data, optimize hyperparameters, scale workloads, and accelerate innovation across various AI domains:

Training complex models

Exascale computing accelerates AI model training through parallel processing of very large datasets, crucial for deep learning models with billions or trillions of parameters. Its immense computational power significantly reduces training and re-training time, which is particularly advantageous for time-sensitive projects, enabling rapid iteration in model development.
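
To make this more concrete, here is a minimal sketch of data-parallel training, the most common way such workloads are spread across many GPUs. It is illustrative only: it assumes a PyTorch environment launched with torchrun and uses a placeholder model, and it does not depict JUPITER's actual software stack.

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # One process per GPU; torchrun sets RANK, WORLD_SIZE and LOCAL_RANK.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        model = torch.nn.Linear(4096, 4096).cuda(local_rank)    # placeholder model
        model = DDP(model, device_ids=[local_rank])              # synchronises gradients across ranks
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

        for step in range(100):                                   # placeholder training loop
            x = torch.randn(32, 4096, device=f"cuda:{local_rank}")  # each rank sees its own data shard
            loss = model(x).pow(2).mean()
            optimizer.zero_grad()
            loss.backward()                                        # all-reduce of gradients
            optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()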

Handling big data

Exascale computing technology also efficiently manages vast datasets. Fast and scalable parallel I/O enables near real-time processing, which is particularly vital for applications like streaming analytics, ensuring rapid decision-making based on continuously updated data and models.
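
As an illustration of the idea, the sketch below has each process of a parallel job read only its own shard of a large binary dataset and then combine partial statistics. It assumes mpi4py, an imaginary file dataset.bin of float32 samples, and an invented dataset shape; it is not a description of JUPITER's actual I/O stack.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    N_SAMPLES, FEATURES = 1_000_000, 256           # assumed dataset shape
    per_rank = N_SAMPLES // size                   # samples handled by each rank
    offset = rank * per_rank * FEATURES * 4        # byte offset of this rank's shard (float32 = 4 bytes)

    shard = np.fromfile("dataset.bin", dtype=np.float32,
                        count=per_rank * FEATURES, offset=offset)
    shard = shard.reshape(per_rank, FEATURES)

    local_mean = shard.mean(axis=0)                # partial statistic on this rank's shard
    global_mean = np.empty_like(local_mean)
    comm.Allreduce(local_mean, global_mean, op=MPI.SUM)   # combine partial results
    global_mean /= size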

Scaling AI workloads

Exascale computing provides the scalability needed to handle increasingly large and complex AI models. This is crucial for advancements in natural language processing, computer vision, and other AI domains where larger models with more parameters often lead to substantially better performance.
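
To give a feel for how quickly parameter counts grow, here is a small back-of-the-envelope helper using the common approximation of roughly 12·d_model² weights per transformer layer plus the embedding table. The configuration values are illustrative, not a description of any model trained on JUPITER.

    def transformer_params(n_layers: int, d_model: int, vocab_size: int) -> int:
        """Rough count: ~4*d^2 for attention plus ~8*d^2 for the MLP per layer,
        plus the token embedding table; biases and layer norms are ignored."""
        per_layer = 12 * d_model ** 2
        return n_layers * per_layer + vocab_size * d_model

    # Doubling depth and width grows the model roughly eightfold.
    print(f"{transformer_params(24, 2048, 50000):,}")   # ~1.3 billion parameters
    print(f"{transformer_params(48, 4096, 50000):,}")   # ~9.9 billion parameters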

Optimizing hyperparameter search

Exascale computing supports the exploration of extensive hyperparameter spaces during model training, enabling researchers to conduct concurrent experiments on various model architectures, hyperparameter settings, and training strategies. This fosters a comprehensive understanding of AI model behavior and the discovery of optimal configurations for improved model performance.
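
The sketch below shows the basic pattern: many independent trials evaluated concurrently, with the best configuration kept. On a machine like JUPITER each trial would typically be its own job or set of nodes rather than a local process; the objective function here is a stand-in, not a real training run.

    from concurrent.futures import ProcessPoolExecutor
    from itertools import product

    def run_trial(lr, batch_size, depth):
        """Placeholder objective: a real version would train and validate a model."""
        return -(abs(lr - 3e-4) * 10 + abs(batch_size - 512) / 512 + abs(depth - 24) / 24)

    if __name__ == "__main__":
        grid = list(product([1e-4, 3e-4, 1e-3],     # learning rates
                            [256, 512, 1024],       # batch sizes
                            [12, 24, 48]))          # model depths

        with ProcessPoolExecutor() as pool:
            scores = list(pool.map(run_trial, *zip(*grid)))   # trials run concurrently

        best_score, best_config = max(zip(scores, grid))
        print("best configuration:", best_config, "score:", best_score)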

Combining simulation & AI

Exascale computing enables the combination of physics-based simulations with large AI models. Physically realistic but computationally expensive, and therefore time-consuming, simulations (such as weather and climate modelling, or combined CFD and structural mechanics simulations) can generate massive data sets for AI models in a short amount of time. The resulting physics-aware AI models can then find complex answers much faster and more efficiently for many applications, and may replace numerically intensive computational models, or at least parts of them, to accelerate the time to insight.
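
A minimal sketch of this surrogate-modelling pattern is shown below: a toy function stands in for an expensive physics solver, its outputs become training data, and a small neural network then answers new queries without further simulation runs. The function, data sizes and scikit-learn model are assumptions chosen for illustration.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def expensive_simulation(x):
        """Stand-in for a costly physics simulation (e.g. one CFD run per input)."""
        return np.sin(x[:, 0]) * np.cos(x[:, 1]) + 0.1 * x[:, 2] ** 2

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(5000, 3))      # parameter sets sampled for simulation runs
    y = expensive_simulation(X)                 # dataset produced by the "simulations"

    surrogate = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500)
    surrogate.fit(X, y)                         # AI model learns the simulation's response

    X_new = rng.uniform(-3, 3, size=(5, 3))
    print(surrogate.predict(X_new))             # fast approximate answers, no new simulation needed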

The importance of modularity in exascale computing

The need to achieve optimal compute performance with minimal energy consumption has driven the development of a computer architecture that integrates a diverse range of general-purpose and acceleration elements. This forms the fundamental concept behind the dynamic Modular Supercomputing Architecture (dMSA). Within this architectural framework, heterogeneous resources are coordinated to enable applications to execute each of their components on the most suitable computing elements.
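
At the application level, one simple way to picture this is a single parallel job whose processes are split into groups, one per module, with each group running the component best suited to its hardware. The sketch below uses plain mpi4py with placeholder routines and an assumed split of ranks; it is a generic illustration, not ParaStation Modulo's actual interface. It can be launched with, for example, mpirun -n 8.

    from mpi4py import MPI

    def run_cpu_component(comm):
        """Hypothetical CPU-bound part (e.g. mesh handling, pre/post-processing)."""
        return comm.Get_rank() * 1.0            # placeholder work

    def run_gpu_component(comm):
        """Hypothetical accelerator-bound part (e.g. the dense, highly parallel kernel)."""
        return comm.Get_rank() * 2.0            # placeholder work

    world = MPI.COMM_WORLD
    rank = world.Get_rank()

    # Assumption for illustration: the first 4 ranks play the "cluster module" role,
    # the remaining ranks the "booster module" role.
    on_cluster = rank < 4
    module = world.Split(color=0 if on_cluster else 1, key=rank)

    result = run_cpu_component(module) if on_cluster else run_gpu_component(module)

    # Couple the two components: cluster rank 0 hands its result to booster rank 0 (world rank 4).
    if rank == 0:
        world.send(result, dest=4, tag=0)
    elif rank == 4:
        coupled = world.recv(source=0, tag=0)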

ParTec and the Jülich Supercomputing Centre initially showcased this in operational Cluster/Booster systems in 2015. Through the DEEP projects, the foundational dMSA architecture, network federation, runtime systems, and programming paradigms and tools were conceived and refined. dMSA systems are presently deployed in various prominent European HPC systems, establishing dMSA as the architectural framework for upcoming European Exascale and post-Exascale systems. ParTec AG patented dMSA and is the only provider of such systems in the industry today.

What are the benefits of the dMSA patented by ParTec?

Heterogeneity on the system level, effective resource sharing

Cost-effective scaling, extensibility of existing modular systems by adding modules

Support for large-scale simulations, data analytics, machine/deep learning, AI and hybrid-quantum workloads

Achieve leading scalability and energy efficiency

Unified software environment for running across all modules 

JUPITER’s cluster module prioritizes applications demanding enhanced serial performance and greater memory bandwidth. Thanks to the modular architecture, applications can seamlessly leverage both modules concurrently, using computing resources efficiently. Notably, JUPITER is well positioned to support a unique category of heterogeneous applications that integrate conventional HPC simulations with AI methods, improving both precision and efficiency.

How does modularity foster the integration of new technologies? By providing a framework that allows new components to be adapted and incorporated into existing systems. Modularity allows for the gradual adoption of quantum computing components without a complete overhaul of the existing system. ParTec has co-developed a comprehensive software package, QBridge, that enables the integration of quantum computers into HPC systems. The integration software boosts the productivity of researchers by maximising scientific throughput on a quantum computer.
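
Conceptually, such hybrid workloads follow a loop in which a classical optimiser running on the HPC side repeatedly submits parameterised circuits to the quantum module and updates its parameters from the measured results. The sketch below illustrates only this pattern; submit_circuit() is a hypothetical placeholder (here a purely classical mock), not QBridge's actual API.

    import numpy as np

    def submit_circuit(theta):
        """Hypothetical stand-in for submitting a parameterised circuit to a quantum
        module and receiving a measured expectation value; here a classical mock."""
        return float(np.cos(theta[0]) + np.sin(theta[1]))

    theta = np.array([0.1, 0.2])
    lr = 0.1
    for step in range(50):                        # classical optimiser on the HPC side
        grad = np.zeros_like(theta)
        for i in range(len(theta)):               # finite-difference gradient estimate
            shift = np.zeros_like(theta)
            shift[i] = 1e-3
            grad[i] = (submit_circuit(theta + shift) - submit_circuit(theta - shift)) / 2e-3
        theta -= lr * grad                        # update parameters, then resubmit circuits

    print("optimised parameters:", theta)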

How JUPITER contributes towards achieving technological sovereignty

Technological sovereignty is crucial to ensuring the ability to innovate and to make decisions about infrastructure, systems and data without being overly dependent on external entities. So how European is JUPITER, and how does it contribute towards achieving sovereignty?

Several key aspects of JUPITER contribute towards achieving sovereignty:

Software autonomy

The first exaFLOP system to use the dynamic Modular Supercomputing Architecture (dMSA) developed by ParTec and Forschungszentrum Jülich.

Strong European project leaders

The first exascale computer built by a Franco-German consortium of ParTec and Eviden

Hardware independence

The first exascale system to utilise the European HPC processor Rhea from SiPearl

Research and innovation

The first exascale supercomputer which is primarily fueled by European research and development work

Achieving technological sovereignty helps European businesses and organizations ensure compliance with local and regional laws. Maintaining control over technology helps secure sensitive data against unauthorized access, contributing to enhanced cybersecurity. As technology plays a central role in the economy, achieving technological sovereignty also strengthens Europe’s data-driven economy: it fosters innovation, attracts businesses, and strengthens the digital infrastructure, contributing to the continent’s economic growth.

What is ParTec's contribution to the project?

Together with the French company Eviden, ParTec is the lead partner in the construction of the first exascale supercomputer in Europe. The contract includes procurement, delivery, installation, hardware, software and maintenance of the JUPITER exascale supercomputer.

JUPITER needs to deliver adaptability, efficiency and scalability, which ParTec provides, most notably through the ParaStation Modulo software suite and the dMSA described above.

Find out more information about our software ParaStation Modulo, its components and what it enables our clients to do, here.

Sign up for news relating to JUPITER

Don’t miss out on any developments. Please follow us on our press page to receive our news relating to JUPITER.