The SEA projects, DEEP-SEA, RED-SEA, and IO-SEA, represent a collaborative effort between the Jülich Supercomputing Centre (JSC), the French Alternative Energies and Atomic Energy Commission (CEA), Eviden, and ParTec AG, amongst other esteemed institutions in Europe, to realize a dynamic Modular System Architecture (dMSA) for future Exascale High-Performance Computing (HPC) systems.
These projects bring together top European academic and industrial expertise to develop technologies for the next generation of supercomputers. DEEP-SEA focuses on programming environments and software stacks, RED-SEA enhances European interconnect technology and low-level software, and IO-SEA provides a data management and storage platform.
The IO-SEA project incorporates the following use cases for co-design, all of which are data-intensive applications expected to encounter significant I/O demands at the Exascale level. These use cases will leverage the various components of the IO-SEA stack. ParTec’s ParaStation Modulo Software Suite is a key component of the stack. ParaStation Management and the ParaStation HealthChecker, both part of the suite, have been enhanced with the ability to work with “Ephemeral I/O services” on specialised Data Nodes in heterogeneous MSA systems running such use cases.
Use Cases
Astrophysical Simulations
Structural and Cellular Biology Research
Meteorological Innovation
Earth System Analysis
Lattice Quantum Chromodynamics Calculations
RAMSES: Astrophysical Plasma Flows
With RAMSES, an open-source code designed to unravel the complexities of compressible plasma flows in cosmic phenomena, we are diving right into the heart of astrophysical simulations. This powerful tool, boasting capabilities in self-gravitation, magnetism, and radiative processes, stands as a cornerstone in astrophysical research.
At its core, RAMSES utilizes the Adaptive Mesh Refinement (AMR) technique on a fully-threaded graded octree, leveraging the computational prowess of supercomputers to explore the vastness of space. Developed in Fortran 90 and parallelized with the MPI library, RAMSES is optimized to harness the full potential of high-performance computing architectures.
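The refinement logic behind an AMR octree can be sketched in a few lines. The toy below illustrates the general technique in Python with a made-up refinement criterion; it is a schematic sketch, not RAMSES’s Fortran/MPI implementation.

```python
# Toy illustration of Adaptive Mesh Refinement on an octree
# (a sketch of the general technique, not RAMSES's actual code).
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Cell:
    center: Tuple[float, float, float]  # cell centre (x, y, z)
    size: float                          # edge length
    level: int                           # refinement level
    children: List["Cell"] = field(default_factory=list)

    def refine(self):
        """Split the cell into 8 children (one octree step)."""
        h = self.size / 4.0  # child centres sit a quarter-edge away
        cx, cy, cz = self.center
        for dx in (-h, h):
            for dy in (-h, h):
                for dz in (-h, h):
                    self.children.append(
                        Cell((cx + dx, cy + dy, cz + dz),
                             self.size / 2.0, self.level + 1))


def adapt(cell, needs_refinement, max_level):
    """Recursively refine every cell flagged by the criterion."""
    if cell.level < max_level and needs_refinement(cell):
        cell.refine()
        for child in cell.children:
            adapt(child, needs_refinement, max_level)


# Hypothetical criterion: refine near the origin (a stand-in for a
# density-gradient test in a real astrophysics code).
root = Cell((0.0, 0.0, 0.0), 1.0, 0)
adapt(root, lambda c: sum(x * x for x in c.center) < 0.1, max_level=2)
```

A production AMR code layers load balancing, ghost zones, and parallel communication on top of this recursion; the octree itself is this simple.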
However, as simulations scale up to embrace the challenges of the exascale era, traditional I/O methods begin to reveal their limitations. RAMSES, although optimized for computational intensity, encounters I/O bottlenecks under the “file per process” paradigm, which become particularly evident beyond 8,000 cores.
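To see why this paradigm strains the file system, consider a sketch of the output pattern: every rank writes its own file per dump, so the file count (and the metadata load on the file system) grows linearly with core count. The naming scheme below is purely illustrative, not RAMSES’s actual output layout.

```python
# Sketch of the "file per process" output pattern that becomes a
# metadata bottleneck at scale (illustrative naming, not RAMSES's).
import os
import tempfile


def write_per_process(outdir, n_ranks, step):
    """Each MPI rank writes its own file: n_ranks files per dump."""
    for rank in range(n_ranks):
        path = os.path.join(outdir, f"output_{step:05d}.out{rank:05d}")
        with open(path, "w") as f:
            f.write(f"data from rank {rank}\n")


with tempfile.TemporaryDirectory() as d:
    write_per_process(d, n_ranks=16, step=1)
    # One dump already creates one file per rank; at 8,000+ ranks the
    # metadata server, not the raw bandwidth, becomes the limit.
    n_files = len(os.listdir(d))
```

Libraries such as Hercule avoid this by aggregating many ranks’ data into far fewer, structured files.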
Hercule, a parallel I/O library championed in IO-SEA, has been integrated into RAMSES, revitalizing the code’s I/O infrastructure and easing its transition into the exascale era. This integration marks a significant milestone: RAMSES can now produce not only checkpoints and restarts but also post-processed outputs tailored to specific research needs.
With this upgrade, RAMSES embarks on a new frontier of discovery, equipped with enhanced capabilities to explore the depths of astrophysical phenomena and unlock the mysteries of the cosmos.
Analysis of Electron Microscopy Images
In the ever-evolving landscape of Structural and Cellular Biology research, cryo-electron microscopy (cryo-EM) stands as a beacon of innovation. This groundbreaking technique enables the derivation of precise 3D models of macromolecules from vast arrays of 2D projection images captured by electron microscopes.
The volume of data generated during the acquisition of electron micrographs for single particle analysis is staggering. Moreover, the computational demands for deriving accurate 3D models are substantial, requiring advanced processing resources.
CEITEC operates several transmission electron microscopes, including the cutting-edge Titan Krios. This state-of-the-art instrument produces an astounding 1-2TB of raw data per day, operating tirelessly in a 24/7 capacity. Currently, this raw data is stored locally on HDD storage, awaiting further processing to unlock the secrets of the studied macromolecules. Processing takes place on dedicated bare-metal machines equipped with GPUs, all housed within CEITEC’s premises.
Within the IO-SEA framework, the primary goal in this case is to harness the HPC and storage resources provided by IT4I. Leveraging these resources is expected to significantly enhance the processing efficiency of the raw data produced by the electron microscopes. Additionally, the use case seeks to streamline data publication mechanisms, facilitating the dissemination of groundbreaking discoveries to the scientific community and beyond.
ECMWF: Empowering weather prediction
At the forefront of meteorological innovation stands the European Centre for Medium-Range Weather Forecasts (ECMWF), an intergovernmental organization supported by 23 member states and 11 cooperating states. As both a research institute and a 24/7 operational service, ECMWF plays a pivotal role in producing numerical weather predictions and data vital to the weather and climate communities worldwide.
ECMWF’s forecasts are the backbone of weather prediction, delivering invaluable insights through a suite of high-resolution deterministic forecasts and ensemble forecasts. These forecasts, covering the entire globe, are generated multiple times daily, providing critical information for industries ranging from agriculture to transportation. However, the value of weather prediction data lies in its timeliness, requiring swift and efficient delivery to downstream consumers.
Central to ECMWF’s operational workflow is its tape library, where data flows from observations to assimilation into the Integrated Forecasting System (IFS). The output undergoes post-processing and distribution, as well as archival for future reference. During the IO-SEA project, each operational run produced approximately 30 TiB of model data, with the time-critical aspect of the workflow demanding completion within an hour. This includes generating high-resolution forecasts and ensemble forecasts, alongside post-processing for product generation.
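A quick back-of-the-envelope calculation shows what that one-hour window implies for sustained I/O bandwidth (figures taken from the paragraph above; contention from concurrent readers comes on top of this):

```python
# Sustained bandwidth implied by ~30 TiB of model output
# in a one-hour time-critical window.
TIB = 1024 ** 4   # bytes in a tebibyte
GIB = 1024 ** 3   # bytes in a gibibyte

data_bytes = 30 * TIB
window_s = 3600   # one hour

required_gib_per_s = data_bytes / window_s / GIB
# ~8.5 GiB/s sustained write throughput, before any
# concurrent-read contention on the same file system.
```

This is why the text stresses tuning the I/O pipeline: a single slow phase anywhere in the write-then-read chain blows the hour budget.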
As ECMWF anticipates a sustained increase in data volume, adaptation in data sharing and access methods becomes imperative. The operational pipeline’s efficiency hinges on mitigating concurrent read/write contention on the filesystem, necessitating meticulous tuning of the I/O pipeline to ensure seamless operations.
In collaboration with IO-SEA, ECMWF is poised to navigate the evolving landscape of weather prediction, leveraging cutting-edge technologies to optimize data processing and distribution. Together, we embark on a journey to enhance the resilience and efficacy of weather forecasting, empowering decision-makers and communities worldwide to better prepare for the future.
TSMP: Multi-physics Regional Earth System Model
The Terrestrial Systems Modelling Platform (TSMP) is a sophisticated Earth System Model meticulously crafted to unravel the complexities of our planet’s interconnected processes.
TSMP offers a fully coupled, scale-consistent, and massively parallel regional Earth System Model. At its core, TSMP integrates three foundational model components: the COSMO model for atmospheric simulations, the CLM land surface model, and the ParFlow hydrological model. These components are interconnected through the OASIS coupler, facilitating comprehensive analyses of Earth’s systems.
Enabled for Data Assimilation (DA) through the Parallel Data Assimilation Framework (PDAF), TSMP empowers researchers to simulate intricate interactions and feedback loops across terrestrial compartments. From land surface dynamics to subsurface hydrology and atmospheric phenomena, TSMP enables the simulation of mass, energy, and momentum fluxes with unparalleled precision.
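The coupling pattern can be illustrated schematically: each compartment advances a step and hands its fluxes to the next through the coupler. The component classes and exchanged fields below are simplified placeholders invented for illustration, not the real OASIS3-MCT interface or the models’ actual variables.

```python
# Schematic of a coupled time loop in the spirit of TSMP's
# COSMO + CLM + ParFlow coupling via OASIS. All classes, fields,
# and numbers here are toy placeholders.

class Atmosphere:            # stand-in for COSMO
    def step(self, land_fluxes):
        # A real model would respond to the land fluxes; here we
        # just emit constant forcing for the land surface.
        return {"precip": 1.0, "radiation": 300.0}


class LandSurface:           # stand-in for CLM
    def step(self, forcing):
        return {"infiltration": 0.4 * forcing["precip"],
                "sensible_heat": 0.1 * forcing["radiation"]}


class Hydrology:             # stand-in for ParFlow
    def __init__(self):
        self.storage = 0.0
    def step(self, infiltration):
        self.storage += infiltration  # accumulate subsurface water
        return {"soil_moisture": self.storage}


def couple(n_steps):
    """Exchange fields between compartments each coupling interval,
    as an OASIS-style coupler would (greatly simplified)."""
    atm, land, hyd = Atmosphere(), LandSurface(), Hydrology()
    land_fluxes = {"infiltration": 0.0, "sensible_heat": 0.0}
    for _ in range(n_steps):
        atm_out = atm.step(land_fluxes)        # atmosphere -> land
        land_fluxes = land.step(atm_out)       # land -> hydrology
        hyd.step(land_fluxes["infiltration"])  # subsurface update
    return hyd


model = couple(n_steps=5)
```

The real coupler adds interpolation between the components’ different grids and time steps, which is where much of the engineering effort lies.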
Maintained by the Simulation and Data Lab Terrestrial Systems (SDLTS) at JSC, TSMP represents a collaborative effort driven by a commitment to advancing scientific understanding. As open-source software, TSMP is publicly available on GitHub, inviting collaboration and innovation from the global research community.
LQCD: Unveiling the Secrets of Quarks and Gluons
Embark on a journey into the subatomic realm with numerical Lattice Quantum Chromodynamics (LQCD), a powerful computational framework designed to unravel the enigmatic properties of hadrons – particles composed of quarks and gluons. At its core lies the QCD action, an equation that encapsulates the intricate interactions of quark and gluon quantum fields governed by the strong force.
In the realm of quantum chromodynamics, quarks and gluons possess a unique “color” charge, giving rise to complex dynamics that defy conventional intuition. Leveraging clever mathematical transformations, LQCD treats time as an imaginary variable and discretizes spacetime into a finite 4-D lattice, enabling the simulation of quark and gluon fields as a statistical physics problem.
The LQCD simulation process unfolds in two distinct stages. First, a vast ensemble of background gluon field configurations, known as “gauge configurations,” is generated through Monte Carlo simulation. Then, these configurations are meticulously analyzed, with operators corresponding to physical observables calculated on each configuration. The ensemble average of these operators serves as the expectation value of the observable, akin to measurements in a controlled experiment.
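The two-stage workflow can be demonstrated in miniature with a single-site toy model: Metropolis Monte Carlo generates an ensemble of “configurations”, and an observable is then measured on each and averaged. The action S(x) = x² below is a stand-in chosen for illustration; real LQCD samples gauge fields on a 4-D lattice with a vastly richer action.

```python
# Miniature version of the two-stage LQCD workflow:
# (1) generate an ensemble of configurations by Metropolis
#     Monte Carlo sampling of exp(-S),
# (2) measure an observable on each and take the ensemble average.
# A single-site toy action S(x) = x^2 stands in for the QCD action.
import math
import random

random.seed(42)


def action(x):
    return x * x  # toy action; no gauge fields involved


def generate_ensemble(n_configs, n_therm=1000, step=1.0):
    """Metropolis sampling of the distribution ~ exp(-S(x))."""
    x, configs = 0.0, []
    for i in range(n_therm + n_configs):
        x_new = x + random.uniform(-step, step)
        # Accept with probability min(1, exp(-dS)).
        if random.random() < min(1.0, math.exp(action(x) - action(x_new))):
            x = x_new
        if i >= n_therm:      # discard thermalization sweeps
            configs.append(x)
    return configs


# Stage 1: the gauge-configuration analogue.
ensemble = generate_ensemble(n_configs=20000)

# Stage 2: "measure" an observable (here <x^2>) on each configuration
# and form the ensemble average. For S(x) = x^2 the exact value is 0.5.
observable = sum(x * x for x in ensemble) / len(ensemble)
```

In production LQCD the two stages are separated in time and storage: gauge configurations are archived and reanalyzed for many different observables, which is exactly the data-management pattern IO-SEA targets.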
Within the realm of LQCD, the IO-SEA solution heralds a new era of efficiency and scientific output. With its suite of features tailored to the unique demands of LQCD calculations, IO-SEA promises to streamline computational workflows, enhance data management capabilities, and accelerate scientific discoveries in the realm of particle physics.
Learn more about the research projects that ParTec is involved in here.