Bavarian supercomputing is achieved through a network of universities, universities of applied sciences and computing centres. Below are our members.

Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU)

Nanotechnology, mechatronics, medical technology: the focal points of the Faculty of Engineering of Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), founded in 1743, are shaped by the regional economy around Erlangen and Nuremberg. Many renowned machine and plant manufacturers, automotive suppliers and specialists in precision mechanics, medical, nano and nuclear technology have settled here. For these industries, the FAU’s Department of Computer Science provides many innovative solutions in computer technology and in the processing and analysis of large amounts of data.

The Department of Computer Science at the FAU works in particular on computer architectures and applications, operates the Erlangen Regional Computing Center (RRZE) and plans the further development of its supercomputers. A thematic focus is Node-Level Performance Engineering: the development of high-performance prototype codes and libraries as well as methods for the performance analysis of codes and hardware platforms. The Execution-Cache-Memory (ECM) model, a refinement of the Roofline model, was developed in Erlangen.
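For context, the Roofline model that the ECM model refines bounds the attainable performance of a loop kernel as follows (standard notation, not taken from the article):

```latex
P = \min\bigl(P_{\mathrm{peak}},\; I \cdot b_s\bigr)
```

where \(P_{\mathrm{peak}}\) is the peak floating-point performance, \(b_s\) the sustained memory bandwidth and \(I\) the computational intensity in flops per byte. The ECM model replaces the single bandwidth ceiling with an analysis of data transfers through the individual cache levels.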

The FAU has also made a name for itself in the High Performance Computing (HPC) community with its performance tool suite LIKWID, an open-source toolkit for monitoring supercomputers and applications. Its fifth major version was released in 2019; it is continuously extended with new open-source components and is used in industry, research and teaching. The loop-kernel analysis and performance modelling tool Kerncraft is likewise valued internationally in research and industry.

Owing to the focus of the FAU Faculty of Engineering, the Department of Computer Science has extensive experience with applications and algorithms for processing and modelling medical, scientific and technical data. A speciality in Erlangen is research on embedded computer systems, which deliver data from machines, devices and even the human body. Besides the analysis, modelling and simulation of data, the computer scientists at the FAU also explore its visualisation: virtual 3D worlds and facial recognition are further focal points of the FAU’s HPC group and its high-performance computing centre, the RRZE.

Leibniz-Rechenzentrum (LRZ) / Leibniz Supercomputing Centre Garching near Munich

Bavaria’s leading computing centre and the initiator of the BSA contributes in several ways to the science network. It is currently working intensively on the computing of the future and observes that the next generation of processors, circuits and networks, also in conjunction with artificial intelligence, will enormously expand the possibilities for data processing. Open Multi-Processing (OpenMP) already provides an interface for parallel programming on different machines, and high bandwidths also improve the storage of large amounts of data. The LRZ expects artificial intelligence (AI) and analytics to play a growing role in research work in the coming months and is therefore looking for cooperation partners to develop applications for the next supercomputers and to investigate the performance of AI more closely.

The increase in data and the need for analysis are already changing the user structure at the LRZ. Today, data from different sources flows into research projects, new cloud structures are needed to bring together information from different systems and to bridge silos, and applications should be simplified. The LRZ has set itself the ambitious goal of making HPC as easy as working on a laptop – by standardizing applications, by continuously checking programs and applications, by coordinating computing time more efficiently, and not least by training, advising and supporting potential LRZ users.

With the Datacentre Database (DCDB), the LRZ presents a tool for monitoring and optimizing the performance and energy efficiency of supercomputers and their applications. The open-source program has a modular structure and records data from a wide variety of sources relating to the hardware, software and infrastructure of the supercomputers. The analysis of this data is intended to identify the levers with which supercomputers can be made more energy-efficient, more powerful and more reliable. DCDB also provides data for application standardization.

Ludwig-Maximilians-Universität (LMU) / University of Munich

The Chair of Communication Systems and System Programming focuses on distributed systems and platforms, Internet services, virtualization, cooperative IT infrastructure planning, system programming and IT security. The LMU contributes experience in parallel programming, data processing on differently structured computers and data exchange in distributed computing structures to high-performance and supercomputing in Bavaria. These research questions have recently led to tools such as the Task Profiler for coordinating computing projects and the Flowgraph for controlling and monitoring performance data from computer systems. Another platform developed under the lead of the LMU is PROCESS, short for Providing Computing Solutions for Exascale Challenges, a service platform funded by an EU project until 2020 that offers tools, applications and information relating to HPC and next-generation computers. Like Lego bricks, storage systems are combined with software and tools, authentication systems, clouds and containers. PROCESS has already proven itself in various research applications, such as the storage of highly sensitive medical data and the analysis of 35 terabytes of radio data from astronomy.

LMU Munich cooperates with the universities in Dresden, Stockholm and Aachen and with the Technical University of Munich, as well as with institutions such as the High-Performance Computing Center Stuttgart (HLRS), the National Energy Research Scientific Computing Center (NERSC) and the Helmholtz-Zentrum Dresden-Rossendorf (HZDR).

Regionales Rechenzentrum Erlangen (RRZE) / Erlangen Regional Computing Center

The Erlangen Regional Computing Center (RRZE) belongs to Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), was founded in 1979 and supports seven universities in Bavaria with its technology and expertise. The RRZE operates the parallel computers LiMa (since shut down), Emmy and Meggie, as well as a throughput cluster (Woody/TinyEth) and the special systems TinyFAT and TinyGPU; a Windows HPC system has likewise been shut down. The RRZE offers more than 1,800 compute nodes with 30,000 cores and almost 100 terabytes of main memory, as well as around two petabytes of disk storage in five storage systems and a transparently managed tape library for offline storage.

As a technical service provider, the RRZE supports scientists and cooperation partners from industry. In 2018, the RRZE had around 600 active accounts on its HPC systems. The users include researchers and students from all FAU disciplines as well as users from partner universities and industrial cooperation partners; together they used more than 200 million core hours. An important task is therefore the training and consulting of supercomputer users. Together with the FAU’s Department of Computer Science, the RRZE offers training and information on High Performance Computing (HPC) and advises users on porting their applications to the multi-processor computing environment in Erlangen.

Together with the FAU, the RRZE also develops and supports practical tools for measuring the performance of HPC systems and of individually programmed applications for processing large amounts of data; the performance tool suite LIKWID is one example of such valuable HPC software. The RRZE has gained particular experience in evaluating data from medicine, nanotechnology, embedded systems, and mechanical and plant engineering. The proximity of leading automotive suppliers in the region is one reason why the RRZE and the FAU also work on the use of artificial intelligence and machine learning in vehicles and on autonomous driving.

Technische Hochschule Deggendorf (THD) / Technical University Deggendorf

Located between the Danube and the Bavarian Forest, the Technical University Deggendorf (THD) attracts students and scientists from all over the world. They appreciate the inspiring working environment and family atmosphere at the university, which was founded in 1994 and is currently the fastest-growing university in Bavaria.

Around 7,000 students are trained here in the fields of business, technology and health. The university also has eight technology campuses with different focal points, which are supplied with computing power, storage and services by the university’s IT centre. Its practice-oriented teaching and research attracts scientists, computer specialists and engineers to Deggendorf.

The university has made an international name for itself as an “Embedded Valley”, not least because it develops application-oriented special solutions in close cooperation with high-tech companies from abroad and attaches importance to turning research results into applications for industry. The thematic focus is on digital production processes and automation, artificial intelligence, machine learning and the handling of big data, cybersecurity, as well as autonomous driving, bionics, sensor technology and mechatronics.

Supercomputing is now part of the IT centre and closely collaborates with the Faculty of Applied Computer Science; this development has resulted in a significant increase in AI- and security-focused projects. The young HPC group at the THD offers a wide range of support for the use of the central “Arber” cluster. Its services include access to a bare-metal machine as well as consulting and training; the team also provides support with correctness and runtime analysis. The THD is involved in several research collaborations on methods and tools for parallel programming and for the performance and correctness analysis of parallel programs.

Technische Universität München (TUM) / Technical University of Munich

The university participates in the BSA with four chairs and institutes.

The Chair of Computer Architecture and Parallel Systems is currently working on improving the performance of the SuperMUC-NG at the LRZ. Its research focuses on monitoring performance data from computers and applications, parallel programming on different systems, and energy efficiency. The Periscope platform is available for the new generation of supercomputers; it contains various tools, software and plug-ins for optimizing algorithms and codes. The TUM offers know-how and software for visualizing and processing large amounts of data on high-performance computers, especially for simulations of weather, climate, gases and liquids.

The Institute for Computational Mechanics at TUM researches techniques and methods to visualize flow dynamics and other results from physics and the natural sciences. It develops applications and optimizes them for use on supercomputers and high-performance computers such as the SuperMUC-NG. In addition to challenging simulations of thermodynamics, flows or lung function, a platform with various tools has been created that simplifies the programming of simulation applications and optimizes their computing performance: it is based on the C++ programming language and the Message Passing Interface (MPI) for data exchange between computer systems.

TUM’s Scientific Computing in Computer Science (SCCS) group provides software for complex spatial visualizations and simulations, for which media data from different sources often have to be processed. One example of the harmonization and visualization of research data is SeisSol, a software package for simulating earthquakes. Another result is preCICE, a coupling library for simulations of flowing structures, such as fluid flows or temperature developments, which is now used by 23 universities, companies and institutions in Europe. In addition, the SCCS is preparing for the supercomputers of the exascale and quantum generation – machines that are likely to be required above all by seismology and astrophysics.

Julius-Maximilians-Universität (JMU), Würzburg

The university operates the youngest of Bavaria’s data centres. The Julia cluster has been running since November 2017 and offers external researchers a total of 1,336 Intel CPU cores, 10 Nvidia Tesla P100 cards, around 33,664 gigabytes of main memory and 400 terabytes of user storage for simulations and calculations. The system is networked with the OpenStack cloud infrastructure, organizes computing jobs with the Slurm workload manager and runs both its own programs and applications and those of its users.
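On a Slurm-managed cluster such as Julia, a computing job is typically described by a batch script. The following is a minimal sketch; the resource numbers, module and program names are hypothetical and not taken from the Julia configuration:

```shell
#!/bin/bash
#SBATCH --job-name=simulation     # job name shown in the queue
#SBATCH --nodes=1                 # number of compute nodes
#SBATCH --ntasks=16               # number of CPU cores / tasks
#SBATCH --gres=gpu:1              # request one GPU (e.g. a Tesla P100)
#SBATCH --time=02:00:00           # wall-clock limit
#SBATCH --mem=32G                 # main memory per node

module load my_application        # hypothetical environment module
srun ./my_simulation input.dat    # launch the application
```

The script is submitted with `sbatch job.sh`, and the queue can be inspected with `squeue`; Slurm then allocates the requested cores, memory and GPU before starting the job.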

Universität Regensburg / University of Regensburg, Bavaria

Bioinformatics, in particular genome and cancer research, as well as basic research in physics are the specialities of the University of Regensburg and its computing centre. Lattice gauge theory for describing quarks and gluons (lattice QCD, LQCD), the elementary particles of matter, gave rise to various applications at the University of Regensburg for simulating research results from the natural sciences and medicine, as well as helpful tools for checking codes, algorithms and computer technology. For example, Regensburg has developed support for scalable vector extensions between 128 and 2048 bits, which exploit the possibilities of Intel and AMD processors and will be available to users of the Regensburg supercomputer from 2020. Since 2010, the university has also been teaching methods and techniques related to HPC in the interdisciplinary bachelor’s and master’s programmes in “Computational Science”. Its first task will be to research solutions for existing bottlenecks in the processing of medical data.