HIGH-LEVEL SUPPORT FOR SIMULATIONS IN ASTRO- AND ELEMENTARY PARTICLE PHYSICS

G. Poghosyan
Steinbuch Centre for Computing, Karlsruhe Institute of Technology, Karlsruhe, Germany
E-mail: gevorg.poghosyan@kit.edu

To fully exploit the potential of modern supercomputing systems when performing particle and astrophysics simulations, a dedicated user-support instrument, the Simulation Laboratory "Elementary Particle and Astroparticle Physics" (SimLab E&A Particle), has been established at the Steinbuch Centre for Computing of the Karlsruhe Institute of Technology. The SimLab provides advanced support to scientific groups in developing simulation software and porting it to up-to-date supercomputing infrastructures. This work presents how the SimLab is governed and how it cooperates with developers and users of codes, including examples from the code THiSMPI for the simulation of particle acceleration in supernova shock fronts and the code KB3D for solving quantum kinetic equations, which can be used in simulations of heavy-ion collisions or laser-excited semiconductor systems.

PACS: 95.75.Pq

INTRODUCTION

Over the past decades, computational software has become a fundamental tool for scientific research in particle physics and astrophysics. Modern scientific problems in astro- and elementary particle physics are rarely solved without the development of computer simulation software. Yet the adaptation of even off-the-shelf scientific simulation codes to up-to-date high-performance computing (HPC) infrastructures and/or distributed computing infrastructures such as Grid or Cloud computing often requires not only substantial knowledge of how to use these modern computing systems; it also entails essential changes in the scientific simulation code itself. In addition, the optimal and effective use of the limited and expensive computational time on any HPC system is crucial and can seldom be achieved without long-term re-engineering of the scientific simulation codes, which changes their scientific concepts and architecture and in practice leads to fundamentally new software and perhaps even new scientific results.

Hence, HPC hardware and software installations need to be complemented by a "brainware" component, i.e., trained HPC specialists who support the performance optimization of users' codes [1]. The Simulation Laboratories recently established at the Jülich and Karlsruhe computing centres [2] aim to provide this necessary brainware and support, or even to take over completely those parts of the research activities of scientific groups that concern the computational, and often initially unplanned, subjects mentioned above. In particular, SimLab E&A Particle supports scientific groups from particle physics and astrophysics in mastering S.P.O.R.A.D.I.C. changes in their simulation codes and conducts the tests and production runs necessary for improving the scientific codes [3]. We define S.P.O.R.A.D.I.C. changes for scientific simulation software as:

Standardization: making codes object-oriented and adapting input-output to modern standards;
Parallelization: exploration of the scientific code to find parallelization strategies;
Optimization: infrastructure-dependent performance analysis and code profiling (a minimal timing sketch is given after this list);
Release: transfer of deduced parts of the code and of simulated results to publicly available libraries;
Adaptation: porting the code to up-to-date HPC, Grid and Cloud systems;
Data: managing the huge amounts of data produced by large, up-to-exascale simulations;
Intensive: data mining, visualization and statistics;
Computing: monitoring of the computing time used and bookkeeping for large-scale parameter studies.
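As a minimal illustration of the "Optimization" item, the sketch below shows the kind of lightweight per-rank timing that typically precedes a full trace-based analysis with tools such as VAMPIR: each MPI rank times its compute region, and the spread between the fastest and the slowest rank gives a first hint of load imbalance. This is a generic sketch, not code from any project supported by the SimLab; advance_cells() is a hypothetical placeholder for an application kernel.

/* Minimal per-rank timing: a generic first step of the "Optimization"
 * activity before a full VAMPIR trace is recorded.
 * advance_cells() is a placeholder for the application's kernel.      */
#include <mpi.h>
#include <stdio.h>

static void advance_cells(void) { /* application-specific work */ }

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double t0 = MPI_Wtime();
    advance_cells();                    /* region to be profiled        */
    double dt = MPI_Wtime() - t0;

    double tmin, tmax, tsum;            /* spread of timings over ranks */
    MPI_Reduce(&dt, &tmin, 1, MPI_DOUBLE, MPI_MIN, 0, MPI_COMM_WORLD);
    MPI_Reduce(&dt, &tmax, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    MPI_Reduce(&dt, &tsum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)                      /* tmax much larger than tmin signals imbalance */
        printf("compute time: min %.3f s, max %.3f s, avg %.3f s\n",
               tmin, tmax, tsum / size);

    MPI_Finalize();
    return 0;
}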

1. STEERING AND OPTIMALLY USING THE BRAINWARE RESOURCES OF SimLab

The persons involved in a SimLab have been actively engaged with user groups from their respective communities for many years through joint research and development activities, cooperations and workshops. To channel these activities and to use optimally the limited human and computing resources provided for SimLabs by the computing centres, a common governance metric and workflow for managing SimLab activities has been established. It must help to use the brainware optimally in productive long-term simulations and in joint research and development activities, and also to identify the interest of computing centres in generating new, explicit labs out of existing ones when projects based on a particular scientific community become big or important enough, with the potential to consume future supercomputing power at the exascale level, one of the main challenges of Horizon 2020, the research and innovation framework programme of the European Union.

SimLabs can be regarded as flexible expert pools from which "task force" teams can be established, governed and sunset, depending on the current needs of a particular research field facing the challenge of solving its problems while supercomputing facilities must be used optimally and are accounted for in the limited number of CPU hours available to the scientific community. Such teams would typically consist of experts both in supercomputing and in the particular scientific domain, e.g., visiting scientists working for a given period of time at the supercomputing centre jointly with computing experts. The formation of such teams can often be realized through third-party funding, which the SimLab aspires to obtain in joint applications with the scientific communities.

The workflow used to manage the brainware in SimLabs is presented in the diagram in Fig. 1. The element "Joint R&D Call" defines the process of receiving and arranging requests and applications from different scientific communities; here, in particular, the SimLab responsible for the topic of the scientific community is selected. The next stage is "Analyzing Performance and Size of Simulation", where the brainware (manpower) allocated in the computing centre is used to explore the potential scalability and to estimate the size of the problem, e.g., in CPU hours or in the number of floating-point operations (FLOP).

Fig. 1. Workflow for optimally channelling activities of an advanced support
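To make this sizing step concrete, the following sketch estimates the budget of a planned production run in core hours from an estimated FLOP count and a measured sustained per-core performance. All numbers in main() are illustrative placeholders and do not refer to any particular SimLab project.

/* Back-of-the-envelope sizing of a simulation, as done in the
 * "Analyzing Performance and Size of Simulation" stage.
 * All input numbers below are illustrative placeholders.              */
#include <stdio.h>

/* Core hours needed for a given amount of floating-point work. */
static double core_hours(double total_flop,        /* FLOP of the run   */
                         double sustained_flops,   /* FLOP/s per core   */
                         double parallel_eff)      /* 0 < eff <= 1      */
{
    double seconds = total_flop / (sustained_flops * parallel_eff);
    return seconds / 3600.0;
}

int main(void)
{
    double flop = 5.0e18;        /* e.g., 5 ExaFLOP of total work       */
    double per_core = 2.0e9;     /* 2 GFLOP/s sustained per core        */
    double eff = 0.7;            /* measured parallel efficiency        */
    double budget = core_hours(flop, per_core, eff);

    printf("estimated budget: %.2e core hours\n", budget);
    printf("on 4096 cores   : %.1f hours wall time\n", budget / 4096.0);
    return 0;
}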

Depending on the measured performance and on the need for further optimization of the scientific problem, the next stage, a long-term R&D cooperation, is started in order to port the code and to perform simulations on European HPC systems such as those of PRACE or in national supercomputing centres such as GCS. If the code does not scale, or is not even parallelized yet and therefore cannot be used on supercomputers, the development has to be organized in joint projects with the scientific groups so that present and, especially, future supercomputing facilities can be used. The last two steps of the workflow are the most manpower- and time-consuming ones and can therefore be organized only on the basis of third-party financial support for joint research activities of SimLab experts and groups of scientists from a particular domain, e.g., particle physics, astrophysics or nuclear physics in the case of SimLab E&A Particle.

Steering and governing a SimLab on the basis of such a workflow allows the brainware to act dynamically for many different scientific topics, building "task force" groups according to the needs of the scientists from a given domain. Moreover, the knowledge accumulated in a SimLab, not only in the thematic field but in computational science as well, allows other scientific communities to profit from it without having to spend resources on accumulating the same knowledge within their own scientific domain. The leader of a SimLab is often a scientist rooted in the scientific domain, able to "speak the language" of the scientists, because the author very often remains an essential "part" of the simulation code, whose documentation is rare and, from the point of view of computational science, suboptimal.

2. EXAMPLES OF COOPERATION IN THE SIMULATION LABORATORY FOR ELEMENTARY AND ASTROPARTICLES

In cooperation with many groups at KIT and beyond, SimLab E&A Particle takes part in joint research in the fields of cosmic rays, nuclear physics, heavy-ion collisions and astrophysics. The brainware of the SimLab acts not only as HPC tuning experts but in many different roles closing the gap between HPC and the topical software.

As representative examples of this support, we discuss below the work in progress on two scientific simulation codes: THiSMPI, the Two-and-a-Half-Dimensional Stanford code with Message Passing Interface for the simulation of fully electromagnetic and relativistic particle dynamics, and KB3D, a code for the solution of the Kadanoff-Baym equations. These two cases represent codes at different stages: one simply needs to be optimized and put into production runs on supercomputers, whereas the other needs long-term development before it can run on up-to-date HPC systems at all.

2.1. "THiSMPI": Simulation of the Acceleration of Particles in Supernova Shock Fronts. THiSMPI is a fully relativistic particle-in-cell code used for plasma physics computations. The code self-consistently solves the full set of Maxwell's equations, along with the relativistic equations of motion for the charged particles [4].

Fig. 2 (color online). Process summary using VAMPIR for THiSMPI on 64 parallel cores and 8 x 8 cells

Analyzing the performance of the code with the VAMPIR tool, we were able to identify the need to change the strategy for balancing the computation over the different cells. As one can see in Fig. 2, the first and the last 8 processes spend most of their time in a waiting, non-computing state (red colour), because the mesh of cells on which the plasma and particles are simulated is simply distributed evenly to the available computational units, without an a priori estimate of the computational load of each cell. In addition, optimal use of the data-transfer bandwidth has to be developed, with automated tracking and storage of bookkeeping information between cells, which promises to be a major advantage for the efficiency of future large-scale simulations.
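The change of balancing strategy suggested by this analysis, distributing cells according to an a priori estimate of the work in each cell rather than giving every process the same number of cells, can be sketched as a simple greedy partition of the cell list over the ranks. This is an illustrative sketch under the assumption that the per-cell cost is proportional to an estimated particle count; it is not code taken from THiSMPI.

/* Sketch of work-aware cell distribution: assign contiguous blocks of
 * cells to ranks so that each rank gets roughly the same estimated load,
 * instead of the same number of cells. Not taken from THiSMPI.        */
#include <stdio.h>

/* cost[i] is the estimated work of cell i (e.g., its particle count).
 * Cells first[r] .. first[r+1]-1 become the cells owned by rank r.    */
static void partition_cells(const double *cost, int ncells,
                            int nranks, int *first)
{
    double total = 0.0;
    for (int i = 0; i < ncells; i++) total += cost[i];

    double target = total / nranks;   /* ideal load per rank            */
    double acc = 0.0;
    int r = 0;
    first[0] = 0;
    for (int i = 0; i < ncells && r < nranks - 1; i++) {
        acc += cost[i];
        if (acc >= target * (r + 1))  /* close the block for rank r     */
            first[++r] = i + 1;
    }
    while (r < nranks - 1) first[++r] = ncells;  /* any remaining ranks */
    first[nranks] = ncells;
}

int main(void)
{
    /* toy example: 12 cells, the middle ones are much more expensive   */
    double cost[12] = {1, 1, 1, 8, 9, 10, 10, 9, 8, 1, 1, 1};
    int first[5];                     /* 4 ranks -> 5 block boundaries   */

    partition_cells(cost, 12, 4, first);
    for (int r = 0; r < 4; r++)
        printf("rank %d owns cells [%d, %d)\n", r, first[r], first[r + 1]);
    return 0;
}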

2.2. "KB3D": Code for the Solution of the Kadanoff-Baym Equations. By solving the Kadanoff-Baym equations in three dimensions, one can simulate the evolution of excited quantum kinetic systems [5], for instance a laser flash on a semiconductor, or heavy-ion collisions (HIC) in which spin-polarized protons accelerated up to 100 GeV collide with other protons or with gold or copper nuclei. In KB3D the process of equilibration (thermalization) is modelled using nonequilibrium Green functions. The current version is an old code from 1999 written in Fortran 77, in which the convolutions of Green functions are computed with the Fast Fourier Transformation routines of the commercial Mathkeisan libraries, which are available only on Cray, SGI and NEC supercomputers. The use of other, standard FFT libraries will be implemented here, and a potential speedup from performing the FFT calculations on GPUs is also planned.
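As a sketch of what such a replacement could look like, the fragment below computes a periodic convolution of two complex arrays with the freely available FFTW library; the same pattern maps onto GPU libraries such as cuFFT. It is an illustrative stand-in, not an excerpt from KB3D, and the array length N is a placeholder.

/* Periodic convolution c = a * b via FFTW, as a portable stand-in for
 * the Mathkeisan FFT calls used in KB3D. Illustrative sketch only.
 * Note: <complex.h> is included before <fftw3.h> so that fftw_complex
 * is the C99 complex type. Compile with: cc conv.c -lfftw3 -lm        */
#include <complex.h>
#include <fftw3.h>

#define N 1024   /* placeholder array length */

void convolve(double complex *a, double complex *b, double complex *c)
{
    fftw_complex *fa = fftw_alloc_complex(N);
    fftw_complex *fb = fftw_alloc_complex(N);

    /* forward transforms of both inputs; FFTW_ESTIMATE planning does
     * not overwrite the data already stored in a and b                */
    fftw_plan pa = fftw_plan_dft_1d(N, a, fa, FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_plan pb = fftw_plan_dft_1d(N, b, fb, FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(pa);
    fftw_execute(pb);

    /* pointwise product in frequency space, with 1/N normalization
     * because FFTW transforms are unnormalized                        */
    for (int k = 0; k < N; k++)
        fa[k] = fa[k] * fb[k] / N;

    /* inverse transform gives the periodic convolution                */
    fftw_plan pc = fftw_plan_dft_1d(N, fa, c, FFTW_BACKWARD, FFTW_ESTIMATE);
    fftw_execute(pc);

    fftw_destroy_plan(pa);
    fftw_destroy_plan(pb);
    fftw_destroy_plan(pc);
    fftw_free(fa);
    fftw_free(fb);
}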

CONCLUSIONS

The efficiency of brainware can be increased by allocating it to SimLabs, where dedicated S.P.O.R.A.D.I.C. actions can be performed, such as providing key performance indicators or benchmarks for different simulation codes that show their time efficiency or the potential number of productive simulations. Besides performing long-term joint research on a particular scientific code, it is crucial to reach, within a short time, highly effective usage of supercomputing facilities, especially at the large scales that will become available in the coming decades at supercomputing centres as well as distributed over many dedicated or remote computing units. With the brainware accumulated in SimLabs, the time needed for code optimization can be reduced, since the necessary knowledge is already available and does not have to be accumulated again.

Acknowledgements. This work was supported by the Helmholtz Association via the "Supercomputing" programme. The S.P.O.R.A.D.I.C. changes in the codes THiSMPI and KB3D, and the identification of bottlenecks and of non-optimal usage of resources during the simulations, were made possible by using the massively parallel Cray XE6 system Hermit at the High Performance Computing Center Stuttgart (HLRS), project ACID 12863. G. P. thanks the University of Wroclaw for its hospitality during several collaboration visits.

REFERENCES

1. Bischof C. et al. Brainware for Green HPC // Computer Science - Research and Development. 2012. V. 27, No. 4. P. 227-233.
2. Attig N. et al. Simulation Laboratories: An Innovative Community-Oriented Research and Support Structure // Proc. of the Cracow Grid Workshop (CGW'07), Cracow, Poland, Oct. 16-18, 2007.
3. Poghosyan G. et al. Simulation Laboratory Astro- and Elementary Particle Physics // SimLabs@KIT: Workshop on Computational Methods in Science and Engineering, Karlsruhe, Nov. 29-30, 2010.
4. TRIdimensional STANford Massively Parallel Code (TRISTAN-MP). Oct. 2013. Available at: http://tristan-mp.wikispaces.com/. Accessed 29 Oct. 2014.
5. Köhler H. S. et al. A Fortran Code for Solving the Kadanoff-Baym Equations for a Homogeneous Fermion System // Comput. Phys. Commun. 1999. V. 123, No. 1-3. P. 123-142.