Crivellini, A.; Franciolini, M.; Colombo, A.; Bassi, F.: OpenMP Parallelization Strategies for a Discontinuous Galerkin Solver. International Journal of Parallel Programming, ISSN 0885-7458, 47(5-6), pp. 838-873 (2019). DOI: 10.1007/s10766-018-0589-3

OpenMP Parallelization Strategies for a Discontinuous Galerkin Solver

Crivellini A.; Franciolini M.; Colombo A.; Bassi F.
2019-01-01

Abstract

This paper reports on the open multi-processing (OpenMP) parallel implementation of a fully unstructured high-order discontinuous Galerkin (DG) solver for computational fluid dynamics and computational aeroacoustics applications. Even though the OpenMP paradigm is confined to shared-memory systems, it has some advantages over the message passing interface (MPI) library, and getting the best out of this approach can improve the parallel efficiency of codes running on clusters of multi-core nodes. While with MPI the use of a domain decomposition algorithm is almost unavoidable, the OpenMP shared-memory context offers several options. Three strategies, here optimised for a DG solver, are presented and compared: the first is a customization of a colouring approach, the second mimics an MPI implementation in the OpenMP context, and the third lies halfway between the previous two. Numerical tests performed on both inviscid and viscous test cases indicate that, thanks to the compactness of the DG discretization, all the code versions perform quite satisfactorily. In particular, the domain decomposition algorithm reaches the highest parallel efficiency at low computational loads, while the colouring approach excels at larger computational loads and can be easily implemented within an existing MPI code. Moreover, colouring is well suited to hardware accelerators, an opportunity offered by the OpenMP 4.0 standard. Finally, the performance gain obtained with a hybrid MPI/OpenMP version of the DG code on high-performance computing facilities is demonstrated.
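As a rough illustration of the first strategy mentioned in the abstract, the C sketch below shows how a coloured face loop can be parallelized with OpenMP: faces are grouped into colours so that no two faces of the same colour touch the same element, which removes the write conflict on the element residuals without atomics. This is a minimal sketch under assumed data structures (n_colours, colour_start, face_elem, face_flux, res, and a scalar flux per face are all illustrative), not the authors' implementation.

/* Face-colouring sketch for an OpenMP face loop of a DG-like solver.
 * Assumption: faces are pre-sorted by colour, and within one colour no
 * element appears twice, so the residual updates are race-free. */
#include <omp.h>

void face_residual_coloured(int n_colours,
                            const int *colour_start,   /* size n_colours+1: face ranges per colour */
                            const int (*face_elem)[2], /* left/right element of each face          */
                            const double *face_flux,   /* precomputed flux per face (toy: scalar)  */
                            double *res)               /* element residuals being accumulated      */
{
    for (int c = 0; c < n_colours; ++c) {
        /* Parallelize inside one colour: no two iterations write the same element. */
        #pragma omp parallel for schedule(static)
        for (int f = colour_start[c]; f < colour_start[c + 1]; ++f) {
            int eL = face_elem[f][0];
            int eR = face_elem[f][1];
            res[eL] += face_flux[f];      /* flux contribution to the left element  */
            if (eR >= 0)                  /* eR < 0 marks a boundary face           */
                res[eR] -= face_flux[f];  /* opposite contribution to the right one */
        }
        /* The implicit barrier of the parallel for separates consecutive colours. */
    }
}

Without colouring, the same loop would need atomic updates on res (or per-thread copies to be reduced later); the trade-off of colouring is the synchronization barrier between colours, which is one reason its relative merit depends on the computational load, as discussed in the abstract.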
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11566/273328
Warning: the data shown have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 3
  • Web of Science: 3