Abstract
The design of modern CPUs, with their deep memory hierarchies and SIMD/vectorization capabilities, has a more significant impact on algorithm efficiency than the modest frequency increases observed in recent years. The recent introduction of wide vector instruction set extensions (AVX and SVE) has made vectorization a critical software component for increasing efficiency and closing the gap to peak performance.
In this paper, we investigate the impact of vectorization on MPI reduction operations. We propose an implementation of the predefined MPI reduction operations using vector intrinsics (AVX and SVE) to improve their time-to-solution. The evaluation of the resulting software stack under different scenarios demonstrates that the approach is not only efficient but also generalizable to many vector architectures. Experiments conducted on varied architectures (Intel Xeon Gold, AMD Zen 2, and Arm A64FX) show that the proposed vector-extension-optimized reduction operations significantly reduce the completion time of collective reductions. With these optimizations, we achieve higher memory bandwidth and increased efficiency for local computations, which directly benefits the overall cost of collective reductions and the applications that rely on them.
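To make the idea concrete, the following is a minimal sketch, under our own assumptions rather than the paper's actual Open MPI code, of the kind of local element-wise computation a vectorized MPI_SUM reduction performs on float buffers, expressed with AVX-512 intrinsics; the function name and loop structure are purely illustrative.

```c
/* Illustrative sketch (not the authors' implementation): element-wise
 * float summation with AVX-512 intrinsics, i.e. the local compute step
 * behind an MPI_SUM reduction.  Assumes a CPU with AVX-512F support. */
#include <immintrin.h>
#include <stddef.h>

void sum_float_avx512(const float *in, float *inout, size_t count)
{
    size_t i = 0;
    /* Process 16 floats (512 bits) per iteration. */
    for (; i + 16 <= count; i += 16) {
        __m512 a = _mm512_loadu_ps(in + i);
        __m512 b = _mm512_loadu_ps(inout + i);
        _mm512_storeu_ps(inout + i, _mm512_add_ps(a, b));
    }
    /* Scalar tail for the remaining elements. */
    for (; i < count; ++i)
        inout[i] += in[i];
}
```

The wide loads and stores are what raise the achieved memory bandwidth of the local computation compared to a scalar loop, which is the effect the abstract attributes the reduced collective completion time to.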
Highlights
• Design and investigation of a vector-based reduction operation for MPI reductions.
• Implementation using Intel AVX and Arm SVE intrinsics to demonstrate the efficiency of our vectorized reduction operation (a vector-length-agnostic SVE sketch follows this list).
• Experiments with MPI benchmarks, a performance tool, and HPC and deep learning applications.
• Experiments on different architectures (x86 and AArch64) and processors, including Intel Xeon Gold, AMD Zen 2, and Arm A64FX.
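As a counterpart to the AVX-512 sketch above, the snippet below is a hedged illustration of how the same element-wise summation could be written with Arm SVE ACLE intrinsics in a vector-length-agnostic way, so one binary runs unmodified across different SVE vector widths (such as the 512-bit vectors of the A64FX); names and structure are assumptions for illustration, not the authors' code.

```c
/* Illustrative sketch: vector-length-agnostic float summation with SVE
 * intrinsics.  Assumes an Armv8-A CPU with SVE and a compiler providing
 * the ACLE SVE intrinsics in <arm_sve.h>. */
#include <arm_sve.h>
#include <stddef.h>
#include <stdint.h>

void sum_float_sve(const float *in, float *inout, size_t count)
{
    /* svcntw() returns the number of 32-bit lanes in one SVE vector. */
    for (size_t i = 0; i < count; i += svcntw()) {
        /* Predicate enables only the lanes that are still in range,
         * so no scalar tail loop is needed. */
        svbool_t pg = svwhilelt_b32_u64((uint64_t)i, (uint64_t)count);
        svfloat32_t a = svld1_f32(pg, in + i);
        svfloat32_t b = svld1_f32(pg, inout + i);
        svst1_f32(pg, inout + i, svadd_f32_m(pg, a, b));
    }
}
```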