Use this identifier to cite or link to this item:
http://elartu.tntu.edu.ua/handle/lib/51450
| Title: | Comparative Study of MPI vs. OpenMP in High-Performance Computing |
| Author: | Sadat, Rauf |
| Affiliation: | Ternopil Ivan Puluj National Technical University, Faculty of Computer Information Systems and Software Engineering, Department of Computer Science, Ternopil, Ukraine |
| Bibliographic description (Ukraine): | Sadat R. Comparative Study of MPI vs. OpenMP in High-Performance Computing : Bachelor’s qualification thesis in specialty 122 Computer Science / supervisor R. Zolotyi. — Ternopil : Ternopil Ivan Puluj National Technical University, 2026. — 62 p. |
| Bibliographic reference (2015): | Sadat R. Comparative Study of MPI vs. OpenMP in High-Performance Computing: Bachelor’s qualification thesis in specialty 122 Computer Science / supervisor R. Zolotyi. Ternopil: Ternopil Ivan Puluj National Technical University, 2026. 62 p. |
| Date of publication: | 26-Jan-2026 |
| Submitted date: | 12-Jan-2026 |
| Date of entry: | 28-Jan-2026 |
| Publisher: | Ternopil Ivan Puluj National Technical University, FIS, Ternopil, Ukraine |
| Country (code): | UA |
| Place of the edition/event: | Ternopil |
| Supervisor: | Zolotyi, Roman Zakhariyovych |
| Committee members: | Holotenko, Oleksandr Serhiyovych |
| UDC: | 004.272.2:004.41 |
| Keywords: | 122 computer science; bachelor's thesis; multithreading; high-performance computing; parallel programming; comparative analysis; distributed systems; hpc; mpi; multi-core; openmp; parallel algorithms; scalability |
| Page range: | 62 |
| Abstract: | The qualification work is devoted to a comparative analysis of two leading parallel programming models: MPI (Message Passing Interface) and OpenMP (Open Multi-Processing). The first chapter examines the architectural features of shared- and distributed-memory systems, as well as the theoretical foundations of high-performance computing (HPC). The second chapter is dedicated to a detailed study of the syntax, synchronization mechanisms, and message passing in each model. The third chapter presents a series of computational experiments using linear algebra problems as an example: speedup and scalability of the algorithms are evaluated, and the communication overhead between nodes is analyzed. The results make it possible to determine the optimal conditions for using each technology depending on the architecture of the computing cluster. Additionally, aspects of the economic efficiency of HPC resource usage and safety rules for working with equipment are considered. |
| Description: | The work was completed at the Department of Computer Science of Ternopil Ivan Puluj National Technical University. The defense will take place on 26.01.2026 at a meeting of examination committee No. 32 at Ternopil Ivan Puluj National Technical University. |
| Content: | INTRODUCTION
1 FUNDAMENTALS OF HIGH-PERFORMANCE COMPUTING
1.1 Classification of parallel computing architectures
1.2 Shared vs. distributed memory paradigms
1.3 Performance metrics in parallel systems
2 PROGRAMMING MODELS FOR PARALLELISM
2.1 OpenMP: principles of multi-threaded execution
2.2 MPI: communication and process coordination
2.3 Hybrid programming approaches (MPI + OpenMP)
3 PERFORMANCE ANALYSIS AND EXPERIMENTAL RESULTS
3.1 Implementation of test algorithms
3.2 Benchmarking speedup and efficiency
3.3 Comparative summary of MPI and OpenMP performance
4 ECONOMIC JUSTIFICATION OF THE PROPOSED SOLUTIONS
5 OCCUPATIONAL HEALTH AND SAFETY IN EMERGENCY SITUATIONS
CONCLUSIONS
REFERENCES |
| URI: | http://elartu.tntu.edu.ua/handle/lib/51450 |
| Copyright owner: | © Sadat Rauf, 2026 |
| References (Ukraine): |
1. MPI Forum. MPI: A Message-Passing Interface Standard, Version 4.0. June 2021. URL: https://www.mpi-forum.org/docs/ (date of access: 25.01.2026).
2. OpenMP Architecture Review Board. OpenMP Application Programming Interface, Version 5.2. November 2021. URL: https://www.openmp.org/specifications/ (date of access: 25.01.2026).
3. Pacheco P. An Introduction to Parallel Programming. Morgan Kaufmann Publishers, 2011. 464 p.
4. Gropp W., Lusk E., Skjellum A. Using MPI: Portable Parallel Programming with the Message-Passing Interface. 3rd ed. MIT Press, 2014. 448 p.
5. Chapman B., Jost G., van der Pas R. Using OpenMP: Portable Shared Memory Parallel Programming. MIT Press, 2007. 392 p.
6. Dongarra J., Foster I., Fox G. et al. Sourcebook of Parallel Computing. Morgan Kaufmann Publishers, 2003. 840 p.
7. Grama A., Gupta A., Karypis G., Kumar V. Introduction to Parallel Computing. 2nd ed. Addison-Wesley, 2003. 656 p.
8. Hoefler T., Belli R. Scientific Benchmarking of Parallel Computing Systems: Twelve Ways to Tell the Masses when Reporting Performance Results. Proceedings of SC15. ACM, 2015. DOI: 10.1145/2807591.2807644.
9. Shalf J., Dosanjh S., Morrison J. Exascale Computing Technology Challenges. Proceedings of HPCC 2010. 2010. P. 1–25.
10. Rabenseifner R., Hager G., Jost G. Hybrid MPI/OpenMP Parallel Programming on Clusters of Multi-Core SMP Nodes. Proceedings of PDP 2009. 2009. P. 427–436.
11. Hager G., Wellein G. Introduction to High Performance Computing for Scientists and Engineers. CRC Press, 2010. 356 p.
12. Williams S., Waterman A., Patterson D. Roofline: An Insightful Visual Performance Model for Multicore Architectures. Communications of the ACM. 2009. Vol. 52, No. 4. P. 65–76.
13. Balaji P., Buntinas D., Goodell D. et al. MPI on Millions of Cores. Parallel Processing Letters. 2011. Vol. 21, No. 1. P. 45–60.
14. Smith L., Bull M. Development of Mixed Mode MPI/OpenMP Applications. Scientific Programming. 2001. Vol. 9, No. 2-3. P. 83–98.
15. Plimpton S. Fast Parallel Algorithms for Short-Range Molecular Dynamics. Journal of Computational Physics. 1995. Vol. 117. P. 1–19.
16. Barker K., Benner A., Hoisie A. et al. On the Feasibility of Optical Circuit Switching for High Performance Computing Systems. Proceedings of SC05. ACM, 2005.
17. Cappello F., Geist A., Gropp W. et al. Toward Exascale Resilience: 2014 Update. Supercomputing Frontiers and Innovations. 2014. Vol. 1, No. 1.
18. Shan H., Oliker L. Comparison of Three Programming Models for Adaptive Applications on the Cray XT4. Proceedings of PDP 2009. 2009. P. 279–286.
19. Bailey D., Barszcz E., Barton J. et al. The NAS Parallel Benchmarks. International Journal of Supercomputer Applications. 1991. Vol. 5, No. 3. P. 63–73.
20. Adams M., Brown J., Shalf J. et al. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems. Technical Report LBNL-6630E. Lawrence Berkeley National Laboratory, 2014. |
| Content type: | Bachelor Thesis |
| Appears in collections: | 122 — Computer Science (bachelor's) |
Files in this item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| KRB_2026_ISN-43_Sadat_R.pdf | Diploma thesis | 1.89 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.