Please use this identifier to cite or link to this item: http://elartu.tntu.edu.ua/handle/lib/51450
Full metadata record
DC Field | Value
dc.contributor.advisor: Золотий, Роман Захарійович
dc.contributor.advisor: Zolotyi, Roman
dc.contributor.author: Sadat, Rauf
dc.date.accessioned: 2026-01-28T21:55:00Z
dc.date.available: 2026-01-28T21:55:00Z
dc.date.issued: 2026-01-26
dc.date.submitted: 2026-01-12
dc.identifier.citation: Sadat R. Comparative Study of MPI vs. OpenMP in High-Performance Computing : Bachelor’s qualification thesis in specialty 122 Computer Science / supervisor R. Zolotyi. — Ternopil : Ternopil Ivan Puluj National Technical University, 2026. — 62 p.
dc.identifier.uri: http://elartu.tntu.edu.ua/handle/lib/51450
dc.description: The work was carried out at the Department of Computer Science of Ternopil Ivan Puluj National Technical University. The defence will take place on 26.01.2026 at a meeting of examination committee No. 32 at Ternopil Ivan Puluj National Technical University.
dc.description.abstract: The qualification work is devoted to a comparative analysis of two leading parallel programming models: MPI (Message Passing Interface) and OpenMP (Open Multi-Processing). The first chapter examines the architectural features of shared and distributed memory systems, as well as the theoretical foundations of high-performance computing (HPC). The second chapter is dedicated to a detailed study of the syntax, synchronization mechanisms, and message passing in each model. The third chapter presents a series of computational experiments using linear algebra problems as an example. Speedup and scalability of algorithms are evaluated, and the communication overhead between nodes is analyzed. The research results allow for determining the optimal conditions for using each technology depending on the architecture of the computing cluster. Additionally, aspects of the economic efficiency of HPC resource usage and safety rules for working with equipment are considered.
dc.description.tableofcontents:
INTRODUCTION
1 FUNDAMENTALS OF HIGH-PERFORMANCE COMPUTING
1.1 Classification of parallel computing architectures
1.2 Shared vs. distributed memory paradigms
1.3 Performance metrics in parallel systems
2 PROGRAMMING MODELS FOR PARALLELISM
2.1 OpenMP: Principles of multi-threaded execution
2.2 MPI: Communication and process coordination
2.3 Hybrid programming approaches (MPI + OpenMP)
3 PERFORMANCE ANALYSIS AND EXPERIMENTAL RESULTS
3.1 Implementation of test algorithms
3.2 Benchmarking speedup and efficiency
3.3 Comparative summary of MPI and OpenMP performance
4 ECONOMIC JUSTIFICATION OF THE PROPOSED SOLUTIONS
5 OCCUPATIONAL HEALTH AND SAFETY IN EMERGENCY SITUATIONS
CONCLUSIONS
REFERENCES
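Sections 1.3 and 3.2 of the outline concern speedup and efficiency. For reference (these are the standard definitions, not quoted from the thesis), with $T_1$ the serial runtime, $T_p$ the runtime on $p$ processors, and $f$ the serial fraction of the program:

```latex
S(p) = \frac{T_1}{T_p}, \qquad
E(p) = \frac{S(p)}{p}, \qquad
S(p) \le \frac{1}{f + (1 - f)/p} \quad \text{(Amdahl's law)}
```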
dc.format.extent: 62
dc.publisher: Ternopil Ivan Puluj National Technical University (TNTU), Faculty of Computer Information Systems and Software Engineering, Ternopil, Ukraine
dc.subject: 122
dc.subject: computer science
dc.subject: bachelor's thesis
dc.subject: multithreading
dc.subject: high-performance computing
dc.subject: parallel programming
dc.subject: comparative analysis
dc.subject: distributed systems
dc.subject: hpc
dc.subject: mpi
dc.subject: multi-core
dc.subject: openmp
dc.subject: parallel algorithms
dc.subject: scalability
dc.title: Comparative Study of MPI vs. OpenMP in High-Performance Computing
dc.type: Bachelor Thesis
dc.rights.holder: © Sadat Rauf, 2026
dc.contributor.committeeMember: Голотенко, Олександр Сергійович
dc.coverage.placename: Ternopil
dc.subject.udc: 004.272.2:004.41
dc.relation.references: 1. MPI Forum. MPI: A Message-Passing Interface Standard, Version 4.0. June 2021. URL: https://www.mpi-forum.org/docs/ (date of access: 25.01.2026).
dc.relation.references: 2. OpenMP Architecture Review Board. OpenMP Application Programming Interface, Version 5.2. November 2021. URL: https://www.openmp.org/specifications/ (date of access: 25.01.2026).
dc.relation.references: 3. Pacheco P. An Introduction to Parallel Programming. Morgan Kaufmann Publishers, 2011. 464 p.
dc.relation.references: 4. Gropp W., Lusk E., Skjellum A. Using MPI: Portable Parallel Programming with the Message-Passing Interface. 3rd ed. MIT Press, 2014. 448 p.
dc.relation.references: 5. Chapman B., Jost G., van der Pas R. Using OpenMP: Portable Shared Memory Parallel Programming. MIT Press, 2007. 392 p.
dc.relation.references: 6. Dongarra J., Foster I., Fox G. et al. Sourcebook of Parallel Computing. Morgan Kaufmann Publishers, 2003. 840 p.
dc.relation.references: 7. Grama A., Gupta A., Karypis G., Kumar V. Introduction to Parallel Computing. 2nd ed. Addison-Wesley, 2003. 656 p.
dc.relation.references: 8. Hoefler T., Belli R. Scientific Benchmarking of Parallel Computing Systems: Twelve Ways to Tell the Masses when Reporting Performance Results. Proceedings of SC15. ACM, 2015. DOI: 10.1145/2807591.2807644.
dc.relation.references: 9. Shalf J., Dosanjh S., Morrison J. Exascale Computing Technology Challenges. Proceedings of HPCC 2010. 2010. P. 1–25.
dc.relation.references: 10. Rabenseifner R., Hager G., Jost G. Hybrid MPI/OpenMP Parallel Programming on Clusters of Multi-Core SMP Nodes. Proceedings of PDP 2009. 2009. P. 427–436.
dc.relation.references: 11. Hager G., Wellein G. Introduction to High Performance Computing for Scientists and Engineers. CRC Press, 2010. 356 p.
dc.relation.references: 12. Williams S., Waterman A., Patterson D. Roofline: An Insightful Visual Performance Model for Multicore Architectures. Communications of the ACM. 2009. Vol. 52, No. 4. P. 65–76.
dc.relation.references: 13. Balaji P., Buntinas D., Goodell D. et al. MPI on Millions of Cores. Parallel Processing Letters. 2011. Vol. 21, No. 1. P. 45–60.
dc.relation.references: 14. Smith L., Bull M. Development of Mixed Mode MPI/OpenMP Applications. Scientific Programming. 2001. Vol. 9, No. 2-3. P. 83–98.
dc.relation.references: 15. Plimpton S. Fast Parallel Algorithms for Short-Range Molecular Dynamics. Journal of Computational Physics. 1995. Vol. 117. P. 1–19.
dc.relation.references: 16. Barker K., Benner A., Hoisie A. et al. On the Feasibility of Optical Circuit Switching for High Performance Computing Systems. Proceedings of SC05. ACM, 2005.
dc.relation.references: 17. Cappello F., Geist A., Gropp W. et al. Toward Exascale Resilience: 2014 Update. Supercomputing Frontiers and Innovations. 2014. Vol. 1, No. 1.
dc.relation.references: 18. Shan H., Oliker L. Comparison of Three Programming Models for Adaptive Applications on the Cray XT4. Proceedings of PDP 2009. 2009. P. 279–286.
dc.relation.references: 19. Bailey D., Barszcz E., Barton J. et al. The NAS Parallel Benchmarks. International Journal of Supercomputer Applications. 1991. Vol. 5, No. 3. P. 63–73.
dc.relation.references: 20. Adams M., Brown J., Shalf J. et al. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems. Technical Report LBNL-6630E. Lawrence Berkeley National Laboratory, 2014.
dc.contributor.affiliation: Ternopil Ivan Puluj National Technical University, Faculty of Computer Information Systems and Software Engineering, Department of Computer Science, Ternopil, Ukraine
dc.coverage.country: UA
Appears in collections: 122 — Computer Science (bachelor's)

Files in this item:
File | Description | Size | Format
KRB_2026_ISN-43_Sadat_R.pdf | Diploma thesis | 1.89 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
