IKC-MH.57 Introduction to High Performance and Parallel Computing
2022-2023 Spring
| TIME | MONDAY | TUESDAY | WEDNESDAY | THURSDAY | FRIDAY | Contents |
|---|---|---|---|---|---|---|
| 8:30 | | | | | | |
| 9:30 | | | | | | |
| 10:30 | | | | | | |
| 11:30 | | | | | | |
| 12:30 | | | | | | |
| 13:30 | | | | | | |
| 14:00 | | | | | | |
| 15:00 | | | | | | |
| 16:00 | | | | | IKC-MH.57 | |
| 17:00 | | | | | IKC-MH.57 | |
Instructor office: Faculty of Engineering and Architecture, Department of Engineering Sciences, H1-33
TA: not available
Almost all computer systems today are multi-core processor systems, and parallel programming must be used to exploit their full performance. Parallel programming also describes the process of dividing a larger problem into smaller steps whose instructions are passed to multiple processors so that the required calculations execute in parallel. The course presents a practical approach to parallel program design and development, and builds awareness of design and performance concepts in heterogeneous computer architectures. Important announcements will be posted to the Announcements section of this web page, so please check this page frequently. You are responsible for all such announcements, as well as announcements made in lecture.
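To make the idea concrete, here is a minimal sketch (my own illustration, not part of the course material) of dividing a larger problem, summing an array, across the cores of a shared-memory machine using OpenMP, which is covered later in the course:

```c
/* A minimal sketch of dividing a larger problem (summing an array) into
   smaller steps that run on multiple cores.  Compile with: gcc -fopenmp sum.c */
#include <stdio.h>
#include <omp.h>

int main(void) {
    enum { N = 1000000 };
    static double a[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)           /* fill with sample data */
        a[i] = 1.0 / N;

    /* Each thread sums a chunk of the array; the partial sums are
       combined by the reduction clause. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f (threads available: %d)\n", sum, omp_get_max_threads());
    return 0;
}
```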
IKC-MH.57 is intended to give students an introduction to parallel programming using C and/or similar programming languages. The course covers the basics of high performance and parallel computing. OpenMP (Open Multi-Processing) for multi-core shared-memory systems and MPI (Message Passing Interface) for message passing in distributed-memory systems will be taught for parallel programming. MPI is the industry-standard parallelization paradigm in high-performance computing and enables programs to be written that run on distributed-memory machines. OpenMP is a thread-based approach to parallelizing a program on a single shared-memory machine. An introduction to the basic concepts of hybrid and accelerated paradigms such as CUDA and OpenCL programming will also be given. The course consists of theoretical topics and hands-on practical exercises on parallel programming.
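As a first taste of the message-passing paradigm, here is a hedged "hello world" sketch in C, assuming a standard MPI installation such as MPICH or Open MPI (the file name and process count below are arbitrary examples):

```c
/* A minimal MPI sketch: each process reports its rank within MPI_COMM_WORLD.
   Compile with: mpicc hello.c    Run with: mpirun -np 4 ./a.out */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    printf("Hello from process %d of %d\n", rank, size);
    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}
```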
Upon completion of this course, students will be able to:
work in a scientific computing environment;
explain parallel and high performance computing concepts for systems with shared and distributed memory;
write parallel programs both for shared-memory systems using threading (OpenMP) and for distributed-memory systems using message passing (MPI);
use the MPI message-passing standard to control communication between processes, subroutines, or functions within a program;
explain the basics of hybrid and accelerated paradigms such as CUDA and OpenCL;
develop parallel programs to solve a given large numerical, engineering, or scientific problem.
Lecture material will be based on the textbooks below. Students are strongly advised to read the textbooks rather than rely only on the lecture material supplied by the lecturer.
| Required | Recommended |
|---|---|
| An Introduction to Parallel Programming by Peter Pacheco and Matthew Malensek, Morgan Kaufmann, 2nd edition, 2021 | Paralel Algoritmalar: Modeller ve Yöntemler (Yüksek Başarımlı Hesaplama) [Parallel Algorithms: Models and Methods (High Performance Computing)] by Abdulsamet Haşıloğlu, 2020 |
| | Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, 2nd edition, 2005 |
The following resources are available online. Please let me know how useful you find them. Check this place for updates.
Midterms & Final Exams: There will be one midterm exam and one final exam, counting 30% and 40% of your grade, respectively.
Homeworks/Assignments (or Term Project): 30%.
Attendance is not compulsory, but you are responsible for everything said in class.
You are allowed to work in groups of two students on the homework unless otherwise mentioned.
You can use ideas from the literature (with proper citation).
You can use anything from the textbook/notes.
Exams: one page of notes (double-sided) is allowed.
I encourage you to ask questions in class; you are supposed to ask questions. Don't guess, ask a question!
The following schedule is tentative; it may be updated later in the semester, so check back here frequently.
| Week | Topic | Lecture Notes | Quizzes | Homeworks |
|---|---|---|---|---|
| Grades | | | | |
| Lectures | | | | |
| 1 | First Meeting: Lecture Information. Installation of a Linux system and required tools/programs using VirtualBox in a Windows environment (kubuntu-22.04.3-desktop-amd64.iso & VBoxGuestAdditions_6.1.38.iso) | html | NA | |
| 2 | Introduction I: Four Decades of Computing. Parallel Computers. Computing Clusters. Flynn's Taxonomy of Computer Architecture | | NA | |
| 3 | Introduction II: SIMD Architecture. MIMD Architecture. Shared Memory Organization. Message Passing Organization | | NA | |
| 4 | Performance Analysis: Computational Models. Skeptic Postulates for Parallel Architectures. Amdahl's Law (see the sketch after the schedule) | | NA | |
| 5 | Programming Using the Message-Passing Paradigm I: Principles of Message-Passing Programming. Structure of Message-Passing Programs. The Building Blocks: Send and Receive Operations | | NA | |
| 6 | Programming Using the Message-Passing Paradigm II: MPI, the Message Passing Interface. Starting and Terminating the MPI Library. Communicators and Communication in MPI. Getting Information. Sending and Receiving Messages. Avoiding Deadlocks. Sending and Receiving Messages Simultaneously | | NA | |
| 8 | Programming Using the Message-Passing Paradigm III: Parallelization Application Example - Pi Computation (see the sketch after the schedule) | | NA | |
| 9 | Programming Using the Message-Passing Paradigm IV: MPI, the Message Passing Interface. Overlapping Communication with Computation. Collective Communication and Computation Operations: Broadcast, Reduction, Gather, Scatter, All-to-All | | NA | |
| 10 | Programming Using the Shared Memory Paradigm I: What Is a Thread? Threads Model. Why Threads? Thread Basics: Creation and Termination, Passing Arguments, Joining | | NA | |
| 11 | Programming Using the Shared Memory Paradigm II: Getting Started with OpenMP, a Standard for Directive-Based Parallel Programming. Worksharing Directives | | NA | |
| 12 | Beyond OpenMP and MPI: GPU Parallelization. GPUs: Introduction and Architecture. Execution and Programming Models. Introduction to CUDA | | NA | |
| 13 | | html | NA | |
| 14 | | html | NA | |
| Exams | | | | |
| 7 | Take-home Midterm Examination | html | | |
| 15 | Take-home Final Examination | html | | |
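For week 4, Amdahl's Law states that if a fraction p of a program can be parallelized, the speedup on N processors is bounded by S(N) = 1 / ((1 - p) + p/N). The following small C program (an illustrative sketch; the fraction p = 0.9 is an assumed example, not a course value) tabulates this bound:

```c
/* A small sketch illustrating Amdahl's Law: speedup S(N) = 1/((1-p) + p/N)
   for a program whose fraction p is parallelizable. */
#include <stdio.h>

int main(void) {
    const double p = 0.9;  /* example parallel fraction, assumed here */
    for (int n = 1; n <= 1024; n *= 2)
        printf("N = %4d  speedup = %6.2f\n", n, 1.0 / ((1.0 - p) + p / n));
    return 0;
}
```

Even with 90% of the work parallelized, the speedup saturates near 10, because the serial fraction dominates as N grows.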
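For week 8's Pi computation example, a common approach, sketched below under my own assumptions rather than as the course's actual assignment, is midpoint-rule integration of 4/(1+x^2) over [0,1], with each rank handling every size-th interval and the partial sums combined by MPI_Reduce:

```c
/* A sketch of a parallel Pi computation: midpoint-rule integration of
   4/(1+x^2) on [0,1], with the work split by rank and the partial sums
   combined with MPI_Reduce.  The interval count n is an example value. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;
    const long n = 1000000;      /* number of intervals (example value) */
    double local = 0.0, pi = 0.0, h = 1.0 / n;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process handles every size-th interval, starting at its rank. */
    for (long i = rank; i < n; i += size) {
        double x = (i + 0.5) * h;           /* interval midpoint */
        local += 4.0 / (1.0 + x * x);
    }
    local *= h;

    /* Combine the partial sums on rank 0. */
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("pi approx = %.12f\n", pi);

    MPI_Finalize();
    return 0;
}
```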