IKC-MH.57
Introduction to High Performance and Parallel Computing
Lecture Notes
Cem Özdoğan
Date: 19 December 2023
Contents
List of Tables
List of Figures
Preliminaries
First Meeting
Lecture Information
Course Overview
Text Book
Online Resources
Grading Criteria
Policies
Installation of Required Tools/Programs
Linux System
Others
Introduction
View of the Field
Four Decades of Computing
Flynn's Taxonomy of Computer Architecture
Parallel and Distributed Computers
MPI Hands-On; Performance Analysis
Analysis of Parallel Summation with Point-to-Point Communications
SIMD Architecture
MIMD Architecture
Shared Memory Organization
Message Passing Organization
MPI Hands-On; Introduction to MPI
Parallel Computing
Communicating with other processes
What is MPI?
MPI Implementations
Is MPI Large or Small?
Where to use MPI?
How to Use MPI? Essentials
Getting started
Writing MPI programs I
Writing MPI programs II
Writing MPI programs III
Exercise - Getting Started
Performance Metrics, Postulates
Performance Analysis
Computational Models
Equal Duration Model
Parallel Computation with Serial Sections Model
Skeptic Postulates For Parallel Architectures
Amdahl's Law
MPI Hands-On; Sending and Receiving Messages I
Current Message-Passing
The Buffer
MPI Basic Send/Receive
Exercises/Examples
Message-Passing Paradigm
Programming Using the Message-Passing Paradigm
Principles of Message-Passing Programming
Structure of Message-Passing Programs
The Building Blocks: Send and Receive Operations
Blocking Message Passing Operations
Non-Blocking Message Passing Operations
MPI Hands-On; Sending and Receiving Messages II
MPI: the Message Passing Interface
Starting and Terminating the MPI Library
Communicators
Getting Information
Sending and Receiving Messages
Avoiding Deadlocks
Sending and Receiving Messages Simultaneously
MPI Hands-On; Sending and Receiving Messages III
Parallelization Application Example
Pi Computation
Overlapping Communication with Computation
Non-Blocking Communication Operations
Collective Communication and Computation Operations
Broadcast
Reduction
Gather
Scatter
All-to-All
MPI Hands-On; Collective Communications I
Shared Memory Paradigm
Programming Shared Memory
What is a Thread?
Threads Model
Why Threads?
Thread Basics: Creation and Termination
Thread Creation
Thread Termination
Hands-on; Shared Memory I; Threads
OpenMP: a Standard for Directive Based Parallel Programming
The OpenMP Programming Model
The OpenMP Design Concepts
Hands-on; Shared Memory II; OpenMP
Parallelization Application Example - OpenMP
GPU Computing
GPU parallelization
Exploring the GPU Architecture
Execution and Programming Models
Hands-on; GPU parallelization
References