Parallel and Distributed Computing
- Some problems are tremendously large, and their solutions may be needed almost instantaneously (in real time).
- Computer systems (mostly) run their instructions in sequence
- Supercomputers: billions of operations per second, yet still not fast enough for such problems.
- First technique, pipelining: the CPU begins executing a second instruction before the previous instruction is completed. Pipelining permits a speedup by a factor of two or more (see the timing sketch after this list).
- Second technique, vector processing: build vector operations into the CPU. Solving sets of equations involves many multiplications of one vector by another. This improves performance by a factor of 5 or 10, not by a factor of 10,000, even though the cost increases considerably (a vector-style loop is sketched after this list).
- Third technique (current trend), parallel processing: put several machines to work on a single problem, dividing the solution process into many steps that can be performed simultaneously (see the shared-memory sketch after this list).
- Massively parallel computers employ a massive number of low-cost processors (1000 Pentium Pros reaching 1.3 teraflops).
- Beowulf class: PCs joined into a cluster to work together, creating a modestly priced supercomputer (a message-passing sketch follows this list).
- Fourth technique, distributed computing: connect many different computers that can work separately on their own tasks as well as in conjunction with each other. Operation is asynchronous; interrupts constantly occur throughout the system to coordinate the actions (an asynchronous message-passing sketch follows this list).
- Data can flow from one computer to another.
- The operating system, the software, and the interconnection of the computers are major challenges.
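The "factor of two or more" claimed for pipelining can be made concrete with the standard ideal timing model: with a pipeline of k stages, n instructions need roughly k + (n - 1) cycles instead of n * k cycles. The C sketch below computes that ratio; the stage count and instruction count are illustrative values, not figures from these notes.

```c
#include <stdio.h>

/* Ideal pipeline timing model:
 *   sequential execution of n instructions, each needing k cycles: n * k cycles
 *   pipelined execution: k cycles to fill the pipeline, then one
 *   instruction completes per cycle: k + (n - 1) cycles                       */
int main(void)
{
    long n = 1000000;   /* number of instructions (illustrative value) */
    int  k = 5;         /* pipeline depth (illustrative value)         */

    double t_seq  = (double)n * k;
    double t_pipe = (double)k + (n - 1);

    printf("ideal speedup = %.2f (approaches k = %d for large n)\n",
           t_seq / t_pipe, k);
    return 0;
}
```

Hazards and stalls keep real machines below this ideal bound, which is why the speedup is quoted as a factor of two or more rather than the full pipeline depth.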
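The vector-by-vector multiplication mentioned for the second technique boils down to a loop like the one below. On a vector (or SIMD-equipped) CPU the loop can be issued as a few wide vector instructions rather than n scalar multiplies; the function name and the tiny test arrays are illustrative only.

```c
#include <stdio.h>
#include <stddef.h>

/* Element-wise product c[i] = a[i] * b[i].
 * A scalar CPU performs n separate multiplies; a vector processor
 * (or a compiler auto-vectorizing for SIMD units) can process many
 * elements per vector instruction.                                 */
static void vector_multiply(const double *a, const double *b,
                            double *c, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        c[i] = a[i] * b[i];
}

int main(void)
{
    double a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, c[4];
    vector_multiply(a, b, c, 4);
    for (int i = 0; i < 4; ++i)
        printf("%g ", c[i]);
    printf("\n");
    return 0;
}
```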
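For the third technique, one minimal sketch of dividing a single computation into simultaneous steps is a shared-memory parallel loop. The example uses OpenMP, which is my choice of illustration and is not named in the notes; it splits a summation across the available processor cores.

```c
#include <stdio.h>
#include <omp.h>

int main(void)
{
    const long n = 100000000L;
    double sum = 0.0;

    /* The iterations are divided among the threads; the reduction
     * clause combines each thread's partial sum at the end.        */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 1; i <= n; ++i)
        sum += 1.0 / (double)i;

    printf("harmonic sum = %f, using up to %d threads\n",
           sum, omp_get_max_threads());
    return 0;
}
```

Compile with an OpenMP-capable compiler, e.g. gcc -fopenmp.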
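Massively parallel machines and Beowulf clusters are typically programmed with message passing: every processor runs its own copy of the program and the copies exchange results over the interconnect. The sketch below assumes MPI (the notes do not name a specific library); each process computes a partial sum and process 0 combines them.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes */

    const long n = 100000000L;
    double partial = 0.0, total = 0.0;

    /* Each process sums its own slice of 1..n. */
    for (long i = rank + 1; i <= n; i += size)
        partial += 1.0 / (double)i;

    /* Combine the partial sums on process 0. */
    MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("harmonic sum = %f (computed by %d processes)\n", total, size);

    MPI_Finalize();
    return 0;
}
```

Built with mpicc and launched with, e.g., mpirun -np 8, the same program runs unchanged whether the processes share one machine or are spread across the PCs of a cluster.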
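The fourth technique emphasizes asynchronous operation: each computer keeps working on its own task and coordination happens in the background. A minimal sketch of that idea, again assuming MPI as the transport (my choice, not stated in the notes), posts a non-blocking send/receive, continues with local work, and only waits when the data is actually needed.

```c
#include <stdio.h>
#include <mpi.h>

/* Intended for at least 2 processes: rank 0 sends to rank 1.
 * Both overlap local computation with the data transfer.     */
int main(int argc, char **argv)
{
    int rank, participating;
    double payload[1000], local = 0.0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    participating = (rank < 2);

    for (int i = 0; i < 1000; ++i)
        payload[i] = rank;               /* some data to ship around */

    /* Post the transfer and return immediately (asynchronous). */
    if (rank == 0)
        MPI_Isend(payload, 1000, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
    else if (rank == 1)
        MPI_Irecv(payload, 1000, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);

    /* Independent local work proceeds while the message is in flight. */
    for (int i = 1; i <= 1000000; ++i)
        local += 1.0 / i;

    if (participating)
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* data is usable from here on */

    printf("rank %d: local result = %f, payload[0] = %f\n",
           rank, local, payload[0]);
    MPI_Finalize();
    return 0;
}
```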
Cem Ozdogan
2011-12-27