Concurrent Software Applications
Introduction
The world is parallel
- Internet/web
- communications systems
- robotics
- airline tracking systems
- large simulations (scientific research)
- large service data bases (client-server)
- real-time control
- traffic control
- mobile computing
- cardiac rhythm management systems (pacemakers)
- video games, simulation trainers, car brakes and stability control
The Parallel Programming Problem:
- Fitting a naturally parallel real-world model onto the most logical concurrent programming model on the most compute-effective parallel architecture
- Reuse, adaptation, and extension of OO concurrent program patterns for real-world problems
- Solving a problem in reasonable time
The primary goal of this course is to learn to model, design, construct, and verify concurrent programs:
- model-driven
- shared memory (shared variables)
- distributed memory (disjoint variables)
Primary programming languages and tools for this course:
- Programming languages
  - Java
  - C/C++ with the Pthreads library
  - Others we will not use: Ada, Concurrent C, and C#
- Model-checking systems
  - Promela/SPIN
  - Java PathFinder (JPF)
  - MPI-SPIN
Levels of Parallelism in Computing

Level   Executed Object                  Example System
A       program, job, or task            multitasking OS
B       procedure, process, or thread    MIMD
C       expression, instruction          SIMD
D       bit within instruction           von Neumann

A. Multiple terminal windows on a time-sharing system
B. Multiple processes running simultaneously on independent processors
C. Matrix A + Matrix B: pairwise parallel addition
D. Parallel "and" operation on bits in an instruction
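Level C parallelism, as in the matrix-addition example, can be sketched in Java (one of the course languages). This is a minimal illustration, and the class and method names are my own:

```java
import java.util.Arrays;
import java.util.stream.IntStream;

// Level C sketch: element-wise addition of two matrices. Every pairwise
// sum is independent, so rows can be processed in parallel.
public class MatrixAdd {
    static int[][] add(int[][] a, int[][] b) {
        int rows = a.length, cols = a[0].length;
        int[][] c = new int[rows][cols];
        // Each thread writes to a distinct row, so no synchronization is needed.
        IntStream.range(0, rows).parallel().forEach(i -> {
            for (int j = 0; j < cols; j++) {
                c[i][j] = a[i][j] + b[i][j];
            }
        });
        return c;
    }

    public static void main(String[] args) {
        int[][] a = {{1, 2}, {3, 4}};
        int[][] b = {{10, 20}, {30, 40}};
        System.out.println(Arrays.deepToString(add(a, b)));
    }
}
```

Because no element of the result depends on any other, this is exactly the kind of computation a SIMD machine executes in lockstep.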
Designing Concurrent Software
- Coarse-grained vs. fine-grained: determined by the frequency of thread synchronization or communication relative to the amount of computation done (sometimes medium-grained is also of interest)
- Shared memory vs. distributed memory: determined by whether multiple CPUs have direct access to common memory or data must be moved between processors
- Message-passing vs. mechanisms based on shared memory access (semaphores, monitors, barriers, etc.). Note, however, that message-passing can also be implemented on top of shared memory.
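As a small illustration of a shared-memory mechanism, here is a sketch in which a binary semaphore serializes updates to a shared counter; the class name and loop counts are illustrative:

```java
import java.util.concurrent.Semaphore;

// Shared-memory synchronization sketch: a binary semaphore guards the
// critical section so two threads can safely update a shared variable.
public class SemaphoreCounter {
    static int counter = 0;                          // shared variable
    static final Semaphore mutex = new Semaphore(1); // one permit => mutual exclusion

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                mutex.acquireUninterruptibly();      // enter critical section
                counter++;                           // only one thread at a time here
                mutex.release();                     // leave critical section
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(counter);                 // 200000 with the semaphore
    }
}
```

Without the semaphore, the unsynchronized `counter++` would be a race condition and the final value would be unpredictable.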
- Worker crew (replicated workers, thread pool): a bag of tasks serviced by a fixed number of worker threads (perhaps based on the number of available CPUs allocated to the program)
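The worker-crew idea can be sketched in Java with a fixed thread pool pulling from a shared bag of tasks; the task (summing squares of 1..10) and all names are illustrative:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Worker-crew sketch: a fixed number of workers repeatedly take tasks
// from a shared "bag" until it is empty, accumulating a shared result.
public class WorkerCrew {
    public static void main(String[] args) throws InterruptedException {
        int nWorkers = Runtime.getRuntime().availableProcessors();
        BlockingQueue<Integer> bag = new LinkedBlockingQueue<>();
        for (int i = 1; i <= 10; i++) bag.add(i);      // the bag of tasks
        AtomicInteger sumOfSquares = new AtomicInteger();

        ExecutorService crew = Executors.newFixedThreadPool(nWorkers);
        for (int w = 0; w < nWorkers; w++) {
            crew.submit(() -> {
                Integer task;
                while ((task = bag.poll()) != null) {  // grab tasks until bag is empty
                    sumOfSquares.addAndGet(task * task);
                }
            });
        }
        crew.shutdown();
        crew.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(sumOfSquares.get());        // 1 + 4 + ... + 100 = 385
    }
}
```

Because workers pull tasks on demand, the crew balances load automatically: fast workers simply take more tasks from the bag.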
- Patterns of message passing
- totally connected: every thread talks to every other thread
- star-shaped: a central thread talks to all other threads
- grid-shaped: each thread communicates only with its neighbors
- pipeline: each thread communicates only with its left and right neighbors
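The pipeline pattern can be sketched in Java using bounded queues as the channels between neighboring stages; the three stages (generate, square, sum) and the -1 end-of-stream marker are illustrative choices:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Pipeline sketch: each stage talks only to its left and right neighbor
// through a bounded queue; -1 marks end-of-stream.
public class Pipeline {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> q1 = new ArrayBlockingQueue<>(4);
        BlockingQueue<Integer> q2 = new ArrayBlockingQueue<>(4);

        Thread producer = new Thread(() -> {      // stage 1: generate 1..5
            try {
                for (int i = 1; i <= 5; i++) q1.put(i);
                q1.put(-1);
            } catch (InterruptedException ignored) {}
        });
        Thread squarer = new Thread(() -> {       // stage 2: square each value
            try {
                int v;
                while ((v = q1.take()) != -1) q2.put(v * v);
                q2.put(-1);
            } catch (InterruptedException ignored) {}
        });
        producer.start();
        squarer.start();

        int sum = 0, v;                           // stage 3: sum the squares
        while ((v = q2.take()) != -1) sum += v;
        System.out.println(sum);                  // 1 + 4 + 9 + 16 + 25 = 55
    }
}
```

Note that the queues here live in shared memory, illustrating the earlier point that message-passing can be built on top of shared-memory mechanisms.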