  • Introduction
    • Why parallelism?
    • Dimensions of parallel programming
    • A little history
    • Designing and verifying parallel programs
  • Programming models
    • Common models
    • Classical shared memory issues
    • Classical message passing issues
    • Classical parallelizing compiler issues
  • Parallel machines/architectures
    • Pipelining, superscalars, and VLIW
    • SIMD and vector machines
    • MIMD: multicomputers and multiprocessors
  • Synchronization
    • Spin locks
    • Barriers
    • Scheduler-based locks
    • Semaphores
    • Monitors
    • Conditional critical regions
    • Classical synchronization problems
  • Message passing
    • Connecting processes
    • Sending and receiving messages
    • Higher-level constructs: RPC and rendezvous
    • Design of message-passing programs
  • Parallel programming constructs and techniques
    • Spin locks
    • Barriers
    • Reductions
    • Broadcasts
    • Task queues
  • Parallel programming languages and runtime systems
    • HPF
    • SR
    • TreadMarks
    • PVM, MPI
    • Linda
  • Parallel programming environments and tools
    • Environments
    • Debugging
    • Performance monitoring and analysis
  • Performance considerations
    • Performance metrics
    • Scalability
    • Overhead tolerance
    • Efficient scheduling