  • Parallel Processing
    • Dividing a program's instruction/data streams among multiple processors so the program executes in less time.
  • Multi-programming
    • Multiple programs are allowed to use the processor, each for a short time
  • Along with the advancements they brought, the above concepts introduced several new problems:
    1. Resource contention (processes compete for access to a shared resource)
    2. Explicit resource requests led to the problem of deadlock
    3. The critical-section problem (see the sketch below)
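
To make contention and the critical section concrete, here is a minimal Python sketch (the names and counts are illustrative): several threads increment one shared counter, and a lock serializes the critical section. Without the lock, updates can be lost; acquiring multiple locks in inconsistent order across threads is the classic route to deadlock.

```python
import threading

counter = 0              # shared resource living in global memory
lock = threading.Lock()  # guards the critical section

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:       # critical section: one thread at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; without it, lost updates are possible
```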

SISD/SIMD/MISD/MIMD - Single/Multiple Instruction stream, Single/Multiple Data stream
- Among the above four, only two are relevant to parallel computers: SIMD and MIMD.
- SIMD
  - A single instruction stream is sent to multiple processors, each of which may work on its own data stream.
  - Easier to program (single-thread style of execution)
  - Processor arrays and vector pipelines
- MIMD
  - Each processor gets its own instruction stream and data stream.
  - MIMD systems can also have SIMD execution sub-components!
  - The most powerful class of computer system; it covers the full range of multiprocessor systems.
  - More efficient, as it can utilize the full machine power.
  - Every processor may be working with a different data stream; execution can be synchronous/asynchronous, deterministic/non-deterministic, etc.
  - These machines can be
    - Shared memory
    - Distributed memory
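
As a rough code illustration of SIMD-style execution (a sketch, assuming NumPy is installed): a single operation is expressed once over a whole array of data instead of element by element, and NumPy's vectorized kernels often map to SIMD instructions on the CPU.

```python
import numpy as np

a = np.arange(100_000, dtype=np.float64)
b = np.arange(100_000, dtype=np.float64)

# SISD-style: one instruction applied to one pair of data elements per iteration
c_serial = [a[i] + b[i] for i in range(len(a))]

# SIMD-style: one "add" applied across many data elements at once
c_vectorized = a + b
```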

Parallel Programming Models
- Shared Memory
  - Tasks share a common address space
  - Locks/semaphores may be used to control access to the shared memory
  - Advantage
    - Simplified programming, as there is no need to set up and use protocols for sending/receiving messages
  - Disadvantage
    - Scaling beyond thirty-two processors is difficult
    - Less flexible than the distributed memory model
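
A minimal sketch of the shared-memory model using Python's standard-library multiprocessing module (the worker logic and counts are illustrative): two processes share a counter in a common address space, and an explicit lock controls access to it.

```python
from multiprocessing import Process, Value, Lock

def worker(counter, lock, iterations):
    for _ in range(iterations):
        with lock:               # lock/semaphore protects the shared location
            counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)      # integer allocated in shared memory
    lock = Lock()
    procs = [Process(target=worker, args=(counter, lock, 50_000)) for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)         # 100000
```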

  • Threads

    • A thread can be thought of as a subroutine within the main program
    • Threads communicate with each other through global memory
  • Message Passing

    • Each task uses its own local memory for its computing; data exchange happens over a specified protocol (MPI-1, MPI-2 - the Message Passing Interface; see the sketch after this list)
  • Data Parallel

    • A set of tasks works on the same data structure, with each task working on a different partition
      • (Comment -> You can do this with MergeSort | add a base condition to stop creating threads once you reach your maximum thread count; see the sketch after this list)
  • Hybrid -> As the name says, a combination of the above models (for example, message passing between nodes and threads/shared memory within a node)
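
A minimal message-passing sketch (assuming mpi4py and an MPI runtime are installed, launched with something like `mpirun -np 2 python script.py`): each rank keeps data in its own local memory, and values move between tasks only through explicit send/receive calls.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()              # this task's id within the communicator

if rank == 0:
    data = {"partial_sum": 42}      # exists only in rank 0's local memory
    comm.send(data, dest=1, tag=0)  # explicit message to rank 1
elif rank == 1:
    data = comm.recv(source=0, tag=0)
    print("rank 1 received:", data)
```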
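
Following up on the MergeSort comment above, a minimal data-parallel sketch (MAX_DEPTH and the helper names are assumptions): the array is partitioned, the halves are sorted by separate threads, and a base condition stops spawning new threads past a fixed depth. Note that CPython's GIL limits the real speedup for CPU-bound work; the structure of the decomposition is the point here.

```python
import threading

MAX_DEPTH = 2  # base condition: stop creating threads below this depth

def merge(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out

def merge_sort(data, depth=0):
    if len(data) <= 1:
        return data
    mid = len(data) // 2
    if depth < MAX_DEPTH:
        # Each task works on its own partition of the data structure.
        result = {}
        t = threading.Thread(
            target=lambda: result.update(left=merge_sort(data[:mid], depth + 1)))
        t.start()
        right = merge_sort(data[mid:], depth + 1)
        t.join()
        left = result["left"]
    else:
        # Past the thread budget, fall back to plain recursion.
        left = merge_sort(data[:mid], depth + 1)
        right = merge_sort(data[mid:], depth + 1)
    return merge(left, right)

print(merge_sort([5, 3, 8, 1, 9, 2, 7, 4]))  # [1, 2, 3, 4, 5, 7, 8, 9]
```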

Parallel Computing
- Evolution of serial computing where jobs are broken into discrete parts
- A single program is divided into multiple fragments, executed on different processors simultaneously.
- ARCHITECTURES
  - Shared Memory -> All processors access the same memory
    - Uniform Memory Access (UMA) - All processors (CPUs/GPUs) access memory with similar latency

Von Neumann Architecture
- The Control Unit fetches instructions/data from memory
- It decodes the instructions and sequentially coordinates the operations to accomplish the programmed task
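
A toy sketch of this fetch-decode-execute cycle (the instruction set and memory layout here are invented purely for illustration): program and data share one memory, and a control loop repeatedly fetches the next instruction, decodes its opcode, and executes it against a single accumulator.

```python
# Toy von Neumann machine: instructions and data live in the same memory.
memory = [
    ("LOAD", 6),     # 0: acc <- memory[6]
    ("ADD", 7),      # 1: acc <- acc + memory[7]
    ("STORE", 8),    # 2: memory[8] <- acc
    ("PRINT", 8),    # 3: print memory[8]
    ("HALT", None),  # 4: stop
    ("NOP", None),   # 5: padding
    10,              # 6: data
    32,              # 7: data
    0,               # 8: result goes here
]

acc, pc = 0, 0
while True:
    opcode, operand = memory[pc]  # fetch
    pc += 1
    if opcode == "LOAD":          # decode + execute
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "PRINT":
        print(memory[operand])    # prints 42
    elif opcode == "HALT":
        break
```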


Research Paper

TAGS:

#VonNeumann #UMA #ParallelComputing #SIMD #MIMD #MultiProgramming