SarthakDandotiya/Parallel-and-Distributed-Computing


Parallel & Distributed Computing Programs

Programs written in C with OpenMP or MPI.


To run OpenMP programs

    $ gcc <filename>.c -fopenmp
    $ ./a.out

To run MPI programs

    $ mpicc -o <filename> <filename>.c
    $ mpirun -np <number_of_processes> ./<filename>



Questions:

  1. a) Using OpenMP, design, develop, and run a multi-threaded program to perform and print vector addition.

    b) Using OpenMP, design, develop, and run a multi-threaded program to demonstrate loop work sharing.

    c) Using OpenMP, design, develop, and run a multi-threaded program to demonstrate section work sharing.


  2. a) Using OpenMP, design, develop, and run a multi-threaded program to generate and print the Fibonacci series: one thread generates the numbers up to a specified limit and another thread prints them.

    b) Using OpenMP, design, develop, and run a multi-threaded program to perform matrix multiplication.


  3. a) Using OpenMP, design, develop, and run a multi-threaded program to perform a combined parallel loop reduction.

    b) Using OpenMP, design, develop, and run a multi-threaded program to perform an orphaned parallel loop reduction.

    c) Write a parallel loop that computes the maximum and minimum values in an array.


  4. a) Using MPI, design, develop, and run a simple send/receive communication program: initialize MPI, transfer the data from source to destination, then finalize (quit) MPI.

    b) Using MPI in Visual Studio, design, develop, and run message-passing mechanisms.


  5. a) Using MPI, design, develop, and run broadcast communication (MPI_Bcast) using MPI_Send and MPI_Recv.

    b) Using MPI, design, develop, and run reduce communication for vector addition (MPI_Reduce) using MPI_Send and MPI_Recv.


  6. a) Using MPI, design, develop, and run matrix multiplication using MPI_Send and MPI_Recv; the master task distributes the matrix multiply operation to numtasks-1 worker tasks.

    b) Using MPI, design, develop, and run a program that computes the value of pi using MPI_Send and MPI_Recv.

About

Fundamental Parallel & Distributed Computing Programs
