Advanced Scientific Programming in Python

a Summer School by the G-Node and the Physik-Institut, University of Zurich



Pragmatic Concurrency for Python

Video of this Lecture
Tutors
  • Eilif Muller (eilif dot mueller at epfl dot ch)
  • Zbigniew Jędrzejewski-Szmek
  • Francesc Alted
Topics covered
  • Parallel programming concepts
  • ipython (ipcluster)
  • mpi4py (Message Passing Interface for Python)
  • multiprocessing

Exercises

The purpose of these exercises is not to amount to killer speed-ups (a laptop is not the right hardware for that), but rather to run and modify a few examples, become comfortable with APIs, and implement some simple parallel programs.

  • Source code for the exercises is here:

parallel_materials.tar.gz

Running MPI programs

Write a simple Python program using the mpi4py module that imports mpi4py.MPI and displays COMM_WORLD.rank, COMM_WORLD.size and MPI.Get_processor_name() on each process. It is always handy to have such a program around to verify that the MPI environment is working as expected. In a distributed environment, the processor name will further tell you whether your MPI execution was spawned across machine boundaries, and how many processes are allocated per machine.

Note: To run your mpi4py program, it must be started like any other MPI program, i.e. as follows:

  $ mpiexec -n X python <program.py>

Matrix Multiplication

Four implementations of matrix multiplication are available in the source tar-ball (subdirectory “matmul”): “ipython_” is an IPython version, “mpi_” is an MPI version, and “mp_*” are multiprocessing versions, with and without shared numpy arrays.

  • Configure them to use the same matrix sizes, and compare speeds. It would be nice to also look at speed-up and scaling, for those of you with (remote) access to machines with more than 4 cores (true cores, not hyper-threads) and the appropriate software installed.
  • For the ipython version, you need to start an ipcluster:
  $ ipcluster start -n X

where -n X is the number of slave processes to start.
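The chunked (row-block) decomposition used by those versions can be sketched with multiprocessing alone. This is an illustrative sketch, not the code from the tar-ball; the function and variable names are made up:

```python
# Row-block decomposition of C = A @ B across a pool of workers.
import numpy as np
from multiprocessing import Pool

def multiply_block(args):
    # Each worker computes one horizontal block of the product.
    a_block, b = args
    return a_block.dot(b)

def parallel_matmul(a, b, n_workers=4):
    # Split A into one row block per worker (chunked decomposition),
    # ship each block together with the full B, and stack the results.
    blocks = np.array_split(a, n_workers, axis=0)
    with Pool(n_workers) as pool:
        results = pool.map(multiply_block, [(blk, b) for blk in blocks])
    return np.vstack(results)
```

Note that each task carries a copy of B; the “mp_*” shared-array versions in the tar-ball exist precisely to avoid that copying cost.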

Parallelization of mandelbrot

In the source tar-ball under “mandelbrot” is a serial implementation of a Mandelbrot plotter.

a) Using decomposition techniques similar to those in the matrix multiplication example, parallelize the serial implementation of the Mandelbrot plotter provided in the examples, using mpi4py, ipython and multiprocessing.

b) Load balancing - the Mandelbrot computation has the property that some pixels take much longer to compute than others.

First, quantify the degree of imbalance by gathering and plotting the distribution of execution times per pixel. Assuming you used chunked decomposition as for matrix multiplication, how does this per-pixel imbalance translate into a per-chunk imbalance?

Second, can you modify the decomposition of the problem to provide each worker with a more equal work-load?
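One standard remedy is dynamic scheduling: hand out many small tasks (e.g. single rows) and let idle workers pull the next one as soon as they finish. The sketch below illustrates this with multiprocessing; the row-based decomposition and all names here are illustrative, not taken from the provided serial code:

```python
# Dynamically load-balanced Mandelbrot: one task per image row.
import numpy as np
from multiprocessing import Pool

def mandelbrot_row(args):
    # Iterate z -> z**2 + c for every pixel in one row; rows over the
    # interior of the set run the full maxit iterations, others escape early,
    # which is exactly the per-row imbalance we want to balance away.
    j, xs, y, maxit = args
    row = np.zeros(len(xs), dtype=int)
    for i, x in enumerate(xs):
        c = complex(x, y)
        z = 0j
        for k in range(maxit):
            z = z * z + c
            if abs(z) > 2.0:
                break
        row[i] = k
    return j, row

def mandelbrot(nx=64, ny=64, maxit=100, n_workers=4):
    xs = np.linspace(-2.0, 0.8, nx)
    ys = np.linspace(-1.4, 1.4, ny)
    img = np.zeros((ny, nx), dtype=int)
    tasks = [(j, xs, y, maxit) for j, y in enumerate(ys)]
    with Pool(n_workers) as pool:
        # chunksize=1 hands out one row at a time, so workers that drew
        # fast rows immediately pick up new work instead of sitting idle.
        for j, row in pool.imap_unordered(mandelbrot_row, tasks, chunksize=1):
            img[j] = row
    return img
```

Compare this against a static decomposition into n_workers contiguous blocks of rows: with dynamic scheduling the slowest worker finishes at most about one row later than the others, instead of one whole block later.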

Hints

IPython map-reduce

Using the ipython approach, get a collection of processes to count the occurrences of each word in a collection of documents, and then reduce the results to a total count per word on the master process.
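The map-reduce logic itself is independent of the engine that runs it. As a hedged sketch, here is the same pattern with multiprocessing standing in for the IPython cluster (function names are illustrative): the map step produces per-document counts in parallel, and the reduce step merges them on the master.

```python
# Word-count map-reduce: map = per-document Counter, reduce = merge on master.
from collections import Counter
from multiprocessing import Pool

def count_words(doc):
    # Map step: count word occurrences in a single document.
    return Counter(doc.lower().split())

def word_count(documents, n_workers=2):
    with Pool(n_workers) as pool:
        partial_counts = pool.map(count_words, documents)
    # Reduce step: merge the per-document counts into one total on the master.
    total = Counter()
    for counts in partial_counts:
        total.update(counts)
    return total
```

With an IPython cluster, the Pool.map call would be replaced by mapping count_words over the engines, with the reduce left unchanged on the client.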

See also: http://en.wikipedia.org/wiki/MapReduce, http://labs.google.com/papers/mapreduce.html

Lecture material

talk.pdf Disclaimer: Slides may not be exactly as presented.

parallel.txt · Last modified: 2014/01/16 13:54 by nicola
