MPI: Calculation of Pi Using the Monte Carlo Method

The “Monte Carlo method” is a stochastic approach to solving a problem.

I use this method to calculate the value of pi in my next MPI code. But first, I will explain the idea behind the method, how I implemented it in MPI, and the performance of my code on my Hackintosh.

If a circle of radius R is inscribed inside a square with side length 2R, then the area of the circle is \pi R^2 and the area of the square is (2R)^2 = 4R^2. So the ratio of the area of the circle to the area of the square is \frac{\pi}{4}.

So we pick N points at random inside the square. The random selection of points is what makes the method stochastic. We then check whether each point also lies inside the circle, which is done by testing the following inequality:

x^2 + y^2 < R^2

We keep track of the number of points that land inside the circle, M. Pi is then approximated as

\pi \approx \frac{4M}{N}
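
Before going parallel, here is the whole procedure as a minimal serial C sketch. By symmetry it is enough to sample the unit square [0,1] x [0,1] and test against the quarter circle of radius 1, which is what the MPI code below does as well; the fixed seed and the sample count here are just placeholder choices for illustration:

#include <stdio.h>
#include <stdlib.h>

int main(void){
    long N = 1000000;                 /* total number of random points */
    long M = 0;                       /* points that land inside the circle */
    srand(12345);                     /* fixed seed, for illustration only */
    for (long i = 0; i < N; i++){
        double x = (double)rand() / RAND_MAX;   /* random point in the unit square */
        double y = (double)rand() / RAND_MAX;
        if (x*x + y*y <= 1.0) M++;              /* inside the quarter circle */
    }
    printf("pi is approximately %f\n", 4.0 * (double)M / (double)N);
    return 0;
}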

The Monte Carlo method is often used to approximate solutions to problems in physics and mathematics. There is a drawback to using this method, however: it is very slow. So I thought I would parallelize it with MPI. It is quite simple; it only took me about five minutes to code and debug (and I’m not a great programmer either) :)

The MPI code is in C, as always:


#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <math.h>
#include <time.h>

int main(int argc, char* argv[]){
    int i, id, np, N;
    double x, y, double_N, eTime, sTime, pTime;
    int lhit;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);   /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &np);   /* total number of processes */

    if (argc != 2){
        if (id == 0){
            fprintf(stderr, "Usage: %s <number of points>\n", argv[0]);
            fflush(stderr);
        }
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* read N as a double so arguments like 1e7 are accepted, then round to an integer */
    sscanf(argv[1], "%lf", &double_N);
    N = lround(double_N);

    MPI_Barrier(MPI_COMM_WORLD);
    sTime = MPI_Wtime();

    lhit = 0;
    srand((unsigned)time(0) + id);   /* seed each rank differently so the streams are not identical */
    int lN = N / np;                 /* number of points sampled by this rank */

    for (i = 0; i < lN; i++){
        x = ((double)rand()) / ((double)RAND_MAX);   /* random point in the unit square */
        y = ((double)rand()) / ((double)RAND_MAX);
        if ((x*x) + (y*y) <= 1.0) lhit++;            /* inside the quarter circle */
    }

    /* sum the per-rank hit counts on every rank */
    int hit = 0;
    MPI_Allreduce(&lhit, &hit, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    /* lN*np points were actually sampled (N may not divide evenly by np) */
    double est = (4.0 * hit) / ((double)(lN * np));

    MPI_Barrier(MPI_COMM_WORLD);
    eTime = MPI_Wtime();
    pTime = eTime - sTime;

    if (id == 0){
        printf("Number of Points Used:  %d\n", lN * np);
        printf("Estimate of Pi:         %24.16f\n", est);
        printf("Elapsed Wall time:      %5.3e\n", pTime);
    }

    MPI_Finalize();
    return 0;
}

The runtime results of the above code on my quad-core i7 are shown in the first graph. It is obvious from the graph that this method runs much faster on multiple processors. The second graph is the same code, only using the GSL random number generator (a sketch of that variant is below); it appears to be more efficient, and it gives a better approximation of pi. One thing troubles me, though: the i7 is hyperthreaded, so I can run up to 8 processes, but the results there are not “stable”. Regardless, my machine still rocks :)
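
For reference, the GSL variant only changes how the random points are drawn; the MPI structure stays the same. Here is a minimal sketch of the sampling part, assuming a Mersenne Twister generator seeded differently on each rank (my actual GSL code may differ in the generator and seeding details):

#include <gsl/gsl_rng.h>

/* ... inside main(), after MPI_Comm_rank/MPI_Comm_size ... */
gsl_rng *r = gsl_rng_alloc(gsl_rng_mt19937);     /* Mersenne Twister generator */
gsl_rng_set(r, (unsigned long)time(0) + id);     /* per-rank seed */

for (i = 0; i < lN; i++){
    x = gsl_rng_uniform(r);                      /* uniform double in [0,1) */
    y = gsl_rng_uniform(r);
    if (x*x + y*y <= 1.0) lhit++;
}
gsl_rng_free(r);

Linking this version additionally requires -lgsl -lgslcblas.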

This entry was posted in C/C++, GSL, Manabus, MPI, Parallel Programming.

8 Responses to MPI: Calculation of Pi Using the Monte Carlo Method

  1. Husyn says:

    Thank you. Could you also mention how to compile this code using MPI?

    • Sidafa Conde says:

      Thank you for the comment. I usually write a Makefile for all my code so that it’s easy to compile it over and over again. The contents of my Makefile are:

      OBJS := main.o
      EXECUTABLE := PI
      CC := mpicc
      CFLAGS := -O3 -c99 -Minform=warn
      DEFS := -DPARALLEL
      INCLUDES :=
      LDFLAGS := -lm

      # note: each recipe line below must start with a tab character
      %.o: %.c %.h
      	$(CC) $(CFLAGS) $(DEFS) $(INCLUDES) -c $< -o $@

      $(EXECUTABLE): $(OBJS)
      	$(CC) $(CFLAGS) $(DEFS) $(INCLUDES) $(OBJS) -o $@ $(LDFLAGS)

      clean:
      	-rm -f *.o $(EXECUTABLE)
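
      If you’d rather not use a Makefile, a one-off build along these lines should also work with a GCC-based mpicc (assuming the source file is named main.c, as above):

      mpicc -O3 -std=c99 -o PI main.c -lm
      mpirun -np 4 ./PI 10000000

      The single argument is the total number of points; since the code parses it as a double, you can also write it as 1e7.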

      Let me know if you need anything else

  2. Zak says:

    Greetings,
    I would like to thank you for the great post and the great explanation.
    I have a question, what did you use to plot those graphs?

    Thank you so much,

    Regards

  3. Zak says:

    Hello,

    Thank you so much for your prompt reply.
    I truly appreciate it.

    Best regards,

  4. AL says:

    Hi, thank you for the post. It was helpful, but could you give more information on how I can plot the results, in MATLAB or whatever else is possible?
    Thank you

  5. medjo nna sinclair roger says:

    Hello, my name is Medjo nna sinclair, from the FSEG at the Université de Yaoundé II, Cameroon: I would like to know which software is used and how the IPM is computed.
