CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM Running, Solving & CFD (https://www.cfd-online.com/Forums/openfoam-solving/)
-   -   Parallel processing of OpenFOAM cases on multicore processor??? (https://www.cfd-online.com/Forums/openfoam-solving/72876-parallel-processing-openfoam-cases-multicore-processor.html)

g.akbari February 20, 2010 03:41

Parallel processing of OpenFOAM cases on multicore processor???
 
Dear experts,
I am going to run a huge OpenFOAM case on a quad-core processor. I don't know whether OpenFOAM can run in parallel on shared-memory computers: as far as I know, OpenFOAM uses MPI for parallel processing, and MPI is usually associated with distributed-memory computers. Also, I do not have the possibility of using a cluster.
Is there any way to run the calculation in parallel on my shared-memory computer and use all the cores of the CPU?

Best regards.

wyldckat February 20, 2010 06:58

Greetings g.akbari,

OpenFOAM's ThirdParty package comes with OpenMPI. As far as I know, OpenMPI can automatically decide which communication protocol to use when communicating between processes, whether they are on the same computer or on different computers.
Nonetheless, you can look up in the OpenMPI manual how to explicitly define which protocol to use, including "shared memory".
Then you can edit the script $WM_PROJECT_DIR/etc/foamJob and tweak the options it passes to OpenMPI!

Sadly I don't have much experience with OpenMPI, so I only know it's possible.
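That said, something along these lines might be a starting point for forcing the shared-memory path on a single machine (just a sketch, assuming an Open MPI 1.x series where the shared-memory transport is called "sm" - newer releases name it "vader" and normally pick it automatically; interFoam is only a placeholder solver):
Code:

# restrict Open MPI to the self and shared-memory transports for 4 local ranks
mpirun --mca btl self,sm -np 4 interFoam -parallel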

Best regards,
Bruno Santos

xisto February 20, 2010 07:44

You just need to use the decomposeParDict to make the partitions.

The user manual has a good tutorial on page 63.
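In case it helps, the relevant entries in system/decomposeParDict look roughly like this (a minimal sketch for 4 sub-domains with the simple method; the usual FoamFile header is left out and the numbers are only an example):
Code:

// split the domain into 2 x 2 x 1 = 4 pieces
numberOfSubdomains 4;

method          simple;

simpleCoeffs
{
    n           (2 2 1);
    delta       0.001;
}
Then run decomposePar, start the solver with mpirun ... -parallel (or foamJob -p), and run reconstructPar at the end.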

I run all my cases on my mini cluster with two quad-core Xeon processors and 8 GB of RAM.

Good luck

CX

g.akbari February 20, 2010 08:47

Thanks a lot. I ran the damBreak tutorial and it worked fine. Now my problem is that I get no speed-up: the serial execution time is shorter than the parallel execution time. What is the reason for this behaviour?

xisto February 20, 2010 09:05

I don't have an answer for that question.

The only thing I can say is that you will certainly attain faster convergence with the parallel execution.

Did you try to run the damBreak case without MPI?

CX

g.akbari February 20, 2010 09:32

Yes, I executed damBreak twice:
with MPI and the number of processors set to 4, the calculation takes 122 s;
without MPI, in serial mode, it takes 112 s.
Maybe I should use a finer mesh to obtain a better speed-up.

g.akbari February 20, 2010 09:43

After using a finer mesh, the speed-up became larger than unity. Thank you very much for your comments, wyldckat and xisto.
Sincerely

dancfd November 18, 2010 17:52

Multi-Core Processors: Execution Time vs Clock Time
 
Hello all,

I have a single dual-core processor, and I am attempting to determine whether it is possible to use parallel processing with multiple cores rather than multiple processors. I ran wingMotion2D_pimpleDyMFoam three times, with 1, 2 and 3 "processors" specified in system/decomposeParDict each time. The results were predictable, in that the single-"processor" run required 1.5x the time of the dual-"processor" run. I then tried 3 processors just to see what would happen, and found that this required 99.9% of the execution time of the dual-core run, but 150% of its clock time.

I did not know what to expect when running a 3-processor parallel simulation on a dual-core, single-processor system, but this anomaly was definitely not expected. Can anyone tell me why the clock and execution times would differ so much, but only when the number of processors in decomposeParDict exceeds the number of cores in the computer? Sure, it was a silly test, but now that I have strange results it does make me wonder.

Thanks,
Dan

akidess November 19, 2010 04:48

My guess: you're still doing the same amount of computation, so the CPU time is similar, but you're wasting lots of time on communication and process switching, so the wall-clock time is larger.
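One easy way to see both numbers is the timing line that the solver writes at every time step (log.pimpleDyMFoam is just a hypothetical log file name here):
Code:

# prints the last reported timing line, e.g.
#   ExecutionTime = 122.4 s  ClockTime = 131 s
grep ExecutionTime log.pimpleDyMFoam | tail -n 1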

Prash May 27, 2011 19:57

Hey Guys ,

I am facing a similar problem here: my parallel run with 6 processors is taking far more time than the serial one. Does anyone have a clue what might be happening?

wyldckat May 28, 2011 19:18

Greetings Prashant,

From the top of my head, here are a few pointers:
  1. The case is too small for running in parallel. A rule of thumb is a minimum of around 50 kcells/core, but it will also depend on the combination of solver, matrix solver and preconditioner.
  2. The case is too big and chaotic. Try running renumberMesh in parallel mode first, so that the decomposed mesh is sorted out. I.e., run something like this:
    Code:

    foamJob -s -p renumberMesh
    Or something like this:
    Code:

    mpirun -np 6 renumberMesh -parallel
  3. The case is still too big, even when the mesh is renumbered. By this I mean that the processor is taking too long to fetch data from very different sections of memory, which leads to a seriously non-optimized memory access pattern. In other words: the 6 cores are mostly fetching directly from RAM, instead of taking advantage of the on-die cache system (you know, the L1, L2 and L3 caches).
    How to fix this? I don't know yet :( All I know is that there is at least a 10x speed difference between cache and direct RAM access.
    I would suggest splitting the case into 2, 3, 4, 5, 6 and 12 sub-domains, to try to isolate whether it is a CPU cache problem (see the sketch after this list). I've had a situation where a 6-core CPU was faster with 16 sub-domains than with 6 sub-domains :rolleyes:
  4. The decomposition method was not properly chosen/configured. Try the other decomposition methods and/or learn more about each one. If your geometry isn't too complex, then metis/scotch won't help.
  5. Reproduce a benchmarked case, even an unofficial one. For example, the report from the thread http://www.cfd-online.com/Forums/ope...v-cluster.html - this can help you figure out whether it is a solver-related problem, a configuration problem, or something that you overlooked.
  6. Try disabling connection options in mpirun. For example:
    Quote:

    Originally Posted by pkr (Post 292700)
    When using MPI_Reduce, OpenMPI was trying to establish TCP through a different interface. The problem is solved if the following command is used:
    mpirun --mca btl_tcp_if_exclude lo,virbr0 -hostfile machines -np 2 /home/rphull/OpenFOAM/OpenFOAM-1.6/bin/foamExec interFoam -parallel

    The above command restricts which networks MPI will use (in this case, the lo and virbr0 interfaces are excluded).

  7. Check out the mental notes I've got on my blog: Notes about running OpenFOAM in parallel - they are just notes for when I get some free time to write at openfoamwiki.net about this.
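Regarding point 3, here is the kind of sweep I had in mind, as a rough sketch (it assumes a case run with simpleFoam, a decomposeParDict that is already set up, and an OpenFOAM version whose decomposePar supports -force; adjust the names and the list of counts to your own setup):
Code:

#!/bin/sh
# time the same case with several sub-domain counts, to see where
# cache/memory effects start to dominate
for n in 2 3 4 5 6 12
do
    sed -i "s/^numberOfSubdomains.*/numberOfSubdomains $n;/" system/decomposeParDict
    decomposePar -force > log.decomposePar.$n
    mpirun -np $n renumberMesh -overwrite -parallel > log.renumberMesh.$n
    mpirun -np $n simpleFoam -parallel > log.simpleFoam.$n
    grep ExecutionTime log.simpleFoam.$n | tail -n 1
done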
Best regards,
Bruno

ali jafari December 15, 2012 13:10

hi

I have decided to buy a multi-core Intel computer (6 cores, 12 threads).

My question: does OpenFOAM make use of the CPU's hardware threads for processing?

wyldckat December 15, 2012 14:41

Greetings Ali Jafari,

I know this has been discussed here on the forum, but I'm not in the mood to go searching :rolleyes:

A summary is as follows:
  1. HyperThreading (HT) was designed with user usability in mind. Users want a responsive system, even when certain applications are using a lot of processing power.
  2. This means that HT duplicates many of the capabilities of the CPU core, except for some of the more powerful features, such as the FPU (Floating-Point Unit). In other words, each core is somewhat split into 2 HT parts that share a single FPU.
  3. OpenFOAM (and pretty much any other CFD application) needs almost exclusive access to as many FPUs as the machine has.
  4. Using HT will only lead to having 2 threads trying to shove numbers into a single FPU at nearly the same time. This actually isn't completely bad, since each thread prepares the data for the FPU to handle right after the previous thread, but it basically only leads to an improvement of about... I don't know for sure, but maybe somewhere between 1 and 10%, depending on several details. Such an example can be seen in this simple (and unofficial) benchmark case: http://code.google.com/p/bluecfd-sin...SE_12.1_x86_64 - the 8-core column was actually 8 threads of an "i7 950 CPU, with 4 cores and with Hyper-Threading (HT) turned on".
  5. Another bottleneck is the RAM, cache and memory controller, which can lead to a simple issue: having 6 threads or 12 threads accessing the RAM at nearly the same time can lead to... well, barely any improvement.
  6. Which leads to the usual final conclusion: turn off HT in the BIOS/UEFI. Perhaps even overclock the CPU as well, which will then give you an actual performance increase, at the cost of additional power consumption and increased heat production by the CPU... although, if done incorrectly, it will lead to a substantial decrease in stability and reduce the life of the processor and/or motherboard.
For more about HT: http://en.wikipedia.org/wiki/Hyper-threading
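A quick way to check whether HT is currently enabled on a Linux machine (just a sketch; the exact lscpu output layout varies between distributions):
Code:

# if "Thread(s) per core" reports 2, HyperThreading is on
lscpu | grep -E '^(Thread|Core|Socket)'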



Best regards,
Bruno

ali jafari December 16, 2012 01:37

Dear wyldckat

Your explanation was very useful. ;) Thank you very much!

eddi0907 March 4, 2013 09:19

Dear all,

it is a little bit late but I want to share my findings on parallel runs in OpenFOAM:

I used the lid driven cavity for benchmarking.

I found out that for this case the RAM-CPU communication was the bottleneck.

The best scaling I found is with only 2 cores per CPU (8 CPUs with 4 cores each, no HT), with core binding, especially if you have dual-CPU machines.

Doing so, the case scaled almost linearly up to 16 cores, and was faster than using all 32 available cores.

Kind Regards.

Edmund

wyldckat March 4, 2013 17:16

Hi Edmund,

Can you share some more information about the characteristics of the machines you've used? Such as:
  • What processor models and speed?
    • With or without overclock?
  • What RAM types and speeds?
  • Was it over a normal Ethernet connection? 1 Gbps?
Best regards,
Bruno

eddi0907 March 5, 2013 02:07

Hi Bruno,

The processors are Xeon W5580 (4 cores) or X5680 (6 cores), at 3.2 and 3.33 GHz respectively, without overclocking.
The memory is DDR3-1333.
I used normal 1 Gbps Ethernet.

The model size was 1 million cells.

Running on 2 cores, the speed-up is 2 as well.
Using 4 cores on the same CPU, the speed-up is only ~2.6, but using 4 cores on 2 CPUs the speed-up is nearly 4!
It seems to be the same when looking at the unofficial benchmarks (http://code.google.com/p/bluecfd-sin...SE_12.1_x86_64)

So on a cluster where you use machines with more than one CPU, you need to do core binding and define which task runs on which CPU core, with an additional rank file in OpenMPI.

Example: 2 dual-CPU machines (no matter whether they have 4 or 6 cores):

mpirun -np 8 -hostfile ./hostfile.txt -rankfile ./rankfile.txt icoFoam -parallel

hostfile:

host_1
host_2

rankfile:

rank 0 =host_1 slot=0:0
rank 1 =host_1 slot=0:1
rank 2 =host_1 slot=1:0
rank 3 =host_1 slot=1:1
rank 4 =host_2 slot=0:0
rank 5 =host_2 slot=0:1
rank 6 =host_2 slot=1:0
rank 7 =host_2 slot=1:1

Now the job runs on 8 cores, distributed over cores 0 and 1 of each of the 4 CPUs, with a speed-up of more than 7.

Perhaps the newest generation of CPUs has faster CPU-RAM communication and one can use 3 cores per CPU.

Kind Regards.

Edmund

wyldckat March 5, 2013 16:30

Hi Edmund,

Many thanks for sharing the information!

But I'm still wondering if there isn't some specific detail we're missing. I did some searching, and my question is: do you know how many memory channels your machines are using? Or, in other words, are all of the memory slots filled with evenly sized RAM modules?
Because, correlating all of this information, my guess is that your machines only have 2 RAM modules assigned per socket... 4 modules in total per machine.

Either that or 1 million cells is not enough for a full test! ;)
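If you want to check that without opening the machines, something like this usually lists the populated memory slots and the module sizes (a sketch; dmidecode needs root and its output layout depends on the BIOS vendor):
Code:

sudo dmidecode -t memory | grep -E 'Locator|Size'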

Best regards,
Bruno

eddi0907 March 6, 2013 03:05

Hi Bruno,

the slots are all filled with equal-sized DIMMs.

What do you mean by "full test"?
Up to 20 cores, 1 million cells is still at least 50 kcells/core.

I don't want to tell stories.

Attached you can find an overview of the timings and the test case I used, in zip format.

Could you please cross-check the speed-up from 1 to 2 and 4 cores, to see whether it is only my hardware that behaves badly?


Kind Regards.

Edmund

wyldckat March 6, 2013 05:32

Hi Edmund,

Thanks for sharing. I'll give it a try when I get an opening on our clusters.

In the meantime, check the following:
Best regards,
Bruno

smraniaki November 20, 2013 20:06

I'm not sure why you are getting a longer computation time, but I have a guess:
Your longer processing time could be due to a poorly chosen number of decomposition domains. When you decompose the domain, the communication between the processes during the parallel computation also takes time. In your case, I believe that if you decompose your domain into 3 or 5 parts instead of 4, you should see a different processing time, as the communication between processes may decrease or increase. It is not always efficient to decompose the domain into many parts.

Alish1984 May 30, 2015 03:15

Quote:

Originally Posted by eddi0907 (Post 411558)
So on a cluster where you use machines with more than one CPU, you need to do core binding and define which task runs on which CPU core, with an additional rank file in OpenMPI.

Example: 2 dual-CPU machines (no matter whether they have 4 or 6 cores):

mpirun -np 8 -hostfile ./hostfile.txt -rankfile ./rankfile.txt icoFoam -parallel

[...]

Now the job runs on 8 cores, distributed over cores 0 and 1 of each of the 4 CPUs, with a speed-up of more than 7.


Dear Edmund and Bruno,

It seems that the Open MPI rank file cannot address the hardware threads; I mean, when you have cores with HT enabled, in a rank file you can only include physical processors. Is there any solution?

Regards,
Ali

wyldckat May 30, 2015 08:20

Quote:

Originally Posted by Alish1984 (Post 548263)
It seems that the Open MPI rank file cannot address the hardware threads; I mean, when you have cores with HT enabled, in a rank file you can only include physical processors. Is there any solution?

Quick answer: You will not see a substantial performance increase when using HyperThreading with OpenFOAM. It's best that you only use the physical cores.

Beyond that, a very quick search led me to this answer: http://stackoverflow.com/a/11761943

Alish1984 May 31, 2015 07:35

Quote:

Originally Posted by wyldckat (Post 548288)
Quick answer: You will not see a substantial performance increase when using HyperThreading with OpenFOAM. It's best that you only use the physical cores.

Beyond that, a very quick search led me to this answer: http://stackoverflow.com/a/11761943

Dear Bruno,

Thanks for the quick response. It was helpful.
I know that the maximum speed-up would be 10-30% in some cases, when some processors become idle, e.g. in combustion problems. I refer you to this paper: "An Empirical Study of Hyper-Threading in High Performance Computing Clusters".

OK, let's forget HT for the moment. I have another question: is there any report of OpenFOAM scaling above 32 processors, like this "https://www.hpc.ntnu.no/display/hpc/...mance+on+Vilje", but without InfiniBand communication? I mean with Ethernet communication among the nodes?

The question may seem weird, but let me describe it a bit more; I'm not a pro in computer science, so excuse me for any mistakes. We have 3 Supermicro servers, each with 2 Intel Xeon E5-2690 CPUs (2*10 cores). I connected them via Ethernet with Cat6 cables and a high-speed switch.
The problem is that I can't reproduce the result of "https://www.hpc.ntnu.no/display/hpc/...mance+on+Vilje" for the 1M-cell cavity case using 32 processors.
The solution scales well on 1 node; however, when increasing to 2 and 3 nodes (40 and 60 processors respectively), there is no substantial speed-up.

When I change the problem to a combustion case (PDE + ODE solutions), an interesting behaviour appears: the ODE solution part scales linearly, but the PDE solution time behaves the same as in the cavity case.

So it occurs to me that maybe this is a problem of communication among the nodes, since the ODE solution part doesn't need any synchronization while the PDEs do.

The conclusion: since the only major difference between my setup and the cluster in "https://www.hpc.ntnu.no/display/hpc/...mance+on+Vilje" is the type of interconnect (Ethernet vs InfiniBand), it seems that this is the source of the lack of scalability under the same conditions.

Is that right? Is there any report of significant speed-up using Ethernet communication among the nodes of a cluster?

Regards,

Ali

wyldckat May 31, 2015 17:58

Hi Ali,

Quote:

Originally Posted by Alish1984 (Post 548329)
I have another question: is there any report of OpenFOAM scaling above 32 processors, like this "https://www.hpc.ntnu.no/display/hpc/...mance+on+Vilje", but without InfiniBand communication? I mean with Ethernet communication among the nodes?

I know there are more examples on the Hardware forum, but I can't find them right now. The one I found after a quick search is in the attached image on this post:
http://www.cfd-online.com/Forums/har...tml#post518234 - post #8
Your cluster already falls within the details given in the image, namely that a 1 Gbps connection is not enough to support so many processors.

Best regards,
Bruno

KateEisenhower October 29, 2015 06:31

Quote:

Originally Posted by wyldckat (Post 309646)
  1. I would suggest splitting the case into 2, 3, 4, 5, 6 and 12 sub-domains, to try to isolate whether it is a CPU cache problem. I've had a situation where a 6-core CPU was faster with 16 sub-domains than with 6 sub-domains :rolleyes:

Hi Bruno,


would you mind explaining this part of your quote in more detail? How can you tell, then, whether it's a CPU cache problem? What should be held in the cache? I can't imagine even the L3 cache is big enough to hold the whole mesh.


Do you know of some tutorial or description of how to use the hierarchical decomposition method? I searched the user guide and the forum but didn't get a clue.

Best regards,

Kate

wyldckat October 31, 2015 08:44

Hi Kate,

Quote:

Originally Posted by KateEisenhower (Post 570825)
would you mind explaining this part of your quote in more detail? How can you tell, then, whether it's a CPU cache problem? What should be held in the cache? I can't imagine even the L3 cache is big enough to hold the whole mesh.

The logic in my thought process is that when we have over-scheduling going on, it can eventually end up in a situation of "least effort" as a result of the bottlenecking effect, namely where:
  • all processes are either accessing neighbouring memory regions that are common to various processes;
  • or all processes only need to access a particular region in the memory for each process, that is needed for communicating between various processes.
For example, if 4 processes are dealing with a corner of their decompositions that is common to all sub-domains, this memory region would be used as the data source for each process to communicate with 2 or more processes at the same time.



Quote:

Originally Posted by KateEisenhower (Post 570825)
Do you know of some tutorial or description of how to use the hierarchical decomposition method? I searched the user guide and the forum but didn't get a clue.

Fortunately I believe/hope you've already found some more details about this: http://www.cfd-online.com/Forums/ope...mulations.html :)

Best regards,
Bruno

KateEisenhower November 2, 2015 04:28

Hi Bruno,

I understand your thought process. But what does this mean for a real simulation? The problem is that you can't actually see what is slowing down your parallel simulation, can you?
My current procedure on a 2-socket machine, each socket having 6 cores and 3 memory channels, is the following:

1) Run the case in serial to have a reference
2) Run 2 processes on different sockets, core-bound
3) Run 4 processes, 2 on each socket, core-bound
4) The same with 6, 8, 10 and 12 processes

I run these test cases for 10 iterations each (is that enough?), see which one finishes fastest, and go with that configuration for the case. Is there any other method?
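For what it's worth, the core binding in steps 2-4 can be expressed directly on the mpirun line; here is a sketch for the 4-process case (it assumes Open MPI 1.7 or newer, where the --map-by/--bind-to syntax exists - older releases use -npersocket and -bind-to-core instead - and simpleFoam is only a placeholder solver):
Code:

# 4 ranks, 2 per socket, each pinned to its own core
mpirun -np 4 --map-by ppr:2:socket --bind-to core simpleFoam -parallel > log.simpleFoam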

Regarding the hierarchical decomposition method: not really, I don't understand what it is supposed to do. A quick example:

Code:

hierarchicalCoeffs
{
    n               ( 3 1 2 );
    delta           0.001;
    order           xyz;
}

would look as follows:


Code:

----------------------
I      I      I      I
----------------------
I      I      I      I
----------------------

↑: z-direction   →: x-direction

How does the order of splitting affect the outcome?

Best regards,

Kate

wyldckat November 2, 2015 17:34

Hi Kate,

Quote:

Originally Posted by KateEisenhower (Post 571388)
The problem is that you can't actually see what is slowing down your parallel simulation, can you?

I know that there are MPI profiling tools that can try to give you this kind of information, but I've never used them myself.

Quote:

Originally Posted by KateEisenhower (Post 571388)
My current procedure on a 2-socket machine, each socket having 6 cores and 3 memory channels, is the following:

1) Run the case in serial to have a reference
2) Run 2 processes on different sockets, core-bound
3) Run 4 processes, 2 on each socket, core-bound
4) The same with 6, 8, 10 and 12 processes

For a particular type of test cases, this is usually the way to do this. Your mileage can vary depending on the type of simulation (e.g. simpleFoam or reactingFoam), mesh configuration, and on the matrix solver settings defined in "fvSolution".

Quote:

Originally Posted by KateEisenhower (Post 571388)
I run these test cases for 10 iterations each (is that enough?), see which one finishes fastest, and go with that configuration for the case. Is there any other method?

The number of subdomains, and the way the subdomains were divided, can affect the number of iterations the simulation needs to converge. This is to say that 10 iterations might not be enough to give you a good enough comparison. For example, comparing 10 vs 11 vs 12 seconds isn't as good as comparing 101 vs 113 vs 118 seconds.

Keep in mind that OpenFOAM technically uses boundary conditions of type "processor" for communicating the data between subdomains. And since small changes in a boundary condition can affect the solution, this means that more or fewer iterations might be needed to reach convergence. Note that these can be either iterations at the level of the matrix solvers (e.g. GAMG) or at the level of the outer iterations of the application solver (e.g. simpleFoam).

Quote:

Originally Posted by KateEisenhower (Post 571388)
Regarding the hierarchical decomposition method: not really, I don't understand what it is supposed to do. A quick example:

Code:

hierarchicalCoeffs
{
    n               ( 3 1 2 );
    delta           0.001;
    order           xyz;
}

would look as follows:


Code:

----------------------
I      I      I      I
----------------------
I      I      I      I
----------------------

↑: z-direction   →: x-direction

How does the order of splitting affect the outcome?

The standard objective is simple enough: keep the number of faces shared between subdomains down to the smallest possible number, because the fewer the shared faces, the less time is spent communicating between processes.

To a lesser extent, the other objective is to have the simulation solved in the most efficient way possible, simultaneously if possible. This can be tested by modifying the "incompressible/icoFoam/cavity" tutorial case to be 3D and then testing the various orders of decomposition. In theory, if we can have all of the domains work through the equation matrices in exactly the same order in parallel, this should be the most efficient way to process the data.
From your ASCII drawing, the efficient way would be to have all 6 processes work from left to right, then one line down and left to right again, within their own subdomains, so that they are working side by side on solving the same parts of the matrices, at least for each pair of processes.

I'm oversimplifying this, but it should become more apparent when testing with a 3D cavity case with a uniform mesh and a uniform mesh distribution between processes.

Translating this to a real simulation isn't as straightforward, but it can at least help you reduce the number of tests you need to do when looking for the best decomposition.
But for more complex meshes, the usual decomposition to go with is Scotch or Metis, since they use graph theory (I can't remember the exact terminology) to try to minimize the number of faces needed for communication between subdomains.
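For reference, switching to Scotch in system/decomposeParDict only takes something like this (a minimal sketch; no coefficients sub-dictionary is needed unless you want per-processor weights):
Code:

numberOfSubdomains 6;

method          scotch;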

Best regards,
Bruno

ht2017 October 8, 2017 00:08

Can you help me? Errors appear when I run in parallel and use the command "reconstructPar"
 
hi, everyone.
I am running a parallel case in OpenFOAM. When I run the command "reconstructPar -latestTime", errors appear.

First: some of the face coordinates in the polyMesh files have "word" text mixed into the numbers.
Second: in the p file, symbols such as "^, $, &" appear within the numbers.

I hope someone can help me.

Attachment 58859

smraniaki October 8, 2017 13:18

Quote:

Originally Posted by ht2017 (Post 666869)
hi, everyone.
I am running a parallel case in OpenFOAM. When I run the command "reconstructPar -latestTime", errors appear.

First: some of the face coordinates in the polyMesh files have "word" text mixed into the numbers.
Second: in the p file, symbols such as "^, $, &" appear within the numbers.

I hope someone can help me.

Attachment 58859


What solver did you use? It appears to me that your mesh has changed during the run; in that case you need to reconstruct the mesh first, and then reconstruct the fields.
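As a sketch of what that looks like (assuming an OpenFOAM version that ships reconstructParMesh and that only the latest time step is needed):
Code:

# rebuild the decomposed mesh first, then the fields
reconstructParMesh -latestTime
reconstructPar -latestTime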

OpenFoamlove November 1, 2017 09:25

Quote:

Originally Posted by eddi0907 (Post 411558)
So on a cluster where you use machines with more than one CPU, you need to do core binding and define which task runs on which CPU core, with an additional rank file in OpenMPI.

Example: 2 dual-CPU machines (no matter whether they have 4 or 6 cores):

mpirun -np 8 -hostfile ./hostfile.txt -rankfile ./rankfile.txt icoFoam -parallel

[...]

Now the job runs on 8 cores, distributed over cores 0 and 1 of each of the 4 CPUs, with a speed-up of more than 7.


Hi Edmund, I tried to do a parallel calculation on two networked PCs, but the simulation does not run any further; it gets stuck as shown below. Please help me find my mistake.


[15:18][tec0683@rue-l020:/disk1/krishna/EinfacheRohre/bendtubeparalle/bendingtube]$ mpirun -np 8 -hostfile machines simpleFoam -parallel
/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.1.1                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build : 2.1.1-221db2718bbb
Exec : simpleFoam -parallel
Date : Nov 01 2017
Time : 15:18:49
Host : "linxuman"
PID : 13714


with regards Anna

