CFD Online Discussion Forums - OpenFOAM
Thread: Lies, Damn Lies and Benchmarking (https://www.cfd-online.com/Forums/openfoam/60893-lies-damn-lies-benchmarking.html)

gschaider January 16, 2006 11:31

We're in the process of buying new hardware for a small cluster in the coming months. Evaluating hardware by looking at published results is a bit difficult, because benchmarks tend to fall into three categories:

- SPECmarks (which are OK, but IMHO not 100% applicable to CFD computations)
- frame rates in Quake 3/Doom 3 (which are interesting, but I don't think my boss would approve if I took these as the basis for a decision)
- other benchmarks, which tend to be integer-heavy and use little memory

So, to compare the hardware we get for testing, I wrote a Python script that runs various tutorial cases from the OpenFOAM distribution and compares their execution times with those of a reference machine. It then computes an average speedup relative to that machine.

The script can also be used to run the cases in parallel.

I KNOW that trying to gauge the performance of a computer system with a single number is a sign of extreme simple-mindedness, but I'm trying it anyway (and of course it's not the only number I'm using).
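
To give an idea of the approach, here is a minimal sketch of the core loop (NOT the actual PyFoam code; the case names, the reference timings and the runCase.sh wrapper are made-up examples):

#!/usr/bin/env python
# Sketch: run each tutorial case, time it, and average the speedup
# against the timings recorded on a reference machine.
import time, subprocess

# case -> runtime of the same case on the reference machine [s] (invented numbers)
referenceTimes = {"damBreak": 124.0, "cavity": 31.5}

speedups = []
for case, refTime in referenceTimes.items():
    start = time.time()
    subprocess.call(["./runCase.sh", case])   # hypothetical wrapper that runs the solver on the case
    elapsed = time.time() - start
    speedups.append(refTime / elapsed)        # > 1 means faster than the reference machine

print "Average speedup vs. reference: %.2f" % (sum(speedups) / len(speedups))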

The script is discussed in more detail at

http://openfoamwiki.net/index.php/Contrib_benchFoam

and is part of

http://openfoamwiki.net/index.php/Contrib_PyFoam

The script is quite stable (at least at my site). The problem is the benchmark suite, where half of the cases fail in parallel execution (problems during decomposePar, problems with the boundary conditions; it seems some of these cases were never run in parallel). I'm planning to have a stable version by the end of the month.

Any comments on the approach/the script/the benchmark suite would be greatly appreciated (even "You got it all wrong").

(One thing that especially interests me: which is faster, a) one dual-core Opteron or b) two equivalent single-core CPUs on one board? The last time I looked, the prices for these two configurations were almost the same.)

eugene January 16, 2006 12:11

Can't comment on the reliability of the benchmarks, but dual core vs. single core depends entirely on the interconnect you plan to use.

A friend of mine did a bunch of benchmarks (using STAR, admittedly) with cheap gigabit Ethernet and 3 AMD 3800 X2s. Using only 2 machines he got near 90% efficiency; adding the third machine, however, dropped him down to around 60%. From this and other experiences I would say: unless you can afford Myrinet or an equivalent interconnect, stick with single-core CPUs. A gigabit backbone just doesn't have the capacity or low enough latency to carry two compute units per NIC. Even if you etherbond 2 or more NICs per box, you will still have latency issues.

mattijs January 16, 2006 13:56

Can you report any problems with decomposePar?

gschaider January 16, 2006 16:15

It's not a problem with decomposePar per se: for instance, in the dieselFoam/aachenBomb case there are two fields (ft, fu) that don't have sufficient boundary conditions according to decomposePar:

--> FOAM FATAL IO ERROR : keyword walls is undefined in dictionary "/.automount/werner/Werner/bgschaid/bgschaid-foamStuff/Benchmark/dieselFoam_aachenBomb_standard.gcds07.cdratfd.unileoben.ac.at.case.runDir/0/ft::boundaryField"

(I'm fully aware that Lagrangian particles usually do not parallelize very well, but that was the reason why I included that case.)

Similar things happen with the other cases that fail (except for dnsFoam/boxTurb16: "FOAM FATAL ERROR : calculated number of cells is incorrect" when running dnsFoam).

I'll let you know if I find a real problem with decomposePar (and not a problem that has to do with model set-up)

traumflug January 16, 2006 16:50

Eugene wrote:
> Cant comment on the reliability of the benchmarks, but
> dual core vs single core depends entirely on the
> interconnect you plan to use.

Aren't you confusing a computer with a dual-core processor and a cluster with two nodes here? Dual core is always better than single core, assuming the same processor frequency.


Markus

gschaider January 16, 2006 17:49

I think what Eugene meant was: "if there are two CPUs on a board (in whatever form), then as soon as you need a third CPU for your task you'll see that it would have been wiser to invest in good networking instead of fancy SMP hardware".

@"dual core always better": if there's only one CPU you're right, but compared to a Dual-CPU-SingleCore-Board I'm not 100% sure, because, if I interpret the Processor diagrams I've seen correctly, on a DualCore the two cores have to share the same MemoryBus which could be a bottleneck. But nobody can tell me for sure whether this has an impact. That's why I want to benchmark.

eugene January 17, 2006 08:15

The AMD HyperTransport memory bus is good enough that dual-core CPUs only take about a 10% performance hit when running a 2-processor job.

The comment about the number of CPUs per NIC stands, though. It all depends on the number of FOAM processes that have to share the same communications interface. Basically, 2 cores/CPUs/processors per comms interface can produce a bottleneck, because the volume of interprocessor communication the NIC has to handle doubles compared to a single process.

mprinkey January 17, 2006 08:32

Channel bonding gigabit Ethernet is a waste of time: performance is not doubled and latency actually becomes worse. Since many (most?) dual-Opteron motherboards include dual gigabit interfaces onboard, a useful approach is to buy a bigger network switch and connect both NICs on each node to it. This is key: you need to assign a different IP address to each interface and basically make the single node look like two nodes by assigning it two host names.

So each nodeX would have two host names, nodeXa and nodeXb. When you launch your parallel runs on dual-CPU, dual-core Opteron nodes, you would use each hostname twice:

node1a
node1a
node1b
node1b
node2a
node2a....

This gives each pair of processors one independent network interface to use as its own and avoids network contention issues. Latency for this setup is the same as for a dual-CPU single-core node with one NIC. The same approach could be used for single-core dual-CPU nodes, or for dual-core dual-CPU configurations with four NICs.
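
For concreteness, the two-hostnames-per-node trick boils down to /etc/hosts entries along these lines (addresses and names are just examples):

192.168.1.3   node1a   # first NIC on node 1
192.168.1.4   node1b   # second NIC on node 1
192.168.1.5   node2a   # first NIC on node 2
192.168.1.6   node2b   # second NIC on node 2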

While this is a possible solution, in my mind the cost of a dual-core, dual-CPU Opteron node is at the breaking point for investing in higher-end networking. Specifically, our new dual-dual cluster uses Infiniband. The cost of each node itself was on the order of $4k; the networking added roughly $1k per node over plain gigabit. I think that is a reasonable investment for significantly higher bandwidth and lower latency.

eugene January 17, 2006 09:03

This is very interesting. So traffic between 2 or more NICs on a single machine will be balanced automatically, or is it managed by LAM/MPI?

I have two 8-way Opteron boxes here whose interconnect I would really like to improve. If, as you say, I can just stick in more NICs and cables, that would be awesome. For some reason I had never considered this a possibility, and fiddling around with channel bonding got me nowhere.

mprinkey January 17, 2006 09:24

There is no balancing to do. The IP addresses/hostnames are the identifiers that MPI/PVM uses to identify processes. By doing as I outlined, you will be giving different pairs of parallel processes different IP addresses. Let's just talk about one PE per NIC for clarity for now. In MPI terms, it might look like this:

# Node 2
PE0 192.168.1.3
PE1 192.168.1.4

# Node 3
PE2 192.168.1.5
PE3 192.168.1.6

If the nodes have two CPUs each, then when we launch this job each PE will have its own IP address and hence its own network interface to use. Traffic moving between PE0 and PE1 will not be sent to the switch; the IP stack will bounce it right back, just as it would if the processes shared the same IP address, so no performance is lost. With dual cores it is the same: two PEs will share an IP address and its corresponding network interface, and your hosts file will list the hostnames twice.

On your 8-way boxes, you have a few ways to go. You can buy a few cheap Intel e1000 cards and populate as many PCI slots as you can, or you can spend a bit more and get the Intel dual- or quad-interface cards. I would also recommend that you at least check Infiniband prices. You can connect two machines "end-to-end", so you would only need to buy two cards and a cable. That shouldn't be much more than $1200-$1500.

BTW, make sure that you add this line to the modules.conf file if you are running the e1000 cards under Linux:

options e1000 InterruptThrottleRate=80000,80000

Add an "80000" for each e1000 you have. That line above is for two interfaces. This greatly reduces network latency and gave about another 150 Mbps in bandwidth. With this tuning, I got latency numbers in the 25-ms range on our xeon cluster. That is down from about 160 ms using the default settings.

eugene January 17, 2006 10:00

Now why didn't I think of that? Thanks for the info.

I will see about getting a few PCI-X multi-channel cards as soon as I can get these monsters stable.

I know this is not really the forum for this, but I have to ask since my patience is wearing thin: has anyone managed to get any of the Opteron 8-way systems stable under load for protracted periods (a week or more)?

gschaider January 17, 2006 10:21

Hello Mattijs!

I didn't find any problems with decomposePar. The only two cases in the suite that I didn't get to run are

- dieselFoam/aachenBomb: the same problem as the one described by thomas in
http://www.cfd-online.com/OpenFOAM_D.../126/1634.html

- dnsFoam/boxTurb16: dnsFoam says

<snip>
--> FOAM FATAL ERROR : calculated number of cells is incorrect

From function Kmesh::Kmesh(const fvMesh& mesh)
in file Kmesh/Kmesh.C at line 87.
</snip>

no matter how I decompose the grid (simple/metis). My stupid question: does dnsFoam run in parallel?

hjasak January 17, 2006 10:30

Nope. It uses fast Fourier transforms for the forcing, on a regular uniform mesh (Kmesh), and that does not parallelise. If you throw away the forcing, the solver will run in parallel.

Sorry,

Hrv

niklas January 17, 2006 10:32

decomposePar needs proper boundary conditions like any other code, but ft is not used anymore by dieselFoam, so that file can simply be removed.
Since decomposePar tries to decompose every file it finds in the directory, it will obviously not work if the boundary conditions are wrong.

Has anyone tried correcting the BCs, or simply removing ft, and then running decomposePar for the aachenBomb?

N

gschaider January 17, 2006 10:56

@dnsFoam: I've marked it as non-parallel in the benchmark suite.

@dieselFoam: my script does that (removes ft and fu), and then the grid gets decomposed correctly. But as soon as dieselFoam runs in parallel I get the error described in the other thread.
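
For anyone else hitting this, the workaround amounts to something like the following (a sketch only: the case location is an example, and the root/case arguments follow the 1.x utility convention):

cd /home/me/OpenFOAM/run/dieselFoam    # a local copy of the tutorial
rm aachenBomb/0/ft aachenBomb/0/fu     # fields no longer used by dieselFoam
decomposePar . aachenBomb              # decomposition now succeeds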

duderino January 23, 2006 03:49

Hi Bernhard,

the benchmark script is what I was looking for, since there are almost no CFD benchmarks available. I am also interested in a dual-core vs. two-CPU comparison: especially AMD Athlon 64 X2 4800+ vs. 2 x AMD Opteron 248 (2.20GHz) vs. AMD Opteron 265 (2 x 1.80GHz), which are all about the same in price.

The problem is I don't get your PyFoam-0.2.2 script to run. If I do a 'python setup.py install'
I get: error: invalid Python installation: unable to open /usr/lib/python2.4/config/Makefile (No such file or directory). Any idea?

Python seems to be installed, since /usr/lib/python2.4/ does exist, but the config directory does not.

Thanks for your help!

Jens

P.S. I never used python before!

gschaider January 23, 2006 07:03

Hi Duderino!

Which Linux distribution are you using (I assume it's Linux)? (Python 2.4 is only included in the most recent distributions.)

Anyway: your Python 2.4 installation seems to be broken. To find out how badly broken it is, just type 'python' on the command line. You should then get an "interactive Python shell"; if you don't, the installation is very badly broken.
If you're lucky there is an older version of Python still installed (called 'python2.3' or 'python23'; 2.2 won't work with my scripts). Try that.

Feel free to contact me by e-mail (if we find a solution we can post it to this forum, but I don't think it is necessary to bother people with the intermediate steps).

gschaider January 24, 2006 11:34

Hello all!

First, concerning Jens' (Duderino) problem: it seems that Ubuntu Linux ships the files necessary for a successful setup.py only with the Python development package (try 'apt-get install python2.4-dev' or something similar).

The benchmark script and suite are now sufficiently stable to be thought of as 'beta quality'.

Some preliminary results can be found at
http://openfoamwiki.net/index.php/Benchmarks_standard_v1

The parallel results are not too good (some would even say they're bad), but that was to be expected with cases in the suite that use only 11 MB of memory (King Amdahl says hello). Still, I think some of the results are quite interesting (good speedup for Opteron SMP compared to Xeon SMP (with multithreading; that's not so good)).

Feel free to add your results.

And of course: I'm still open to suggestions concerning the benchmark-suite.

duderino January 31, 2006 10:31

Hello all

I am looking for some volunteers to help me compare some machines. You just need to use Bernhard's Python script collection, which you can get at the links in the first message of this thread.

I would really like to see some benchmark results on Opteron 250 and above systems. So if somebody happens to have such a system: please run the benchmark and publish the results at the wiki. This will definitely help me (and also others) with choosing a new system.

Best regards

hani February 4, 2006 08:34

Hi,

We are also planning on purchasing a new Linux cluster. It has basically already been decided that it will be an AMD Opteron dual-node, dual-core, 2.2GHz system. I will start doing some benchmarking during next week on a dual-node, dual-core Opteron 280 (2.4GHz) with up to 16 CPUs/cores. I will benchmark both with a gigabit network and with Infiniband. Later on (in a week or so) I will have the opportunity to also try out a similar system, but with InfiniPath and up to 32 CPUs/cores.

I will try to use your Python script, but I will also run a test with a 1M-cell test case in simpleFoam (a water turbine draft tube; anyone who would like the case can contact me to get it). As you have already mentioned, the test cases in the Python script are most likely way too small to say anything about real applications.

Does anyone have any suggestions on special settings I should use concerning domain decomposition (I plan to use automatic metis), or any specific settings in OpenFOAM that could influence the benchmarking?

Håkan.

gschaider February 6, 2006 12:38

Hi Håkan!

Now I can tell you about the hidden agenda I had in starting this thread: the things you're talking about (gigabit vs. Infiniband) are exactly what interests me, and the more hard data is available, the better.

About the size of the test cases: I was thinking about splitting the cases into size classes (the way the Fluent people do it with their benchmarks). And I'm still open to suggestions as to which cases from the tutorials would fit the purpose better. (BTW: I'm planning to extend the script to make it possible to use it with cases that are not in $FOAM_TUTORIALS; I should have that in a few days.)

@decomposition: my approach when putting the benchmark suite together was "let the computer do all the work" (-> metis), but I got some errors when applying this strategy (I think some physical boundary conditions don't like being split by a processor boundary). On the performance merits of the decomposition strategies I can't comment. Sorry.

Bernhard

jens_klostermann February 7, 2006 04:12

Hi Håkan!

We are also planning to buy a new cluster, and there will be an opportunity to benchmark. Since there is still a lack of large cases, I would like to take you up on your offer of the 1M-cell simpleFoam test case (the water turbine draft tube). If you compare gigabit Ethernet vs. Infiniband, will you have a chance to try the suggestion by Michael Prinkey (every PE gets its own IP and also its own NIC)?

Waiting for results

Jens

hani February 7, 2006 04:39

Hi Jens,

I am planning to do the tests tomorrow together with our Linux cluster provider (Gridcore). I guess that the suggestion by Michael requires some extra hardware, so it might be difficult to convince Gridcore to make this effort, but I will discuss it with them.

As soon as I have the results I will post them here. If you would like to have the complete set-up of the test case after Wednesday, send me an e-mail. You can find my e-mail address by clicking on my name in the forum.

Håkan.

hani February 7, 2006 07:26

I have made some preliminary tests on my own dual AMD to find out which settings I should use for the benchmarking.

Can anyone tell me if the parallelization in OpenFOAM is made so that a parallel run should give exactly (more or less) the same convergence as a sequential run?

I have tried using the AMG solver for the pressure, and I get the same clockTime for both 1 and 2 CPUs, i.e. zero speedup. The reason for this could be that there is communication at each grid level, which slows down the computations. I'm not yet sure that the problem has its origin in the AMG solver, but that is my sophisticated guess.

Does anyone have any idea which solvers I should use to get good parallel speedup? I know that choosing a solver for good parallel speedup might not be good for convergence, since the more advanced solvers have much better convergence, but I would like to test it, since zero speedup is not very good.

Håkan.

gschaider February 7, 2006 07:48

Håkan, your question might be answered by the posting "New Releases", which appeared in "Announcements" 5 minutes after your posting.

Quote: "- rewriting the AMG solver has improved performance in parallel"

hani February 7, 2006 08:43

I saw the new release. However, I have now tested ICCG with the same unsatisfactory result, which indicates that it wasn't the AMG solver. I must be doing something else wrong.

About the Python script - do I have to be root to install it?

Håkan.

gschaider February 7, 2006 09:30

About root: a very good question. I never thought of that, because in my small world I have the root password.

The easy way to install the script is as root. Then the stuff gets installed to a place where Python automagically finds it.

But of course it's short-sighted of me to assume that everyone has a root password (or is allowed to sudo).

So the way to do it as a non-root user is to:

1. create a directory for the Python libs (for instance /home/me/PythonLibs)
2. call the installation script with 'python setup.py install --prefix=/home/me/PythonLibs'
3. set the environment variable PYTHONPATH to /home/me/PythonLibs/lib/python2.3/site-packages (the second portion of the path may vary depending on your Python installation)
4. check the installation by just typing python on the command line; in the Python shell type 'import PyFoam', and if you don't get an error message all is well (a condensed transcript of these steps follows below)
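
Put together, a session for the non-root installation might look roughly like this (the paths are examples; adjust the python2.3 part to your installation):

mkdir -p /home/me/PythonLibs
cd PyFoam-0.2.2
python setup.py install --prefix=/home/me/PythonLibs
export PYTHONPATH=/home/me/PythonLibs/lib/python2.3/site-packages
python -c 'import PyFoam'   # no output means the installation was found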

I'll change the Wiki-page accordingly

About the release: I just couldn't resist. Because of the 5-minute gap between your question and the release announcement I thought: "These guys are really fast with their fixes" :-)

hani February 7, 2006 10:48

BIG WARNING!

I would like to post a BIG WARNING: don't forget to make sure that the number of 1's for the processorWeights in metisCoeffs in decomposeParDict corresponds to the numberOfSubdomains specified in the same file. The problem I had in the previous discussion was that I had specified numberOfSubdomains 2, but had 10 1's left over from a previous computation. OpenFOAM interpreted this as processor0 having processorWeight 1 and processor1 having a weight corresponding to the sum of the rest of the 1's.

I only realized what the problem was when I ran the case on 2 CPUs on two different dual machines: the CPU usage was much lower on one of them. This was not the case when running both processes on the same dual machine. Of course I could have looked at the numbers when doing the decomposePar, but in this case I didn't.

It would be nice if I only had to specify the number of processes once in decomposeParDict, as long as I am running on a homogeneous cluster (which most people usually are). (A consistent example is sketched below.)
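
For reference, a consistent set of entries would look something like this (only the relevant part of decomposeParDict, with made-up values; the point is that the number of weights matches numberOfSubdomains):

numberOfSubdomains 2;

method metis;

metisCoeffs
{
    processorWeights
    (
        1
        1
    );
}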

Now a preliminary parallel speedup is:
Running on Dual Intel(R) Xeon(TM) CPU 2.40GHz, 100Mbps Ethernet network, 500MB RAM/CPU, 0.5MB cache/CPU.
1 CPU: speedup 1 (normalized)
2 CPUs on one dual node: 1.2 (!!!???)
2 CPUs on two dual nodes: 2.0 (great!)
4 CPUs on two dual nodes: 2.2
4 CPUs on four dual nodes: 3
I did not check the influence on the convergence.
I used the ICCG solver for the pressure.
The comparison is based on clock time for four iterations (normalization factor 494s)

Can someone tell me why it is better to run over this slow network than to stay as much as possible on the same nodes? I guess that is what was discussed earlier in this thread? I'm surprised (and scared) by the effect when running on two CPUs. I will discuss it with Gridcore tomorrow.

I will get back with the 'real' investigation soon.

Håkan.

hani February 7, 2006 11:07

Hi Bernhard,

I think I did as you said with the Python script, except that the default config file was named defaultBench.cfg instead of default.cfg.

When running ./benchFoam.py defaultBench.cfg I got the following error message:

Traceback (most recent call last):
File "./benchFoam.py", line 7, in ?
from PyFoam.Execution.BasicRunner import BasicRunner
ImportError: No module named PyFoam.Execution.BasicRunner

I have no idea what this means.

Håkan.

gschaider February 7, 2006 12:18

Hi Håkan!

@the benchmarks: Our Xeon-SMP machines scaled shitty compared with the Opteron machine, but that shitty? One wild guess would be Hyper-Threading. Is it enabled? If yes: get rid of it (I've never done tests, but I've heard that it can impact performance on SMP machines). BTW: the speedup you get (1.2) is approximately what you should get for a single Xeon with two processes running with Hyper-Threading. If HT can be ruled out, I would start blaming the motherboard, then the kernel.
But I'm not a hardware expert, so all of these are guesses.

@python script: The error message means that it is trying to import a submodule of PyFoam and can't find it.
Please check the following:
1. the python you get when typing python in the shell ('which python') is the same as the one expected by the script (/usr/bin/python); this should only matter if you installed PyFoam as root
2. PYTHONPATH is set to the right directory (the directory PYTHONPATH points to should contain a folder PyFoam, in which there are several folders, one of them called Execution)

Should both of these tests be OK, do the following:
- on the shell type 'python'; the Python interactive shell appears (it can be left with Ctrl-D)
- in the Python shell type 'import sys', then 'print sys.path': a list of directory names should appear, one of them the directory you set PYTHONPATH to (assuming you installed as non-root)
- try the offending line (from PyFoam.Execution.BasicRunner import BasicRunner) in that shell; it should raise the same error the script gives you (see the example session below)
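
Put together, such a check session would look roughly like this (the PYTHONPATH directory is an example):

$ python
>>> import sys
>>> print sys.path
['', '/home/me/PythonLibs/lib/python2.3/site-packages', ...]
>>> from PyFoam.Execution.BasicRunner import BasicRunner

If the import raises an ImportError here, the script will fail in the same way.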

If things still aren't working, feel free to contact me via e-mail. (We could do it here too, but I think there are already approx. 3 different discussions going on in this thread (all of them interesting), so I think we'll sort out that problem separately, and I'll put the gathered knowledge in the appropriate place (wiki, script, or here).)

mattijs February 7, 2006 13:16

decomposition: we find (on simplish cases) that hierarchical or simple decomposition can give results as good as, or better than, Metis.

For trivial cases (e.g. lid-driven cavity) Metis produces a funny decomposition. Run with the -cellDist argument to have decomposePar dump the decomposition.

mattijs February 7, 2006 13:19

> Can anyone tell me if the parallellization in OpenFOAM is made so that a parallel run should give exactly (more or less) the same convergence as a sequential run?

Not exactly. (IC) preconditioning is not parallelizable, so there will be slight differences. The AMG solver uses ICCG at the coarsest level, so it will also give slightly different results.
The only solver that parallelizes perfectly is the diagonally preconditioned CG (DCG). Unfortunately it is generally much, much worse than ICCG.

jens_klostermann February 7, 2006 13:31

> Can anyone tell me if the parallellization in OpenFOAM is made so that a parallel run should give exactly (more or less) the same convergence as a sequential run?

@ slight differences:
I ran an interFoam case which diverged in parallel (2 processes). When I restarted the same case on a single CPU, from the last time dump of the parallel run, it ran right past the timestep where it had diverged in parallel.

hani February 8, 2006 02:36

Hi Mattijs,

My question regarding whether OpenFOAM is made so that a parallel run gives exactly (more or less) the same convergence as a sequential run was about the level at which the parallelization is done. If every single operation is parallelized, I guess it should be possible to get the same convergence. Consider a parallel run where each cell belongs to a separate CPU: for every operation that needs information from a neighbour, you have to send that information with MPI, instead of the sequential case, where you get it by pointing at it. The same information would be used in both cases, and the parallel convergence would be the same. However, this is not a good way to parallelize the code, since it would run very slowly, so you choose another level of parallelization, where a certain amount of operations is done on each CPU before exchanging information with your neighbours.

Take the AMG solver as an example: I could do all AMG levels on each CPU before exchanging information; I could exchange information at every AMG level; I could exchange information at each sweep of the solver (in this case the ICCG solver); or I could exchange information exactly when I need it (as described above).

Of course - I can find the answer in the source code :-) ... In the future.

Håkan.

mattijs February 8, 2006 04:32

Everything is fully/exactly parallelized except for the IC (incomplete Cholesky) preconditioning. This means that ICCG and AMG will behave slightly differently in parallel. Only the diagonally preconditioned CG solver (DCG) should behave exactly identically.

gschaider March 31, 2006 06:30

I redid the benchmark suite (http://openfoamwiki.net/index.php/Benchmarks_standard_v1) with version 1.3 on three machines. For the Intel machines it's more than 10% faster (on average; with some solvers even 30%). For the AMD machine the performance increase doesn't seem to be that dramatic (but I've got to recheck those results). The only solver in the suite that seems to be consistently slower than in version 1.2 is Xoodles.

But these results are only preliminary. Very interesting would be results comparing LAM-MPI with OpenMPI, but I think I've got to adapt the scripts for that.

plmauk November 7, 2007 11:30

Hi,
Does anybody know which clusters are recommended for running OpenFOAM?

