Home > Forums > Hardware

New computer build. 10k budget


Old   June 12, 2012, 05:14
Question New computer build. 10k budget
  #1
Member
 
Stuart
Join Date: Jun 2012
Posts: 59
Rep Power: 5
ShowponyStuart is on a distinguished road
I am building some computers for my final year project (FYP) for mechanical engineering. I am limited to 4 cores per simulation due to the licensing, so I am going to build a couple of computers, and I wanted some people smarter than me (which is most people) to check whether I am heading down the right path.
What I am looking at is a main computer plus a couple of independent (i.e. NOT nodes for distributed computing) computers to also run simulations for me.

I will be using my main machine for lots of Pro/E, MATLAB etc. as well, so I figure a decent GPU won't go astray in it, whereas on the others I will only be doing a bit of pre/post-processing occasionally (ideally most of it will be done on my #1 machine).

I am investigating the leading-edge vortices created by tubercles and how they affect the overall performance of a hydrofoil, so the meshes will get pretty big (and fine) quite quickly. (Note I will also be using these machines later on down the track for other things, like rocket nozzle simulations and increasingly more complicated things, as I am fascinated by CFD. I am considering starting up a business, but that is much further down the track.) I will be using ANSYS 13 (14?) CFX; I have access to the complete ANSYS 13-14 package, including pre and post etc.

So how do those specs look? Will these allow me to pump out a decent number of simulations relatively quickly?
Main machine
Keep in mind this machine is primarily built for CFD (pre/post, sims) but also Pro/E, MATLAB, Strand7 etc. (so it's basically a CAE workstation)
CPU --------> i7 3930k
MOBO ------> ASUS Sabertooth X79?? (not sure if this is the best bet or not)
RAM -------> 64GB quad channel DDR3 @ 2333MHz
HDD(1st) -> 60GB SSD (for programs, i.e. Windows, ANSYS, Pro/E etc.)
HDD(2nd) -> 240GB SSD (working drive for ANSYS simulations)
HDD(3rd) -> 2x1TB 7200rpm in RAID 0 (for "longer term storage")
GPU -------> Nvidia Quadro 4000
Secondary machine(s)
NOTE: each one will be operating independently and NOT as a node, due to the licence arrangement at my university. Keep in mind this machine is primarily built for running CFD simulations with a bit of pre/post-processing here and there.
CPU---------> i7 3930k
MOBO-------> ASUS Sabertooth X79?? (not sure if this is the best bet or not)
RAM---------> 32GB quad channel DDR3 @ 2333MHz (can increase this later if needed)
HDD (1st) --> 120GB SSD
HDD (2nd) -> 1TB 7200rpm (short term storage until data can be transferred to the main computer)
GPU -------->Nvidia Quadro 600
For Both Machines
Cases ----->??? Will need to be big to accommodate water cooling (and slick looking too)
PSUs ------> ??? (850W or higher to cope with demand and overclocking?)
Cooling --> liquid cooled (Corsair H80/100?)

Last edited by ShowponyStuart; June 14, 2012 at 05:59. Reason: UPDATED LIST!

Old   June 12, 2012, 05:32
Default
  #2
Senior Member
 
Joern Beilke
Join Date: Mar 2009
Location: Dresden
Posts: 185
Rep Power: 9
JBeilke is on a distinguished road
Why do you want 3x 15k HDDs? They are not really quiet.

The rest seems ok.

Old   June 12, 2012, 05:51
Default
  #3
Member
 
Stuart
Join Date: Jun 2012
Posts: 59
Rep Power: 5
ShowponyStuart is on a distinguished road
Yeah, I have been looking, and it seems that the bottleneck tends to be in the I/O writes. I was hoping that they would be sufficient for big transient writes to prevent everything slowing down. Realistically, perhaps it would be better to bite the bullet and get bigger SSDs, but I figured I could save some money there.
I had planned to set up a reasonable NAS, so I could transfer my simulations off each computer onto that to avoid filling the computers up, keep the sims fast, and just transfer data across later. I'm not sure if that's the way to go or not. I think I can tolerate the noise; the water cooling is more to keep the CPUs cool, to be honest. I'm a stickler for that.

Old   June 12, 2012, 12:14
Default
  #4
Senior Member
 
Charles
Join Date: Apr 2009
Posts: 179
Rep Power: 9
CapSizer is on a distinguished road
Quote:
Originally Posted by ShowponyStuart View Post
I am building some computers for my final year project (FYP) for mechanical engineering. I am limited to 4 cores (per simulation) due to the licencing, so I am going to build a couple of computers and I wanted some people smarter then me (which is most people) to see if I am heading down the right path.
What I am looking at is
Main computer-(2(3?)xWorkhorses in brackets)
(note if nothing in brackets the specs are the same)

cpu--> i7 3960X 6-core -(i7 3930k 4 core)
ram-->64Gb @ 1800+?
hdd-->60Gb ssd
hdd-->3*300 15k rpm raid 0 -(2*120 15k rpm raid 0)
GPU-->Nvidia Quadro 4000- (Nvidia Quadro 600)

So how do those specs look, will these allow my to pump out a decent amount of simulations relatively quickly?
Well, apart from the question of whether you really need the 15k RAID, the thing that doesn't seem to make any sense is the 64 Gb (GB, surely???) of RAM. To start with, can you actually find a single-socket motherboard that will support 64 GB (I'm assuming you mean 64 per machine)? Most I've seen only support up to 32, but I would happily be proved wrong. And following on from that, it really seems odd to run only 4-way parallel with such large models: 4 GB or less per core seems to be a more reasonable number. If you are going to be running 16 GB per core (probably around 10 million cells per core?), disk I/O is going to be the least of your difficulties.
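The memory-per-core arithmetic is worth sketching out. The ~1.6 GB per million cells figure below is only an assumption (it is roughly consistent with the 16 GB ≈ 10 million cells estimate); real usage varies with solver, physics, and precision, so check the solver's own memory report.

```python
# Rough RAM-vs-mesh-size sketch. GB_PER_MILLION_CELLS is an assumed
# figure; real numbers depend on solver, physics, and precision.
GB_PER_MILLION_CELLS = 1.6

def max_mesh_millions(ram_gb):
    """Largest mesh (millions of cells) that fits in ram_gb of RAM."""
    return ram_gb / GB_PER_MILLION_CELLS

for ram_gb in (16, 32, 64):
    cells = max_mesh_millions(ram_gb)
    print(f"{ram_gb:3d} GB RAM -> ~{cells:.0f}M cells "
          f"(~{cells / 4:.1f}M per core on 4 cores)")
```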

Water cooling the CPUs seems to be a pretty good idea; I've been impressed by the low noise level.

Old   June 12, 2012, 13:35
Default
  #5
Super Moderator
 
diamondx's Avatar
 
Ghazlani M. Ali
Join Date: May 2011
Location: Canada
Posts: 1,291
Blog Entries: 23
Rep Power: 20
diamondx will become famous soon enough
Some workstations with dual sockets can go up to 192 GB of RAM:

http://configure.us.dell.com/dellsto...recision-t7500
__________________
Regards,
New to ICEM CFD, try this document --> http://goo.gl/G2gkE
Ali

Old   June 12, 2012, 14:44
Default
  #6
Senior Member
 
Charles
Join Date: Apr 2009
Posts: 179
Rep Power: 9
CapSizer is on a distinguished road
Quote:
Originally Posted by diamondx View Post
some workstation with dual socket can go up to 192 GB in ram

http://configure.us.dell.com/dellsto...recision-t7500
Indeed, but the OP seems to be interested in single socket machines. FWIW, this single socket board can actually take 8 DIMMs, for up to 64 GB of RAM: http://www.asus.com/Server_Workstati...specifications

Old   June 12, 2012, 18:47
Default
  #7
Senior Member
 
Erik
Join Date: Feb 2011
Location: Earth (Land portion)
Posts: 486
Rep Power: 9
evcelica is on a distinguished road
Hardware looks good, but like other people have said, ditch the HDDs and go with SSDs; they work MUCH better for reading random sections of data. When I do a transient analysis and want to pull velocity at a single point over time, the HDD can sometimes take hours, but my SSD is always WAY faster. You can RAID 0 two of them if you want, but this will only help you when you are reading or writing results, not during calculations. One more thing about RAID 0: you also double the size of the disk, so two 120GB SSDs would RAID into a 240GB... I think. Just make sure with SSDs you do your research and don't get one of the ones that causes incurable blue screens.
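The RAID 0 arithmetic (capacity and, in the ideal case, sequential throughput both scale with the number of drives, with no redundancy) can be sketched quickly; the 500 MB/s per-drive figure below is just an illustrative assumption:

```python
# Back-of-envelope RAID 0 sketch: striping concatenates capacity and,
# ideally, multiplies sequential throughput, but any single drive
# failure loses the whole array (no redundancy).
def raid0(n_drives, capacity_gb, seq_mb_s):
    return {
        "capacity_gb": n_drives * capacity_gb,
        "ideal_seq_mb_s": n_drives * seq_mb_s,
        "drives_that_can_fail": 0,  # RAID 0 tolerates no failures
    }

# Two 120GB SSDs, assuming ~500 MB/s sequential each:
print(raid0(2, 120, 500))
```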

The 3930K is also a 6-core; the only difference from the 3960X is the cache size, 12 MB instead of 15 MB. Also, the 3960X has a 100MHz clock speed increase, which doesn't mean anything if you are overclocking. Contrary to what others have said, I've found overclocking makes a huge difference with the Sandy Bridge-E line of processors: I've seen an almost perfectly proportional increase in calculation speed with clock speed. This was using the CFX benchmark; I have not tested larger models, so maybe that would prove different.

If you're going to be overclocking, make sure you cool everything in the case well, not just the CPU, as the memory modules and VRM area get hot too. Water cooling lowers the airflow around the VRM area, since you don't have a fan on the CPU right near there anymore.

Make SURE your RAM is compatible with your motherboard; go to the motherboard manufacturer's site and make sure it is on their list.

Don't forget to disable hyperthreading if using CFX.

Old   June 12, 2012, 18:56
Default
  #8
Senior Member
 
Join Date: Aug 2011
Posts: 114
Rep Power: 6
scipy is on a distinguished road
Well, first of all, both the 3960X and the 3930K are 6-core CPUs. Second of all, don't waste money on the 3960X: the 3930K performs exactly the same when clocked to the X's native frequency.

Second, CapSizer is right. If you want your node machines to be quad cores (i7-2600K, for example), then go for max 16 GB RAM. I have the mentioned CPU and 32 GB of 1333 MHz DDR3, but when I run a case of about 17 million cells with a pressure-based coupled solver (RAM usage goes up to 97% or so), it takes so goddamn long that it's only feasible for steady-state simulations and for a "final results" case (in a grid independence study or something similar).

There was a topic here about mechanical calculations and the I/O bottleneck, so save some money (nearly double if you substitute the 3930K with a 2600K for the nodes, and double once more for the main machine by subbing the 3960X with a 3930K) and pay for a 60 GB async NAND SSD per node (regular Corsair Force Series 3). For the main machine, get at least a 120 GB sync NAND drive (Corsair Force GT, for example) so you can have a couple of main programs installed plus the active case files. For the main machine it's a good idea to get 8x8 GB of the fastest RAM you can afford (I think CL10-11 2133 MHz is the "best buy"), and for the nodes don't go over 1600 MHz (the increase in performance is minimal and the price is more than double).

If you are really going to be serious, then some used InfiniBand cards and a switch should be in the plans. 10 Gbps SDR InfiniBand should do fine for anything up to 24 nodes; those cards can be had on eBay for anything from 30 to 100 euros, and a switch can be found for 300-500 (24-port Topspin etc.). Without a fast interconnect, speeding up all the other "bottlenecks" becomes meaningless.

On the other hand, you can skip all the trouble and setup and just get a nice quad-Opteron 6272/6274 with 256 GB RAM, and you're good to go on almost any simulation.

Old   June 12, 2012, 19:15
Default
  #9
Senior Member
 
Erik
Join Date: Feb 2011
Location: Earth (Land portion)
Posts: 486
Rep Power: 9
evcelica is on a distinguished road
He's only able to run 4 cores per simulation, so he will probably be running local parallel, not distributed? I really don't see the purpose of the 2 or 3 other computers if they are only doing post-processing, or are you going to run several computers in distributed parallel?

Old   June 12, 2012, 20:55
Default
  #10
Member
 
Stuart
Join Date: Jun 2012
Posts: 59
Rep Power: 5
ShowponyStuart is on a distinguished road
I should clarify: as I am using the university licences, I have access to 30+ licences, but am limited to 4 cores per simulation due to how the university has its licences set up. So I can, for example, have 4 separate simulations running at once, but I cannot run 16 cores on 1 simulation. I think that answers evcelica and scipy as to why I have done it this way: they are not nodes, but rather stand-alone computers.

Quote:
Originally Posted by scipy View Post
Well, first of all both the 3960X and the 3930K are 6 core CPUs Second of all, don't waste money on the 3960X.. 3930K performs exactly the same when clocked to X's native frequency.
For this I was looking at that extra cache at the processor level; also, as this will be my main computer for pretty much everything, I didn't mind spending that bit extra to get the top of the line.


Quote:
Originally Posted by CapSizer View Post
Well, apart from the question of whether you really need the 15 k raid, the thing that doesn't seem to make any sense is the 64 Gb (GB surely???) RAM. To start with, can you actually find a single socket motherboard that will support 64 GB
From my investigation there seem to be a fair few mobos that can support 64GB of RAM. The ones in the link seem to be a good starting point.
http://www.techspot.com/review/484-intel-x79-motherboard-roundup/page3.html


Quote:
Originally Posted by evcelica View Post
The 3930K is also a 6 core, only difference from the 3960X is the cache size is 12 instead of 15.

Contradictory to what others have said, I've found overclocking makes a huge difference with the Sandy-bridge-E line of processors. If your going to be overclocking make sure you cool everything in the case
I was under the impression that you can get a 4-core version of the 3930K that is substantially cheaper?

Has anyone else had any experience with overclocking improving performance much? I would expect meshing and the like to be quicker, I guess. Yeah, I like to keep my gear cool, so that shouldn't be a problem.




I'm concerned about how big my transient files will get though, and how much an SSD of the size I want will cost. I might be overestimating what I need here, but I was thinking a transient run may be up to 200+GB, which means I would need to transfer it off the SSD anyway, or spend ridiculous money on a huge SSD. I was trying to save a little money here. Honestly though, I won't be doing all that many transient sims for the thesis, I think; they will be more for me to wrap my head around the problem completely, so I suppose smaller SSDs will be fine on the other computers if I am running steady-state sims.

Last edited by ShowponyStuart; June 12, 2012 at 22:56.

Old   June 12, 2012, 22:26
Default
  #11
Senior Member
 
Erik
Join Date: Feb 2011
Location: Earth (Land portion)
Posts: 486
Rep Power: 9
evcelica is on a distinguished road
The 4-core version of sandy-bridge-E is the i7-3820, but it has a locked multiplier, so it will be relatively difficult to overclock compared to the K (unlocked) versions.

Two top-of-the-line 120GB SSDs should cost about $200 each, making a ridiculously fast 240GB RAID SSD. Intel has a single 240GB SSD that is only about $340. Though I've never heard of disk writing being a bottleneck for CFD; are you writing results every iteration or something?

For transient results you can write just the variables of interest, which saves an enormous amount of space/time. I mean seriously, do you really care what the ... second-order vorticity divergence gradient is anyway? (Yes, I made that up.) I started writing selected variables for transient simulations for this very reason; writing EVERYTHING takes up an enormous amount of space. I would go with one good SSD over any RAID HDD; they have just performed SOOO much better for me, are quiet, and use far less energy. And when you are post-processing and reading the data you will be sitting in front of the computer waiting for results; it can calculate while you sleep. So spend the extra money on SSD(s) over HDDs instead of on the 3960X over the 3930K, which, like scipy said, will probably not offer much if any performance boost, while the 3930K is about $400 cheaper.
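The space saving from writing only selected variables is easy to estimate: stored size scales with nodes, variables, and saved timesteps. The node, variable, and timestep counts below are made-up illustrative numbers, not figures from this thread:

```python
# Rough transient result-file size: one value is stored per node, per
# variable, per saved timestep (8 bytes each in double precision).
BYTES_PER_VALUE = 8

def results_gb(n_nodes, n_vars, n_timesteps):
    return n_nodes * n_vars * n_timesteps * BYTES_PER_VALUE / 1e9

full = results_gb(5_000_000, 20, 500)     # every variable, every step
selected = results_gb(5_000_000, 4, 500)  # e.g. just velocity + pressure
print(f"all variables: ~{full:.0f} GB, selected only: ~{selected:.0f} GB")
```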

Perhaps not the most strategic move, but since you keep mentioning the "main computer", one thing you may want to think about, just for the pure BAD-ASS factor, is getting a single dual-socket Xeon E5-2687W machine. You could run two simulations at the same time and be about as fast as two separate 3930K machines. Heck, you could run three or four simultaneous simulations on that single computer, no problem. Three or more 3930K machines would be more effective at getting work done, but one bad-ass machine has its perks too; just something to consider.

Old   June 12, 2012, 22:51
Default
  #12
Member
 
Stuart
Join Date: Jun 2012
Posts: 59
Rep Power: 5
ShowponyStuart is on a distinguished road
Quote:
Originally Posted by evcelica View Post
The 4-core version of sandy-bridge-E is the i7-3820, but it has a locked multiplier, so it will be relatively difficult to overclock compared to the K (unlocked) versions.

Though I've never heard of the disk writing being a bottleneck for CFD, are you writing results every iteration or something?

Perhaps not the most strategic move, but since you keep mentioning the "main computer" one thing you may want to think about, just for the pure BAD-ASS factor...
So you reckon it's worth going to the 6-core 3930K just to allow for overclocking? I didn't realise they crippled the 4-core version; I figured both would overclock in much the same way.

As for transient results, I am actually quite interested in the vortices produced (leading edge in particular), how they develop/decay, and the macro effects these vortices have, lol. I also want to be able to visually simulate these in time to a certain degree, but I suppose I could probably cut down a bit on transient results.

I am really considering just biting the bullet and going all SSD, but it's going to be expensive. It may even result in me running one less computer.

As for that bad-ass machine, I desperately want it (I had a very good 10k machine in mind), but the only thing stopping me is the licencing. If I run 2 simulations and/or post/pre-processing on a single machine, the licence server gets shitty and tells me I don't have enough licences, but it is fine if I run on separate computers. That is the only thing stopping me, lol.

Do you (or anybody) have a preference for where you buy your computer gear? Somewhere cheap/reliable?

Old   June 12, 2012, 23:01
Default
  #13
Senior Member
 
Erik
Join Date: Feb 2011
Location: Earth (Land portion)
Posts: 486
Rep Power: 9
evcelica is on a distinguished road
In license preferences you can specify "use a separate license for each application". This may fix your problem of the license server acting shitty on you; it should then behave as if they were separate computers?

I've always got my stuff from Newegg.com; they seem to have good prices, customer service, and a huge selection. There may be better out there though. TigerDirect is another popular one, but I've never bought from them; don't know why, I just like Newegg.

Old   June 14, 2012, 04:57
Default Update.
  #14
Member
 
Stuart
Join Date: Jun 2012
Posts: 59
Rep Power: 5
ShowponyStuart is on a distinguished road
Okay, so here is a rundown of what I am thinking below. If anyone has any thoughts or specific suggestions on what I have here, feel free to let me know; it will be more than welcome. I'm kinda stuck on nailing down the specifics and getting the design finalised properly so I can start ordering components, so any help there would be brilliant.

Main machine
Keep in mind this machine is primarily built for CFD (pre/post, sims) but also Pro/E, MATLAB, Strand7 etc. (so it's basically a CAE workstation)
CPU --------> i7 3930k
MOBO ------> ASUS Sabertooth X79?? (not sure if this is the best bet or not)
RAM -------> 64GB quad channel DDR3 @ 2333MHz
HDD(1st) -> 60GB SSD (for programs, i.e. Windows, ANSYS, Pro/E etc.)
HDD(2nd) -> 240GB SSD (working drive for ANSYS simulations)
HDD(3rd) -> 2x1TB 7200rpm in RAID 0 (for "longer term storage")
GPU -------> Nvidia Quadro 4000
Secondary machine(s)
NOTE: each one will be operating independently and NOT as a node, due to the licence arrangement at my university. Keep in mind this machine is primarily built for running CFD simulations with a bit of pre/post-processing here and there.
CPU---------> i7 3930k
MOBO-------> ASUS Sabertooth X79?? (not sure if this is the best bet or not)
RAM---------> 32GB quad channel DDR3 @ 2333MHz (can increase this later if needed)
HDD (1st) --> 120GB SSD
HDD (2nd) -> 1TB 7200rpm (short term storage until data can be transferred to the main computer)
GPU -------->Nvidia Quadro 600
Both Machines
Case ----->??? Will need to be big to accommodate water cooling (and slick looking too)
PSU ------> ??? (850W or higher to cope with demand and overclocking?)
Cooling --> Corsair H80/100?

NOTES/QUESTIONS
HDDs
- For the HDDs, is it worthwhile getting the two separate SSDs? I had assumed they worked the same way as normal hard drives.
- Thoughts on the 2 HDDs in RAID 0 to allow me to store my simulations once the major grunt work has been done, i.e. post-processing etc.?
(I was thinking RAID 0 will be good, as I will probably have to go back to the sims regularly to recheck my results (both outputs and post-processing), so the drives will still need to be relatively quick, and if I regularly back up to an external HDD, I'm guessing my data should be safe enough.)
- GRAPHICS CARD
As for the graphics card, I can be talked into something else here as long as it is compatible with Pro/E (especially), ANSYS (even though it barely uses it), MATLAB etc.

- ANYTHING I HAVE MISSED?
So how does it look? I'm hoping I haven't overlooked anything, because (like everyone) I have a limited budget and one shot to get it right.

Old   June 14, 2012, 16:17
Default
  #15
Senior Member
 
Charles
Join Date: Apr 2009
Posts: 179
Rep Power: 9
CapSizer is on a distinguished road
Quote:
Originally Posted by ShowponyStuart View Post
I have a limited budget and 1 shot to get it right.
Mmmm... it doesn't look a lot like the systems I would build if I had a limited budget and a "several times 4-way parallel only" constraint. To start with, although SSDs are great, if you are running large simulations on only 4 cores, you would expect processing to take MUCH longer than I/O, so I doubt that you will see significant time savings from the SSDs. It would be altogether different if you were streaming stuff to and from disk all the time, but CFX doesn't work like that. Also give some consideration to just going with less expensive motherboards and Core i5 CPUs. Yes, they are slower, but they overclock beautifully, and you will be able to fit more quad-core systems into your budget, which may give you better total throughput.

Old   June 14, 2012, 17:16
Default
  #16
Member
 
Join Date: Jul 2011
Posts: 59
Rep Power: 6
rmh26 is on a distinguished road
Especially on the secondary machines, where you are just running cases and doing a bit of data processing/visualization, you don't need to be concerned with I/O, but it might still be worth it for your main machine.

Old   June 15, 2012, 09:43
Default
  #17
Senior Member
 
Erik
Join Date: Feb 2011
Location: Earth (Land portion)
Posts: 486
Rep Power: 9
evcelica is on a distinguished road
I did a little benchmarking for you, testing different CPU frequencies and RAM speeds on the 3930K. I also ran one case on a quad-core i7-2600. These were all using 4-core parallel. The CFX case had 1.3M nodes, double precision. The attached graph shows the results. It seems overclocking both memory and CPU frequency makes a decent difference in calculation speed.

A 37% increase in CPU speed showed a 19.5% increase in calc speed.
A 33% increase in memory speed showed a 24% increase in calc speed.
This overclocking range showed a 38% performance difference; that's pretty nice!
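As a quick sanity check on those numbers, the fraction of each speed increase that actually shows up as calculation speed works out like this (only the percentages from the post are used; nothing else is assumed):

```python
# Fraction of a clock (or memory) speed increase that is realized
# as calculation-speed increase. Inputs are percentage gains.
def realized_fraction(calc_gain_pct, input_gain_pct):
    return calc_gain_pct / input_gain_pct

print(f"CPU:    {realized_fraction(19.5, 37):.2f} of the clock gain realized")
print(f"Memory: {realized_fraction(24.0, 33):.2f} of the memory gain realized")
```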


At comparable speeds, it seems the 3930K is about 1.5 times as fast as the quad-core Sandy Bridge, even using only 4 cores. I still have to check the 3930K at the 1333 RAM speed though. So I would probably stick with all 3930Ks and not go with i5s for the secondary computers. Plus you have more RAM capacity, and in the future, if you ever want to run distributed, you can connect the 3930K machines together.

About your hardware:

-Motherboard: The Sabertooth X79 is really made for dependable stock speeds. It's not an overclocking board, and it only supports 1866 MHz RAM. I'd recommend the ASUS P9X79 Deluxe. I would say go with the ASUS WS, but it has a very limited amount of "qualified" RAM.

-Cooler: go with the H100 over the H80, and buy two variable speed fans to use instead of the crappy stock ones. You will also need a 4 pin fan splitter cable.

-Case: Check out the Cooler Master HAF-X. Great case, with a lot of fan spots and airflow, and you can mount the double-length H100 radiator on the top. Be warned though, this case is HUGE! It is expensive, kind of a waste maybe, and there are smaller and cheaper cases out there, but I just really like this one.

-RAM: I think you mean 2133 MHz RAM? It would be very difficult to get 64GB of RAM running at 2333 MHz; it may even be difficult to get it running stably at 2133 MHz. I'm sure it can be done, but just be warned it may be tricky, and you could spend some time getting it to work right. No big deal though, because you can always underclock it. Make sure whatever RAM you get is on the motherboard's QVL list.

-That power supply should be plenty; I've been using an 800W.

That's all I can think of for now.
Attached Images
File Type: jpg CFX.Benchmark.jpg (51.3 KB, 247 views)

Old   July 6, 2012, 03:24
Default Update
  #18
Member
 
Stuart
Join Date: Jun 2012
Posts: 59
Rep Power: 5
ShowponyStuart is on a distinguished road
Okay, I've been off with the fairies for a while, but I'm getting back into sorting out this computer. I think I have settled on the following specs:

CPU --------> i7 3930k
MOBO ------> ASUS Rampage IV Extreme
RAM -------> 32GB quad channel DDR3 @ 2400/2333MHz (only using 4 slots so I can upgrade later if necessary)
HDD(1st) --> 64GB Samsung 830 Series SSD (for programs, i.e. Windows, ANSYS, Pro/E etc.)
HDD(2nd) -> 512GB Samsung 830 Series SSD (working drive for ANSYS simulations)
HDD(3rd) -> 2x1TB 7200rpm in RAID 0 (for "longer term storage/usage", but will regularly back up to an external HDD to be on the safe side)
GPU -------> Nvidia Quadro 4000
PSU -------> Corsair 850W
Cooling ---> Corsair H100

If I build it myself, I think I can do it for around $5000-5500 or so, but I'm considering buying the system below from OriginPC.

Processor: Overclocked Intel Core i7 3930K 4.5GHz - 5.2GHz
Graphics Card: Single ORIGIN 2GB GDDR5 NVIDIA Quadro 4000
ORIGIN Professional Graphics Card Overclocking: ORIGIN Professional Graphics Card Overclocking
Memory: 16GB Corsair Vengeance DDR3 1600Mhz (4x4GB) Quad-Channel Memory
System Cooling: ORIGIN CRYOGENIC Custom Liquid Cooling CPU & Motherboard (refill kit included)
Power Supply: 850 Watt Corsair TX850 PSU
Motherboard: ASUS Rampage IV Extreme (USB 3.0, SATA 6Gb/s, 4-WaySLI capable)
Hard Drive One: 64GB Samsung 830 Series - Solid State Drive


Although that system will cost me around $5200 AUD and I will still need to add the extra hard drives and possibly change the RAM, it has the benefit of them building (and overclocking) it for me. Not only do they overclock the CPU and GPU, they install custom water cooling for the CPU and motherboard. This will save me time, and hopefully circumvent the possibility of buying a dud motherboard. (The ASUS Rampage IV Extreme mobos seem to be particularly notorious for turning up DOA, with ASUS refusing to replace them.)

So I'm happy to pay extra to ensure I don't get any dodgy components, as they (supposedly) test the machines for a certain amount of time before sending them out. At least that is my thought process. I've been wrong before, haha.

Any thoughts? Has anyone had experience that may be able to help me a bit here?

Old   July 6, 2012, 07:01
Default
  #19
Senior Member
 
Erik
Join Date: Feb 2011
Location: Earth (Land portion)
Posts: 486
Rep Power: 9
evcelica is on a distinguished road
Just a thought:
Memory is controlled by the integrated memory controller on the CPU, so RAM speed/density/voltages have a large effect on CPU overclocking and voltage settings.
If you are going to have to change the RAM out, that is going to change all the CPU voltages, and their "stable" overclock may not be stable anymore; you will be starting from scratch anyway.
