The System Board (“Motherboard”), Power Supply and Case
The system board (also called the mainboard or motherboard) provides a place for all of the aforementioned devices and chips to live. More importantly, it connects all of these items to the system bus that transmits data and electrical signals throughout the system. It is essentially a souped-up, giant circuit board.
There are several important things to consider when looking at a motherboard, but most of them are not relevant for HPC. It is usually not a good idea to start your shopping with a motherboard because there are too many factors. Once you have settled on a CPU model, make note of its socket architecture (LGA1366, Socket 771, etc.) and look for motherboards that are compatible with that socket type and CPU model, and preferably with any other models or technologies you may want to upgrade to in the near future. Companies like ASUS provide lists of CPU models compatible with their boards.
Motherboard manufacturers follow the typical trends in technology, so unless you are looking for some cutting edge technology, you can usually find a board containing most modern features given the CPU model.
Some motherboard features to consider aside from CPU compatibility:
- Memory type. The standard RAM at the time of this writing is DDR3. The memory type a board supports is typically dictated by the group of CPU models with which it is compatible. For servers, consider a board that supports Registered RAM (RDIMMs).
- Maximum memory. Purchase a board that will grow with your RAM needs. This is where the difference between a high end desktop board and a server board comes into play. A server board is likely to have a lot more space for RAM.
- Number of CPUs. This is usually dictated by the CPU model, so you should already know whether or not the board supports the number of CPUs needed. It is always good to check to make sure the board that interests you supports the number of CPUs you desire.
- I/O bus. Both hard disks and SSDs use an I/O bus. Most boards are equipped for some revision of SATA. The fastest interface at the time of this writing is SATA III, but most hard disks cannot even reach SATA II speeds (see the read-speed sketch after this list).
- RAID. If you have several disks you want to use, you may want a board that supports the particular RAID configuration you want. I recommend using a RAID controller separate from the motherboard because if the motherboard dies, so may the RAID array.
- On-board video. For a research server, you may not need high end graphics. In this case you may want to find a board with basic on-board video. If you are considering purchasing a GPU, it may be worth spending money on that rather than a video card.
- On-board audio. You are less likely to want high fidelity sound than high end graphics. It is rare for server boards to contain on-board audio, but if you need it, you can always buy a separate sound card.
- Ample PCI, PCI-X, or PCIe slots. You will want several such slots for other devices such as a sound card, video card, GPU, SSD, or RAID controller.
- On-board LAN. Your research server will most likely be connected to the Internet, and fortunately most boards these days contain on-board LAN chips. Faster is better, as your internal network may grow and become faster (100 Mbps vs. 1 Gbps vs. 10 Gbps).
- USB ports. The type and number of ports may be important to you. USB 2.0 is the standard, but USB 3.0 is becoming popular.
- Firewire ports. The type and number of ports may be important to you. Most Macs come with Firewire 800, which is sufficient. Many server boards do not provide on-board Firewire because it is typically used for high-speed devices, such as video cameras or external hard drives, that are rarely attached to servers.
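Since the interface speed only matters if the drive can saturate it, it is worth measuring what your disks actually deliver: SATA II tops out around 300 MB/s and SATA III around 600 MB/s. Below is a minimal sequential-read benchmark sketch in C. The file path is an assumption; point it at a large file (ideally bigger than RAM, so the Linux page cache can't hide the disk) on the drive under test.

```c
/* Rough sequential-read benchmark (illustrative sketch, not a real
 * benchmarking tool). Results near 300 MB/s approach the SATA II limit,
 * near 600 MB/s the SATA III limit. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define CHUNK (4 * 1024 * 1024) /* read in 4 MB chunks */

int main(int argc, char **argv)
{
    /* Hypothetical default path -- substitute a large file on the disk
     * you want to test. */
    const char *path = argc > 1 ? argv[1] : "/data/bigfile";
    FILE *f = fopen(path, "rb");
    if (!f) { perror("fopen"); return 1; }

    char *buf = malloc(CHUNK);
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0); /* wall time, not CPU time */

    size_t n, total = 0;
    while ((n = fread(buf, 1, CHUNK, f)) > 0)
        total += n;

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double mb = total / 1048576.0;
    printf("read %.0f MB in %.2f s -> %.1f MB/s\n", mb, secs, mb / secs);

    free(buf);
    fclose(f);
    return 0;
}
```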
Your server will also need a power supply to provide power to the system board as well as other peripherals such as hard disks and Blu-ray/DVD/CD readers/writers. The wattage necessary to power your server depends heavily on your hardware. Various system board manufacturers provide tools to help estimate the minimum necessary wattage. Using a power supply with too little wattage will cause instability, and some devices may not function at all.
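If you just want a sanity check before reaching for one of those tools, here is a back-of-the-envelope sketch in C. Every wattage figure below is an illustrative assumption for a hypothetical build, not a measured spec, so substitute your own components' numbers.

```c
/* Back-of-the-envelope PSU sizing: sum rough per-component draws and add
 * headroom. All wattages are assumptions for illustration only. */
#include <stdio.h>

int main(void)
{
    int cpu_w = 95, cpus = 2;   /* per CPU under load (assumed) */
    int board_ram_w = 60;       /* motherboard + RAM (assumed) */
    int disk_w = 10, disks = 4; /* per spinning disk (assumed) */
    int optical_w = 25;         /* Blu-ray/DVD drive (assumed) */
    int cards_w = 50;           /* add-in cards: RAID, video, ... (assumed) */

    int total = cpu_w * cpus + board_ram_w + disk_w * disks
              + optical_w + cards_w;

    /* ~30% headroom keeps the PSU in its efficient range and leaves room
     * for the upgrades mentioned above. */
    printf("estimated load: %d W, recommended PSU: >= %d W\n",
           total, total * 13 / 10);
    return 0;
}
```

For this hypothetical build the estimate comes out to 365 W of load, suggesting a power supply of roughly 475 W or more.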
Like the power supply, the case is highly dependent on your hardware choices. For a server, you will want a case that is not cramped and provides adequate airflow for your high performance hardware. Some users prefer to use water cooling systems rather than fans. The most important factor that should match between the system board and the case is the form factor. There are several form factors:
- ATX is the most common for desktops and many commodity servers. Supports a maximum of 7 PCI/AGP/PCIe slots.
- SSI CEB/EEB/MEB boards are similar to ATX and have the same mounting holes and I/O connector areas as ATX. Typically found in servers and gaming systems.
- microATX is shorter than an ATX board and is compatible with most ATX cases. It has fewer slots than ATX (maximum of 4 PCI/PCIe/AGP). Popular in desktops.
- EATX (extended ATX) is used in rackmount server systems.
So how much did this set you back?
About $2500, which isn’t too bad for me. A Mac Pro would have been close to $3000 for a low-end model. I was eyeing the Mac Pro… but I am kind of over Mac. I am still not settled on ALL of the hardware in this purchase. I’ve considered exchanging for a faster CPU clock speed, or a different motherboard, but we will see.
I kinda read this as “just buy top of the line everything except maybe the motherboard and you’ll be good.” Any experience with tradeoffs? How would you prioritize CPU vs. RAM vs. disk I/O speed?
In my experience multiple cores don’t necessarily give a huge boost in HPC if you aren’t writing for it. I don’t tend to write multithreaded code because of the complexity and debugging issues that go with it. In that case, more cores won’t help my code finish faster; it’ll only help if there are multiple jobs running at once. Xeons are great if you’re switching between a boatload of threads like a web server might or you’re concerned about power/overheating, but in my experience they aren’t worth the extra cash you pay. In that situation, clock speed can be a lot more important than you give it credit for here.
Well, to be frank, this upgrade has been way overdue, and I did a bit of “throwing money” around. The purpose of the post was to document the decisions I made and what else was out there. Of course it will come across that way, because in an ideal world, if everyone could afford the top of the line hardware, we would all be good computing-wise. It would be too difficult to tailor this post to everyone’s needs.
In terms of priority, *for my research use* I put RAM as the most important, followed closely by the CPU and the number of threads it can run concurrently (not so much the clock speed), followed by disk I/O. I find that for my work, investing in more or better RAM gives the best bang for the buck, followed by the CPU. Although a lot of my work is disk bound (crawling), faster disks are so much more expensive and are out of my budget. Because of this, I had to make the tradeoff of favoring RAM upgrades over disks. At least there are some tricks that can be done with RAM to prevent overuse of disks (one such trick is sketched below).
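To make that last remark concrete, here is one generic RAM-for-disk trick, sketched in C (an illustration of the idea, not my exact code): batch small records in memory and write them out in a single sequential pass rather than issuing many tiny writes.

```c
/* Trading RAM for disk operations (illustrative sketch): accumulate
 * records in memory, then write once, sequentially. */
#include <stdio.h>
#include <stdlib.h>

#define RECORDS 100000
#define RECLEN  64

int main(void)
{
    /* Accumulate everything in RAM first... */
    char *buf = calloc(RECORDS, RECLEN);
    if (!buf) { perror("calloc"); return 1; }
    for (int i = 0; i < RECORDS; i++)
        snprintf(buf + (size_t)i * RECLEN, RECLEN, "record %d\n", i);

    /* ...then hit the disk with one large sequential write instead of
     * 100,000 tiny ones. "out.dat" is a hypothetical output file. */
    FILE *f = fopen("out.dat", "wb");
    if (!f) { perror("fopen"); free(buf); return 1; }
    fwrite(buf, RECLEN, RECORDS, f);
    fclose(f);
    free(buf);
    return 0;
}
```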
To me “high performance computing” goes hand in hand with parallelism. Of course if the code has not been written to take advantage of the cores, the extra cores are useless. I would expect that if someone is buying a multicore processor, they intend to program to use the cores. For my research, not programming to take advantage of these extra cores would qualify as *not* high performance computing. I never suggested that CPUs with more cores will make things run faster; just that more work is done at a time. Hadoop is the most trivial example I can give where code takes advantage of multiple cores; OpenMP is another.
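For anyone who has not seen OpenMP, here is a minimal sketch (illustrative, not from my research code) of how a loop is told to use the extra cores; without the pragma, the same loop runs on a single core no matter how many cores the CPU has.

```c
/* Minimal OpenMP sketch: each thread sums a chunk of the range and the
 * reduction combines the partial results. Compile with: gcc -fopenmp */
#include <stdio.h>
#include <omp.h>

#define N 100000000L

int main(void)
{
    double sum = 0.0;

    /* Remove this pragma and the loop runs serially on one core. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++)
        sum += 1.0 / (i + 1);

    printf("harmonic sum = %f using up to %d threads\n",
           sum, omp_get_max_threads());
    return 0;
}
```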
Everyone will have their own opinion, but for my use, based on experiences I have had using servers containing Xeons, and the issues I faced without a higher-end processor, the Xeons were without question the way to go.
why wouldn’t you drop that money on several cheap computers, instead of one expensive one?
for $2500 you could have got 14 motherboards, 14 AMD Phenom quad-core CPUs, and 14 × 1 GB of RAM
Sigh. I’ve heard that. While a cluster would be cool, I tend to use AWS when I need a cluster setup. Also just too much of a pain to have to build that many systems. I don’t spend money often, so I was ok with putting out a one time large purchase ;). This machine really serves two purposes: for development of jobs to be shipped to AWS (time = $), and to prevent me from having to use AWS for high-memory, high-speed applications.
Over the past 20 years, though, I have collected a lot of old machines that might make an OK cluster, just obviously not high end.
Heya
Is SATA III working on this motherboard?
I can’t find any info saying that this motherboard supports SATA III, only SATA II in the official notes.
Let me know if you got SATA III to work.
Thanks, bye
I got the D18 too; it works like a dream so far!
It may not support SATA III. Drives seem to lag far behind the interfaces, though; most drives can barely perform at SATA II interface speeds.
Btw, forgot to say: you might be better off with an EVGA server motherboard. They can OC the CPU, so you’d get 12 cores at 4-5 GHz after overclocking… That would be a lot of power…
I am having trouble receiving the D18. It seems to be a rare beast. It’s taken almost a month now. Considering cancelling. If I do, I will probably go with the EVGA or Supermicro.
I can sell you my D18 and jump to EVGA, tbh. I could use some extra OC for my work :)
Which OS do you plan on running?
I finished the system :)
I put in the same HD that was running on my old server. I am running 64bit Ubuntu 10.04 (Lucid). Runs great!
Considering upgrading to 11.04, or switching to CentOS, but I really don’t have too much of a reason to switch to CentOS.