
Build a Deep Learning Machine under $1000

As deep learning (DL) gradually dominates the research and job markets in AI, building a deep learning machine is essential for research and practice with up-to-date technology. The key difference between building a PC (including a gaming PC) for general programming and one for AI lies in choosing the right GPU for DL and its corresponding components. Although there are many good articles about choosing a GPU with quantitative analysis, or about building a high-end hardware system (e.g., this or this), I feel that finding the most cost-effective yet expandable strategy for building an entry-level system would serve the majority of researchers and AI hobbyists (who are as frugal as I am). After all, it doesn't make sense to spend a lot of money on high-end components at the beginning, only to watch prices drop quickly before you have had enough time to learn from the old hardware.

In general, the most cost-effective products are the ones sold in large volumes. Making everything high-end means paying doubled or tripled prices that compensate for those products' low sales volumes, so you may waste a lot of money.

Note: this article is not intended to provide an up-to-date configuration, but rather a general, long-term, cost-effective hardware guide for entry-level AI experiments.

GPU

The GPU could be the only part that needs to be high-end, as a DL machine is a GPU-centered system. You may check this article for the GPU you want, or see how much is left after deciding on the rest. We assume a single-GPU setup, but the other components are chosen to support 2~3 GPUs, so you can add more when newer ones come out.

A few important facts about GPU:

First, each new GPU generation has typically doubled performance or more in recent years (like the jump from the 9 series to the 10 series, or the 20 series with tensor cores that support fp16, which can give a 3~4x speedup and halve memory consumption). So I feel it doesn't make much sense to go for multiple GPUs at the beginning just for performance. For research or simple practice, a multi-GPU setup may require extra coding and make your code impossible for other researchers to reproduce if they don't have multiple GPUs. And perhaps just one year later, a newer single GPU will beat your 2~3 old ones, with no extra coding and no slow inter-GPU communication.
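As a concrete illustration of the fp16 point above, here is a minimal sketch of mixed-precision training with PyTorch's torch.cuda.amp on a tensor-core GPU; the model, data, and hyperparameters are placeholders, not part of the original article.

```python
import torch
from torch import nn

# Placeholder model and data, just for illustration.
model = nn.Linear(1024, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()  # scales the loss to keep fp16 gradients stable

for step in range(100):
    x = torch.randn(32, 1024, device="cuda")
    y = torch.randint(0, 10, (32,), device="cuda")

    optimizer.zero_grad()
    with torch.cuda.amp.autocast():  # runs eligible ops in fp16 on tensor cores
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```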

Second, the size of GPU memory matters a lot: some big models simply cannot run on a GPU with small memory. This can be a bigger problem than raw speed, because it is not a performance issue you can solve by waiting longer; the model may not work at all (e.g., shrinking the batch size to save memory can make training unstable).
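A quick way to see how much memory your GPU has, and how much a model actually uses, is to query PyTorch at runtime; a minimal sketch (the model here is a placeholder, not any particular architecture):

```python
import torch
from torch import nn

props = torch.cuda.get_device_properties(0)
print(f"Total GPU memory: {props.total_memory / 1024**3:.1f} GB")

# Placeholder model; replace with the one you actually want to train.
model = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 4096)).cuda()
x = torch.randn(64, 4096, device="cuda")
model(x).sum().backward()

print(f"Peak allocated after one backward pass: "
      f"{torch.cuda.max_memory_allocated(0) / 1024**3:.2f} GB")
```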

We will discuss the remaining components first, so that all the leftover budget can go to your GPU, as discussed in the summary.

CPU

As the computational load is mostly on the GPU, a cost-effective CPU that can handle a few processes is good enough. Note that most high-end CPUs are designed for doubled or tripled core/thread counts, not for higher frequency (speed) on a single process. So an ideal CPU should be on the low end, almost free, so you can later replace it with no regret. My first CPU was just a $50 Pentium driving a single GPU. But I suggest an i3 (around $120) to support 2~3 GPUs (I didn't see much difference when comparing it with an i5). Plus, a low-end CPU often comes with a free fan.
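In a typical DL job, the CPU mainly feeds data to the GPU, so the cores you need roughly track the number of data-loading workers you run; a minimal PyTorch sketch under that assumption (the dataset is a placeholder):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset; in practice this would be your real training data.
dataset = TensorDataset(torch.randn(10_000, 1024), torch.randint(0, 10, (10_000,)))

# A few worker processes are usually enough to keep one GPU busy,
# which is why a cheap CPU with a handful of cores is sufficient.
loader = DataLoader(dataset, batch_size=32, shuffle=True,
                    num_workers=2, pin_memory=True)

for x, y in loader:
    x, y = x.cuda(non_blocking=True), y.cuda(non_blocking=True)
    # ... forward/backward pass on the GPU ...
```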

Motherboard

A good motherboard is essential for expandability. Make sure the CPU, motherboard, and power supply will still match each other through 1~2 years of upgrades.

As we aim for 2~3 GPUs in the future, make sure you have 2~3 PCIe slots. PCIe should be configurable as 2 * 8x for two GPUs, or 2 * 8x + 4x for three GPUs. In most cases, running one GPU at 8x won't hurt its performance much (compared with 16x) but allows you to install two GPUs. PCIe 4x may throttle performance, but it's OK for my NLP applications. Of course, there's no problem when you have just one GPU running at 16x, and almost all motherboards support that.
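If you want to verify which link width your GPU actually negotiated, nvidia-smi can report it; here is a small sketch wrapping that query in Python (the query fields shown are standard nvidia-smi ones, but double-check them against your driver version):

```python
import subprocess

# Ask nvidia-smi for the current and maximum PCIe link generation and width.
fields = "pcie.link.gen.current,pcie.link.gen.max,pcie.link.width.current,pcie.link.width.max"
result = subprocess.run(
    ["nvidia-smi", f"--query-gpu={fields}", "--format=csv"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # a header row followed by one line per GPU
```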

I assume we may upgrade to 64GB of memory in the future (16GB per module), so we likely need a motherboard with 4 memory slots.

An open-box/refurbished motherboard may save a lot and still have good enough quality for 1~2 years. I have used an open-box ASUS Prime for 2 years, bought at $80, and it still works quite well. It also comes with an onboard (debug) power button (so I could skip buying a case altogether). Note that some sellers like Micro Center may offer a CPU combo that saves $30, even on their open-box motherboards.

Memory

The good news is that the era of crazily high memory prices is gone. Now you can easily pick up a new 16GB memory kit for $75, which is usually enough for 1~2 GPUs (except for some poorly written Python projects).

SSD

For the primary storage device, I see no need to consider a hard disk; just go for an SSD, because many DL models are large and slow to load/save as checkpoints. Make sure it's at least 500GB (about $60); 1TB would be even better. Also, check whether you have some external hard disks to store not-in-use DL projects in the future.
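To get a feel for why disk speed matters for checkpoints, here is a small sketch that times saving and loading a moderately large PyTorch state dict; the model size and file path are arbitrary placeholders.

```python
import time
import torch
from torch import nn

# Placeholder model of roughly 500MB; real checkpoints are often larger.
model = nn.Sequential(*[nn.Linear(4096, 4096) for _ in range(8)])
path = "checkpoint.pt"  # hypothetical path on the drive you want to test

t0 = time.time()
torch.save(model.state_dict(), path)
print(f"save: {time.time() - t0:.2f}s")

t0 = time.time()
state = torch.load(path)
print(f"load: {time.time() - t0:.2f}s")
```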

Power Supply

Consider a power supply that can support 2~3 GPUs down the road (250W for a 1080 Ti, 260W for a 2080 Ti). Together with the CPU and other components, this adds up to at least 900W for 3 GPUs. I have an 850W open-box Corsair unit with warranty for $80.
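A back-of-the-envelope check of that number, as a tiny Python sketch; the per-GPU wattage is the 2080 Ti figure quoted above, and the ~120W for everything else is an assumption, not a measured value.

```python
gpu_watts = 260      # roughly a 2080 Ti under load
other_watts = 120    # assumed rough budget for the CPU, memory, SSD, and fans

for n_gpus in (1, 2, 3):
    total = n_gpus * gpu_watts + other_watts
    print(f"{n_gpus} GPU(s): roughly {total}W total draw")
# 3 GPUs come out around 900W, matching the estimate above.
```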

Accessories

To stay frugal, I won't discuss things like a keyboard or mouse; I assume you already have working ones, plus a laptop. After installing the OS and doing the basic configuration, you can use your laptop to run everything remotely. You may also consider a $30 case, but I run my system completely in the open air: my motherboard has a power button, so there is no need to short-circuit the pins to power up, and this setup is easy to upgrade.

Summary

In summary, we have a $120 CPU + $80 motherboard + $75 memory + $60 SSD + $80 power supply = $415. Assuming a $1000 budget, that leaves $585 for the GPU, which is good enough for a 2070 (as recommended by many articles) and close to a 2080. Whether to try a 2080 Ti or Titan RTX depends on your budget.
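For completeness, the same budget arithmetic as a tiny Python sketch; the prices are the ones quoted in this article and will vary over time.

```python
parts = {"CPU": 120, "motherboard": 80, "memory": 75, "SSD": 60, "PSU": 80}
budget = 1000

spent = sum(parts.values())
print(f"Non-GPU parts: ${spent}")              # $415
print(f"Left for the GPU: ${budget - spent}")  # $585
```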
