If someone were setting up a workstation to run large Nengo models, what GPU would you recommend? Would something like the GTX 1080 work well, or would you recommend going for something in the Tesla/Quadro series? Or something else?
It depends on the kind of Nengo models you want to run. For standard Nengo models, the best bang for your buck is a consumer-grade graphics card like the GTX 1080/1070 (Spaun [4.8 million neurons] runs just fine on the GTX 1080, using about 3GB of its 8GB of VRAM). The downside of consumer-grade cards is that they are not designed to run continuously, so their lifespan is shorter.
If you care about the lifespan of your graphics card, and have lots of money, then go for the Tesla series. But they can cost almost 4x as much as the consumer-grade cards, and don’t even match the number of cores available on the consumer cards. The Quadro and Tesla lines also tend to have a lower compute capability (see here: https://developer.nvidia.com/cuda-gpus). This matters if you are running CUDA code that uses features requiring a higher compute capability.
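If you want to check what compute capability and VRAM your own card reports, here’s a minimal sketch that parses `nvidia-smi` output. It assumes a driver new enough to support the `compute_cap` query field (roughly driver 510+); on older drivers you can drop that field and look the capability up in the table linked above.

```python
# Query each GPU's name, total VRAM, and compute capability via nvidia-smi.
# Assumes the driver supports the "compute_cap" query field (driver 510+).
import subprocess

def parse_gpu_info(csv_text):
    """Parse `nvidia-smi --query-gpu=... --format=csv,noheader` output
    into a list of dicts, one per GPU."""
    gpus = []
    for line in csv_text.strip().splitlines():
        name, mem, cap = [field.strip() for field in line.split(",")]
        gpus.append({"name": name, "vram": mem, "compute_cap": cap})
    return gpus

def query_gpus():
    """Run nvidia-smi and return the parsed GPU info."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=name,memory.total,compute_cap",
         "--format=csv,noheader"],
        text=True,
    )
    return parse_gpu_info(out)
```

You can then compare the reported `compute_cap` against whatever your CUDA code (or backend like nengo_ocl) requires.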
NVIDIA just released the new Tesla line (with the new architecture and thus the highest compute capability), and @xchoo is right: They’re a lot more expensive for similar performance (in terms of FLOPS and memory bandwidth):
So what does the Tesla line get you? Here’s a good summary:
A few of the key features, like better double-precision performance and error correction, are things we don’t really care about. And even if the Teslas last longer, you’d still be better off buying something from the GeForce line once every couple of years for $500–$1000 (and I think they can last much longer than a couple of years unless you’re running them non-stop) than buying a Tesla once every 5–10 years for $4000–$8000. Not only is GeForce more cost-effective, but by upgrading more often you’ll also get newer technology.
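As a rough back-of-the-envelope check, using the price and lifespan ranges quoted above:

```python
# Annualized cost of the two upgrade strategies, using the price and
# lifespan ranges from this thread (these are the thread's rough figures,
# not official pricing).

def annual_cost(price_range, years_range):
    """Return (best, worst) cost per year: cheapest card kept longest
    vs. priciest card replaced most often."""
    lo_price, hi_price = price_range
    lo_years, hi_years = years_range
    return lo_price / hi_years, hi_price / lo_years

geforce = annual_cost((500, 1000), (2, 2))   # $500-$1000 every ~2 years
tesla = annual_cost((4000, 8000), (5, 10))   # $4000-$8000 every 5-10 years

print("GeForce: $%.0f-$%.0f per year" % geforce)  # $250-$500 per year
print("Tesla:   $%.0f-$%.0f per year" % tesla)    # $400-$1600 per year
```

Even the worst-case GeForce strategy ($500/yr) edges out the best-case Tesla strategy ($400/yr) only slightly, and the typical cases clearly favor GeForce, before even counting the benefit of newer architectures.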
My takeaway is that Tesla is for servers that need great reliability from their GPUs over long periods. As long as whatever’s running on your GPU isn’t mission-critical (and for anything we do, it’s not), you don’t need a Tesla. Even if you’ve got the cash, save your money and hire more people to do research instead.