I'm looking for a good Ethereum mining GPU. I've heard mining is mostly memory-bandwidth intensive, so I rounded up the specs of the latest cards from Wikipedia and came up with this (prices are in CAD):
card                     price     watts      GB/s    GB/s/$    GB/s/W  $/(GB/s)/year
GeForce GTX 1050        148.82        75       112      0.75      1.49         0.8461
GeForce GTX 1050 Ti     189.78        75       112      0.59      1.49         0.8827
GeForce GTX 1060 3GB    271.69       120       224      0.82      1.87         0.6919
GeForce GTX 1060 6GB    339.96       120       224      0.66      1.87         0.7223
GeForce GTX 1070        517.44       150       256      0.49      1.71         0.8262
GeForce GTX 1080        817.81       180       352      0.43      1.96          0.777
GeForce GTX 1080 Ti     954.34       250       484      0.51      1.94         0.7473
NVIDIA TITAN X         1638.35       250       480      0.29      1.92         0.8961
NVIDIA TITAN Xp        1638.35       250     547.7      0.33      2.19         0.7853
Radeon RX 460           189.78        75       112      0.59      1.49         0.8827
Radeon RX 470           244.39       120       211      0.86      1.76         0.7216
Radeon RX 480 4GB       271.69       150       224      0.82      1.49         0.8345
Radeon RX 480 8GB       326.30       150       256      0.78      1.71         0.7515
Radeon RX 550           107.86        50       112      1.04      2.24         0.5718
Radeon RX 560           135.16        80       112      0.83       1.4         0.8814
Radeon RX 570           230.73       150       224      0.97      1.49         0.8162
Radeon RX 580 4GB       271.69       185       256      0.94      1.38         0.8758
Radeon RX 580 8GB       312.65       185       256      0.82      1.38         0.8918
Everything there is pretty self-explanatory except for the far-right column: it's an estimated total cost of ownership per unit of memory bandwidth per year, including the cost of power in Ontario. You can probably just ignore it if you don't live in Ontario. In fact, you can probably ignore it even if you do.
Anyway, GB/s/W is the dominant factor in the electricity bill. By that measure the Radeon RX 550 wins, but you'd need quite a few of them in a single system to really reap the benefit.
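To put the power term in concrete numbers, here's a quick sketch of what one watt costs per year at the Ontario time-of-use rates, using the same 12/6/6-hour averaging the script below does:

```python
# Ontario time-of-use rates in CAD/kWh (same figures as the script below):
# roughly 12 h/day off-peak, 6 h mid-peak, 6 h on-peak.
off_peak, mid_peak, on_peak = 0.087, 0.132, 0.18
avg_rate = (12*off_peak + 6*mid_peak + 6*on_peak) / 24  # CAD per kWh

hours_per_year = 365.25 * 24
cad_per_watt_year = avg_rate * hours_per_year / 1000    # CAD per watt per year

# So an RX 550 (50 W) running flat out costs about 50 * cad_per_watt_year,
# roughly 53 CAD a year in electricity, before the card's purchase price.
```

That works out to a bit over a dollar per watt per year, which is why the wattage column matters as much as the sticker price.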
For the rest of my build, I'm assuming mining throughput scales more or less linearly with the number of GPUs. If that's true, the GPUs never need to talk to each other (i.e. CrossFire support is unnecessary) and the PCIe bus speed doesn't matter much either. I'm assuming linear scaling because I think that's how the network at large behaves, with ether generation scaling linearly with the number of miners. Is that a sound assumption?
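On the "quite a few of those in a single system" point, the whole-rig numbers are worth sketching, because a fixed host overhead dilutes the per-card GB/s/W figures. The 100 W host draw here is just my guess, not a measured number:

```python
# Assuming perfectly linear scaling: no inter-GPU traffic, so the rig's
# bandwidth is just the sum of the cards'. Host overhead (CPU, board,
# PSU losses) is a guessed constant, not a measurement.
def rig_gbps(n_cards, gbps_per_card):
    return n_cards * gbps_per_card

def rig_watts(n_cards, watts_per_card, host_watts=100):
    return n_cards * watts_per_card + host_watts

# Six RX 550s vs one GTX 1080 Ti, at the rig level:
rx550_eff = rig_gbps(6, 112) / rig_watts(6, 50)   # ~1.68 GB/s/W
ti_eff = rig_gbps(1, 484) / rig_watts(1, 250)     # ~1.38 GB/s/W
```

Under those assumptions the six-card RX 550 rig still comes out ahead per watt, but the gap narrows as the host overhead gets amortized over fewer, bigger cards.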
Anyway, to those who have actually done this in real life, am I at all close here? Am I missing anything critical?
BTW, this is the script I used to generate that table:
#!/usr/bin/env python
# constants for doing conversions.
hour = 60*60
year = 365.25*24*hour
kilo = 1000
watt = 1
CAD = 1
USD = 1.36529*CAD
# Data is from Wikipedia.
# https://en.wikipedia.org/wiki/GeForce_10_series
# https://en.wikipedia.org/wiki/AMD_Radeon_400_series
# https://en.wikipedia.org/wiki/AMD_Radeon_500_series
card = [
'GeForce GTX 1050', 'GeForce GTX 1050 Ti', 'GeForce GTX 1060 3GB',
'GeForce GTX 1060 6GB', 'GeForce GTX 1070', 'GeForce GTX 1080',
'GeForce GTX 1080 Ti', 'NVIDIA TITAN X', 'NVIDIA TITAN Xp',
'Radeon RX 460', 'Radeon RX 470', 'Radeon RX 480 4GB',
'Radeon RX 480 8GB', 'Radeon RX 550', 'Radeon RX 560',
'Radeon RX 570', 'Radeon RX 580 4GB', 'Radeon RX 580 8GB',
]
price = [
109*USD, 139*USD, 199*USD, 249*USD, 379*USD,
599*USD, 699*USD, 1200*USD, 1200*USD,
139*USD, 179*USD, 199*USD, 239*USD, 79*USD,
99*USD, 169*USD, 199*USD, 229*USD,
]
watts = [
75, 75, 120, 120, 150, 180, 250, 250, 250,
75, 120, 150, 150, 50, 80, 150, 185, 185,
]
gflops = [
1862, 2138, 3935, 4372, 6463, 8873, 11340, 10974, 12150,
2150, 4940, 5834, 5834, 1211, 2611, 5095, 6175, 6175,
]
memspeed = [
112, 112, 224, 224, 256, 352, 484, 480, 547.7,
112, 211, 224, 256, 112, 112, 224, 256, 256,
]
# http://www.ontario-hydro.com/current-rates
powerprice = (12*0.087 + 6*0.18 + 6*0.132)/24 * CAD/(kilo*watt*hour)
# Expected lifetime of GPU before resale. PCIe 4.0 is coming out in two
# years, probably destroying all of this.
lifetime = 2*year
# Expected percent depreciation of GPU's resale value after above lifetime has
# elapsed. Optimistic?
depreciation = .2
print('{:20}{:>10}{:>10}{:>10}{:>10}{:>10}{:>15}'.format(
    'card', 'price', 'watts', 'Gflops',
    'Gflops/$', 'Gflops/W', '$/Tflops/year'))
for i in range(len(card)):
    gpd = gflops[i]/price[i]
    gpw = gflops[i]/watts[i]
    # Average total cost of ownership per year per teraflops of compute:
    # straight-line depreciation on the purchase price, plus the cost of
    # power if for some reason running at full tilt all year round.
    dpT = 1000/gpd                  # CAD of purchase price per Tflops
    wpT = 1000*watts[i]/gflops[i]   # watts drawn per Tflops
    tco = (dpT*depreciation/lifetime + wpT*powerprice) * year
    print('{:20}{:10.2f}{:10}{:10}{:10.2f}{:10.3}{:15.4}'.format(
        card[i], price[i], watts[i], gflops[i], gpd, gpw, tco))
print()
print('{:20}{:>10}{:>10}{:>10}{:>10}{:>10}{:>15}'.format(
    'card', 'price', 'watts', 'GB/s',
    'GB/s/$', 'GB/s/W', '$/(GB/s)/year'))
for i in range(len(card)):
    gpd = memspeed[i]/price[i]
    gpw = memspeed[i]/watts[i]
    dpg = 1/gpd                     # CAD of purchase price per GB/s
    wpg = watts[i]/memspeed[i]      # watts drawn per GB/s
    tco = (dpg*depreciation/lifetime + wpg*powerprice) * year
    print('{:20}{:10.2f}{:10}{:10}{:10.2f}{:10.3}{:15.4}'.format(
        card[i], price[i], watts[i], memspeed[i], gpd, gpw, tco))
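If you'd rather see the table ranked than eyeball it, here's a small standalone sketch that orders a few of the cards by annual cost (depreciation plus electricity) per GB/s, using the same constants; only three cards are included to keep it short:

```python
# Standalone: rank a few cards by $/(GB/s)/year, same constants as above.
USD = 1.36529                                  # CAD per USD
powerprice = (12*0.087 + 6*0.18 + 6*0.132)/24  # CAD per kWh
hours_per_year = 365.25 * 24
dep_per_year = 0.2 / 2                         # 20% depreciation over 2 years

cards = {
    # name: (price in CAD, watts, GB/s)
    'Radeon RX 550': (79*USD, 50, 112),
    'Radeon RX 570': (169*USD, 150, 224),
    'GeForce GTX 1080 Ti': (699*USD, 250, 484),
}

def tco(price, watts, gbps):
    capital = price * dep_per_year                      # CAD/year
    power = watts * powerprice * hours_per_year / 1000  # CAD/year
    return (capital + power) / gbps                     # CAD/(GB/s)/year

for name in sorted(cards, key=lambda n: tco(*cards[n])):
    print('{:20}{:8.4f}'.format(name, tco(*cards[name])))
```

The RX 550 comes out cheapest per GB/s-year, consistent with the GB/s/W column above.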