Point of diminishing returns for 60 FPS eGPU gaming? What GPU should I buy?

Leovinus

Newcomer
This is sort of a Part 2 of my previous thread. I've chosen to go the Mac mini route with an i7-8700B 6C/12T CPU and 32GB of RAM connected to an eGPU. Coupled to this is a recently purchased 60Hz 4K IPS display with very decent 1440p scaling, so high-FPS gaming is not in the cards now or in the future. Gaming can, and mostly will, be done in Windows. For optimal compatibility with macOS and the least amount of "hacking" on my part I'll be going with an AMD GPU.

I've had my eyes on the 5700 XT, as I suspect it won't bottleneck too badly at 60 FPS over the 4-lane PCIe 3.0 + overhead eGPU connection. Or might I want to wait and see what RDNA2 brings to the table? With luck 4K 60 FPS, or 1440p 60 FPS with ray tracing, becomes possible. If that's the case I don't mind shoving a second-hand RX 580 in the box and waiting a bit. But then again, it might be fruitless if it bottlenecks regardless.

So the real head-scratcher is how games will saturate the 4-lane PCIe 3.0 + overhead link under my 60 FPS high-resolution/eye-candy target. Which raises the question of the title: Where is the point of diminishing returns? What GPU, really, is the practical limit here?
 
Do the PCIe 3.0 x4 link and its overhead have a big enough impact to be noticeable? Will RDNA have better compression?

I've not been following eGPU tech lately. But many years ago, Nvidia GPUs were the preferred choice due to their compression, so they worked well even over slow PCIe links.

Btw, we still don't know how the PS5 and Xbox Series X will affect multiplatform game design / performance. Maybe RDNA will provide better performance due to its similar architecture to the PS5 and XSX?
 
A rough estimate is that you lose around 20-25% to overhead on average. But bandwidth is also limited to 4 PCIe lanes, so the bandwidth limitation is real. As such, eGPUs seem to scale better with IQ, which increases the load on the GPU side, rather than frame rate, which loads up the CPU side.
 
Mac mini: I've had one before, and my last PC was also the same mini size.
My chief question would be: does fan noise bother you? It will be an issue, there's no escaping this.
 
A rough estimate is that you lose around 20-25% to overhead on average. But bandwidth is also limited to 4 PCIe lanes, so the bandwidth limitation is real. As such, eGPUs seem to scale better with IQ, which increases the load on the GPU side, rather than frame rate, which loads up the CPU side.

eGPU bandwidth on Thunderbolt 3 is unfortunately much lower than the theoretical 4x PCIe 3.0. Thunderbolt 3 needs to carry both the PCIe bus and 10Gb/s of USB 3.1, and in some cases up to 8 lanes of DisplayPort 1.2. The bigger problem is that Thunderbolt actually always sets aside the 10Gbps for USB 3.1 even if you're not using it, effectively lowering the bandwidth for the PCIe connection.

This means that out of the theoretical 5 GB/s from a Thunderbolt 3 connection to an eGPU, you can't really squeeze more than ~2.7GB/s (5GB/s total - 1.25GB/s USB 3.1 - overhead), which is actually closer to PCIe 3.0 x2.
You can measure the PCIe GPU<->CPU bandwidth (also known as device-to-host) by running the AIDA64 GPGPU benchmark and looking at the "Memory Read" or "Memory Write" results, which constitute the effective bandwidth at which the CPU can read or write the VRAM.
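If you'd rather sanity-check this without AIDA64, here's a minimal sketch of the same measurement, assuming an Nvidia card and a PyTorch build with CUDA (AMD users would need a ROCm build instead); it times pinned-memory copies in each direction, which is roughly what the "Memory Read"/"Memory Write" numbers reflect:

import time
import torch

def copy_bandwidth(size_mb=256, iters=20, direction="h2d"):
    # Pinned host buffer + device buffer of the same size (float32 elements).
    n = size_mb * 1024 * 1024 // 4
    host = torch.empty(n, dtype=torch.float32).pin_memory()
    dev = torch.empty(n, dtype=torch.float32, device="cuda")
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        if direction == "h2d":
            dev.copy_(host, non_blocking=True)   # CPU -> GPU ("Memory Write")
        else:
            host.copy_(dev, non_blocking=True)   # GPU -> CPU ("Memory Read")
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    return (size_mb * iters / 1024.0) / elapsed  # effective GB/s over PCIe/TB3

print(f"host->device: {copy_bandwidth(direction='h2d'):.2f} GB/s")
print(f"device->host: {copy_bandwidth(direction='d2h'):.2f} GB/s")

On a desktop PCIe 3.0 x16 slot you'd expect low double digits; over Thunderbolt 3 you should see it land around the ~2.7GB/s figure above.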

Then there's the fact that all the 14nm Intel CPUs use an external Thunderbolt 3 controller, which adds considerable latency to the pipeline, and this also reduces performance.
The best CPUs out there for use with an eGPU are the Ice Lake models, which integrate a higher-performing Thunderbolt 3 controller within the chip itself, thus reducing the latency. On Ice Lake models, the performance hit is less than half that of Skylake & friends in many cases.

There are some perspectives for improvement in the short term, though.

- USB 4.0 supposedly builds on Thunderbolt 3 with the same connectors and the same 5GB/s total bandwidth, but since it's not forced to carry other signals it should be able to use the full 5GB/s just for the GPU. Since it's USB, we should expect AMD to support it too.
- Thunderbolt 4.0 should use 4 lanes of PCIe 4.0, effectively doubling the maximum bandwidth from 5 to 10GB/s (80Gbps). At the same time, Intel has already said that Thunderbolt 4 will only reserve the same 10Gbps for USB, so the potential for the eGPU link should be up to 70Gbps, or 8.75 GB/s.

So in the future, it looks like we're going from the 30Gbps / 3.75GB/s CPU-GPU rate on current Thunderbolt 3 systems, to 5GB/s on USB4 (probably Zen 3?) and 8.75GB/s on Tiger Lake and beyond.
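To make the unit juggling explicit, here's the same budget math in a few lines of Python; note that the 80Gbps Thunderbolt 4 figure is this post's speculation, not a confirmed spec:

# Converting the link budgets above from Gbps to GB/s (8 bits per byte).
def gb_per_s(gbps):
    return gbps / 8

tb3_total  = gb_per_s(40)        # 5.0 GB/s raw Thunderbolt 3
tb3_pcie   = gb_per_s(40 - 10)   # 3.75 GB/s once 10Gbps is reserved for USB
tb3_usable = tb3_pcie * 0.72     # ~2.7 GB/s after ~25-30% protocol overhead
usb4_pcie  = gb_per_s(40)        # 5.0 GB/s if nothing else is carried
tb4_pcie   = gb_per_s(80 - 10)   # 8.75 GB/s under the speculated 80Gbps spec

print(tb3_total, tb3_pcie, round(tb3_usable, 2), usb4_pcie, tb4_pcie)
# -> 5.0 3.75 2.7 5.0 8.75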
 
This means that out of the theoretical 5 GB/s from a Thunderbolt 3 connection to an eGPU, you can't really squeeze more than ~2.7GB/s (5GB/s total - 1.25GB/s USB 3.1 - overhead), which is actually closer to PCIe 3.0 x2.

I was entirely unaware that it permanently sets aside the USB 3.1 bandwidth. The lack of intelligent bandwidth splitting seems an obvious and confusing flaw. Either way, I'm already invested in the current tech through the Mac mini and Razer Core X. Regardless, I'm looking forward to USB 4 and Intel's future CPUs with built-in controllers. I just hope AMD gets in on it quickly once the USB4 spec is finalised. Modularity in this way has a lot of promise.

But the price-to-performance equation of TB3 does smart. So the question remains. What would be a good fit under bandwidth-limited circumstances? Or conversely, at what point does the power of the GPU cease to matter even under optimal circumstances? And what GPU would that be? In my case either something in AMD's current lineup, or possibly something more powerful coming down the line.

It's a challenging question to ask since I lack the proper technical and software know-how, and I suspect it's just as challenging, or even more so, to answer considering the variables involved, even for those who don't. Your answer on the technicalities was very informative though, ToTTenTranz, and greatly expanded my understanding of the subject.
 
Mac mini: I've had one before, and my last PC was also the same mini size.
My chief question would be: does fan noise bother you? It will be an issue, there's no escaping this.

It does a bit, but I'm used to it. From my experience the tone of a Mac mini isn't exactly grating. Which I definitely can't say about the fan in my laptop cooler...
 
Oh, so you have one already; OK, ignore my statement then. I was advising against one if you want a quiet computer.
 
Oh, so you have one already; OK, ignore my statement then. I was advising against one if you want a quiet computer.

Sort of. It's on the way and aimed to become the best worst purchase I've ever made willingly. As evidenced by pretty much all of my prior research and common sense. I'm already planning for its thermal disabilities.
 
What would be a good fit under bandwidth-limited circumstances? Or conversely, at what point does the power of the GPU cease to matter even under optimal circumstances? And what GPU would that be?

There's no clear answer to that. It depends on your target resolution and IQ settings.

What we do know is that you have narrow bandwidth between your GPU and your CPU. This means the CPU can't order the creation of new frames very fast, and sending new data to the GPU is slow, though that should mean longer loading times and/or texture / geometry pop-in when the VRAM is full. As mentioned above, this won't mess much with anything the GPU can do for itself, though. In general, you can sustain modest framerates at high resolution and IQ if your GPU allows you to, but lowering resolution and IQ probably won't ever give you very fast framerates.

To summarize: with an external RTX 2060 you can probably play Metro Exodus maxed out @ 4K at ~35 FPS, but even with an external RTX 2080 Ti you probably can't play Witcher 3 @ medium settings 1080p at, say, over 90 FPS.
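A toy model of that intuition (my own simplification, with made-up per-frame costs, not measurements): treat the frame time as whichever stage is slowest, the CPU/link side or the GPU side, and note that only the GPU side scales with resolution and IQ:

# Toy bottleneck model with hypothetical per-frame costs (not measurements).
def fps(link_cpu_ms, gpu_ms_at_1080p, pixel_scale):
    gpu_ms = gpu_ms_at_1080p * pixel_scale     # GPU cost grows with resolution/IQ
    return 1000.0 / max(link_cpu_ms, gpu_ms)   # slowest stage sets the frame rate

LINK = 12.0  # say the narrow eGPU link + CPU side costs ~12 ms per frame
print(fps(LINK, 6.0, 1.0))  # 1080p: GPU is fast, the link caps you at ~83 FPS
print(fps(LINK, 6.0, 4.0))  # 4K (4x the pixels): GPU-bound at ~41 FPS instead

Lowering resolution only moves you toward the flat link ceiling (~83 FPS in this made-up example), which is why a very fast card still can't buy high frame rates here.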
 
There's no clear answer to that. It depends on your target resolution and IQ settings.

What we do know is that you have narrow bandwidth between your GPU and your CPU. This means the CPU can't order the creation of new frames very fast, and sending new data to the GPU is slow, though that should mean longer loading times and/or texture / geometry pop-in when the VRAM is full. As mentioned above, this won't mess much with anything the GPU can do for itself, though. In general, you can sustain modest framerates at high resolution and IQ if your GPU allows you to, but lowering resolution and IQ probably won't ever give you very fast framerates.

To summarize: with an external RTX 2060 you can probably play Metro Exodus maxed out @ 4K at ~35 FPS, but even with an external RTX 2080 Ti you probably can't play Witcher 3 @ medium settings 1080p at, say, over 90 FPS.

This is what I've understood as well. That's why my target is 1440p/4K 60 FPS. In part because these are the frame rates and resolutions suited to my monitor, and because I understand the inverse relationship between higher frame rates and eGPU performance.

Running with your example: would you think a hypothetical RTX 2080 Ti running Witcher 3 - or whatever the equivalent of running Crysis is these days - at 1440p/4K 60 FPS is viable?

  1. If the answer to that is no, there's no point in me getting a 2080 Ti or similar since, presumably, texture resolutions and draw distances etc. would have to be lowered anyway to fit the restricted bandwidth. Granted, this will inevitably become an issue. But for the here and now (+ a year or two)?
  2. If the answer is yes, I'll simply get the best card my budget allows, despite the odd edge case of stutter and pop-in as data spikes occur.
 
What difference does the resolution make regarding PCIe bandwidth? I'd think it doesn't matter much.
 
What difference does the resolution make regarding PCIe bandwidth? I'd think it doesn't matter much.

Resolution doesn't affect bandwidth as such; it's just a way to control where the bottleneck primarily occurs in the system. At lower resolutions the CPU hits the fixed bandwidth limit trying to throw game data to the GPU, meaning a sufficiently strong GPU goes under-utilised. However, by leveraging IQ (like resolution) the bottleneck can be shifted towards the GPU instead. Preferably to the point where the GPU renders at my display's resolution and refresh rate without asking for more data than can be delivered. In my case 60Hz 1440p/4K. But if the bandwidth limit is so severe that, even at those IQ settings, an expensive GPU is bandwidth-bottlenecked, it's just a waste of money.

I'm a layman hobbyist though, so I couldn't begin to explain it as technically and accurately as ToTTenTranz above.
 
Resolution doesn't affect bandwidth as such; it's just a way to control where the bottleneck primarily occurs in the system. At lower resolutions the CPU hits the fixed bandwidth limit trying to throw game data to the GPU, meaning a sufficiently strong GPU goes under-utilised. However, by leveraging IQ (like resolution) the bottleneck can be shifted towards the GPU instead. Preferably to the point where the GPU renders at my display's resolution and refresh rate without asking for more data than can be delivered. In my case 60Hz 1440p/4K. But if the bandwidth limit is so severe that, even at those IQ settings, an expensive GPU is bandwidth-bottlenecked, it's just a waste of money.

I'm a layman hobbyist though, so I couldn't begin to explain it as technically and accurately as ToTTenTranz above.

Yea so you're probably good to run most games at 60fps with an eGPU. Going higher than that might be difficult depending on the game.
 
Yea so you're probably good to run most games at 60fps with an eGPU. Going higher than that might be difficult depending on the game.

Yuuuup, which is fine by me. I'm thinking I might just get the fastest AMD GPU in my budget when I've gotten the mini (it's on its way; I have the eGPU enclosure already though, a Razer Core X). Hopefully AMD will have revealed their plans by then, and I can make up my mind. But I get the distinct feeling that at 60 FPS the strongest GPU I can muster will not be bottlenecked. And I can honestly play at 30-40 FPS too.
 
I would try out one of the streaming services before going the eGPU route. Perhaps streaming is good for the kind of games you play, or perhaps not. It's probably free to try out streaming.
 