As an armchair observer, I ask: are games today taking better advantage of higher GPU clock speeds rather than moar cores?
I don't understand how the AMD GCN architecture fell off after the 290 (Hawaii). Then I noticed Nvidia GPUs have been increasingly relying on high clocks. With high clocks, you can push things like fillrate and tessellation higher with fewer units, no? And with fewer compute units, you can push for even higher clocks... and you need fewer units because game engines are still not massively parallel / are limited by the consoles... does that make sense?
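Here's the napkin math I have in mind: peak fixed-function throughput is roughly unit count times clock, so a narrower part at a higher clock can match or beat a wider, slower one. The numbers below are just illustrative placeholders, not spec-sheet values.

```python
# Napkin math: peak fixed-function throughput ~ unit count * clock.
# Numbers are illustrative placeholders, not exact spec-sheet values.

def peak_fillrate_gpix(rops: int, clock_ghz: float) -> float:
    """Peak pixel fillrate in Gpixels/s, assuming one pixel per ROP per clock."""
    return rops * clock_ghz

wide_and_slow = peak_fillrate_gpix(rops=64, clock_ghz=0.95)    # ~60.8 Gpix/s
narrow_and_fast = peak_fillrate_gpix(rops=48, clock_ghz=1.70)  # ~81.6 Gpix/s

print(f"wide/slow:   {wide_and_slow:.1f} Gpix/s")
print(f"narrow/fast: {narrow_and_fast:.1f} Gpix/s")
```

So if that's the right way to look at it, the question is whether games actually saturate the extra units on a wide design, or whether the higher-clocked, narrower design wins in practice.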