Maybe he used the flops capacitor to get one from the future with the Fusion APU powered by all the garbage posts on the forums.
Devs are too lazy to build that.
Where?
The worst framerate performance in a Digital Foundry test since they started doing them in 2008: 1080p at an average of 15fps with v-sync on PS4, and 1080p at an average of 15fps with v-sync off on Xbox One.
Plus 2-to-3-minute loading screens... and a brightness problem on Xbox One...
http://www.eurogamer.net/articles/d...mage-is-the-worst-performing-game-weve-tested
Tech Interview: Doom
Very nice interview, probably one of the very best DF articles in a while
On async compute:
On TSSAA:
ULTRA BUMP 2000
Does anyone know if this shitshow ever got anywhere near fixed? I assume what happened here is similar to Sword of the Stars 2: release or die (SOTS2 later got patched to actually work and is apparently a pretty decent game, but the initial release was a shitshow).
I also wonder how DF staff are going to work out the resolution of any given game once they're over 1080p. I don't think there are going to be steps to count...
Sure there will be. Higher res makes jaggies and other aliasing artifacts smaller and softer, and thus harder to pick out, but there's no hard cutoff that occurs at native res.
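For anyone curious how the pixel counting actually scales, here's a rough sketch of the idea (my own illustration, not DF's actual tooling; it assumes a lossless capture and a simple upscale, so stair steps stretch by the upscale factor):

```python
# Sketch of the pixel-counting idea (my illustration, not DF's tool).
# Assumes a lossless capture and a simple upscale, so a stair step that
# is 1 pixel wide in the internal framebuffer appears as roughly
# (output_width / internal_width) pixels wide in the capture.

def estimate_internal_width(step_widths_px, output_width):
    """step_widths_px: widths (in capture pixels) of consecutive stair
    steps measured along one long, near-vertical aliased edge."""
    avg_step = sum(step_widths_px) / len(step_widths_px)
    return round(output_width / avg_step)

# Steps alternating 1 and 2 pixels wide (avg 1.5) on a 1920-wide capture
# point to a ~1280-wide internal buffer, i.e. 720p upscaled to 1080p.
print(estimate_internal_width([1, 2, 1, 2, 1, 2], 1920))  # -> 1280

# The same measurement works on a 3840-wide capture; the steps just get
# smaller relative to the frame, which is why counting gets harder.
print(estimate_internal_width([2, 2, 2, 2], 3840))        # -> 1920
```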
If they can't work this out themselves, they don't deserve the clicks.

Someone needs to tell Digital Foundry to drop the RX 480's memory clock (default 224GB/s) by 20-25% to get 170-180GB/s, to simulate the memory contention and CPU load on Neo, if they want a closer and more realistic simulation of the Neo GPU's real-world capabilities.
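The back-of-envelope arithmetic behind that 20-25% figure, for what it's worth (bandwidth numbers are from the post above; the 1750MHz stock memory clock for 7Gbps GDDR5 on a 256-bit bus is my assumption):

```python
# Quick numbers for the suggested downclock. Bandwidth figures are from
# the post above; the 1750 MHz stock memory clock is an assumption.

STOCK_BW_GBS = 224.0    # RX 480 default bandwidth, per the post
STOCK_MEM_MHZ = 1750.0  # assumed stock clock for 7 Gbps GDDR5

for target_bw in (170.0, 180.0):
    ratio = target_bw / STOCK_BW_GBS
    print(f"{target_bw:.0f} GB/s -> cut {1 - ratio:.1%}, "
          f"mem clock ~{STOCK_MEM_MHZ * ratio:.0f} MHz")

# 170 GB/s -> cut 24.1%, mem clock ~1328 MHz
# 180 GB/s -> cut 19.6%, mem clock ~1406 MHz
```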
I disagree. The contention would be roughly the same on the PS4 target machine if they only change the res. Their R7 265 with 1.84TFLOPS also has 178GB/s of bandwidth, and performs very similarly to a real PS4 in some games, like TW3 or RSS. I think the 'coding to the metal' on PS4 is largely enough to overcome the contention parameter.
Nintendo NX is powered by Nvidia Tegra technology
http://www.eurogamer.net/articles/d...-mobile-games-machine-powered-by-nvidia-tegra
/ Ken

Not a surprise. Nvidia has spent a huge amount of money on mobile GPU and CPU design but hasn't had any big design wins since the Microsoft Surface RT (which flopped). This is great for Nvidia: they get some return on years of development effort. I'm sure Nintendo got a good deal, since Nvidia's mobile side must be desperate. It's hard to compete against Qualcomm, Samsung, Apple and others; Intel couldn't compete because mobile chip profit margins are so thin. Like Intel, Nvidia is used to high profit margins. They make most of their money on high-end GPUs, professional workstations and HPC, all really high profit margin segments.
I think Nvidia has used that experience with mobile GPU/Tegra/Jetson to push into other lucrative markets such as automotive (yeah, still very early days for all, but the scale and margins are much bigger than what they could do with gaming HW).