General Next Generation Rumors and Discussions [Post GDC 2020]

Worse resolution, worse settings and frame rate.

I also know this isn't a proper comparison, but I don't see any game running on a 7850 looking anything like The Last of Us 2.
One should separate technical prowess from production values. All Sony exclusives look the way they do because of the amazing amount of funding and time they are given to constantly retool and rework the games, all for one type of configuration.
 
One should separate technical prowess from production values. All Sony exclusives look the way they do because of the amazing amount of funding and time they are given to constantly retool and rework the games, all for one type of configuration.

True, and that's why I said it's probably not a fair comparison, but what about Warzone, which should run much better on a 4th-gen i5, GTX 680 and 16GB of RAM?
 
Yes, because as 4A Games point out, draw calls... DirectX as a whole is not getting thinner or smaller, functions are getting more complex to provide flexibility to the game/app an...
You've missed the spirit of the argument. Let's say every game on XB1 is a million times slower at draw calls than PS4*. Does that result in games being one million times slower in their refresh rate? No. So the real question is given a piece of hardware, comparing to-the-metal coding versus fully BC APIs, what is the real cost in game performance on that hardware? If it's 50%, if games run half the speed using an API, then no-one wants it in a console. If it's 15%, some will accept the sacrifice, preferring games to have slightly tuned-down effects and features for a persistent library. If it's 1%, there's no reason not to use fat APIs and have your library portable across generations. Even more so when the BC question affects hardware considerations, and to-the-metal coding gets you maybe +15% performance this gen but -20% next gen because your hardware is having to use legacy features, and then an additional 15% drop in relative performance to the previous generation when you switch to fatter APIs so you don't have that problem next time around...

* DX12 supposedly solves this; that interview was talking about DX11. DX12 is an example, like Metal and Vulkan, of an API that is less detrimental to performance and strikes a balance between performance and portability.
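
To put rough numbers on that framing (every figure below is made up; only the shape of the trade-off matters): if draw-call submission is just one slice of the frame and CPU and GPU work overlap, even a much fatter API barely moves the frame rate until submission actually becomes the bottleneck. A minimal sketch in Python:

# All numbers are hypothetical -- the shape of the result is the point, not the values.
gpu_frame_ms = 14.0      # GPU-side work per frame
cpu_other_ms = 3.0       # game logic, animation, culling
cpu_submit_ms = 1.0      # draw-call submission cost with a thin API

def frame_time_ms(api_factor):
    # Assume CPU and GPU work overlap; whichever side is slower sets the frame time.
    cpu_ms = cpu_other_ms + cpu_submit_ms * api_factor
    return max(gpu_frame_ms, cpu_ms)

for api_factor in (1, 2, 5, 10, 50):
    t = frame_time_ms(api_factor)
    print(f"{api_factor:>3}x submission overhead -> {t:5.1f} ms ({1000 / t:4.0f} fps)")

With those invented timings, a 10x heavier API changes nothing because the GPU is still the long pole; only at 50x does the CPU take over and the frame rate actually drop.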
 
It will be interesting to see what the RT-RT APIs are like on the PS5. Will they be so low-level that they're always tied to that specific hardware implementation, or will they be abstracted enough that the same software could run on a PS6 with different hardware?
 
True, and that's why I said it's probably not a fair comparison, but what about Warzone, which should run much better on a 4th-gen i5, GTX 680 and 16GB of RAM?
Indeed. I do believe those older Nvidia Keplers are likely going to have trouble keeping up now that games are advancing towards more compute.
 
Indeed. I do believe those older Nvidia Keplers are likely going to have trouble keeping up now that games are advancing towards more compute.
I don't know when they got async compute running better, but that could be part of the reason.
Wonder how it compares to an equivalent AMD GPU.
 
I don't know when they got async compute running better, but that could be part of the reason.
Wonder how it compares to an equivalent AMD GPU.
Agreed; compute got a bigger focus around the 900 series, IIRC.
 
Please pardon the dust as the thread is cleaned up from discussion that ended up at the point of pure noise, with a new user misstating their baseless beliefs as facts.
 
You've missed the spirit of the argument. Let's say every game on XB1 is a million times slower at draw calls than PS4*. Does that result in games being one million times slower in their refresh rate? No. So the real question is given a piece of hardware, comparing to-the-metal coding versus fully BC APIs, what is the real cost in game performance on that hardware?
I am missing the spirit of the discussion; you keep raising questions that the Digital Foundry interview with 4A Games answers, so I'll bow out here. There are plenty of technical articles looking at the relative overheads of APIs. Speculation is unnecessary. Sure, APIs have evolved over the last few years, but I've seen nothing to evidence that there has been a radical performance leap ahead of where we were just a few years back.
 
The relative overheads don't matter so much as the real-world, on-screen differences and how much they impact gamers. All the technical articles in the world can't identify that, any more than all the scientific journals in the world describing photons and energy transfer could tell you whether someone will like a painting or not. ;)

E.g. you link to an article that says "OpenGL can unlock 15x performance gains." Does that mean an OpenGL game runs 15x faster in framerate than its DX counterpart? No.

The valid metrics here aren't the theoretical but the real-world differences. How much slower, and how many fewer features, does XB1 have versus PS4 due to API (and not slower hardware)? If someone could find an article that explains/measures that, it'd bring a lot to the discussion. Something like a DF face-off comparing PS4-level PC hardware to PS4 and showing the difference in framerate. Ideally best cases with different APIs, as sometimes an API is poorly used (DX12 games running slower than DX11).

Edit: I actually say all this in the body of that post you've replied to. I've acknowledged the API has overheads and some workloads are far slower, so I don't know why you're continuing that line with more links to 'API overhead' info. :???:
 
Seeing that Doom runs about as well as the PS4 version on equal-to-PS4 hardware, I think API overhead is not that significant. It runs badly on NV hardware, but that might be due to architecture differences.
 
DX11.x, the XO version, was always going to be less of an ideal fit compared to DX12.
DX12 was a more modern architecture, a better fit for modern GPU IP, and closer to the metal.

So sure, you can poke values straight into registers on PS4, but the net benefit compared to going through a modern API is far from convincing.
It's also part of the reason they have to go around the houses to support BC. Even if you say who cares about BC, well, Sony seems to now.
I can see them starting to restrict that sort of access moving forward.

There are just too many net positives in not giving devs full direct access, even if it means slightly less performance.
 
API overhead is only really going to matter if you're draw call limited, as far as I know, and most games are not draw call limited. I'm sure there are other cases, but in the order of things API overhead is going to be very very low on the list of what actually determines performance when comparing DX12 or Vulkan to "to the metal" programming.
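
To get a feel for where that limit sits, here's a back-of-the-envelope sketch (the per-call costs are invented; real figures vary wildly by engine, driver and API):

# Hypothetical per-draw-call CPU costs, just to illustrate where a draw-call limit kicks in.
frame_budget_ms = 16.7                     # 60 fps frame budget
per_call_cost_us = {
    "thick API (heavy validation)": 40.0,
    "thin API / near to-the-metal": 5.0,
}

for name, cost_us in per_call_cost_us.items():
    max_calls = frame_budget_ms * 1000.0 / cost_us
    print(f"{name:30s} -> ~{max_calls:6.0f} draw calls/frame before submission alone eats the budget")

Most games stay well under even the lower figure by batching and instancing, which is why the overhead rarely shows up as a visible frame-rate difference.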
 
We just gotta see what happens. I haven't heard anything about a delay, only bad yields. Also keep in mind the sources I have at Sony are nowhere near as good as MS. This delay rumor is just speculation due to a banner change on Sony's website.
Poor yields would be bad for AMD. There is no way that Sony (or Microsoft) does not have a contract clause covering yield/cost ratios. It is also bad for Sony as it may restrict the number of consoles, but we've known that for over a month. I wonder what could cause bad yields, and given AMD's future tech is predicated on clocking higher, is this issue going to bone all of AMD's future output?
 
We just gotta see what happens. I haven't heard anything about a delay, only bad yields.

When did you hear about bad yields?
IIRC, there were rumors of the PS5 SoC getting yet another pre-mass production revision as late as last month.
 
Poor yields would be bad for AMD.
Sort of a variable target, though. If you are looking for the cream-of-the-crop chip and not getting as many, you're bound to have poorer yields than with a lower-binned chip.
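
As a first-order illustration of that (classic Poisson defect model plus a bin pass rate; every number below is invented):

import math

# Invented numbers, purely to show how binning interacts with functional yield.
die_area_cm2 = 3.0        # hypothetical SoC die area
defects_per_cm2 = 0.1     # hypothetical defect density

functional_yield = math.exp(-die_area_cm2 * defects_per_cm2)   # dies with no killer defect

# On top of that, a frequency/power bin only accepts a fraction of the good dies.
bin_pass_rate = {
    "loose bin (lower clocks)": 0.95,
    "tight bin (top clocks only)": 0.60,
}

for name, pass_rate in bin_pass_rate.items():
    print(f"{name:28s} -> effective yield ~{functional_yield * pass_rate:.0%}")

Same wafers, same defect density, but a "bad yields" headline can simply mean the target bin is aggressive.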
 