Digital Foundry Article Technical Discussion Archive [2013]

And if that 1 GB truly is flexible, meaning that it isn't always available to games (in part or in whole), that means Sony must already have something that could potentially push the OS to use ~3.5 GB of memory. Otherwise they could just guarantee games 5 or 5.5 GB of memory in the first place. That they can't means it's getting used at some point.

I'm sure those 3Gigs are filled with Kaz .gifs.
Or they are doing what you said and being cautious. That 1.5 GB of non-guaranteed memory might be completely free, and pretty much guaranteed anyway at launch, but at the risk of no longer being available when the OS gets upgraded.
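Just to put rough numbers on it, here's a back-of-the-envelope sketch using the figures floating around in this thread (8 GB total, a ~3.5 GB worst-case OS reservation, a 1 GB flexible block); none of these figures are confirmed:

```python
# Rough accounting based on the numbers discussed above (all unconfirmed):
# 8 GB total GDDR5, a ~3.5 GB worst-case OS reservation, and a 1 GB
# "flexible" block that may or may not be granted to the game.

TOTAL_RAM_GB = 8.0
OS_WORST_CASE_GB = 3.5   # worst-case OS footprint mentioned above
FLEXIBLE_GB = 1.0        # flexible block, not guaranteed to games

guaranteed = TOTAL_RAM_GB - OS_WORST_CASE_GB   # 4.5 GB the game can always count on
best_case = guaranteed + FLEXIBLE_GB           # 5.5 GB if the flexible block is granted

print(f"Guaranteed to the game: {guaranteed:.1f} GB")
print(f"With the flexible block: {best_case:.1f} GB")
```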
 
Is it possible for the game developer to also create an app that the user can download and run on the OS side of the console, augmenting the game?

Like, for instance, a map application that the user can access, which gets the current coordinates (and already-discovered game space) from the cloud servers, sparing the game itself from having to load and/or keep the corresponding data in game RAM?

Should I patent this idea??? :D
 
Is it possible for the game developer to also create an app that the user can download and run on the OS side of the console, augmenting the game?

Like, for instance, a map application that the user can access, which gets the current coordinates (and already-discovered game space) from the cloud servers, sparing the game itself from having to load and/or keep the corresponding data in game RAM?

Should I patent this idea??? :D

I think I beat you to that in the News & Rumours thread. Patent dispute!

I don't see why that kind of thing wouldn't be possible at least, if you can flip between app and game quickly enough.
 
I think I beat you to that in the News & Rumours thread. Patent dispute!

Duh! That's why I don't post much, everybody steals my ideas before I have the chance to post them! :mad:

I don't see why that kind of thing wouldn't be possible at least, if you can flip between app and game quickly enough.

Yep, that's what I was thinking.

A killer feature would be to allow direct communication between game and app, authorized by the signing keys or something like that.
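Something like this, roughly speaking; this is purely hypothetical, the endpoint and fields below are made up, and there's no confirmed mechanism for an OS-side app to tie into a game:

```python
# Hypothetical sketch of a companion "map app" that pulls the player's position
# and discovered map cells from a cloud service instead of holding them in game RAM.
# The URL and JSON fields are invented for illustration only.
import json
import urllib.request

CLOUD_ENDPOINT = "https://example.com/api/player/{player_id}/map_state"  # placeholder, not a real service

def fetch_map_state(player_id: str) -> dict:
    """Ask the (hypothetical) cloud service for the latest map state."""
    url = CLOUD_ENDPOINT.format(player_id=player_id)
    with urllib.request.urlopen(url, timeout=5) as response:
        return json.load(response)

def render_map(state: dict) -> None:
    """Stand-in for the app-side map renderer: just print what we'd draw."""
    x, y = state["position"]
    print(f"Player at ({x}, {y}), {len(state['discovered_cells'])} cells discovered")

if __name__ == "__main__":
    # Would only work against a real service; shown here to illustrate the flow.
    render_map(fetch_map_state("player-123"))
```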
 
Well this is good news for PC gamers with only 2GB GPUs. Once you split off a chunk of that 5GB for the system, the new consoles aren't going to have much more than 2GB available to games anyway. It may be the case that a 2GB GPU is actually able to keep pace with the new consoles for a good while longer than expected.
 
Well this is good news for PC gamers with only 2GB GPUs. Once you split off a chunk of that 5GB for the system, the new consoles aren't going to have much more than 2GB available to games anyway. It may be the case that a 2GB GPU is actually able to keep pace with the new consoles for a good while longer than expected.
Uh, what? The 5GB is entirely available to the game.
 
Well this is good news for PC gamers with only 2GB GPUs. Once you split off a chunk of that 5GB for the system, the new consoles aren't going to have much more than 2GB available to games anyway. It may be the case that a 2GB GPU is actually able to keep pace with the new consoles for a good while longer than expected.
Indeed. It should also be interesting if Kaveri ever supports GDDR5m.
The work done on those consoles should benefit PC gamers too; MSFT just introduced some neat things in their latest DirectX, including the possibility, as on consoles, of scaling render targets, which is a neat performance trick :)

Actually, Kaveri with properly tweaked clocks (pushing the GPU at the expense of the CPU) should be quite competitive, and it looks even better if you think about what should come next.
Then there is Windows RT: could MSFT make it a breeze to port to RT devices?
Powerful setups are getting close; Logan is nice for a mobile product, but four A15s running at high speed and a sanely clocked SMX (backed by proper memory) could do marvels in the low-end market.

I still find it weird that MSFT is fighting itself in the "PC" gaming realm; it is still one strong reason for people to buy Windows. If they could get there with Windows RT it would be interesting.
What I mean is making PC game development "CPU ISA agnostic".

People with high-end setups (or recently high-end ones) may not have to upgrade for a while; I have the distinct feeling that game development costs are about to hit a ceiling. It could be years before asset quality improves over what is already available in some high-profile PC games.
That ceiling means that more and more devices should be able to run those games (at various power budgets).

Actually, I wonder if MSFT is stealthily preparing a PC gaming offensive. The app platform is already there on Windows 8, and the latest additions to their API are pretty much a blessing for conservative setups (as proven by this generation of consoles, being able to render some render targets at a lower resolution is a blessing for performance).

EDIT:
Looking back at DirectX 11.2, I wonder about the scaling part; reading their short description makes me think more of some sort of dynamic scaling, which is not as good a trick as rendering any render target at any arbitrary resolution.
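For what it's worth, here's roughly what that kind of dynamic scaling could look like, as a simplified sketch of the general idea; this is not how DirectX 11.2 or either console actually implements it:

```python
# Simplified dynamic resolution heuristic: shrink or grow the 3D render target
# based on the last frame's GPU time, while the UI stays at full resolution.
# Illustrative sketch only, not the DX11.2 or console mechanism.

TARGET_FRAME_MS = 33.3        # ~30 fps budget
MIN_SCALE, MAX_SCALE = 0.5, 1.0

def next_scale(current_scale: float, last_frame_ms: float) -> float:
    """Nudge the resolution scale toward the frame-time budget."""
    error = last_frame_ms / TARGET_FRAME_MS       # >1.0 means we missed the budget
    proposed = current_scale / (error ** 0.5)     # scale each axis so area tracks cost
    return max(MIN_SCALE, min(MAX_SCALE, proposed))

def render_target_size(scale: float, out_w: int = 1920, out_h: int = 1080):
    return int(out_w * scale), int(out_h * scale)

# Example: a 40 ms frame at full resolution drops the next frame's 3D target.
scale = next_scale(1.0, 40.0)
print(render_target_size(scale))   # roughly (1751, 985) before upscaling to 1080p
```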
 
I think realistically, if you have a 4GB GPU in your PC then you are likely good to go for the next bunch of years, since part of that 5GB allocated to games on console can sit in regular DDR memory on the PC without issue. Short term, though, 2GB and 3GB GPUs will be OK until the games catch up.
 
Uh, what? The 5GB is entirely available to the game.

As in, part of that 5GB will be allocated as "system" memory, which on a PC is countered by the DDR3 pool, while only a fraction will be allocated as graphics memory, which needs to be countered by GDDR5 in a PC. If that fraction is 1/2 then you're looking at 2.5GB vs 2GB of pure graphics memory, but with a much bigger pool of DDR3 backing up the graphics memory in the PC, which will help to offset the difference.
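Putting rough numbers on that split; the 50/50 graphics/system ratio is just the assumption above, and the 8GB of DDR3 is a typical figure for a gaming PC, not a measurement:

```python
# Illustrative comparison of the console game budget vs a 2GB PC GPU,
# using the 50/50 graphics/system split assumed in the post above.

CONSOLE_GAME_BUDGET_GB = 5.0
GRAPHICS_FRACTION = 0.5          # assumption from the post, not a known figure

console_graphics = CONSOLE_GAME_BUDGET_GB * GRAPHICS_FRACTION   # 2.5 GB
console_system = CONSOLE_GAME_BUDGET_GB - console_graphics      # 2.5 GB

PC_VRAM_GB = 2.0                 # the 2GB GPU being discussed
PC_DDR3_GB = 8.0                 # typical gaming PC at the time (assumption)

print(f"Console: {console_graphics:.1f} GB graphics + {console_system:.1f} GB system")
print(f"PC: {PC_VRAM_GB:.1f} GB GDDR5 + {PC_DDR3_GB:.1f} GB DDR3 backing it up")
```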
 
Duh! That's why I don't post much, everybody steals my ideas before I have the chance to post them! :mad:



Yep, that's what I was thinking.

A killer feature would be to allow direct communication between game and app, authorized by the signing keys or something like that.

Err... doesn't SmartGlass do what you are describing?
 
As in, part of that 5GB will be allocated as "system" memory, which on a PC is countered by the DDR3 pool, while only a fraction will be allocated as graphics memory, which needs to be countered by GDDR5 in a PC. If that fraction is 1/2 then you're looking at 2.5GB vs 2GB of pure graphics memory, but with a much bigger pool of DDR3 backing up the graphics memory in the PC, which will help to offset the difference.

That much larger pool will only be a benefit in 64-bit versions of a game. 32-bit versions will still be limited to 2 GB of system memory.

Regards,
SB
 
That much larger pool will only be a benefit in 64-bit versions of a game. 32-bit versions will still be limited to 2 GB of system memory.

Regards,
SB

Indeed, but I think any discussion like this is made on the assumption that developers will make use of what's available (at least the low-hanging fruit). If next-gen console games are utilising 4-5GB of total memory then we have to assume there will be 64-bit-only versions of that same game on the PC before any comparison can be made. It's possible that in many cases the PC may only receive a current-gen port, in which case all talk of superior hardware is moot to begin with.
 
Well, I don't like that article:
1) Performance doesn't scale linearly with CU count. They use parts that have more CUs than Durango and Orbis respectively.
2) The two GPUs they chose have 32 ROPs and a 256-bit bus.
3)
To achieve this, we wanted to ensure (as much as possible) that the rendering wouldn't be CPU or memory limited, so we utilised our existing PC test-bed, featuring a Core i7 3770K overclocked to 4.3GHz
That is, imho, a bad premise; the CPU may prove a bottleneck versus such a high-end desktop chip. Jaguar may not suck, but a Core i7 with its 8 threads clocked that high is not in the same ballpark.

They are pretty clear about why they chose what they did, etc., but it doesn't cut it. They try too hard to match the specs and in the end are not able to do it.
I think a Bonaire vs HD 7870 comparison, the whole thing powered by an old Athlon X4 at low clocks, would paint a better picture, along with testing at various settings to try to outline when bandwidth becomes a concern, the impact of AA, resolution, etc., and the impact of the CPU bottleneck.
 
What's most impressive is that even with a 1.2 TF, 600 MHz 7850, they were able to run Crysis 3 at high settings at 1080p at ~30+ FPS.

That really shows what a treat we're in for next gen. And I'm sure Crysis doesn't use anywhere near the 5GB of RAM that the XBO has, either. And that's a cruddy, unoptimized PC game.
 
What's most impressive is that even with a 1.2 TF, 600 MHz 7850, they were able to run Crysis 3 at high settings at 1080p at ~30+ FPS.

That really shows what a treat we're in for next gen. And I'm sure Crysis doesn't use anywhere near the 5GB of RAM that the XBO has, either. And that's a cruddy, unoptimized PC game.
That is not really a surprise, as an HD 7700 can do it.

I don't think that PC games are unoptimized either; the main "optimization" available to consoles and not PCs, to me, is the extent to which you can play with the resolution of your different render targets.

Still, I don't think that article does a good job of showing what the advantage(s) of one system over the other could be; that is more something I would expect from a site like The Tech Report.
 
The only thing we can do to compare the two next-gen boxes is to wait for them. As close as they are to PCs, things like ESRAM, unified GDDR5, and different ROP and TMU counts make it very hard to set up representative hardware.

If indeed the CU count increase, even with extra bandwidth, TMUs and ROPs, doesn't scale performance linearly, there'll be a lot of idle transistors, and hopefully first parties will make use of that through GPU compute. I'm nearly certain multi-plats will not really dig that much into that territory. On the PS3, they literally had to use SPUs to make up for RSX's weaknesses or there'd otherwise be a serious performance gap (e.g. Frostbite probably couldn't run properly if they didn't dig into the SPUs). I've heard that the PS4's CUs are its equivalent of the PS3's SPUs (from a Cerny interview, probably), but since the PS4 already doesn't require extra work for platform parity, I don't think multiplat devs will make extensive use of GPGPU.
 
Well, I don't like that article:
1) Performance doesn't scale linearly with CU count. They use parts that have more CUs than Durango and Orbis respectively.
2) The two GPUs they chose have 32 ROPs and a 256-bit bus.
3) That is, imho, a bad premise; the CPU may prove a bottleneck versus such a high-end desktop chip. Jaguar may not suck, but a Core i7 with its 8 threads clocked that high is not in the same ballpark.

They are pretty clear about why they chose what they did, etc., but it doesn't cut it. They try too hard to match the specs and in the end are not able to do it.
I think a Bonaire vs HD 7870 comparison, the whole thing powered by an old Athlon X4 at low clocks, would paint a better picture, along with testing at various settings to try to outline when bandwidth becomes a concern, the impact of AA, resolution, etc., and the impact of the CPU bottleneck.

They specifically said they weren't trying to recreate the consoles, only test a theory that 50% more GPU power doesn't mean 50% better performance as measured by frame rates.
 