As I understand it, XB1 has a considerable (1 GB) dataset present for Kinect.

As some of us speculated before, it only makes sense for Sony to have a similar memory split in the PS4 as in the Xbox One.
And if that 1 GB truly is flexible, meaning it isn't always available to games (in part or in whole), then Sony must already have something that can potentially push the OS to use ~3.5 GB of memory. Otherwise they could simply guarantee that games get 5 or 5.5 GB of memory in the first place. That they can't means it's getting used at some point.
Is it possible for the game developer to also create an app that the user can download and run on the OS side of the console, augmenting the game?
Like, for instance, a map application that the user can access, which gets the current coordinates (and the already-discovered game space) from the cloud servers, sparing the game itself from having to load and/or keep the corresponding data in game RAM?
Should I patent this idea???
I think I beat you to that in the News & Rumours thread. Patent dispute!
I don't see why that kind of thing wouldn't be possible at least, if you can flip between app and game quickly enough.
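To make the map-app idea above concrete, here's a minimal sketch of the companion-app side, assuming a purely hypothetical cloud endpoint that the game uploads its state to (the URL, fields, and player ID are all made up for illustration, not any real service):

```python
import json
import urllib.request

# Hypothetical endpoint the game's servers would expose; nothing here is a real API.
MAP_SERVICE_URL = "https://example-game-cloud.net/api/v1/players/{player_id}/map"

def fetch_map_state(player_id: str) -> dict:
    """Companion-app side: pull current coordinates and discovered area from the cloud,
    so none of this data has to sit in the game's own RAM budget."""
    url = MAP_SERVICE_URL.format(player_id=player_id)
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

if __name__ == "__main__":
    state = fetch_map_state("player-123")
    print("position:", state.get("position"))
    print("discovered tiles:", len(state.get("discovered_tiles", [])))
```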
Well this is good news for PC gamers with only 2GB GPUs. Once you split off a chunk of that 5GB for the system, the new consoles aren't going to have much more than 2GB available to games anyway. It may be the case that a 2GB GPU is actually able to keep pace with the new consoles for a good while longer than expected.

Indeed. It should also be interesting if Kaveri ever supports GDDR5M.

Uh, what? The 5GB is entirely available to the game.
Duh! That's why I don't post much, everybody steals my ideas before I have the chance to post them!
Yep, that's what I was thinking.
A killer feature would be to allow direct communication between game and app, authorized by the signing keys or something like that.
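A minimal sketch of what that game-to-app channel could look like, assuming a per-title shared secret stands in for "the signing keys" (the HMAC scheme and all names here are my own illustration, not anything the platform actually exposes):

```python
import hashlib
import hmac
import json

# Hypothetical per-title secret both the game and its companion app would be provisioned with.
SHARED_KEY = b"per-title-provisioned-secret"

def sign_message(payload):
    """Game side: serialize the payload and attach an HMAC tag the app can check."""
    body = json.dumps(payload, sort_keys=True)
    tag = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def verify_message(message):
    """App side: only act on messages whose tag matches; return None otherwise."""
    expected = hmac.new(SHARED_KEY, message["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["tag"]):
        return None
    return json.loads(message["body"])

if __name__ == "__main__":
    msg = sign_message({"event": "area_discovered", "tile": [12, 7]})
    print(verify_message(msg))  # {'event': 'area_discovered', 'tile': [12, 7]}
```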
As in, part of that 5GB will be allocated as "system" memory, which on a PC is countered by the DDR3 pool, while only a fraction will be allocated as graphics memory, which needs to be countered by GDDR5 on a PC. If that fraction is 1/2, then you're looking at 2.5GB vs 2GB of pure graphics memory, but with a much bigger pool of DDR3 backing up the graphics memory in the PC, which will help to offset the difference.
That much larger pool will only be a benefit in 64-bit versions of a game. 32-bit versions of a game will still be limited to roughly 2 GB of usable address space.
Regards,
SB
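To put rough numbers on the last two posts (the 50/50 split and the 8 GB of DDR3 are just assumptions for illustration; the ~2 GB figure is the default limit for a 32-bit Windows process unless it's built large-address-aware):

```python
import struct

# Console side: ~5 GB of unified GDDR5 available to the game (per the thread),
# with an *assumed* 50/50 split between "system" and graphics use.
console_pool_gb = 5.0
graphics_fraction = 0.5
print(f"console graphics budget : ~{console_pool_gb * graphics_fraction:.1f} GB GDDR5")

# PC side: a 2 GB GDDR5 card backed by a separate DDR3 pool (8 GB is an assumption).
pc_vram_gb, pc_ddr3_gb = 2.0, 8.0
print(f"PC graphics budget      : {pc_vram_gb:.1f} GB GDDR5 + {pc_ddr3_gb:.0f} GB DDR3 behind it")

# SB's caveat: a 32-bit game can't exploit that big DDR3 pool anyway. Its address
# space tops out at 4 GB, of which ~2 GB is usable by default on Windows.
pointer_bits = struct.calcsize("P") * 8  # 32 on a 32-bit build, 64 on 64-bit
print(f"this process is {pointer_bits}-bit; "
      f"a 32-bit process can address at most {2 ** 32 / 2 ** 30:.0f} GB")
```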
That is not really a surprise, as an HD 7700 can do it.

What's most impressive is that even with a 1.2 TF, 600 MHz 7850, they were able to run Crysis 3 at high settings at 1080p at ~30+ FPS. That really shows what a treat we're in for next gen. And I'm sure Crysis doesn't use anywhere near the 5GB of RAM that the XBO has, either. And that's a cruddy, unoptimized PC game.
Well I don't like that article:
1) Performance doesn't scale linearly with the CU count, and they used parts that have more CUs than both Durango and Orbis (rough numbers after this post).
2) The two GPUs they chose have 32 ROPs and a 256-bit bus.
3) "To achieve this, we wanted to ensure (as much as possible) that the rendering wouldn't be CPU or memory limited, so we utilised our existing PC test-bed, featuring a Core i7 3770K overclocked to 4.3GHz." That is imho a bad premise: the CPU may prove a bottleneck versus such a high-end desktop chip. Jaguar may not suck, but a Core i7 with its 8 threads clocked that high is not in the same ballpark.
They are pretty clear about why they chose what they did, etc., but it doesn't cut it. They try too hard to match the specs and in the end aren't able to do it.
I think a Bonaire vs HD 7870 comparison, the whole thing powered by an old Athlon X4 at low clocks, would paint a better picture, along with testing at various settings to try to outline when bandwidth becomes a concern, the impact of AA, resolution, etc., and the impact of the CPU bottleneck.
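For reference, the raw-throughput arithmetic behind objection 1 (GCN has 64 shader lanes per CU doing 2 FLOPs per clock; the CU counts and clocks below are the commonly reported pre-launch figures, so treat them as ballpark):

```python
def gcn_tflops(cus: int, clock_ghz: float) -> float:
    # GCN: 64 shader lanes per CU, 2 FLOPs (one fused multiply-add) per lane per clock.
    return cus * 64 * 2 * clock_ghz / 1000.0

# Commonly reported CU counts and clocks (ballpark, pre-launch figures).
parts = [
    ("HD 7850 @ 600 MHz (DF's downclocked test card)", 16, 0.600),
    ("HD 7870 @ stock 1000 MHz",                       20, 1.000),
    ("Durango / Xbox One (12 CUs @ ~800 MHz)",         12, 0.800),
    ("Orbis / PS4 (18 CUs @ 800 MHz)",                 18, 0.800),
]

for name, cus, clk in parts:
    print(f"{name}: {gcn_tflops(cus, clk):.2f} TFLOPS")
```

Even before bandwidth or ROP differences come into it, the PC stand-ins start out with more CUs (and, in the 7870's case, a higher clock) than either console chip, which is exactly the point of objection 1.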