PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

Status
Not open for further replies.
ZFS makes sense when a lot of RAM is dedicated to it. I don't know whether there are benefits when RAM is limited.

Maybe tuning can reduce the amount needed (the rule of thumb is 1GB per 1TB, for all features including deduplication).
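As a rough illustration of that rule of thumb (a commonly quoted sizing guide, not an official ZFS requirement), the arithmetic is trivial; the function name and the console drive size below are my own assumptions:

```python
def zfs_ram_rule_of_thumb_gb(pool_tb: float) -> float:
    """Commonly quoted ZFS sizing: roughly 1 GB of RAM per 1 TB of pool,
    for all features including deduplication."""
    return pool_tb * 1.0

# A hypothetical 500 GB console drive (0.5 TB) would suggest ~0.5 GB
# of RAM for ZFS alone -- a big slice of a limited-RAM system.
print(zfs_ram_rule_of_thumb_gb(0.5))  # prints 0.5
```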

The most interesting case for me is the transparent compression feature, configured to zlib with hardware offloading like in the PS2.

Even in software, FreeBSD 9.1 has LZ4, which is at least 50% faster than LZJB.
http://wiki.illumos.org/display/illumos/LZ4+Compression
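LZ4 itself isn't in Python's standard library, but the speed-versus-ratio trade-off the link discusses can be sketched with stdlib zlib: lower levels compress faster at a worse ratio, which is the same dial a filesystem turns when it swaps LZJB for LZ4. The sample data below is invented purely for illustration:

```python
import time
import zlib

# Invented, repetitive sample data (compresses well, like many game assets).
data = b"position:1.0,2.0,3.0;normal:0.0,1.0,0.0;" * 4000

for level in (1, 6, 9):
    start = time.perf_counter()
    packed = zlib.compress(data, level)
    elapsed_ms = (time.perf_counter() - start) * 1000
    # The round trip must be lossless at every level.
    assert zlib.decompress(packed) == data
    print(f"level {level}: {len(data)} -> {len(packed)} bytes in {elapsed_ms:.2f} ms")
```

On repetitive data like this, level 1 finishes fastest while level 9 squeezes out the smallest output; a transparent-compression filesystem makes the same speed-for-ratio choice on every write.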
 
They might be taking the 7GB figure from the previous rumours.

Probably.

Personally I think Sony will put a 6GB limit on gaming; it gives them the flexibility to expand the OS functions over time while still giving a 1GB advantage over the competition... it would be the best-of-both-worlds solution.
 
With 1GB for the OS they would have already doubled their initial reservation.
Also, MS can still free 1GB, the one that is now reserved for future updates, so the PS4 would have no extra RAM advantage if Sony too had chosen 2GB.

The OS could truly be at 1GB, because this is what devs have been murmuring lately, but Sony might have kept it at 512MB if they felt that was already good.
Sony has already decided the OS reservation and devs know about it, so sooner or later we will know about it.
 
It's really the difference in online policies that they are complaining about. I actually think they could have PS4/PC/Mac all playing together, with just the Xbox One version in a silo, if they wanted, because Sony has always been so flexible in the past. But they may also have tweaked the gameplay in the console versions such that having PS4 and PC players together wouldn't work, and that results in PS4, Xbox One, and PC/Mac all being on different servers.
 
There's the RAM the OS actually uses, and then there's the RAM any extra features might use on top of that, whether it's for caching of app data for fast app switching, or allowing you to do true multi-tasking à la Xbox One (game + app simultaneously); the sky is the limit.

As I've already said before, once you give developers RAM, you can never get it back.
 
As I've already said before, once you give developers RAM, you can never get it back.
Right, but Microsoft have two sets of developers to cater for now: the game developers and the app developers, like Netflix. It's not simply a case of reducing the 'OS' overhead to suit themselves based on their own future plans; they will have third parties who develop apps based on that 3GB RAM virtual machine.

Once they deploy it, it's basically set in stone, unless they are willing to go cap in hand to every third party Xbox One app developer and plead for co-operation in reducing resources used for whatever app they are offering.
 
I have confirmation that all 18 CUs are identical and can all be used in exactly the same way.
The 14+4 thing was just a suggested workload split presented to developers.

So hopefully that'll put an end to all the speculation around that rumour.
 
Looks like first parties didn't want x86:

Having worked for years as a consultant for Sony Worldwide Studios, Cerny was one of the individuals who engaged with feedback developers provided for Sony's next console. The realisation that he should be more involved with the PlayStation 4 came to him after he spent his time off researching the 30-year history of the x86 CPU, after feedback from first-party game programmers said it shouldn't be used in the PS4.

http://www.ign.com/articles/2013/07/10/how-mark-cerny-got-the-playstation-4-lead-architect-job
 
I'd like to hear their reasoning. The early days of predicting the next-gen consoles didn't expect x86, IIRC. It comes with legacy bloat and, from that perspective, is inefficient for the transistor count; plus I guess the Intel tax increases the relative cost. However, regardless of the validity of those arguments, that doesn't affect the first parties. Was it just that they were used to PPC and didn't want another ISA to worry about?
 
Sony did shrink the PS3 OS.
I suppose MS can still do the same now or even later...but I digress.

1GB for the PS4 OS is not absurd IMO

I meant, you can give them 7GB of RAM, but you can't down the road say "Oh, we need one more GB from you so you only get 6GB now". The reverse (Sony giving developers more RAM through reducing their own footprint) can obviously happen, it doesn't affect previous games.
 
Or understanding Cell gave them a competitive advantage over third parties that they didn't want to lose. Maybe after investing so much time in learning such a hard architecture, going to a far easier one is demotivating for an engineer.
 
I think that was conventional wisdom, but then his research showed that the legacy bloat has been cut back significantly and is hardly an overhead anymore, to the point where Larrabee could work with Atom cores that were hardly less efficient than anything else on the market.

The actual presentation is out there on YouTube, and listening to his exact words is probably better than taking this summary from IGN. I listened through it, and I vaguely remember his comments being along those lines (but with extremely little technical detail).
 