PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

Status
Not open for further replies.
No they went with x86 because it makes the most sense from a cost/performance perspective as well as easing games development.

If there is a PS5 at all then it will almost certainly use the cloud for BC.

So those who buy PS4 games at retail will be left out of playing them via the cloud?

I just think they are trying to make it PC-like. This has two advantages: x86 will keep evolving along with the PC, so Sony can use it again in future systems and provide PC-like backward compatibility, which will help them compete with Steam; and the system will be developer-friendly. Cloud tech has latency problems that fast action genres will want to avoid, and for those there should be a home console. The Walking Dead and Heavy Rain-like games, on the other hand, will do just fine in the cloud.

I have a PS3 and lots of retail games which I know I can't play on PS4 or via the cloud, as I don't want to purchase them digitally again, just like the PSP/UMD situation on the PS Vita. Lastly, I don't think future broadband speeds will be fast enough in my region. :rolleyes::cry:
 
So those who buy PS4 games at retail will be left out of playing them via the cloud?

No, the landscape will be very different by the time the next NEXT gen launches. I bet by then most organisations will themselves be operating from "the cloud", not just BC.
 
So those who buy PS4 games at retail will be left out of playing them via the cloud?
Why should they? Backwards compatibility is about letting you play the games you already bought. If you've bought them from the cloud, Future PSN will know; if you haven't, insert the disc and let Future PSN verify you are in possession of the game.

x86-64 is a sensible choice at this juncture. Going forward, it means those developing multiplats can leverage code across PC and PlayStation. And launching when it is, with many engines already running on multi-core x86-64, those engines can be made to run with relatively little effort, at least compared to porting across to another architecture.

Forsaking hardware backwards compatibility was the smartest thing Sony did from an engineering and cost perspective.
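The disc-based verification idea a couple of posts up could be sketched roughly like this (everything here is speculation: the function, the names, and the flow are all hypothetical, not a real PSN API):

```python
# Hypothetical sketch of how a future PSN could gate cloud BC: a digital
# purchase is already in the account's library; a retail copy is verified
# by the inserted disc. Purely illustrative, not a real service.
def may_stream(title, account_library, inserted_disc=None):
    """True if the account bought the title digitally or has its disc inserted."""
    if title in account_library:    # bought from the store: PSN already knows
        return True
    return inserted_disc == title   # retail: possession of the disc suffices

library = {"Heavy Rain"}
print(may_stream("Heavy Rain", library))                        # digital copy
print(may_stream("Gran Turismo 5", library))                    # no proof
print(may_stream("Gran Turismo 5", library, "Gran Turismo 5"))  # disc inserted
```

The borrowed-disc case discussed below falls out naturally: whoever currently holds the disc passes the check, same as lending a PS3 disc today.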
 
No, the landscape will be very different by the time the next NEXT gen launches. I bet by then most organisations will themselves be operating from "the cloud" not just BC.
Sony are selling all their regional HQs to move into the cloud. Kaz Hirai is digitising himself a la Tron. :yep2:
 
Why should they? Backwards compatibility is about letting you play the games you already bought. If you've bought them from the cloud, Future PSN will know; if you haven't, insert the disc and let Future PSN verify you are in possession of the game.

Let's say I borrow a retail disc game from my friend and then insert it. How will the future PSN know that my game is genuine and that I bought it with my own money?

If your method could be applied, then Sony would have tried it with the PSP and UMD discs, giving Vita owners the respective digital versions. ;)

Edit: isn't this off-topic discussion?
 
Let's say I borrow a retail disc game from my friend and then insert it. How will the future PSN know that my game is genuine and that I bought it with my own money?
Why should Sony care? Right now, if you borrow a PS3 game on disc from a friend, you can play it and your friend cannot. The same would be true.
 
You mean a free PSN copy, download but not cloud streamed.

Many believe that not every game can be played in the cloud due to latency issues. And cloud gaming costs $$$ to run, so it may be safer to assume it will cost extra.
 
I imagine that'd open it to hacking. If there's any flash-based caching I expect it on the mobo (could be part of the drive controller) or drive.
 
Well, it wouldn't be alone, mind you. Not that pairing x86 with IBM tech would be easy.

It's just... the SPEs themselves are tiny, and they handled OS functionality in the PS3 too. Six of them would be enough to emulate the PS3, as one was dropped for yields and another was reserved for the OS.

Sony included a full PS1 CPU for sound in the PS2... why not do it again? Could make sense, no?

To a certain extent. Unfortunately, the GPGPU already has a video decoder. And compared to B/C, porting PC multiplatform games to PS4 has other advantages. So the benefits are not so clear cut. For audio decoding and decompression, they can probably find cheaper solutions. For security, they can roll their own on the custom ARM chip. It doesn't sound like a good investment.

*If* they are in the mood to throw in additional components, I would rather they use LightPeak/Thunderbolt for the PSEye's AUX port. Intel is supposed to have a cheaper controller for 2013. It would give them a lot more flexibility (e.g. users paying for B/C, stackable/chained components later on).

But at this moment, it is possible that the AUX port is based on FireWire. So don't expect anything unusual. ^_^

EDIT:
This is assuming there is no dedicated software running on the SPUs for the PSEye.
 
Clock speed is one of the few things that can be fiddled with without needing to respin silicon; it just depends on yields and manufacturing timelines.

A small overclock would be nice; ~1.8GHz seems doable. I will be disappointed if it ships at the stock 1.6GHz.

But this is coming from a person who spent 3+ hours yesterday overvolting my poor old HD 6850 GPU so it could reach a 1000MHz core clock :devilish:, and I only managed to get 990MHz :cry:.
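For reference, a back-of-the-envelope look at what such a clock bump would buy, assuming 8 Jaguar cores at 8 single-precision FLOPs per cycle each (rumoured figures, not confirmed for the final silicon):

```python
# Rough peak-throughput scaling with clock speed for an 8-core Jaguar CPU.
# Core count and FLOPs/cycle are rumoured values, not confirmed specs.
CORES = 8
FLOPS_PER_CYCLE = 8  # two 128-bit FP pipes: 4 adds + 4 muls per cycle (assumed)

def peak_gflops(clock_ghz):
    """Theoretical single-precision peak in GFLOPS at a given clock."""
    return CORES * FLOPS_PER_CYCLE * clock_ghz

stock = peak_gflops(1.6)
bumped = peak_gflops(1.8)
uplift = (bumped / stock - 1) * 100
print(f"1.6 GHz: {stock:.1f} GFLOPS, 1.8 GHz: {bumped:.1f} GFLOPS (+{uplift:.1f}%)")
```

Under those assumptions a 1.6 to 1.8 GHz bump is a flat 12.5% on theoretical peak, from about 102 to about 115 GFLOPS.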
 
One thing I would like to bring to the discussion: Kaveri supposedly will have a true unified address space, and it will probably include an on-chip shared scratchpad for that, like a shared L3 between CPU and GPU. Kabini is still based on virtual addressing, so a step down from Kaveri's HSA implementation.

Do you think the PS4's APU will have a unified address space and some kind of L3, or will the GPU have to bring CPU data from the CPU's L2 and the CPU from the GPU's L2?

Based on the fact that the PS4's APU was originally going to be Kaveri-based (Steamroller cores), it would be logical to bring some bits of that architecture to a Jaguar (Kabini) based APU; above all, a unified address space and a shared L3 would be a boost to GPGPU performance.

Which L3 size would be enough?

By the way, talking about Kaveri, look what our good friend repi has received to play with:

https://twitter.com/repi/status/306835055114866688
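The practical difference a unified address space makes can be shown with a toy sketch: with separate address spaces the CPU must copy data into the GPU's memory, while with a unified space both sides dereference the same buffer. Purely conceptual; real HSA coherence involves caches, TLBs and much more.

```python
# Toy illustration of separate vs unified address spaces.
# Not actual GPU code; lists stand in for memory regions.
cpu_data = list(range(1024))

# Separate address spaces: an explicit copy, doubling memory traffic.
gpu_copy = cpu_data[:]      # simulates a DMA transfer to GPU memory
assert gpu_copy is not cpu_data

# Unified address space: the "GPU" works on the very same buffer.
gpu_view = cpu_data         # just pass the pointer
gpu_view[0] = 42            # a GPU write...
assert cpu_data[0] == 42    # ...is immediately visible to the CPU
```

The copy in the first case is exactly the traffic a shared L3 and unified addressing would let the APU skip.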
 
AMD says that a unified address space is needed to make GPU computing much more efficient and much easier for the programmer.

Kabini is very weak compared to Kaveri, and it is much, much weaker than the PS4 APU. The question is: does Kabini need this feature, given that it will be used in systems that are most likely never going to use GPGPU at all? The PS4, on the other hand, pairs a 1.8 TeraFLOPS GPU with a ~100 GigaFLOPS CPU. Does the PS4 APU need this feature, given that it will be used in a system that most likely is going to use GPGPU all the time?

So yes, I expect the PS4 APU to have a unified address space for CPU and GPU.
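Those two throughput figures line up with the commonly rumoured configurations; a quick sanity check, assuming 18 GCN CUs at 800 MHz and 8 Jaguar cores at 1.6 GHz (rumoured values, not confirmed specs):

```python
# Sanity-check the quoted ~1.8 TFLOPS GPU and ~100 GFLOPS CPU figures.
# All hardware parameters here are rumoured values, not confirmed specs.
gpu_gflops = 18 * 64 * 2 * 0.8   # CUs x lanes x FMA(2 FLOPs) x GHz
cpu_gflops = 8 * 8 * 1.6         # cores x FLOPs/cycle x GHz
print(f"GPU: {gpu_gflops/1000:.2f} TFLOPS, CPU: {cpu_gflops:.1f} GFLOPS, "
      f"ratio ~{gpu_gflops/cpu_gflops:.0f}x")
```

Under those assumptions the GPU has roughly an 18x advantage in raw FLOPS, which is the imbalance that makes offloading compute to the GPU so attractive.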
 
AMD says that a unified address space is needed to make GPU computing much more efficient and much easier for the programmer.

Kabini is very weak compared to Kaveri, and it is much, much weaker than the PS4 APU. The question is: does Kabini need this feature, given that it will be used in systems that are most likely never going to use GPGPU at all? The PS4, on the other hand, pairs a 1.8 TeraFLOPS GPU with a ~100 GigaFLOPS CPU. Does the PS4 APU need this feature, given that it will be used in a system that most likely is going to use GPGPU all the time?

So yes, I expect the PS4 APU to have a unified address space for CPU and GPU.
And what would be the shared memory to crunch data from?
 
If that leaked diagram is real, the CPU and GPU exchange data via the process queues and many ring buffers. Those need to be fast and are typically done via shared physical memory. I took a quick look at the libGCM API on PS3. The command buffers are allocated by the CPU and accessible by the GPU.

There are also data caches for the processors to use, plus DMA to fetch data, presumably into the GPU and CPU caches.
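The command-buffer exchange described above boils down to a producer/consumer ring buffer in shared memory. A minimal sketch of the idea (the structure and names are illustrative, not the actual libGCM layout):

```python
# Minimal single-producer/single-consumer ring buffer, illustrating how a CPU
# could push commands that a GPU front-end later drains. Illustrative only;
# not the actual PS4/libGCM command-buffer layout.
class RingBuffer:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.write = 0  # producer (CPU) index
        self.read = 0   # consumer (GPU) index

    def push(self, cmd):
        """Producer side: returns False if the ring is full."""
        if self.write - self.read == self.capacity:
            return False
        self.buf[self.write % self.capacity] = cmd
        self.write += 1
        return True

    def pop(self):
        """Consumer side: returns None if the ring is empty."""
        if self.read == self.write:
            return None
        cmd = self.buf[self.read % self.capacity]
        self.read += 1
        return cmd

ring = RingBuffer(4)
for cmd in ["SET_STATE", "DRAW", "FLUSH"]:
    ring.push(cmd)
print(ring.pop())  # "SET_STATE": commands drain in submission order
```

With the ring living in shared physical memory, the only coordination the two processors need is the pair of read/write indices, which is why such queues are cheap and fast.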
 
If that leaked diagram is real, the CPU and GPU exchange data via the process queues and many ring buffers. Those need to be fast and are typically done via shared physical memory. I took a quick look at the libGCM API on PS3. The command buffers are allocated by the CPU and accessible by the GPU.

There are also data caches for the processors to use, plus DMA to fetch data, presumably into the GPU and CPU caches.

But isn't that already done in current GCN with its ring buffers and 2 ACEs?
 