PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

IIRC, they did have a line of Sony TVs with built-in PS2 a long time ago, though it never did sell much.
The market has changed. Pretty much every new TV has media capabilities nowadays, and the better ones have "apps"; some even have Bluetooth.
The PS2 had wired controllers; the PS3 uses standard Bluetooth for its controllers and USB for cameras, and has a market for downloadable titles.

So it would be a natural fit, IMHO, if Sony just used the PS3 hardware for the TV functionality that's expected nowadays anyway.
(Or at least planned to take that step ASAP with PS4 hardware.)
 
IIRC, they did have a line of Sony TVs with built-in PS2 a long time ago, though it never did sell much.

Generally speaking, I like to keep my electronics separate. It really sucks to have to deal with TV downtime because the game system is broken, or vice versa.
 
The Vgleaks articles about Orbis had, over time, the downgrade happening before the capacity went up.
What's potentially interesting is that no speed grade above 5.0 Gbps has a 1.35 V option until the introduction of the 5.5 Gbps chips.
Maybe there's a corner case where 5.5 Gbps at 1.35 V somehow yields better results than 6.0 Gbps at 1.5 V at 4 Gb density, but the latter speed and voltage at 2 Gb density would have been more established.

So what's the new magic number now if there's one other than 176?
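Incidentally, 176 falls straight out of the GDDR5 bandwidth arithmetic: bus width times per-pin data rate, divided by 8. A quick Python sketch (the 256-bit bus width is the rumoured figure, not a confirmed spec):

# GDDR5 peak bandwidth: bus width (bits) x per-pin rate (Gbps) / 8 bits per byte
def gddr5_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8

for rate in (5.0, 5.5, 6.0):  # common GDDR5 speed grades
    print(f"256-bit @ {rate} Gbps -> {gddr5_bandwidth_gbs(256, rate):.0f} GB/s")
# 256-bit @ 5.0 Gbps -> 160 GB/s
# 256-bit @ 5.5 Gbps -> 176 GB/s  <- the old magic number
# 256-bit @ 6.0 Gbps -> 192 GB/s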
 
Joking aside... I never understood why Sony didn't go with integrating cost-reduced versions of their consoles into their smart TV line.
A decently big TV should be capable of dealing with 150 W of heat, and the "smart TVs" could just use the console hardware for their TV and media duties (there's at least a rather large overlap).
I don't see why there wouldn't be a market for a TV with the PS3's PSN titles as "apps".

I can see this happening if, or when, they can get Gaikai to run for the majority of broadband customers. If your new uber-TV had a PS4 (or a Vita TV for the lower end) built in and was capable of streaming any PS-era game directly to it, or from there to other connected devices like a phone, tablet, Vita, or another TV, I could see them gaining quite a large chunk of the consumer space quite rapidly.

And with all the sounds they've been making, it would seem that they aren't too far off making something like that happen. Just not in a package at a price point that makes it a commodity rather than a luxury.

They've got the tech, the experience, and the expertise. But have they got the cojones?
 
The last bit of info we've seen leaked is a downgrade of uncore bandwidth and a reduction in GDDR5 speed.

hm... how is the uncore bandwidth determined? I'm not particularly familiar with Onion's bus width & the clocks it's associated with (or not).
 
hm... how is the uncore bandwidth determined? I'm not particularly familiar with Onion's bus width & the clocks it's associated with (or not).

The Onion and Garlic buses were both downgraded; those are what I was referring to as uncore bandwidth. They are at least two of the three links I was considering.

The CPU bandwidth is rather unhelpfully listed as <20 GB/s in both cases. The Onion bus would be more closely linked to that value than Garlic, and as a percentage of bandwidth it dropped more.
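For anyone wondering how a link's bandwidth falls out of its width and clock, it's just width times frequency per direction. A sketch (the example width and clock are illustrative assumptions, not leaked Orbis figures):

# Peak bandwidth of a parallel link: (width in bits / 8) x clock in GHz = GB/s per direction.
# The example numbers are illustrative assumptions, not leaked Orbis figures.
def link_bandwidth_gbs(width_bits, clock_ghz):
    return width_bits / 8 * clock_ghz

# e.g. a hypothetical 256-bit on-chip link running at 800 MHz:
print(link_bandwidth_gbs(256, 0.8))  # 25.6 GB/s per direction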
 
I can see this happening if, or when, they can get Gaikai to run for the majority of broadband customers. If your new uber-TV had a PS4 (or a Vita TV for the lower end) built in and was capable of streaming any PS-era game directly to it, or from there to other connected devices like a phone, tablet, Vita, or another TV, I could see them gaining quite a large chunk of the consumer space quite rapidly.
I was thinking of local play primarily. Something like apps is coming to every big brand anyway.
And with all the sounds they've been making, it would seem that they aren't too far off making something like that happen. Just not in a package at a price point that makes it a commodity rather than a luxury.
The PS Vita TV is already announced at $100, and I can't think the PS3 chipset would be more expensive, especially since the TV needs similar components anyway. The rather big volume of PS3 chips would probably compare favourably against similar solutions. So I'd expect less than a $100 difference over far worse-performing sets, with tons of exclusive quality games on top. The few free "games" on my LG are rather disastrous, unacceptably slow 2D affairs.
They've got the tech, the experience, and the expertise. But have they got the cojones?
Sometimes it appears to me that Sony is thinking too much with those instead of using common sense.
 
The market has changed. Pretty much every new TV has media capabilities nowadays, and the better ones have "apps"; some even have Bluetooth.
The PS2 had wired controllers; the PS3 uses standard Bluetooth for its controllers and USB for cameras, and has a market for downloadable titles.

I think it would be quite painful integrating the TV OS part with the game OS.
 
What was it then - was it that the battery was rated at 40W max output?
 
Isn't the unified memory on the consoles something of a performance detractor in some cases?
Besides the GDDR5 splitting its bandwidth between the CPU and GPU, wouldn't there be instances where either the CPU or the GPU has to stall its memory reads because the other is reading from memory? Same with writes: if one processing unit is writing to memory and the other needs to as well, it would have to stall and wait.

Since the Jaguar CPU has so little L2 cache, wouldn't there be many cache misses, creating countless main-memory reads? (A rough estimate of what that traffic could amount to is sketched below.)

I specifically want to talk about memory bandwidth and read/write stalling, rather than the potential benefits in OTHER areas that we've all read and talked about HSA and hUMA potentially providing.
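To put a rough number on how much main-memory traffic CPU cache misses could generate, here's a toy estimate in Python. Every parameter is an illustrative assumption (the miss rate especially), not an Orbis spec, but with these values you land in the same ballpark as the leaked <20 GB/s CPU figure:

# Toy estimate of CPU main-memory traffic generated by cache misses.
# Every parameter is an illustrative assumption, not an Orbis spec.
cores = 8
clock_ghz = 1.6           # assumed Jaguar core clock
line_bytes = 64           # typical cache line size
misses_per_kcycle = 20    # assumed last-level misses per 1000 cycles, per core

traffic_gbs = cores * clock_ghz * (misses_per_kcycle / 1000) * line_bytes
print(f"~{traffic_gbs:.1f} GB/s")  # ~16.4 GB/s with these assumptions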
 
Some amount of interference would happen.
The GPU doesn't seem like it would be as severely affected. The numbers for the CPU block's bandwidth (<20 GB/s) are a very small fraction of the whole, so the CPU is physically unable to block the GPU from most of the bandwidth, and the cache subsystem is designed to keep the CPU from getting anywhere near that figure most of the time.
The ratio of GPU to CPU bandwidth is such that the bus should manage decently even if the CPUs were actively trying to interfere with the GPU. And the GPU is able to tolerate a significant amount of latency, so that doesn't seem like a likely problem.

The CPU might be a bigger loser here, depending on how effectively the memory controllers can balance the latency requirements of the CPUs versus the sheer volume of GPU accesses.
This wouldn't be unique to Orbis.
APUs in general have tended to have noticeably worse memory latency, with the earliest ones having pretty terrible numbers.
Certain numbers from an unspecified competitor, which I can't use in a versus comparison with Orbis, show it's not a GDDR5 or single-pool problem.
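To make the "small fraction" point concrete, the share arithmetic is trivial (using the rumoured 176 GB/s total and the leaked <20 GB/s CPU cap):

total_gbs = 176.0    # rumoured GDDR5 peak bandwidth
cpu_cap_gbs = 20.0   # leaked upper bound for the CPU block

print(f"CPU worst case: {cpu_cap_gbs / total_gbs:.1%} of the total")  # 11.4%
print(f"GPU floor: {1 - cpu_cap_gbs / total_gbs:.1%} of the total")   # 88.6%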
 
Certain numbers from an unspecified competitor, which I can't use in a versus comparison with Orbis, show it's not a GDDR5 or single-pool problem.
:D Referencing other hardware for comparison of how things work is fine; it's the performance comparison and "which is better" that goes nowhere. In your case it'd be fine to quote XB1 numbers if they help illustrate the point, and the people responding who go OT and argue about it being better or worse would be the ones getting serious stick.
 
The Vgleaks numbers for remote L2 and L1 hits are ~100 and ~120 cycles, respectively.
I don't expect DRAM access to beat those, and benches of Kabini show latencies above 130 clock cycles.
That was a TechReport cache-and-memory bench whose Y axis used powers-of-two increments, so the real figure could be significantly higher.

I've run across AIDA64 benches claiming ~130 ns latencies for Kabini, but I'm not sure about those.

Either way, the latencies are far above those of the best desktop chips, and seem to be pretty bad even relative to some of the more mediocre APUs.
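For comparing those two figures, cycles and nanoseconds convert through the core clock. A one-liner (the 1.5 GHz Kabini clock here is an assumption; actual parts span roughly 1.0 to 2.0 GHz):

def ns_to_cycles(latency_ns, clock_ghz):
    # cycles = time x clock frequency (1 ns x 1 GHz = 1 cycle)
    return latency_ns * clock_ghz

# The claimed 130 ns AIDA64 figure at an assumed 1.5 GHz core clock:
print(ns_to_cycles(130, 1.5))  # 195.0 cycles, well above the ~130-cycle bench number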
 
The Vgleaks numbers for remote L2 and L1 hits are ~100 and ~120 cycles, respectively.
I don't expect DRAM access to beat those, and benches of Kabini show latencies above 130 clock cycles.
Hasn't MS said it would be 140+ cycles to memory? Anyway, it's in the same ballpark, and it's a relatively high latency; that's the point here.
 