Love_In_Rio
Veteran
Well, IMHO if there are any problems with beta kit performance, it is the 1.6 GHz CPU. First parties like Guerrilla may be struggling to offload their Cell workloads to a GPGPU config.
Cerny announced the secondary processor and mentioned the audio processor briefly in the Feb announcement. Sony won't be able to remove them quietly.
Besides, they really need a hardware security mechanism. The secondary processor may come in handy in that regard.
Their executives mentioned that $399 has always been their target. It doesn't make sense to announce the price and then remove the parts.
This is just another case of kids making up extraordinary shit for no reason other than to see the waves they just made spread over the internet. Case in point: xbone downclock rumor.
I award this rumor a 1; that's all it's worth.
That and Sony's failure to implement the security algorithm correctly. Had Sony not screwed that up, the difficulty of breaking PS3's security would have been far higher. I dare say a modern console that takes everything learnt from the successes and failures of last gen should be robust enough to withstand 10 years of hacking (at least at the consumer level, so that complex hardware solutions are needed to circumvent it).
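For context, the widely reported flaw was ECDSA nonce reuse: the firmware signatures used the same per-signature random value k, and two such signatures are enough to recover the private key. The relations below are just the standard ECDSA equations, nothing PS3-specific; z_1, z_2 are the message hashes, (r, s_1) and (r, s_2) the two signatures sharing the nonce, n the curve order, and d the private signing key.

```latex
s_i = k^{-1}(z_i + r d) \bmod n, \quad i = 1, 2
\;\Rightarrow\; k = (z_1 - z_2)(s_1 - s_2)^{-1} \bmod n,
\qquad d = r^{-1}(s_1 k - z_1) \bmod n
```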
What puzzles me is that these hacks started by using games with bugs, so hackers could use some exploit to inject code. They should just run games in a sandbox or as a process in a less privileged mode.
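To put the "less privileged process" idea in plain OS terms, here is a minimal POSIX sketch: a privileged launcher forks a child, drops to an unprivileged account, and only then runs the untrusted game code, so an exploited game bug cannot directly touch the privileged side. The "nobody" account and the game path are placeholders, and this is only a generic illustration, not how Sony's hypervisor actually partitions things.

```c
#include <pwd.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical low-privilege account to run game code under. */
    struct passwd *pw = getpwnam("nobody");
    if (!pw) { perror("getpwnam"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: drop group, then user privileges BEFORE running untrusted code. */
        if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
            perror("drop privileges");
            _exit(1);
        }
        char *argv[] = { "./game_binary", NULL };  /* placeholder path */
        execv(argv[0], argv);
        perror("execv");
        _exit(1);
    }

    /* Parent (the privileged launcher) just waits for the game to exit. */
    int status = 0;
    waitpid(pid, &status, 0);
    return 0;
}
```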
They fixed their holes, but couldn't undo the damage.
Well they are only human
Doesn't the widespread PS3 hack originate from a stolen USB service dongle?
It's a bit like PSP hacking, where it got started from a service battery (jig battery).
...A lot of PS4 games seem to be struggling to hit 30 FPS, enough to make me wonder "what's wrong?". After all, this was supposed to be the gazelle hardware, not the other way around...
Speaking of this, can any developers shed light on the impact of GDDR5 latency vs. something like DDR3? Or to bring it back to last gen, GDDR3 or RAMBUS?
The bandwidth of GDDR5 is obvious, but for smaller bits when the system isn't moving large chunks of data like textures, how much can the latency hold the system up? Would these smaller bits not fit well in the 2MB L2 cache?
It won't be a problem, at least no more than the high-latency DMA transfers the SPEs needed to fill their local memory this gen.
Phrased another way:
What kind of performance bump could be expected by cutting the GDDR5 latency in half?
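As a rough back-of-envelope (the miss latency here is an assumed round number, not a measured PS4 figure), halving the memory latency roughly halves the number of CPU cycles a core sits stalled on a full cache miss:

```c
#include <stdio.h>

int main(void)
{
    const double cpu_ghz = 1.6;                /* CPU clock from the discussion above */
    const double miss_ns[] = { 150.0, 75.0 };  /* assumed full miss latency, then halved */

    for (int i = 0; i < 2; ++i) {
        /* ns * (cycles per ns) = stalled cycles per miss */
        double stall_cycles = miss_ns[i] * cpu_ghz;
        printf("%.0f ns miss  ->  ~%.0f stalled CPU cycles\n", miss_ns[i], stall_cycles);
    }
    return 0;
}
```

Whether that buys real frame time then depends on how many of those misses the out-of-order hardware, prefetchers, and GPU latency hiding can already cover.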
Does it really work that way? You would presume that you stream data from GDDR5 to the CUs for processing; the latency only matters for the request to reach the GDDR5 and the stream to get started, but once the stream is going it runs at full speed and the latency is no longer relevant?
So it would only matter in the unlikely case where you would randomly access individual bits of memory from the GDDR5 pool, doing no optimisation and making no use of any cache lines, etc.?
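A quick way to see that difference on any PC (an illustrative sketch only; the buffer size and single-cycle permutation are arbitrary choices and nothing here is tuned for the PS4's memory system): a sequential pass over a large buffer is mostly bandwidth-bound because the prefetchers hide the latency, while a dependent pointer chase pays close to the full miss latency on every single load.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (32u * 1024u * 1024u)   /* 32M entries (~256 MB), far larger than a 2 MB L2 */

static double seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void)
{
    size_t *next = malloc(N * sizeof *next);
    if (!next) return 1;

    /* Sattolo's algorithm: build a single-cycle random permutation so each
     * load in the chase depends on the previous one and visits all entries. */
    for (size_t i = 0; i < N; ++i) next[i] = i;
    for (size_t i = N - 1; i > 0; --i) {
        size_t j = (size_t)rand() % i;
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    /* Sequential streaming pass: hardware prefetch hides most of the latency. */
    double t0 = seconds();
    size_t sum = 0;
    for (size_t i = 0; i < N; ++i) sum += next[i];
    double t1 = seconds();

    /* Dependent random walk: every access waits for the previous miss. */
    size_t p = 0;
    for (size_t i = 0; i < N; ++i) p = next[p];
    double t2 = seconds();

    printf("streaming: %.3f s   pointer chase: %.3f s   (checks: %zu %zu)\n",
           t1 - t0, t2 - t1, sum, p);
    free(next);
    return 0;
}
```

If the pointer chase runs many times slower than the streaming pass, that gap is essentially the latency cost the posts above are asking about; well-laid-out game data mostly lives on the streaming side of it.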