If that's true (smaller than the original PS4), then it has to be a smaller node. It also probably means that it's not pushing the new node very much in terms of die size, so a 2X GPU would be within reason.
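To put rough numbers on the "2X GPU" idea, here's a back-of-the-envelope sketch. The stock PS4 figures (18 CUs at 800 MHz) are public; the doubled CU count at the same clock is purely an assumption for illustration:

```python
# Rough GCN compute throughput: FLOPS = CUs * 64 shader lanes * 2 ops per FMA * clock.
def gcn_tflops(compute_units: int, clock_ghz: float) -> float:
    return compute_units * 64 * 2 * clock_ghz / 1000.0

ps4 = gcn_tflops(18, 0.8)      # stock PS4: 18 CUs @ 800 MHz -> ~1.84 TF
doubled = gcn_tflops(36, 0.8)  # hypothetical 2X CU count at the same clock
print(f"PS4: {ps4:.2f} TF, doubled GPU: {doubled:.2f} TF")
```

Doubling the CU count at the same clock lands right around the 3.6 TF figure floated later in the thread.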
Will it run Crysis?
> Based on his previous posts, a $499 price should mean an improved CPU as well. But how do you significantly improve the CPU with strict hardware BC compliance?

They could move to Puma [the successor to Jaguar], which would run at low clocks by default [to match the PS4's performance level as closely as possible], with the option for devs to activate higher clocks. Variable clocks are nothing new, especially on modern x86 [my 4 GHz desktop processor runs at 800 MHz most of the time]. I presume this wouldn't offer a truly significant CPU boost, but even 20-40% would be good.
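To illustrate what that variable-clock BC story could look like, here's a minimal sketch; only the 1.6 GHz Jaguar base clock is the real PS4 number, the boost figure is a made-up assumption:

```python
# Illustrative only: legacy PS4 titles keep the stock clock so timing-sensitive
# code behaves as before, while updated titles can opt into the higher clock.
JAGUAR_BASE_GHZ = 1.6  # stock PS4 CPU clock (public figure)
PUMA_BOOST_GHZ = 2.1   # hypothetical Puma-class boost clock (assumption)

def cpu_clock_ghz(title_opts_in: bool) -> float:
    """Pick the CPU clock for a title based on whether it opted into boost."""
    return PUMA_BOOST_GHZ if title_opts_in else JAGUAR_BASE_GHZ

print(cpu_clock_ghz(False))  # legacy title: 1.6 GHz
print(cpu_clock_ghz(True))   # patched title: 2.1 GHz (~31% uplift)
```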
> Very big overclock? I don't see other possibilities. 2 CPUs?

Unless all software is written in a scalable threading manner that doesn't make any assumptions about the hardware implementation, core count changes strike me as unrealistic. Is something like 2.4 GHz possible? A 50% CPU clock increase would be significant, compatible, and wouldn't require changes anywhere. Alternatively, 8 cores that are just better.
$500 is a mistake.
I played through this entire game on PS3, at 720p... despite owning a beefy gaming rig at the time (I'd already played it on PC).
I'm not sure that's ever been the claim though, has it?
I read the article. I also read Carmack's tweets about how a single GPU in VR has to deal with a lot of overhead and latency when switching between the two different viewpoints for the same frame, which a dual-GPU system avoids (all it has to do is "wait" for both frames to finish, which usually isn't long if the GPUs are identical). And I read the presentation slides on AMD's LiquidVR and NVIDIA's GameWorks VR.
I've also tested SteamVR myself: a single 290X gets a mediocre score, while two 290Xs with multi-GPU enabled get a perfect score.
It all points to dual GPUs being excellent for VR, where you can achieve at least twice the complexity per frame at the same latency.
Then there are all the VR devkits being shipped with dual-GPU solutions.
After all of this, how someone could still think dual-GPU is somehow worse for VR is beyond my comprehension.
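To make the latency argument concrete, here's a toy frame-time model; every number in it is invented for illustration, not measured:

```python
# Toy model: a single GPU renders both eye views back to back (plus the
# viewpoint-switch overhead Carmack describes), while two identical GPUs
# render one eye each in parallel and only pay a small sync cost at the end.
EYE_RENDER_MS = 8.0    # assumed cost of one eye view at fixed scene complexity
VIEW_SWITCH_MS = 1.0   # assumed overhead to retarget one GPU between viewpoints
SYNC_MS = 0.5          # assumed cost of waiting for the slower of the two GPUs

single_gpu_ms = 2 * EYE_RENDER_MS + VIEW_SWITCH_MS  # 17.0 ms
dual_gpu_ms = EYE_RENDER_MS + SYNC_MS               # 8.5 ms

print(f"single GPU: {single_gpu_ms:.1f} ms, dual GPU: {dual_gpu_ms:.1f} ms")
# Within a 90 Hz budget (~11.1 ms), the dual-GPU path leaves headroom to
# roughly double per-eye complexity, which is the claim above.
```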
> I could imagine a situation where this is happening from both Sony and Microsoft, because both discovered it would be cheaper to commission new 14nm APU tech from AMD than to have their current architectures shrunk...

This makes no sense. Porting to a new node is relatively easy. It just takes money and time, and far less of both than creating a new architecture.
> This makes no sense. Porting to a new node is relatively easy. It just takes money and time, and far less of both than creating a new architecture.

The time and money for the new architecture on a new node has already been invested, though (presuming a PC technology derivative, like the PS4).
> Damn. A console smaller than the PS4 with 3.6+ TFLOPS would be enticing...

I wonder if the smaller size could point towards it using HBM2 memory. That would cut down the PCB size appreciably and help explain the supposed $499 price.
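For what it's worth, the bandwidth side of the HBM2 idea checks out on paper. A quick peak-bandwidth comparison (the PS4's GDDR5 setup is public; the HBM2 stack figures are from the announced spec):

```python
# Peak memory bandwidth in GB/s: bus width (bits) * per-pin data rate (Gbps) / 8.
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

ps4_gddr5 = peak_bandwidth_gbs(256, 5.5)    # PS4: 256-bit GDDR5 @ 5.5 Gbps -> 176 GB/s
hbm2_stack = peak_bandwidth_gbs(1024, 2.0)  # one HBM2 stack @ 2 Gbps/pin -> 256 GB/s

print(f"PS4 GDDR5: {ps4_gddr5:.0f} GB/s, single HBM2 stack: {hbm2_stack:.0f} GB/s")
```

And since a single stack on an interposer would replace the PS4's sixteen GDDR5 packages, the PCB-size argument holds up too.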
Would a developer with access to a development kit really be in the know about how much it'll cost? I would have thought that only Sony would have that information.