He meant slow eDRAM access in the Xbox 360 case, right? That quote doesn't talk about slow ESRAM access on the Xbone?
The question is in the present tense, about the Xbox One; after the question there is context, in the past tense, about the 360.
The answer is in the present tense; I think it's unambiguously about the Xbox One.
I don't know if both systems will be bandwidth-bound in similar ways either. They are very different.
That must be the reason the Hot Chips presentation had that narrow arrow/line connecting the eSRAM to the CPU, the narrow line being suggestive of low bandwidth between the two.
It is just a strong suspicion, given that the two platforms are broadly similar.
It is true that their memory setups are deeply different, but the end results in terms of peak bandwidth are quite similar.
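For reference, a rough back-of-the-envelope with the publicly quoted figures (the exact eSRAM number varies depending on whether you count read and write together, so treat it as an assumption):

```python
# Rough peak-bandwidth comparison using publicly quoted figures (approximate).
# bandwidth (GB/s) = bus width in bytes * transfer rate in GT/s

xb1_ddr3  = (256 / 8) * 2.133   # 256-bit DDR3-2133      -> ~68 GB/s
xb1_esram = 109.0               # quoted eSRAM figure per direction (~102-109 GB/s)
ps4_gddr5 = (256 / 8) * 5.5     # 256-bit GDDR5 @ 5.5 GT/s -> 176 GB/s

print(f"XB1 DDR3 + eSRAM: {xb1_ddr3 + xb1_esram:.0f} GB/s (split across two pools)")
print(f"PS4 GDDR5:        {ps4_gddr5:.0f} GB/s (one unified pool)")
```

The totals land in the same ballpark, but the XB1 number is only reachable if the working set is actually split across the two pools, which is exactly the "extra care" mentioned below.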
Would I be wrong to suggest that slow access from the CPU to the eSRAM may limit the heterogeneous computing potential of the Xbox One?
That was my first reaction, but HSA isn't dependent on a 32 MB scratchpad. The CPU and GPU still share DDR3 and cache interactions; the 32 MB is just outside that memory model, a pool stuck on the end, as it were. It doesn't mean the CPU cannot use the low-latency SRAM, though it's there primarily for the GPU. It'd be really nice to get the latency figures, BTW. The ESRAM carries the "low latency" label, but it's unqualified. It'd be good to know the implementation of the SRAM.
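To put the 32 MB in perspective (my own back-of-the-envelope, not figures from the post above), a 1080p render target is roughly 8 MB, so only a handful of surfaces fit at once:

```python
# How much render-target data fits in the 32 MB pool (rough estimate).
ESRAM_MB = 32

def rt_size_mb(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel / (1024 * 1024)

color_1080p = rt_size_mb(1920, 1080, 4)   # RGBA8 -> ~7.9 MB
depth_1080p = rt_size_mb(1920, 1080, 4)   # D24S8 -> ~7.9 MB
print(f"1080p color + depth: {color_1080p + depth_1080p:.1f} MB of {ESRAM_MB} MB")
# A fatter G-buffer (e.g. 4 x RGBA8 targets plus depth) already overflows 32 MB,
# so developers have to pick which surfaces live there and shuffle the rest via DMA.
```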
Both will benefit from data locality.
They are very different because of eSRAM nuances, GPU ACEs, queue and cache tweaks (which may affect efficiency *if* you have parallel compute jobs on the GPU), and raw GPU power.
Peak bandwidth alone doesn't tell the complete story because extra care is needed on the XB1 side.
Both have DMA engines, audio blocks, and compression blocks. The XB1 devotes more computing resources to audio processing (e.g., for Kinect), and uses eSRAM to let developers punch above the DDR3's and the GPU's lower weight class (raw power). Sony focused on a more powerful GPU and general GDDR5 performance from the get-go, then decided how to specialize it for assorted tasks in parallel.
In my view, for the same scenario, they may/will behave differently.
e.g., in KZSF, the developers ported their "old" engine to PS4 and then moved as many things to the GPU as possible, freeing up the CPU. We can see how the game runs today as more tasks are shifted around. If they were to port the game to the XB1, their first design focus would have to be on eSRAM usage immediately to extract performance. They may not be able to move as many things from the CPU to the GPU to run in parallel (fewer ACEs and CUs). They would instead focus on finishing the current stage ASAP while prepping the eSRAM and GPU for the next stage at the same time, as sketched below (hence the higher clock helps; the Move Engines should be helpful here too?). In the meantime, since there are fewer ACEs on the GPU, the CPU may need to take on more compute scheduling and non-audio tasks.
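To make the "finish the current stage, prep the next one" idea concrete, here's a minimal sketch of that kind of double-buffered staging. Every name in it is a made-up placeholder for illustration, not a real SDK call:

```python
# Hypothetical sketch of double-buffered eSRAM staging: overlap the DMA copy of
# the NEXT stage's data with GPU work on the CURRENT stage. All names below are
# placeholders for illustration only, not any real console API.

def dma_copy_async(data, dest):           # stand-in for a Move Engine copy
    print(f"DMA: staging {data} into {dest}")
    return ("copy", data)

def submit_gpu_stage(stage, pool_half):   # stand-in for submitting GPU work
    print(f"GPU: running {stage} out of {pool_half}")
    return ("gpu", stage)

def wait_for(job):                        # stand-in for a fence / sync point
    pass

def render_frame(stages):
    halves = ["esram_half_0", "esram_half_1"]      # split the 32 MB pool in two
    copy = dma_copy_async(stages[0], halves[0])
    for i, stage in enumerate(stages):
        wait_for(copy)                             # this stage's data is resident
        work = submit_gpu_stage(stage, halves[i % 2])
        if i + 1 < len(stages):
            # start filling the *other* half for the next stage right away
            copy = dma_copy_async(stages[i + 1], halves[(i + 1) % 2])
        wait_for(work)

render_frame(["shadow_maps", "g_buffer", "lighting", "post"])
```

The point of the shape is simply that the copy for stage i+1 overlaps the GPU work for stage i; a higher GPU clock shortens each stage, and the DMA/Move Engines keep the copies off the GPU.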
This scenario may not be universally true, but it shows the different design concerns and stress points for the two platforms. It would be difficult to say the CPU and bandwidth _will_ be the problem. In the quest for performance, in the XB1 case the eSRAM may be the point of contention; in the PS4 case, managing the GPU's internal state may be the source of headaches as developers try to fit more stuff in there together.
There are subtle details on both the XB1 and PS4, but their basic approaches are still different. The bottlenecks and incremental improvement numbers in the latest DF articles were measured on the XB1 only; they may not apply "as is" to the PS4 too.
I wonder if people will still argue that 1080p doesn't matter.
Of course you can blow up a snip of any game and prove side by side that one looks "sharper", but you are making a comparison where, on the PC, you can just add horsepower to keep driving up the resolution. On consoles you have to show the shot at 900p and at 1080p, but also show whether the framerate is as stable and the image quality is as good, to determine whether 1080p is worthwhile on a given title.
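For what it's worth, the raw pixel-count gap being argued over is simple arithmetic:

```python
# Pixel counts for the two resolutions being compared.
p900  = 1600 * 900     # 1,440,000 pixels
p1080 = 1920 * 1080    # 2,073,600 pixels
print(f"1080p has {p1080 / p900:.2f}x the pixels of 900p")   # ~1.44x
```

So 1080p pushes roughly 44% more pixels than 900p, which is exactly why the framerate and image-quality caveats above matter.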
They are not blown up; it's a side-by-side focused on the gun.
Sure, but all things being equal, 1080p clearly looks better. OK, but that's not even the point.