Digital Foundry Article Technical Discussion Archive [2013]

He meant slow eDRAM access in the Xbox 360 case, right? That quote doesn't talk about slow ESRAM access on the Xbone?
 

He's saying that the CPU has no access to the eDRAM on the 360, but now the CPU does have access to the eSRAM on the X1; it's just very slow.
 
He meant slow eDRAM access in the Xbox 360 case, right? That quote doesn't talk about slow ESRAM access on the Xbone?
The question is in the present tense, about the Xbox One; the part after the question is context, in the past tense, about the 360.
The answer is in the present tense, so I think it's unambiguously about the Xbox One.
 

That must be the reason the Hot Chips presentation had that narrow arrow/line connecting the eSRAM to the CPU, narrow being suggestive of low bandwidth between the two.
 
I don't know if both systems will be bandwidth bound in similar ways too. They are very different.

It is just a strong suspicion, due to the fact that the two platforms are broadly similar.
It is true that their memory setups are deeply different, but the end results in terms of peak bandwidth are quite similar.
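
For reference, the approximate public peak figures being compared (2013 numbers; note that simply adding DDR3 and eSRAM bandwidth together is itself the simplification under debate in this thread):

```latex
% Approximate public 2013 peak-bandwidth figures (not measured values).
\begin{aligned}
\text{XB1 DDR3-2133, 256-bit:}  &\quad 2133\ \text{MT/s} \times 32\ \text{B} \approx 68.3\ \text{GB/s} \\
\text{XB1 eSRAM:}               &\quad \approx 109\ \text{GB/s per direction}\ (\sim 204\ \text{GB/s quoted read+write peak}) \\
\text{PS4 GDDR5-5500, 256-bit:} &\quad 5500\ \text{MT/s} \times 32\ \text{B} = 176\ \text{GB/s}
\end{aligned}
```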
 
That must be the reason the Hot Chips presentation had that narrow arrow/line connecting the eSRAM to the CPU, narrow being suggestive of low bandwidth between the two.

It's probably like how Cell can access the GDDR3 in the PS3, but only over a very slow connection, making it completely inadvisable. It's not working memory for the CPU in either case. If your CPU needs something from VRAM, have the GPU/Move Engines relocate it for you.
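
A minimal sketch of that relocation pattern, assuming a purely hypothetical copy-engine API (dma_queue_copy, dma_wait and everything else below are invented names, not calls from any real console SDK). The only point is that the CPU ends up reading a staged copy in ordinary DRAM rather than walking GPU memory over the slow link:

```c
#include <stddef.h>
#include <string.h>
#include <stdio.h>

/* Hypothetical copy-engine API (invented for illustration). On real hardware
 * this would hand the transfer to a Move Engine / GPU DMA unit and return
 * immediately; the stub below just does a synchronous memcpy. */
typedef struct { int done; } dma_fence;

static dma_fence dma_queue_copy(void *dst_dram, const void *src_vram, size_t bytes)
{
    memcpy(dst_dram, src_vram, bytes);   /* stand-in for an async DMA transfer */
    dma_fence f = { 1 };
    return f;
}

static void dma_wait(dma_fence f) { (void)f; /* stub: copy already finished */ }

/* What the post advises against: the CPU walking a buffer that lives in
 * GPU memory, paying the slow-bus latency on every cache-line fetch. */
static float sum_directly(const float *gpu_buffer, size_t n)
{
    float s = 0.0f;
    for (size_t i = 0; i < n; ++i) s += gpu_buffer[i];
    return s;
}

/* Preferred pattern: stage the data into ordinary DRAM first, then let the
 * CPU work out of its own cached, low-latency pool. */
static float sum_via_staging(const float *gpu_buffer, float *dram_staging, size_t n)
{
    dma_fence f = dma_queue_copy(dram_staging, gpu_buffer, n * sizeof(float));
    /* ...other CPU work could overlap with the copy here... */
    dma_wait(f);

    float s = 0.0f;
    for (size_t i = 0; i < n; ++i) s += dram_staging[i];
    return s;
}

int main(void)
{
    enum { N = 1024 };
    static float gpu_results[N];         /* pretend this lives in VRAM/eSRAM */
    static float staging[N];             /* ordinary DRAM */
    for (size_t i = 0; i < N; ++i) gpu_results[i] = (float)i;

    printf("direct: %.1f  staged: %.1f\n",
           sum_directly(gpu_results, N), sum_via_staging(gpu_results, staging, N));
    return 0;
}
```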
 
Would I be wrong to suggest that slow access from CPU to eSRAM may limit the heterogeneous computing potential of the Xbox One?
 
It is just a strong suspicion, due to the fact that the two platforms are broadly similar.
It is true that their memory setups are deeply different, but the end results in terms of peak bandwidth are quite similar.

Both will benefit from data locality.

They are very different because of eSRAM nuances; GPU ACEs, queues, and cache tweaks (which may affect efficiency *if* you have parallel compute jobs on the GPU); and raw GPU power.

Peak bandwidth alone doesn't tell the complete story because extra care is needed on the XB1 side.

Both have DMA engines, audio blocks, and compression blocks. The XB1 devotes more computing resources to audio processing (e.g., for Kinect), and uses eSRAM to let developers punch above the DDR3's and the GPU's lower weight class (raw power). Sony focuses on a more powerful GPU and general GDDR5 performance from the get-go, and then decides how to specialize it for assorted tasks in parallel.

In my view, for the same scenario, they may/will behave differently.

E.g., in KZSF, the developers ported their "old" engine to the PS4 and then moved as many things to the GPU as possible, freeing up the CPU. We can see how the game runs today as more tasks are shifted around. If they were to port the game to the XB1, their first design focus would have to be on eSRAM usage, immediately, to extract performance. They may not be able to move as many things from the CPU to the GPU to run in parallel (fewer ACEs and CUs). They would instead focus on finishing the current stage ASAP and prepping the eSRAM and GPU for the next, different stage at once (hence this approach favors a higher clock; the Move Engines should be helpful here too? A rough sketch of this staging idea follows below this post.). In the meantime, since there are fewer ACEs on the GPU, the CPU may need to take on more compute scheduling and non-audio tasks.

This scenario may not be universally true, but it shows different design concerns and stress points for both platforms. It would be difficult to say CPU and bandwidth _will_ be the problem. In the quest for performance, in the XB1 case, the esRAM may be the point of contention. In the PS4 case, managing the GPU internal states may be the source of headaches as developers try to fit more stuff there together.
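
The staging idea mentioned above, as a toy double-buffer loop; stage_into_slot, run_stage, the two-slot split, and the whole structure are illustrative assumptions, not a description of any real XB1 engine or SDK:

```c
#include <stdio.h>
#include <string.h>

/* Two small slots standing in for portions of a limited fast pool (think
 * "a few MB each out of 32 MB"); sizes here are toy values. */
struct slot { unsigned char data[64]; };

/* Invented helpers: "staging" stands for a Move-Engine-style copy of a
 * stage's working set into the fast pool, "run" for the GPU work that
 * consumes it. */
static void stage_into_slot(struct slot *s, int stage_id)
{
    memset(s->data, stage_id, sizeof s->data);   /* stand-in for a DMA fill */
    printf("  staging data for stage %d\n", stage_id);
}

static void run_stage(const struct slot *s, int stage_id)
{
    (void)s;
    printf("  running stage %d\n", stage_id);
}

int main(void)
{
    struct slot slots[2];
    const int stages = 4;

    stage_into_slot(&slots[0], 0);               /* prime the first slot */
    for (int i = 0; i < stages; ++i) {
        /* In a real engine the next staging job would be queued so that it
         * overlaps with the current stage's GPU work; it is serialized here
         * only to keep the example runnable without threads. */
        if (i + 1 < stages)
            stage_into_slot(&slots[(i + 1) & 1], i + 1);
        run_stage(&slots[i & 1], i);
    }
    return 0;
}
```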
 
Would I be wrong to suggest that slow access from CPU to eSRAM may limit the heterogeneous computing potential of the Xbox One?
That was my first reaction, but HSA isn't dependent on a 32 MB scratchpad. The CPU and GPU still share DDR3 and cache interactions. The 32 MB is just outside that memory model, as a pool stuck on the end, as it were. It doesn't mean the CPU cannot use the low-latency SRAM, though; it's really there for the GPU. It'd be really nice to get the latency figures, BTW. The ESRAM carries the "low latency" label, but it's unqualified. It'd be good to know the implementation of the SRAM.
 

But you do not take into account some subtle details revealed in the DF interview...
 
There are subtle details on both the XB1 and the PS4, but their basic approaches are still different. The bottlenecks and incremental improvement numbers in the latest DF articles were measured on the XB1 only. They may not apply "as is" to the PS4.
 

Nevertheless, regarding being bandwidth-bound, the two platforms will face a similar fate.
 
It depends on the scenario. E.g., if the eSRAM is tied up at a particular moment (or the needed data isn't in it), then the XB1 may have to fall back on DDR3 bandwidth. The peak bandwidth figure may not apply there.
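
As a crude worked model of that fallback, using the same approximate public figures quoted earlier (the blend formula is only an illustration, not a measured relationship): if a fraction f of the bandwidth-heavy traffic actually hits the eSRAM and the rest falls back to DDR3,

```latex
% Crude blend model; eSRAM taken as ~109 GB/s one-way, DDR3 as ~68 GB/s.
B_{\text{eff}} \approx f \cdot B_{\text{eSRAM}} + (1 - f) \cdot B_{\text{DDR3}},
\qquad
\begin{cases}
f = 1:   & \approx 109\ \text{GB/s} \\
f = 0.5: & \approx 0.5 \cdot 109 + 0.5 \cdot 68 \approx 89\ \text{GB/s} \\
f = 0:   & \approx 68\ \text{GB/s, versus the PS4's flat 176 GB/s}
\end{cases}
```

In other words, the headline figure only applies to working sets that actually fit, and stay, in the 32 MB.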

If specific compute tasks happen inside the GPU (using additional ACEs and CUs) _and_ the results can be synchronized and passed to the graphics pipeline(s), memory access would be different compared to doing the compute on the CPU.

If more stuff runs on the PS4 GPU, then presumably more data will be pulled into the GPU. The developers will need more flexible memory schemes to work effectively. The average bandwidth demand will also go up, but whether it becomes bandwidth-bound would be case by case.

It's up to the developers. The hardware itself may have potential but is pretty dead without the developers' ingenuity.
 

I wonder if people will still argue that 1080p doesn't matter.

[image: L5PHwXB.png]
 

Of course you can blow up a snippet of any game and prove side by side that one looks "sharper", but you are making a comparison where, on the PC, you can just add horsepower to keep driving up the resolution. On consoles you have to show the shot at 900p and at 1080p, but also show whether the framerate is as stable and the image quality is as good, to determine if 1080p is worthwhile on a given title.
 

They are not blown up, it's side by side, focused on the gun.
 
I wonder if people will still argue that 1080p doesn't matter.

[image: L5PHwXB.png]

That is something very obvious, but the issue is that a lot of people will be playing on 720p TVs and hence would not know the difference. But, yes, people who will be playing on 1080p monitors or TVs, including me, will obviously find 720p games blurred and lacking. Even now I try to play all my PC games at 1920x1080, even if I have to reduce some effects to get them running smoothly. The lack of detail from removing pixels is too jarring, the IQ too blurred, to enjoy the graphics. A prime example was Alan Wake, which looks much better with everything set to max, but I had to lower the resolution, blurring everything in the far distance. I later found myself playing on medium settings at 1080p and finding the graphics more tolerable. All that texture resolution and detail wasn't of much use when I didn't get enough pixels to see it.

But it's the devs who have to make the call. If they think most of their audience is still on 720p TVs, obviously it will be a better decision to go with more eye candy and a lower res. After all, we have been enjoying games at resolutions even lower than 720p.
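
For reference, the raw pixel counts behind the 720p/900p/1080p argument:

```latex
\begin{aligned}
1280 \times 720  &= 921{,}600\ \text{px}    &(\approx 44\%\ \text{of 1080p}) \\
1600 \times 900  &= 1{,}440{,}000\ \text{px} &(\approx 69\%\ \text{of 1080p}) \\
1920 \times 1080 &= 2{,}073{,}600\ \text{px} &
\end{aligned}
```

So at a fixed per-pixel cost, 1080p is roughly 2.25x the shading work of 720p and about 1.44x that of 900p, which is why the resolution choice trades directly against effects and framerate.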
 