the possibilities are endless!

Depends. You could read one buffer and write to another. They may have different channels and channel formats. It probably can't be assumed that the G-buffer is the same format, though it might be.
The above chart is incorrect! The Xbox One eSRAM's maximum theoretical bandwidth is 204 GB/s. But that number is largely irrelevant, since the real-world figures stated by Microsoft point to something between 140-150 GB/s. Not much different from the PS4!
AFAIK, it's 102 GB/s full duplex, meaning you can write data to the eSRAM at 102 GB/s while reading data at 102 GB/s at the same time, but you can never read at more than 102 GB/s. This means latencies will be much better, but raw bandwidth isn't as large as the PS4's GDDR5 implementation.
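For what it's worth, the per-direction figure falls straight out of clock rate times bus width. A back-of-the-envelope sketch, assuming the launch 800 MHz GPU clock and a 128-byte (1024-bit) path in each direction:

```python
# eSRAM per-direction bandwidth: clock rate x bytes moved per cycle.
# Assumes the launch 800 MHz GPU clock and a 128-byte (1024-bit) path
# each way -- both publicly stated figures.
CLOCK_HZ = 800e6          # 800 MHz
BYTES_PER_CYCLE = 128     # 1024-bit read path (matching write path)

one_way_gbps = CLOCK_HZ * BYTES_PER_CYCLE / 1e9
print(one_way_gbps)       # 102.4 GB/s each way

# "Full duplex" would mean reads and writes add up to a combined ceiling...
print(2 * one_way_gbps)   # 204.8 GB/s
# ...but a pure-read (or pure-write) workload still tops out at 102.4 GB/s.
```

Which is where both the 102 GB/s and the doubled ~204 GB/s marketing numbers come from.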
Furthermore, trying to correct an Anandtech article with an arsetechnica one won't go well 99 times out of 100.
Even 102 GB/s is the theoretical peak... but what about real-life scenarios?
It's not quite full duplex. Besides the fact that getting dual-issue is apparently dependent on some rather onerous banking considerations, it was indicated that the interface will not dual-issue a write alongside a read every 8th cycle.
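If a write can pair with a read on only 7 of every 8 cycles, the combined ceiling drops accordingly. A back-of-the-envelope sketch, assuming the final 853 MHz GPU clock and a 128-byte path per direction (the dual-issue restriction described above is taken at face value):

```python
# Combined eSRAM ceiling when writes dual-issue alongside reads on only
# 7 out of every 8 cycles. Assumes the final 853 MHz GPU clock and a
# 128-byte path per direction.
CLOCK_HZ = 853e6
BYTES_PER_CYCLE = 128

read_gbps = CLOCK_HZ * BYTES_PER_CYCLE / 1e9   # reads issue every cycle
write_gbps = read_gbps * 7 / 8                 # writes pair on 7 of 8 cycles

peak_gbps = read_gbps + write_gbps
print(round(read_gbps, 1))   # 109.2 GB/s one way
print(round(peak_gbps, 1))   # 204.7 GB/s -- matches the quoted "204 GB/s" peak
# Microsoft's stated real-world figure is ~140-150 GB/s, well below this.
```

The same arithmetic at the original 800 MHz clock gives the 192 GB/s figure that circulated before the upclock.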
The correct marketing term is "suplex."

Seven-eighths duplex, then.
Performance is ROP bound, and they can't go larger than 720p anyway because their G-buffer format exceeds the eSRAM size.
Yep, that's it, that's what they explained back then. Hence the maximum ~145 GB/s number for operations that can read and write from the same location.
They use a 16-byte-per-pixel G-buffer (http://www.frostbite.com/wp-content/uploads/2014/11/course_notes_moving_frostbite_to_pbr_v2.pdf), so it fits fine even at 1080p, albeit with very little room for anything else.
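The arithmetic checks out: at 16 bytes per pixel, a 1080p G-buffer just squeezes into the 32 MB of eSRAM. A quick sketch (the 32 MB capacity is the publicly stated figure):

```python
# G-buffer footprint vs. eSRAM capacity at 16 bytes per pixel
# (the Frostbite layout cited above).
ESRAM_BYTES = 32 * 1024 * 1024   # 32 MiB of eSRAM
BYTES_PER_PIXEL = 16

def gbuffer_mib(width, height):
    """Return the G-buffer footprint in MiB for a given resolution."""
    return width * height * BYTES_PER_PIXEL / (1024 * 1024)

print(round(gbuffer_mib(1920, 1080), 1))  # 31.6 MiB -- barely fits in 32
print(round(gbuffer_mib(1280, 720), 1))   # 14.1 MiB at 720p
```

So 1080p leaves roughly 0.4 MiB of eSRAM free, which is the "very little room for anything else" part.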
ROP bound doesn't really mean anything on this hardware; you probably mean bandwidth bound. But there's no such thing as a single bound for a frame. If I had to guess, they're VGPR bound on their important shaders, just like everyone else this gen.
As for why they're at 720p: they find the image quality acceptable for their goals, and there's not enough consumer pressure to get them to change those goals.
Yes... you are correct about the 102 GB/s full-duplex limit. I admit talking about 204 GB/s is wrong because of that. Besides, it would never be 204 but 192 GB/s, because the eSRAM cannot read and write on all clock cycles.
eSRAM is a strange beast
just curious, but does compiler matter for shader performance on the consoles?
My recollection from some of the early debate here is that the XB1 should be capable of some very good particle effects which we might not see replicated on the PS4. Hopefully developers showcase some of the unique advantages of the hardware in first-party exclusives soon.
Perhaps we need to also take into account the latency of eSRAM vs GDDR5.

I don't think so, even by the opinion of Microsoft engineers. Actually, the eSRAM's low latency already brings you simultaneous read and write for some operations within the (7/8) dual-issue ratio; see @3dilettante's post.