esram astrophysics *spin-off*

If an access from a CPU is going to memory, it goes through the individual connections from the L1 to the L2, to the L2 interface, to the system request queue, and then to the memory controller.
Jaguar is a low-power architecture, so there are already things like half-width buses between the caches, and the L2 interface per cluster is only able to transfer a limited amount of data per clock.

The magic of caches is that (usually, hopefully) you don't need to lean on these connections more than 10% (or some other small number) of the time per level. So you only want to go out of the L1 10% of the time, and of the times you do, only 10% will hit the L2 interface.
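To put rough numbers on that (a toy sketch; the 10% per level is just the illustrative figure above, not a measured Jaguar miss rate):

```python
# Toy model: fraction of CPU accesses that have to travel to each level,
# assuming an illustrative 10% miss rate at every cache level.
# These are placeholder numbers, not measured Jaguar figures.
accesses = 1.0        # normalize: 1.0 = all CPU loads/stores
l1_miss  = 0.10       # assumed fraction that misses the L1
l2_miss  = 0.10       # assumed fraction of those that also miss the L2

to_l2     = accesses * l1_miss   # traffic that crosses the L2 interface
to_memory = to_l2 * l2_miss      # traffic that goes out to the northbridge/DRAM

print(f"traffic reaching the L2 interface: {to_l2:.0%}")     # 10%
print(f"traffic reaching memory:           {to_memory:.0%}") # 1%
```

So as long as the hit rates hold up, even a fairly narrow path out of each level only has to carry a small fraction of the raw request stream.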

AMD's architectures past this point have a memory crossbar with fixed width connections, which has itself been a bottleneck in the past.
There is also a request queue that coherent accesses (most CPU accesses) must go on so that they can be kept in order, snoop the other L2, and so that the Onion bus can do its job.
This is all in the uncore and would be part of the northbridge.
This ordered and coherent domain has to manage the traffic between the clients and broadcast coherence information (a lot of back and forth) between them.
AMD probably wants to do this all without too much hardware, power, latency, or engineering effort.

Contrast this with Garlic, which is not coherent and deals with a GPU whose memory model is very weakly ordered, not CPU coherent, and whose memory subsystem has extremely long latency. This is a comparatively simpler connection that goes from the GPU memory subsystem to the memory controllers.

Thanks. I'm still wondering why AMD would design more throughput for a DDR3-based APU versus a GDDR5-based APU.

But I don't want to push the discussion off topic.
 
The per-cluster L2 interface bandwidth hasn't changed. That or the crossbar in the northbridge would be gating factors, and decisions on clock speeds can vary things a bit there.
Which DDR3 APU are you comparing this to?
 
I read the interview and the Dying Light dev starts with:

Jakub Klarowicz said:
We haven’t played around with the eSRAM much yet.

So they really aren't talking from much experience yet. Perhaps in a few more months they'll have something more substantial to offer.
 
It was interesting to see a developer speaking positively about ESRAM as an asset, whereas it's often portrayed as an albatross.

The negativity towards ESRAM is mostly from comparison to the competition or the sacrifices that had to be made to have it. These comments don't really address that.
 
It was interesting to see a developer speaking positively about ESRAM as an asset, whereas it's often portrayed as an albatross.

Did you even read the article? Gipsel was correct - nothing new was revealed. The developer stated "According to Klarowicz, “We haven’t played around with the eSRAM much yet. Currently, we use it for storing the zbuffer and shadowmaps". Everything after that statement was the author's conjecture or thoughts on the matter. Anyhow, the main question should have been - "Is 1080p in the cards?".
 
I disagree; I think it was interesting to see a positive developer response towards ESRAM. That is what's novel about the comment.

The following comment is not the author's conjecture; it is from the developer, so you are in fact incorrect.

“It’s especially helpful because the memory is readily available for any purpose and unit: the CPU, the GPU, textures, render targets, etc. It really smoothes out the optimization process.”
 
Did you even read the article? Gipsel was correct - nothing new was revealed. The developer stated "According to Klarowicz, “We haven’t played around with the eSRAM much yet. Currently, we use it for storing the zbuffer and shadowmaps". Everything after that statement was the author's conjecture or thoughts on the matter. Anyhow, the main question should have been - "Is 1080p in the cards?".

They are using it for storing two high-bandwidth-demanding buffers. If they are already seeing the benefits from it, even without devoting much research to learning some tricks with it, then that's some good news, no?
 
I disagree; I think it was interesting to see a positive developer response towards ESRAM. That is what's novel about the comment.

The following comment is not the author's conjecture; it is from the developer, so you are in fact incorrect.
It's funny how you left out the key part of that statement...

The full potential of eSRAM, like the Xbox 360′s eDRAM, is yet to be realized but for now, it appears the developer is finding a number of uses for it. “It’s especially helpful because the memory is readily available for any purpose and unit: the CPU, the GPU, textures, render targets, etc. It really smoothes out the optimization process.”
Which is conjecture, and failing to see that is on you. ;)
 
They are using it for storing two high-bandwidth-demanding buffers. If they are already seeing the benefits from it, even without devoting much research to learning some tricks with it, then that's some good news, no?

This isn't new; it has been discussed before in this forum. I'm not arguing its merits on XB1, just that the info, as Gipsel has pointed out, isn't new.
 
This isn't new; it has been discussed before in this forum. I'm not arguing its merits on XB1, just that the info, as Gipsel has pointed out, isn't new.

Yeah, it's been discussed, but it has also been pointed out that 32 MB might not be enough for these buffers at 1080p. Since they are saying optimization is going smoothly, I would assume 1080p is a target they are able to hit.
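For a rough sense of the footprint (a back-of-the-envelope sketch; the interview doesn't give formats or resolutions, so the 32-bit depth/stencil buffer and 2048x2048 shadow maps below are assumptions):

```python
MB = 1024 * 1024

def buffer_bytes(width, height, bytes_per_pixel):
    """Uncompressed footprint of a single render target."""
    return width * height * bytes_per_pixel

# Assumed formats -- the interview doesn't specify them.
zbuffer  = buffer_bytes(1920, 1080, 4)   # 32-bit depth/stencil at 1080p
shadow16 = buffer_bytes(2048, 2048, 2)   # 2048^2 shadow map, 16-bit depth
shadow32 = buffer_bytes(2048, 2048, 4)   # 2048^2 shadow map, 32-bit depth

print(f"z-buffer:           {zbuffer / MB:5.1f} MB")       # ~7.9 MB
print(f"2 x 16-bit shadows: {2 * shadow16 / MB:5.1f} MB")  # ~16 MB
print(f"2 x 32-bit shadows: {2 * shadow32 / MB:5.1f} MB")  # ~32 MB
print("eSRAM budget:         32.0 MB")
```

Whether it all fits in 32 MB really comes down to the formats and shadow-map resolutions they pick.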
 
Yeah, it's been discussed, but it has also been pointed out that 32 MB might not be enough for these buffers at 1080p. Since they are saying optimization is going smoothly, I would assume 1080p is a target they are able to hit.

Only time will tell; hopefully they don't run into last-minute compromises. Not showing the XB1 version is usually a sign that things aren't going the way they expected, parity-wise.
 
1080p shouldn't be a problem. Forza ran at 60fps, though they did have to cut down on the quality of environment textures read from system memory, with 60fps being the culprit, as those environment textures have to be pulled twice as often.

http://i.imgur.com/M5jbI9N.jpg
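For what it's worth, the 60fps point is mostly per-frame bandwidth arithmetic (using the commonly quoted ~68 GB/s peak for the XB1's DDR3; a sketch, not a measured figure for Forza):

```python
ddr3_gb_per_s = 68.0   # commonly quoted XB1 DDR3 peak bandwidth

for fps in (30, 60):
    per_frame_mb = ddr3_gb_per_s / fps * 1024
    print(f"{fps} fps: ~{per_frame_mb:.0f} MB of DDR3 bandwidth per frame")
```

At 60fps every frame has roughly half the DDR3 budget it would have at 30fps, so streaming the same environment textures every frame eats twice the share of it.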

Only time will tell; hopefully they don't run into last-minute compromises. Not showing the XB1 version is usually a sign that things aren't going the way they expected, parity-wise.

Data in the eSRAM can now extend over into system memory, so if a game needs more than 32 MB of eSRAM that's fine; some of the DDR3 bandwidth can be used for the framebuffers as well, and if need be they can always make small compromises to other things that use main system memory, such as textures.
 
1080p shouldn't be a problem. Forza ran at 60fps, though they did have to cut down on the quality of environment textures read from system memory, with 60fps being the culprit, as those environment textures have to be pulled twice as often.

http://i.imgur.com/M5jbI9N.jpg

Texture quality in Forza 5 is really good. The filtering could be better, but I have yet to see a single texture that looks lower-res than it should.

Edit:

Only time will tell; hopefully they don't run into last-minute compromises. Not showing the XB1 version is usually a sign that things aren't going the way they expected, parity-wise.

They have since clarified that their goal is to achieve 1080p at 60fps on both platforms, and it looks like they are going to make it. I would assume 1080p is already achieved, and they are now looking to improve the framerate.
 
"mitigates some"

Hard to argue with? Did we expect the eSRAM to hurt the DDR3 performance? ;)
 
"mitigates some"

Hard to argue with? Did we expect the eSRAM to hurt the DDR3 performance? ;)

They tailored the rendering architecture to the XB1 memory hierarchy. Light pre-pass takes up less buffer space and is a better fit for XB1's ESRAM than all-out deferred (i.e., look at COD or BF on XB1 vs PS4).
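As a rough illustration of the footprint difference at 1080p (typical layouts assumed here, not the actual setups of those titles):

```python
MB = 1024 * 1024
pixels = 1920 * 1080

# Assumed, typical layouts -- not taken from any specific engine.
fat_gbuffer   = pixels * (4 * 4 + 4)  # four RGBA8 targets plus 32-bit depth
light_prepass = pixels * (4 + 4 + 4)  # normals/spec, light accumulation, depth

print(f"full deferred G-buffer: {fat_gbuffer / MB:5.1f} MB")   # ~39.6 MB
print(f"light pre-pass buffers: {light_prepass / MB:5.1f} MB") # ~23.7 MB
```

With those (assumed) layouts, the light pre-pass set fits inside 32 MB of ESRAM while the fat G-buffer alone overflows it, which is the kind of fit being described above.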

I expect developers to take better advantage of the ESRAM as rendering techniques evolve. The roles seem to be reversed from last gen, with MS's console being the more difficult one to fully utilize this time round.

Cheers
 