That's exactly what it said, but what it is referring to is a standard part of the GCN architecture.
Yea, it sure sounds like it.
The difference between 28 cycles and 32 cycles is 15%. Is that small enough to be insignificant?

...And the difference between 1 and 2 is 100%, surely that's an enormous difference! The diff between 28 and 32 is just 4 cycles. Computers deal with cycles, not percent. Percentages are a tool humans use, mainly to make certain numbers and mathematical illustrations easier to grasp for our limited minds. Sometimes percentages play tricks on us and make differences look bigger than they really are. 4 cycles @ 800MHz is what, 5ns? Not all that much, really.
But it's still my question: is that enough to significantly impact performance?

Of course it will, depending on how you define "significantly". It's completely subjective slash arbitrary. Now weigh in the fact that an SRAM array is several hundred percent larger than the equivalent DRAM: is that still worth 15% performance? (I'll point out those are numbers just pulled out of the air, btw, but I'll ride with what we've been using so far, for consistency's sake.) I would say that for MS, probably not.
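For concreteness, here's the arithmetic being argued about as a quick script. The 800 MHz clock and the 28/32-cycle latencies are just the made-up figures from the posts above, not anything confirmed for Durango:

```python
# Back-of-the-envelope latency math for the 28-vs-32-cycle example above.
# The clock and cycle counts are the hypothetical numbers from this thread,
# not confirmed hardware specs.

CLOCK_HZ = 800e6     # assumed 800 MHz clock
FAST_CYCLES = 28     # hypothetical on-die SRAM access latency
SLOW_CYCLES = 32     # hypothetical on-die DRAM access latency

cycle_time_ns = 1e9 / CLOCK_HZ                          # 1.25 ns per cycle
delta_ns = (SLOW_CYCLES - FAST_CYCLES) * cycle_time_ns
delta_pct = (SLOW_CYCLES - FAST_CYCLES) / FAST_CYCLES * 100

print(f"absolute difference: {delta_ns:.1f} ns")        # 5.0 ns
print(f"relative difference: {delta_pct:.1f}%")         # 14.3%, i.e. ~15%
```

Both framings are "true"; the argument is over whether 5 ns or ~15% is the more honest way to present the same gap.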
For those interested in the technology behind Infiltrator's impressive demonstration, here’s a brief overview of the features on show:
• New material layering system, which provides unprecedented detail on characters and objects
• Dynamically lit particles, which can emit and receive light
• High-quality temporal anti-aliasing, eliminating jagged edges and temporal aliasing
• **Thousands of dynamic lights with tiled deferred shading**
• Adaptive detail levels with artist-programmable tessellation and displacement
• Millions of particles colliding with the environment using GPU simulation
• Realistic destructibles, such as walls and floors
• Physically based materials, lighting and shading
• Full-scene High Dynamic Range reflections with support for varying glossiness
• Illuminating Engineering Society (IES) profiles for realistic lighting distributions
Saw this on GAF. Does the bolded hint at anything perhaps?
A number of modern game renderers do some kind of multiple-pass rendering that defers shading, and use tiling. It's not a reflection of the GPU the system might have.
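For anyone wondering what "tiled deferred shading" actually buys you, here's a minimal CPU-side sketch of the light-culling step. All the numbers, the screen-space light representation, and the bounding-box test are simplified for illustration; real engines do this in a compute shader against depth-aware per-tile frusta:

```python
# Minimal sketch of tiled light culling: split the screen into tiles, and
# give each tile a list of only the lights that can affect it. Shading then
# loops over that short list instead of over every light for every pixel.
import math, random

WIDTH, HEIGHT, TILE = 1280, 720, 16          # illustrative 16x16 pixel tiles
tiles_x = math.ceil(WIDTH / TILE)
tiles_y = math.ceil(HEIGHT / TILE)

# Fake lights: screen-space (x, y) position and radius of influence.
lights = [(random.uniform(0, WIDTH), random.uniform(0, HEIGHT),
           random.uniform(20, 120)) for _ in range(1000)]

tile_lights = [[] for _ in range(tiles_x * tiles_y)]
for i, (lx, ly, r) in enumerate(lights):
    # Visit only the tiles overlapped by the light's bounding box.
    x0 = max(0, int((lx - r) // TILE))
    x1 = min(tiles_x - 1, int((lx + r) // TILE))
    y0 = max(0, int((ly - r) // TILE))
    y1 = min(tiles_y - 1, int((ly + r) // TILE))
    for ty in range(y0, y1 + 1):
        for tx in range(x0, x1 + 1):
            tile_lights[ty * tiles_x + tx].append(i)

avg = sum(len(t) for t in tile_lights) / len(tile_lights)
print(f"{len(lights)} lights total, ~{avg:.1f} relevant per tile on average")
```

The shading pass (not shown) reads the G-buffer and walks each tile's short list rather than all 1000 lights per pixel, which is why the technique scales to "thousands of dynamic lights" on any reasonably modern GPU, whoever makes it.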
I really doubt that MS has been going around calling it eSRAM when it's really eDRAM. Yeah I know about 1T-SRAM, but that's a pretty defunct brand name.
Frostbite 2 comes to mind.
The last 10 pages have been nothing but a pain to read, even for me. I don't know how you other guys put up with all the "quick, there's a word, google it". Also, with all this memory access talk: if it was that important and provided that much improvement, regardless of DRAM or SRAM, wouldn't they just clock it to the speed needed to hit their performance target?
And what is your source for this? MS's own patents are pretty explicitly suggesting otherwise and they explain a lot about the stuff we do know by the sound of it. Maybe the 'deferred' part is what's off?
Yeah, 32MB of SRAM on the die is a lot, but TSMC's 28nm process is pretty dense.

Doesn't matter how dense it is; SRAM will still be proportionally larger compared to DRAM by basically the same factor. If there's really 32MB of bona fide SRAM on that die, it's going to eat up upwards of 40, maybe 50% of the die space. It'll be major, for sure. Hard to see how a fairly minor performance difference could be worth such a big investment in silicon. Bragging rights alone don't carry you very far with the ignorant populace out there - nobody's gonna care other than the fanboys, and they're already sold on your shit anyway, so that's not a win.
eDRAM is getting more expensive to implement, and it's harder to find fabs that are able to do it on their most cutting-edge process.

If that would really be their excuse/motivation for going with far, far larger SRAM, then they seriously need to think about why they think they need the on-chip memory in the first place, methinks.
Because clocking things to a certain speed just isn't practical if you want enough of them available to sell to customers, I think. The yields may not be so good, so I suppose it's much easier to just try your best to maximize what you get out of a much safer clock speed.
32MB of SRAM really, really is a really, really huge amount of memory. A quad-core i7 only has ~9MB of SRAM (excluding the GPU-only LLC of Ivy Bridge, which I don't know the capacity of), but it's still roughly half the die, give or take a bit. Stepping up to 32MB, that's... huge. HUGE. There's so much logic they could have sunk into the chip with that much space. 32MB × 8 bits per byte × 6 transistors per bit - that's a billion and a half transistors just for the SRAM arrays, not factoring in anything else: redundancy (if applicable), control lines, and attached logic for handling access conflicts, snooping, resolves, DMA and such. All that would weigh out to even more in total.
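Spelled out as a script (6T cells only, no redundancy or peripheral logic, so this is a floor rather than an estimate):

```python
# Raw cell-count arithmetic from the post above: classic 6-transistor SRAM
# cells vs. 1T1C eDRAM cells. Sense amps, redundancy, and control logic are
# all excluded, so real totals would be higher.

MB = 1024 * 1024
capacity_bits = 32 * MB * 8          # 32 MB of storage

SRAM_T_PER_BIT = 6                   # 6T SRAM cell
EDRAM_T_PER_BIT = 1                  # 1T1C eDRAM cell (plus a capacitor)

sram_total = capacity_bits * SRAM_T_PER_BIT
edram_total = capacity_bits * EDRAM_T_PER_BIT
print(f"SRAM arrays alone:  {sram_total / 1e9:.2f} billion transistors")  # ~1.61B
print(f"eDRAM equivalent:   {edram_total / 1e9:.2f} billion transistors") # ~0.27B
```

Note the transistor count alone actually understates eDRAM's advantage or disadvantage either way: its density also hinges on how the capacitor is built, which is exactly what makes it process-dependent.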
I'd be fucking super duper amazed if they'd actually do this. Really really big "if".
You are talking about a 15% access time difference. If it bought the console that much performance, then they would clock it 15% higher and pay the price (whatever that is: yields, reduced clocks on other components, etc.). The reality is it probably makes very little difference; it's already way faster than main memory and provides more throughput, and that is the important bit.
Isn't it more than just this, though? ERP suggested that if it was real SRAM and similar to L2 cache performance, a cache miss would drop from 300+ GPU cycles to 10-20 cycles. He also said a shader spends more time waiting on memory than computing values, and if that's truly the case, why wouldn't the SRAM potentially be pretty helpful for Durango development?
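A crude way to see why ERP's numbers would matter more than a 15% raw-latency delta is to plug them into a simple average-memory-access-time model. The hit latency and miss rate below are invented purely for illustration, and real GPUs also hide a lot of latency by switching between threads, so treat this as a sketch of the direction, not the magnitude:

```python
# Crude average-memory-access-time (AMAT) model using ERP's figures:
# a miss that goes to main RAM costs 300+ GPU cycles, while a miss caught
# by a fast on-die SRAM pool costs 10-20. The hit latency and miss rate
# below are invented for illustration only.

def amat(hit_cycles, miss_rate, miss_penalty_cycles):
    """Average cycles per access: hit cost plus expected miss cost."""
    return hit_cycles + miss_rate * miss_penalty_cycles

HIT = 4            # assumed cache hit latency, cycles
MISS_RATE = 0.10   # assumed: 10% of accesses miss the caches

to_dram = amat(HIT, MISS_RATE, 300)   # miss serviced by main RAM
to_esram = amat(HIT, MISS_RATE, 15)   # miss serviced by the on-die pool

print(f"misses to DRAM:  {to_dram:.1f} cycles/access on average")   # 34.0
print(f"misses to eSRAM: {to_esram:.1f} cycles/access on average")  # 5.5
```

Even allowing for latency hiding, a several-fold drop in average access cost is the kind of thing that could show up in shader throughput whenever memory, not ALU work, is the bottleneck - which is exactly the situation ERP described.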