Next Generation Hardware Speculation with a Technical Spin [2018]

Yes, since ninty chose esram for the cube I don't see why it couldn't be used again. And again it's not like XO's implementation was a failure.

You have to admit the thought of gddr6 and 128mb of 7nm esram is exciting.

I think the barrier to this would be die size, not cost. They can only make an APU so large, at least without getting creative. Perhaps we could see discrete CPUs and GPUs again?

Bah. I just like the custom hardware differences we saw in the old days, made it more interesting.
 
You have to admit the thought of gddr6 and 128mb of 7nm esram is exciting.
That'd be overkill. Embedded RAM is used for its high speed where other, cheaper solutions don't work. If GDDR6 is fast enough, coupling it with ESRAM/eDRAM would be an excess of bandwidth, especially if you have to compromise how much GPU you have in order to fit it on the die. ESRAM was used on XB1 to offset the low speed of the DDR3 and enable 8 GBs. PS4 was looking at 4 GBs of GDDR5, and we'd have a very different perception of XB1's choice if Sony hadn't been fortuitous with their gamble.
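For reference, a back-of-envelope sketch of the launch-era numbers behind that comparison (using the commonly cited specs; a rough illustration, not official figures):

```python
# Peak bandwidth (GB/s) = bus width in bytes * per-pin data rate in GT/s
def peak_bw_gbs(bus_bits, data_rate_gts):
    return bus_bits / 8 * data_rate_gts

xb1_ddr3  = peak_bw_gbs(256, 2.133)   # DDR3-2133 on a 256-bit bus -> ~68 GB/s
ps4_gddr5 = peak_bw_gbs(256, 5.5)     # GDDR5 @ 5.5 Gbps, 256-bit  -> 176 GB/s
# XB1's 32 MB ESRAM was quoted at roughly 109 GB/s each direction after the
# clock bump, which is the gap it was there to paper over.
print(f"XB1 DDR3: {xb1_ddr3:.1f} GB/s, PS4 GDDR5: {ps4_gddr5:.1f} GB/s")
```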

I think the barrier to this would be die size, not cost.
They're the same thing.
 
Yes, since ninty chose esram for the cube I don't see why it couldn't be used again. And again it's not like XO's implementation was a failure.

You have to admit the thought of gddr6 and 128mb of 7nm esram is exciting.

I think the barrier to this would be die size, not cost. They can only make an APU so large, at least without getting creative. Perhaps we could see discrete CPUs and GPUs again?

Bah. I just like the custom hardware differences we saw in the old days, made it more interesting.
Compared to PS4 it looks like a clear failure: they traded 6 CUs for slightly cheaper memory and lost the performance crown big time to a similarly sized SoC. 7nm transistors are too valuable for eSRAM, especially with NAND prices plummeting to augment a GDDR6 tier approaching 500 GB/s (assuming a 256-bit bus).
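A quick check of that ~500 GB/s figure, assuming a 256-bit bus and the GDDR6 speed grades being sampled around 2018:

```python
# Peak bandwidth = (bus width in bytes) * (per-pin rate in Gbps)
bus_bytes = 256 // 8
for gbps in (12, 14, 16):             # announced GDDR6 speed grades
    print(f"{gbps} Gbps on 256-bit -> {bus_bytes * gbps} GB/s")
# 14 Gbps gives 448 GB/s and 16 Gbps gives 512 GB/s, i.e. "approaching 500 GB/s".
```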
 
I think it should be clear that when you’re pushing the die size limit you’re comfortable with due to space/cost/yield concerns, any transistor not spent on raw IPC or TF is wasted. Build the support structure out from there to make sure it’s fed the data bandwidth and latency it needs.
 
But AFAIK eDRAM was only ever used in one system with a discrete GPU (X360), and that was almost 13 years ago when the main GDDR3 memory could only provide 22.4GB/s.

I think he was referencing how the Intel Iris chips with eDRAM perform against Intel chips without it when discrete GPUs are used. The 5775C often beats Skylake and Kaby Lake parts running at higher frequencies.
 
That'd be overkill. Embedded RAM is used for its high speed where other, cheaper solutions don't work. If GDDR6 is fast enough, coupling it with ESRAM/eDRAM would be an excess of bandwidth, especially if you have to compromise how much GPU you have in order to fit it on the die. ESRAM was used on XB1 to offset the low speed of the DDR3 and enable 8 GBs. PS4 was looking at 4 GBs of GDDR5, and we'd have a very different perception of XB1's choice if Sony hadn't been fortuitous with their gamble.

They're the same thing.
It's not overkill; the PS2 had a similar setup and that allowed for things you didn't see on Xbox or Cube. Hell, even on Xbox One X you have games with shit-tier texture filtering and fps drops when alphas fill the screen, like Far Cry 5 at 4K with trilinear filtering. Every single current Nvidia GPU could use more bandwidth; it's hard to go overkill on it.

My point is that there was nothing wrong with the eSRAM in Xbox One in and of itself. Imagine not sacrificing CUs or GDDR6 and still having the eSRAM. Likely? No, but the same goes for the rest of our speculation.

Using eSRAM would definitely limit the bus size though... certainly not 384-bit.
 
OG Xbox could do most things that PS2 could and better, without EDRAM.
But it couldn't handle MGS2 or Silent Hill 2 particle effects. In both cases they were pared back and ran worse to boot. In fact, aside from alpha effects and maybe a couple of other fringe cases, the GC had more bandwidth than Xbox for textures and detail.

I'm not saying more GPU units should be cut in favor of eSRAM; I'm just saying it'd be a great cherry on top, all else being equal, and far from overkill/useless. A 256-bit bus + eSRAM would have a higher performance ceiling than a 384-bit bus without it.
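Rough numbers for that ceiling argument (purely illustrative; assumes 14 Gbps GDDR6 and an embedded pool in the same ballpark as XB1's ESRAM, which is only a guess for a 7nm part):

```python
GDDR6_GBPS = 14                        # assumed per-pin rate

bw_384 = 384 // 8 * GDDR6_GBPS         # 672 GB/s, external memory only
bw_256 = 256 // 8 * GDDR6_GBPS         # 448 GB/s external
esram  = 2 * 109                       # ~218 GB/s combined read+write, XB1-scale scratchpad

print("384-bit GDDR6 alone :", bw_384, "GB/s")
print("256-bit + scratchpad:", bw_256 + esram, "GB/s aggregate (ceiling)")
# Whether the narrower bus actually wins depends on how fast the embedded pool
# is clocked and how cleanly render-target traffic can be kept inside it.
```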
 
But it couldn't handle MGS2 or Silent Hill 2 particle effects. In both cases they were pared back and ran worse to boot.

Maybe. On the flip side, both were designed and developed with the PS2 architecture in mind, then ported over to Xbox and PC. Sure, MGS2 and SH2 could have been done better on the OG Xbox. Even the PC version was a total mess, no matter what hardware was being used at the time, long after the PS2 release.

What the heck is OG xbox?

Original Xbox.
 
What the heck is OG xbox?

It's become common vernacular for the following Microsoft console generations, because the third console is named "One":

1. First MS Console = Xbox = Original Xbox = OG XBox = OGX
2. Second MS Console = Xbox 360 = X360
3. Third MS Console = Xbox One = XBone, One, One S, One X = XB, XO, XS, OX
 
I think he was referencing how the Intel Iris chips with eDRAM perform against Intel chips without it when discrete GPUs are used. The 5775C often beats Skylake and Kaby Lake parts running at higher frequencies.
So he wasn't talking about eDRAM on discrete GPUs?

Where are you putting PS2 in all this? :-?
I worded it poorly. Should have said "last discrete GPU I know of".
 
Oh, and it gets sillier: we don't know what to call their next device. I'm running with XB2 and XB2X for the mid-life bump or second launch SKU. Scarlet sounds like a character from a dubious novel about the US Civil War, not a console.
 
I was thinking perhaps they decided to use codenames from G.I. Joe. Bring on the Snake Eyes mid-gen update! :runaway:
 
So he wasn't talking about eDRAM on discrete GPUs?

Perhaps he can clarify it himself, but I read it as him talking about the benefit of the eDRAM on the CPU portion of the Intel chips. When discrete GPUs are used with those chips, the eDRAM acts as a kind of cache for the CPU and offers a nice boost to framerates in many situations.
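A toy average-memory-access-time model of that "eDRAM as L4 cache" effect (hit rates and latencies are made-up illustrative values, not measured 5775C numbers):

```python
# AMAT = hit_rate * L4_latency + (1 - hit_rate) * DRAM_latency
def amat_ns(l4_hit_rate, l4_ns=40, dram_ns=80):
    return l4_hit_rate * l4_ns + (1 - l4_hit_rate) * dram_ns

print("no L4     :", amat_ns(0.0), "ns per LLC miss")   # everything goes to DRAM
print("L4 60% hit:", amat_ns(0.6), "ns per LLC miss")   # large eDRAM absorbing misses
# Shaving tens of ns off frequent misses is where the CPU-side framerate gain comes from.
```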
 
I think he was referencing how the Intel Iris chips with eDRAM perform against Intel chips without it when discrete GPUs are used. The 5775C often beats Skylake and Kaby Lake parts running at higher frequencies.

So he wasn't talking about eDRAM on discrete GPUs?


I worded it poorly. Should have said "last discrete GPU I know of".

Perhaps he can clarify it himself, but I read it as him talking about the benefit of the eDRAM on the CPU portion of the Intel chips. When discrete GPUs are used with those chips, the eDRAM acts as a kind of cache for the CPU and offers a nice boost to framerates in many situations.

Dr. Evil is correct. That’s exactly what I was referring to.
 