Next Generation Hardware Speculation with a Technical Spin [pre E3 2019]

I remember there were rumors some years ago that there would be a low cost version of HBM with performance below HBM2. What happened to that?
 
Actually, given the timelines and the chip volumes that are (hopefully) involved, I'd suspect that the next node after the initial one will be 5nm or below, since the PS5 is likely not to launch until after 5nm is in volume production.

The attraction of N6 is that you can order masks for N6 with zero design changes and instantly get the benefits of EUV yield boosts.

I read a post suggesting LCHBM was discarded due to lack of demand, with the HBM2 market still operating at full capacity. I believe this was interpreted from a Japanese source.
 
Won't N6 be available when PS5 starts production, assuming it's going for a November launch? Why not use N6 for the launch PS5? When do you expect an N6 PS5 to launch?

When did you read that LCHBM was discarded?
 
N6 is entering risk production in Q1 2020. We likely won't see N6 products until 2021.

I read that post concerning LCHBM several months ago on ResetEra. If memory serves, it was posted by Locuza, who is also an active member here.

Edit: found it

https://www.resetera.com/threads/ps...-and-specification.3436/page-179#post-7547968

There is a Japanese article from pc.watch.impress which covers the situation. The essence of it: DRAM prices went up, most customers are high-performance clients, and the priority for 2-Hi stacks and a low-cost HBM specification went out the window, with the focus shifting to higher-margin products with more capacity and speed.

https://translate.google.de/transla...jp/docs/column/kaigai/1112390.html&edit-text=

https://pc.watch.impress.co.jp/docs/column/kaigai/1112390.html
 
Thanks for looking that up

What about 7nm with EUV? Could the consoles use it? What are the considerations in choosing 7nm with or without EUV?
 
7nm+ is not compatible with the 7nm toolset, so the costs to port designs are higher. It’s only about a 10-20% power and density gain, so I don’t think it makes much sense with mask costs as they are.
 
Aren’t consoles still planned for 2020?

What's the point of going from 7nm to 6nm for a late 2020 product? TSMC 7nm+ offers better logic density than TSMC 6nm. 6nm seems like a cheaper path to 15-20% better density for existing 7nm designs than migrating to 7nm+.
 
I think it's because Zen 2 and Navi are already designed for 7nm, so redesigning for 7nm+ would incur additional costs. Moving to 6nm in 2021+ would not.
 
This is my guess at Anaconda's specs, based on redacted information provided by klobrille on ResetEra and information about Dante from statham on Twitter:

CPU : Custom AMD Zen 2 8C / 16T @ 3.2 GHz
GPU : Custom AMD Navi, 64CU 12.08 Tflops @ 1475 MHz
MEMORY : 24 GB GDDR6 @ 672 GB/s
STORAGE : SSD 1 TB NVMe @ 3 GB/s
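
For what it's worth, those numbers hang together arithmetically. A minimal sanity check, assuming the GCN/Navi convention of 64 ALUs per CU, each doing one FMA (2 FLOPs) per clock, and assuming the 672 GB/s comes from a 384-bit GDDR6 bus at 14 Gbps per pin (both assumptions mine, not from the leaks):

```python
# Sanity check of the guessed specs above.
# Assumptions (mine, not from the leaks): GCN/Navi-style CUs with
# 64 ALUs each, 2 FLOPs per ALU per clock (one FMA), and a 384-bit
# GDDR6 bus running at 14 Gbps per pin.
cus = 64
alus_per_cu = 64
flops_per_clock = 2        # fused multiply-add = 2 FLOPs
clock_ghz = 1.475

tflops = cus * alus_per_cu * flops_per_clock * clock_ghz / 1000
print(f"Compute: {tflops:.2f} TFLOPs")        # 12.08 TFLOPs

bus_width_bits = 384
gbps_per_pin = 14
bandwidth_gbs = bus_width_bits * gbps_per_pin / 8   # bits -> bytes
print(f"Bandwidth: {bandwidth_gbs:.0f} GB/s")       # 672 GB/s

# 24 GB also fits a 384-bit bus: twelve 16 Gb (2 GB) GDDR6 chips.
```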
 
If Navi still has the 64-CU limit, could they print more than 64 CUs and disable some for better yields?


When do you expect a migration to 6nm? Would that result in a much quieter console?
 
I think it's because Zen 2 and Navi are already designed for 7nm, so redesigning for 7nm+ would incur additional costs. Moving to 6nm in 2021+ would not.

Those are the realities of the PC market. What shows up in consoles tends to readily break with the conventions of the PC market.

There are no Jaguar-based 8-core APUs in the PC space. There are no Jaguar parts below 28nm in the PC space. There are no PC GPUs with rapid packed math that aren't Vega based.

Outside of using Zen 2 as chiplets, what's forcing consoles to restrain themselves to the realities of the PC space? 7nm+ offers 20% better density than 7nm, with a 10% improvement in performance per watt.

Zen 3 is mostly defined by 7nm+. It's not slated to be a big departure from Zen 2. All I've seen is that it will support 4-way SMT versus the 2-way SMT of Zen 2. If Zen 3 is mostly just small changes that aren't applicable to console gaming, then how much of the Zen 3 design would be inapplicable to a 7nm+ Zen 2-based part? Are consoles going to be able to share the cost of transitioning to 6nm with AMD's PC business? Is AMD going to overlap launches of 6nm Zen 2 parts and 7nm+ Zen 3 parts in the PC space?

So it's the design cost of 7nm+ versus the design cost of 7nm plus the additional cost of transitioning to 6nm, plus the extra cost of 7nm silicon from increased multi-patterning and lower density leading to larger chips.
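
To put rough numbers on that density argument, here's a sketch of how die cost moves with a ~20% density gain, using a simple dies-per-wafer estimate and a Poisson yield model. Every input (die area, wafer price, defect density) is an illustrative placeholder I made up, since the real figures aren't public:

```python
import math

def dies_per_wafer(die_mm2, wafer_diameter_mm=300):
    """Rough dies-per-wafer estimate with an edge-loss correction."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_mm2))

def poisson_yield(die_mm2, defects_per_cm2):
    """Poisson yield model: fraction of dies with zero killer defects."""
    return math.exp(-defects_per_cm2 * die_mm2 / 100)  # mm^2 -> cm^2

def cost_per_good_die(die_mm2, wafer_cost, defects_per_cm2=0.2):
    good_dies = dies_per_wafer(die_mm2) * poisson_yield(die_mm2, defects_per_cm2)
    return wafer_cost / good_dies

# Illustrative placeholders only: real die sizes, wafer prices, and
# defect densities are not public.
die_7nm = 360.0                 # hypothetical 7nm die, mm^2
die_7nmp = die_7nm / 1.20       # same design ~20% denser on 7nm+
for label, area in [("7nm ", die_7nm), ("7nm+", die_7nmp)]:
    cost = cost_per_good_die(area, wafer_cost=10_000)
    print(f"{label}: {area:.0f} mm^2 -> ${cost:.0f} per good die")
```

Smaller dies win twice: more candidates per wafer and a higher fraction of them free of killer defects.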
 
Adonisds

Klobrille's post says that for Dante, "CPU/GPU MATCHES ANACONDA". The dev kit will probably use all 64 CUs. Anaconda will obviously be expensive because of that. The reject GPUs could be used for Lockhart.
 
Great news: the TSMC 7nm transition to TSMC 6nm is almost effortless... Zen 2 is designed on 7nm, so the chances for a 2019 (early 2020) release are increased... 6nm will just give some savings and power reduction, as I've understood (but maybe a bit less reliability?! We'll see)...

So the CPU is clearly Zen 2 on 7nm/6nm... the GPU is Navi, also on 7nm/6nm, same die... RAM? Now RAM is the big dilemma... I've read the real future-proof standard is going to be HBM3... so that could cause some delay... or maybe we first (2019) get a 7nm + HBM2 console, then (2021) a 6nm + HBM3 refresh?!
 
6nm will offer savings either through fewer masks and fewer process steps with no design changes, or all of that plus a smaller die if they choose to shrink.

Not sure why you think there’d be less reliability?
 
I see that everyone is trying to guess the GPU clock speeds and CU count with the goal of reaching a TF number, but a TF number is not the best measure of performance, especially since we are comparing GPUs of different architectures. It might seem unlikely to some that a console will launch at the end of next year with more than 12 TF, but is it crazy to believe that after 3 years we will get a GPU with more than twice the performance? It doesn't seem unlikely to me. Can we leave the pointless TF discussion and talk about performance? How much better than the X1X GPU will the next-gen GPUs perform?
 
Well, yeah, it's pretty pointless.

You want to make sure the rest of the pipeline doesn't have bottlenecks either, so there are a great many things to consider beyond just TF. A GPU with high TF but shitty bottlenecks elsewhere will generally perform terribly. You can code around those bottlenecks, but that just means some games will take full advantage of the hardware and some won't.

We're really looking for a generation of graphics power that increases visual fidelity across the board, not just for the handful of studios that can make full use of the hardware at its most optimized point. This is a big reason why using exclusives as a benchmark for console performance is a somewhat flawed argument.

There is, at least IMO, a reasonable expectation that MS will deliver: from what we know, they are entitled to some interesting pricing advantages that weren't available to them in the past. In particular, the advantage of launching with 2 SKUs plus a final SKU for their servers/xCloud project.

This allows for at least 3 bins: hypothetically 64 CUs, 60 CUs, and 56 CUs, i.e. 0, 1, or 2 CUs of redundancy per cluster, making the top SKU the more expensive one, the middle SKU perhaps for servers, and the weakest SKU for the weaker console.
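
As a toy illustration of how that binning might shake out (the per-CU defect probability is a made-up placeholder, and real salvage schemes likely disable CUs per shader engine rather than anywhere on the die):

```python
from math import comb

# Toy binning model: 64 CUs per die, each independently defective with
# some probability. The 1% figure is a made-up placeholder.
N_CUS = 64
P_DEFECT = 0.01

def p_exactly(k):
    """Binomial probability that exactly k of the 64 CUs are defective."""
    return comb(N_CUS, k) * P_DEFECT ** k * (1 - P_DEFECT) ** (N_CUS - k)

p_64 = p_exactly(0)                              # sell as full 64-CU part
p_60 = sum(p_exactly(k) for k in range(1, 5))    # salvage as 60-CU part
p_56 = sum(p_exactly(k) for k in range(5, 9))    # salvage as 56-CU part
print(f"64-CU bin: {p_64:.1%}  60-CU bin: {p_60:.1%}  56-CU bin: {p_56:.1%}")
```

Even with only a 1% per-CU defect rate, roughly half the dies can't ship as full 64-CU parts, which is exactly why lower bins are worth having.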

The other advantages they have come from how they built the X1X. They have an excellent track record with 4K in both native and reconstructed formats. They were able to succeed by profiling Xbox One code and using that to see where the bottlenecks might lie at 4K while designing Scorpio.

Now they have had 2+ years of profiling 4K titles on Scorpio, whether at 30 fps, 60 fps, or unlocked. They now have access to DXR titles for ray tracing profiling on PC, and can ask developers to bring their code in and see how it will run on their silicon.

Because this process focuses on real data points and working with production code, it removes a lot of guesswork from the equation. I think this was successful for them with Scorpio, and such a process should also serve them well for next generation.

I don't know if that means the specs will be amazing or not; I just have confidence they'll make a competent machine. I'm not sure it will be the best machine, but such a process should remove the possibility of another Xbox One.
 

That's such a great post

No one can know that until they know more about the GPU architecture.

No one can know most of the relevant things about the GPU right now, really. That doesn't stop us from trying to guess. This is a great topic because we can read good insights from many different people. I certainly wouldn't be able to make a good guess alone.
 
I'm sure both MS and Sony (especially the latter) will make sure the CPU, RAM, and bandwidth are future-proof... also the bandwidth to mass storage, which will be properly buffered... GPU teraflops can be increased in the "Pro" iteration, which will reasonably arrive as the 6nm process matures in 2021, or maybe on 4nm or such... So, for marketing, it will have 10 TF (though for the PS5, just doubling the PS4 Pro's TF could actually be enough)... At the beginning we'll just see PS4 Pro games that ran at 30 fps running on PS5 at 60 fps... I also believe in a 7nm revamp of a really cheap, quiet, small PS4 Pro... which could even move to 6nm... Remember, the 100 million PS4s out there are the REAL concern for Sony: not leaving them behind.
 