Next Generation Hardware Speculation with a Technical Spin [2018]

Status
Not open for further replies.
They all had secret sauce this gen; it was never anything more than a relatively small modification to the existing architectures AMD had to offer. We should expect the same.

I don't think we will see any AI or RT circuitry unless AMD already has it planned for their GPUs. We would see those features announced for their next architecture well before the consoles come out. The lack of a PR response from AMD to counter Nvidia's could also mean they have nothing and will focus on generic compute instead of special-purpose units like tensor or RT cores.
Two years is still a ways out, so there is sufficient time for that to happen. After their reversal on the Vega vs. Volta marketing campaign, though, I can't imagine them wanting to pre-hype something that may not deliver on time.

It is, IMO, best to keep their RT work under wraps. Working with MS on both Xbox and DXR should have given them insight into what kind of RT customization MS wants in a console, even if they don't manage to deliver it for next gen.
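To illustrate what the "generic compute" path for RT means: without dedicated intersection hardware, ray tracing is done in shader code, whose innermost kernel is typically a ray vs. axis-aligned-box slab test like the sketch below (plain Python standing in for shader code; this is a generic textbook kernel, not anything AMD has confirmed):

```python
def ray_aabb_intersect(origin, inv_dir, box_min, box_max):
    """Slab-method ray vs. axis-aligned bounding box test: the inner
    loop of software BVH traversal when there is no RT hardware.
    inv_dir is the componentwise reciprocal of the ray direction,
    precomputed so the hot loop multiplies instead of divides."""
    t_near, t_far = 0.0, float("inf")
    for o, inv_d, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1 = (lo - o) * inv_d
        t2 = (hi - o) * inv_d
        # Shrink the [t_near, t_far] interval by this axis's slab.
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far  # hit if the slab intervals still overlap

# A ray from the origin along (1, 1, 1) hits the unit-offset box:
print(ray_aabb_intersect((0, 0, 0), (1, 1, 1), (1, 1, 1), (2, 2, 2)))  # True
```

Fixed-function RT hardware essentially runs millions of tests like this (plus ray-triangle tests) per frame in dedicated silicon instead of burning general-purpose ALU cycles on them.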
 
They all had secret sauce this gen; it was never anything more than a relatively small modification to the existing architectures AMD had to offer.

Maybe it's the other way around? I mean in the sense that console SoC architectures ended up dictating most of the specs for the PC GPU architecture.
That's why GCN 1.1 was so close to the PS4's Liverpool, Vega adopted the PS4 Pro's RPM, and, according to rumors, Navi is mostly being developed at Sony's bidding.
Sure the initial GCN groundwork was probably developed by AMD during the Phenom days, but as soon as they got the first Fusion APUs working, their main efforts probably went into consoles.

If you look at AMD's ridiculously low R&D budget from the past ~6 years (and how most of it must have gone to Zen), it makes sense that AMD had Sony and Microsoft pay for the brunt of their GPU development through the semi-custom contracts.
It would also explain all the pent-up frustration of anyone who ran AMD's GPU division during the last 6-7 years, and their subsequent short-lived career positions.
Making GPUs for PC gaming, professional CAD and general compute out of left-overs from consoles must not have been an easy task. Even less so for those who were stuck with GlobalFoundries' underperforming process.
 
My guess:

Zen 2 - 8 cores/16 threads at 2.8 to 3.0 GHz
12.8 TF semi-custom Navi w/ some forward-looking features
24 GB GDDR6, 700-800 GB/s bandwidth
something in the way of an HDD/hybrid drive + cache solution to speed up load times and assist in streaming assets
November 2020 release
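For reference, the bandwidth figure can be sanity-checked from bus width and per-pin data rate. Assuming a hypothetical 384-bit bus (my assumption, not in the guess above; 24 GB fits it neatly as twelve 2 GB chips on twelve 32-bit channels), common GDDR6 speed grades land right in that range:

```python
def gddr_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: total pins times per-pin
    data rate, divided by 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# Hypothetical 384-bit GDDR6 bus at common per-pin speed grades:
print(gddr_bandwidth_gbs(384, 14))  # 672.0 GB/s
print(gddr_bandwidth_gbs(384, 16))  # 768.0 GB/s
```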
 
My guess:

Zen 2 - 8 cores/16 threads at 2.8 to 3.0 GHz
12.8 TF semi-custom Navi w/ some forward-looking features
24 GB GDDR6, 700-800 GB/s bandwidth
something in the way of an HDD/hybrid drive + cache solution to speed up load times and assist in streaming assets
November 2020 release

Specs look great, but you left out a price estimate. That looks expensive.
 
Word is PS5 graphics will be at about year-2006 top CGI level.
The whole graphics pipeline, not just fancy reflections!

Not a game demo. Looking at you, AVATAR!!!
 
Word from where? Provide a link, ideally to a reputable source.

Being honest, your contributions to this thread aren't in keeping with the standards required for a B3D technology forum. If you're not willing/able to contribute on a more technical level, even if just in sourcing and linking intelligent data, then you should really refrain from posting.
 
Stupid question (I try to limit them to one about every six months).

I believe the discussion here ruled out MCM for Navi. From what I remember, AMD stated they could use IF to put more than one chip on a package, but they did not (or at least not yet) have a way to make this transparent to developers. Code would have to target that package specifically, as if using multiple GPUs. That is something the professional graphics industry has already embraced, which is why it would be acceptable on professional cards.

My question is: does this limitation maybe only apply to Navi as a PC GPU? Not from a technology standpoint (those limitations obviously remain regardless), but since a console is a closed system produced in very large quantities for Sony and MS, would the concerns about programming be reduced when all the systems have it? Yes, you would still have to code specifically for it, but you would no longer be talking about a small segment of the PC market for years to come; it would be millions of units in very short order. I expect AMD could, and almost certainly will, rename the chips it produces for Sony and MS to something other than Navi, so any AMD statement like "no MCM for Navi" could easily be gotten around. I.e., the PS5 is not Navi, it is XRave!

If you could do it, would it be worth it? Chip production cost reduction? Power? Etc.?
 
Stupid question (I try to limit them to one about every six months).

I believe the discussion here ruled out MCM for Navi. From what I remember, AMD stated they could use IF to put more than one chip on a package, but they did not (or at least not yet) have a way to make this transparent to developers. Code would have to target that package specifically, as if using multiple GPUs. That is something the professional graphics industry has already embraced, which is why it would be acceptable on professional cards.

My question is: does this limitation maybe only apply to Navi as a PC GPU? Not from a technology standpoint (those limitations obviously remain regardless), but since a console is a closed system produced in very large quantities for Sony and MS, would the concerns about programming be reduced when all the systems have it? Yes, you would still have to code specifically for it, but you would no longer be talking about a small segment of the PC market for years to come; it would be millions of units in very short order. I expect AMD could, and almost certainly will, rename the chips it produces for Sony and MS to something other than Navi, so any AMD statement like "no MCM for Navi" could easily be gotten around. I.e., the PS5 is not Navi, it is XRave!

If you could do it, would it be worth it? Chip production cost reduction? Power? Etc.?

It's not a stupid question at all! I'm not a pro developer or remotely a hardware engineer, so take my reply with a pinch of salt, but this is my understanding:

The more you hide from the developer, the more performance becomes unreachable for any given solution, and the harder it is to understand or work around gotchas. Multi-GPU is great for highly independent workloads, or situations where your API already hides tons, solutions are simple, and you have plenty of money and power to throw at brute-force solutions. This is almost the opposite of where consoles are heading, with lower-level APIs and a laser-like focus on costs and power.

For highly distinct units (e.g. CPU, GPU, eDRAM, ...), separate dies used to make sense (though less so now); but with identical, repeated units trying to share data sets and execution across slow, power-hungry interconnects, I don't think it will make sense for any cost-conscious solution.

Perhaps new types of chip interconnect (edge to edge chip interconnects running at on-die speeds) can change this, but I don't think that's likely in the near term.
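A minimal sketch of what "coding for the package specifically" means in practice: with a non-transparent MCM, the developer, not the driver, decides how work and data are partitioned across dies. Plain Python as illustration; the function and the scanline-split scheme are hypothetical, just one common multi-GPU strategy:

```python
def render_frame_split(scanlines, num_dies=2):
    """Explicit split-frame rendering across GPU dies: the developer
    assigns each die its share of the frame's scanlines by hand."""
    buckets = [[] for _ in range(num_dies)]
    for i, line in enumerate(scanlines):
        # Interleaved assignment balances load, but every die still
        # needs the whole scene resident: memory gets duplicated, and
        # any shared intermediate data crosses the interconnect.
        buckets[i % num_dies].append(line)
    return buckets

halves = render_frame_split(list(range(8)))
# die 0 renders the even scanlines, die 1 the odd ones
```

The gotchas the post mentions live in the comments: duplicated memory and interconnect traffic are exactly what a transparent single-die design avoids.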

Also, XRave is perfect console marketing division chud. :LOL:
 
You need to include commentary on your links, like what it is for (so people know whether it's something they want to click or not).

In this case, this is the Backwards Compatibility patent filed years ago for changing timings of hardware for testing purposes, which we already know about. ;)
 
It seems to be a patent for a hardware test platform where the clocks can be messed around with to see if the timings screw with software, with particular attention to running the clocks faster. So, for example, testing an overclocked PS4 to see if the higher clock screws up games or not when it comes to BC of a PS4 title on a faster-clocked PS5.

And no, I've no idea what the value of the patent is either. ;) Are Sony hoping to stop MS testing BC on higher-clocked systems thanks to this patent?? :rolleyes:
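To make the timing concern concrete: legacy code that counts cycles instead of wall-clock time runs "too fast" on a higher-clocked successor, which is exactly the class of bug such a test rig would flush out. An illustrative sketch, not anything from the patent; the cycle count and clocks are made-up round numbers:

```python
def frame_delay_ms(busy_wait_cycles, clock_hz):
    """Wall-clock duration of a cycle-counted busy wait -- the kind of
    implicit timing assumption that breaks backwards compatibility."""
    return busy_wait_cycles / clock_hz * 1000

# Hypothetical loop count tuned for ~33.3 ms/frame on a 1.6 GHz CPU:
CYCLES = 53_333_333
print(frame_delay_ms(CYCLES, 1.6e9))  # ~33.3 ms: 30 fps, as designed
print(frame_delay_ms(CYCLES, 3.2e9))  # ~16.7 ms: game logic runs at 2x speed
```

Running a library of titles on hardware with adjustable clocks is a cheap way to find which games make this mistake before shipping a faster BC mode.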
 
And no, I've no idea what the value of the patent is either. ;) Are Sony hoping to stop MS testing BC on higher-clocked systems thanks to this patent?? :rolleyes:

Maybe it's a defensive patent (with an uncharted amount of paranoia), although Nintendo would probably have more of a knack for that sort of hardware scenario.
 
As much as everyone is hoping for a beefed-up CPU in next-gen consoles, I think we're more likely to see a modest improvement. Most likely an eight-core Zen "Light", or some derivative of Zen, with higher clocks than we had in previous consoles. Does anyone think there's a possibility we could see a 10- or 12-core CPU? Parallelizing CPU workloads seems to be the future for consoles, rather than the brute force of, say, a four- or six-core Zen CPU.
 
As much as everyone is hoping for a beefed-up CPU in next-gen consoles, I think we're more likely to see a modest improvement. Most likely an eight-core Zen "Light", or some derivative of Zen, with higher clocks than we had in previous consoles. Does anyone think there's a possibility we could see a 10- or 12-core CPU? Parallelizing CPU workloads seems to be the future for consoles, rather than the brute force of, say, a four- or six-core Zen CPU.
Honestly, I think that constructing and debugging a new CPU design on 7nm is a cost everyone would like to avoid. Far easier to simply use fewer cores if die area is an issue.
 
As much as everyone is hoping for a beefed-up CPU in next-gen consoles, I think we're more likely to see a modest improvement. Most likely an eight-core Zen "Light", or some derivative of Zen, with higher clocks than we had in previous consoles. Does anyone think there's a possibility we could see a 10- or 12-core CPU? Parallelizing CPU workloads seems to be the future for consoles, rather than the brute force of, say, a four- or six-core Zen CPU.
What in particular is wrong with using Zen as it is? Why does it need a customised, cut-back version?
 
Instead of massive CPU parallelism, what about indirect drawing, where the GPU feeds itself and reduces the pressure on the CPU?
 
Honestly, I think that constructing and debugging a new CPU design on 7nm is a cost everyone would like to avoid. Far easier to simply use fewer cores if die area is an issue.
I agree to some extent, but I do see a marketing issue with the next-gen console having fewer cores than its predecessor. The average consumer is not very tech-savvy, but they do notice things like "8-core CPU". Personally I think it will be another eight-core part, but I was just wondering if anyone else thought more CPU cores than last gen would be remotely possible.
What in particular is wrong with using Zen as it is? Why does it need a customised, cut-back version?
My line of thinking is that when the previous consoles were designed, more powerful AMD CPU tech was available at the time, but both Sony and Microsoft went for the cheaper, lower-powered Jaguars. I think the same trend will continue going forward.
 
As much as everyone is hoping for a beefed-up CPU in next-gen consoles, I think we're more likely to see a modest improvement. Most likely an eight-core Zen "Light", or some derivative of Zen, with higher clocks than we had in previous consoles. Does anyone think there's a possibility we could see a 10- or 12-core CPU? Parallelizing CPU workloads seems to be the future for consoles, rather than the brute force of, say, a four- or six-core Zen CPU.

The 2700E is Zen+ on 12 nm: 8C/16T @ 2.8 GHz in 45 W.

There’s no reason to cut back from that.

Second, why would they port it to 7 nm when they can use the already taped-out Zen 2?

Also, Bulldozer was a failure. They were right not to use an APU based around it.

Instead of massive CPU parallelism, what about indirect drawing, where the GPU feeds itself and reduces the pressure on the CPU?

This is what AMD attempted with primitive shaders, and now Nvidia with mesh shaders in Turing.
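The idea behind GPU-driven / indirect drawing: a GPU pass culls the scene and writes its own compacted draw-argument buffer, so the CPU submits a single indirect call instead of one call per object. A hypothetical sketch, with Python standing in for a compute shader; the names and the 1-D "frustum" test are made up for illustration:

```python
def gpu_cull_and_build_args(objects, view_min, view_max):
    """Simulates a GPU culling pass writing a compacted indirect
    draw-argument buffer; the CPU never sees per-object visibility."""
    draw_args = []
    for obj in objects:
        if view_min <= obj["pos"] <= view_max:  # trivial 1-D frustum test
            draw_args.append({"vertex_count": obj["verts"], "instance_count": 1})
    # The CPU issues one multi-draw-indirect consuming this buffer,
    # regardless of how many objects survived culling.
    return draw_args

scene = [{"pos": p, "verts": 36} for p in range(10)]
args = gpu_cull_and_build_args(scene, 2, 5)
# only objects at pos 2..5 survive: 4 draws from a single CPU submission
```

On real hardware this is a compute (or primitive/mesh/task) shader writing to a buffer consumed by a multi-draw-indirect command, which is exactly how the draw-submission load moves off the CPU.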
 
I agree to some extent, but I do see a marketing issue with the next-gen console having fewer cores than its predecessor.
No. You think people will refuse to buy a next-gen console because a number on the side of the box is less than the number on the side of the box of the console they bought 5+ years earlier? ;)
 