Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Another thing:

People are expecting the Navi family to go from RDNA to RDNA 2.0. But when has AMD ever made a major architectural change within a GPU family? Major architectural changes have had to wait for a newer family of GPUs. Both Microsoft and Sony have said they're using Navi. We know Navi does NOT have real hardware RT now, so why are you expecting this to happen? There is, of course, a possibility that genuine hardware RT is a customization (it's doubtful).

This may require defining which part of the architecture we're talking about, or what counts as major. At least since GCN and Sea Islands (Bonaire, Hawaii, Orbis, Durango, possibly Scorpio and Neo) there have been changes to command processor blocks and surrounding IP, with the consoles having a number of other low-level changes.
Vega, as far as AMD has used the name, had new instructions added with Vega 20.
Arcturus is in the GFX9 family, although it might not keep the Vega name given some significant extensions to the ISA and loss of graphics functionality.

Navi is GFX10, and there is a pair of ISA revisions, GFX1011 and GFX1012, that would seem to keep additional instructions within the same GFX10 family. GFX1011 has some mentions of BVH ray intersect instructions, and if this is not in error, that might mean some future variant of the Navi family could have it.
https://github.com/llvm-mirror/llvm...00adaf0#diff-ad4812397731e1d4ff6992207b4d38fa
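
For a sense of what a "BVH ray intersect" instruction would accelerate, here is a minimal sketch of the ray-vs-AABB slab test that sits at the heart of BVH traversal. This is purely illustrative Python; the real instruction's operands, data layout, and how many child boxes it tests per issue are not something the LLVM diff spells out.

Code:
# Illustrative only: the ray/box "slab" test that BVH traversal repeats at
# every node. A hardware intersect instruction would do this (plus triangle
# tests) in one operation; the data layout here is invented for clarity.

def ray_aabb_hit(origin, inv_dir, box_min, box_max, t_max):
    """Return True if the ray segment [0, t_max] crosses the box."""
    t_near, t_far = 0.0, t_max
    for axis in range(3):
        t0 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t1 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        if t0 > t1:
            t0, t1 = t1, t0          # keep t0 as the near slab crossing
        t_near = max(t_near, t0)
        t_far = min(t_far, t1)
        if t_near > t_far:
            return False             # slabs don't overlap: ray misses the box
    return True

# Example: a ray marching along +X against a unit box two units away.
print(ray_aabb_hit((0, 0, 0), (1.0, 1e9, 1e9), (2, -1, -1), (3, 1, 1), 10.0))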

Perhaps that means it's not a generation-ending change to add a few BVH instructions, or maybe even that some groundwork for the feature is already there (it's difficult to squint at a TMU in a die shot and know whether there's an extra set of microcode). New features can exist for internal evaluation in products that never expose them, such as how hyperthreading existed in the first Pentium 4 core despite not being exposed publicly until later products. Primitive shaders for Vega fall into a more problematic category: a feature we know was in the silicon but that never saw external use.

AMD seems to have become more willing to differentiate GPUs within a family in terms of features and instruction support with recent products, and that's before considering the flexibility semi-custom designs have. Sony's design leads are on the record describing how AMD gives them a list of current and upcoming features while the custom design process is underway.
 
Though on the discussion of redundancy: when I think about die shots of Nvidia and the current consoles, we know the 20XX-series chips are large (I haven't seen die shots of them). Aside from, say, CUs that could be set aside as redundant, what about the tensor cores or the RT cores? What if a silicon defect in those areas were detrimental enough to throw the chip away? Perhaps the chip doesn't need to be that large, but they had to include redundancy to bring costs down.
The tensor cores are ALU blocks within the SM. The RT cores are likely a fixed-function block hanging off the memory pipeline of the SM. A fault in them would be treated like a fault in a SIMD block or other logic: if it's not in an area with redundancy, the SM is deactivated.
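
To make the cost angle concrete, here is a back-of-envelope Poisson yield model (every number below is a made-up assumption, not an actual Turing or console figure) showing why being able to fuse off one faulty SM, whatever block inside it the defect hit, recovers a meaningful share of dies:

Code:
import math

# Back-of-envelope Poisson yield model. Every number is an assumption for
# illustration, not a real TU102/console figure.
die_area_cm2 = 7.5          # a ~750 mm^2 class die
defect_density = 0.2        # defects per cm^2 (assumed)
sm_share_of_die = 0.55      # fraction of die area occupied by SMs (assumed)

lam = die_area_cm2 * defect_density        # expected defects per die
perfect = math.exp(-lam)                   # no defects anywhere

# Dies with exactly one defect that happens to land inside an SM can be
# salvaged by disabling that SM (multi-defect cases are ignored here).
salvageable = lam * math.exp(-lam) * sm_share_of_die

print(f"fully working dies:        {perfect:.1%}")
print(f"salvageable with 1 SM off: {salvageable:.1%}")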

This labelling is clearly wrong; everything looks the same! There are 24 blocks in a section, and 6 of them all look the same. Is there a diagram that breaks down what the components are, like we can do with the consoles?
From Nvidia, I'm not aware of a diagram that gives a sensible breakdown. Most of the marketing pictures are more like pretty feature checklists with no regard for the implementation.

From the following, area comparisons between Turing chips with and without RT functionality suggest single-digit percent growth of the SM for RT functionality, which means the increase is even more modest for a chip whose area includes many other blocks.
https://forum.beyond3d.com/threads/nvidia-turing-architecture-2018.60890/page-11#post-2064676
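
As a quick sanity check on that point, here is the arithmetic with placeholder percentages (both inputs are assumptions, not measured numbers from the linked post):

Code:
# Toy arithmetic: a single-digit % increase per SM shrinks further at the
# die level, because SMs are only part of the chip. Both inputs are assumed.
sm_growth_for_rt = 0.08    # assume RT logic adds ~8% to each SM
sm_share_of_die = 0.50     # assume SMs cover ~half the die; the rest is
                           # memory controllers, display, video, I/O, etc.
die_growth = sm_growth_for_rt * sm_share_of_die
print(f"whole-die area growth from RT: {die_growth:.1%}")   # about 4%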
 
Another thing:

People are expecting the Navi family to go from RDNA to RDNA 2.0. But when has AMD ever made a major architectural change within a GPU family? Major architectural changes have had to wait for a newer family of GPUs. Both Microsoft and Sony have said they're using Navi. We know Navi does NOT have real hardware RT now, so why are you expecting this to happen? There is, of course, a possibility that genuine hardware RT is a customization (it's doubtful).

Why? Because AMD have stated in their presentation about future GPU/CPU tech that RDNA 2 is used in next-gen consoles. Ain't that clear enough?
 
I recall someone on this forum saying that, given the size of Project Scarlett's die, there could be as many as 100 CUs. Well, if RT cores are included, how many CUs can there be?
 
Well... I wasn't expecting this: MisterXmedia is commenting on my website.

The site is in Portuguese, but I wrote an article about some of his comments on Twitter about RT on Scarlett being better than on PS5, and somehow he decided to comment (in English, of course).

And what he is saying is even better than next-gen speculation. He claims the Xbox One and the One X have a second die with dedicated RT hardware, and that's how the RT shadows in Gears 5 were done!

This hardware is supposedly a primitive, limited version of the Scarlett RT hardware. He claims there is no way a 1.3 TFLOPS console could do those RT shadows without it!

I treated him nicely, told him that's just GPGPU usage, and reminded him that the PS4 had a game done 100% on GPGPU with RT (The Tomorrow Children). Even so, he ignored that and continues to claim there is RT hardware on a second die.
I asked him: "If this is true, why doesn't anything show up on X-rays?"

I was sure he would answer that with Klingon cloaking technology. But he just claims X-rays don't show everything...

OMG!
 
I recall someone on this forum saying that, given the size of Project Scarlett's die, there could be as many as 100 CUs. Well, if RT cores are included, how many CUs can there be?

You may as well forget that anyone said that. It was only even slightly plausible when the thought was that Navi CUs would be comparably sized to Vega CUs; now that we know they are quite a bit bigger, that configuration isn't remotely possible, RT cores or no.

My guess on CU count - somewhere around 40.
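
For a rough sanity check on that guess, here is a back-of-envelope using Navi 10's public figures (the console SoC size and non-GPU share below are assumptions, not leaked specs):

Code:
# Back-of-envelope CU budget from die area. Navi 10 figures are public;
# the console SoC size and the non-GPU share are assumptions.
navi10_area_mm2 = 251        # RX 5700 XT: 40 CUs on 7nm
navi10_cus = 40
soc_area_mm2 = 380           # assumed console SoC size
non_gpu_share = 0.45         # assumed: Zen 2 CPU cluster, memory PHY, I/O, media

gpu_budget_mm2 = soc_area_mm2 * (1 - non_gpu_share)
area_per_cu = navi10_area_mm2 / navi10_cus   # crude: shared blocks lumped into CUs
print(f"rough CU budget: {gpu_budget_mm2 / area_per_cu:.0f}")  # mid-30s, nowhere near 100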
 
Well... I wasn't expecting this: MisterXmedia is commenting on my website.
I should be surprised he's still around and probably has followers, but he's the definition of a conspiracy theorist.
So in that regard, it's not surprising at all.
Pull together patents, what people say, articles, add in a good dose of imagination, present it in a factual manner, and people will believe.

You had a lot of time on your hands to try to have a discussion with him. Not to sound bad, but there's no changing his views, however thoroughly they may be disproven.
 
And what he is saying is even better than next-gen speculation. He claims the Xbox One and the One X have a second die with dedicated RT hardware, and that's how the RT shadows in Gears 5 were done!

This hardware is supposedly a primitive, limited version of the Scarlett RT hardware. He claims there is no way a 1.3 TFLOPS console could do those RT shadows without it!
Well..
  1. On the first point, there's no second die; that's clearly visible. But more importantly, RT hardware wouldn't require a second die anyway (not that I'd fall on a sword over that). The amount of hardware needed for RT is _not_ so much that it requires that much more silicon.
  2. On the second point, Imagination has RT working on a mobile GPU chip, so he's wrong on that front. The GR6500 is 300 GFLOPS, which by comparison is a fraction of XBO. *lol edit: I guess this is a point in his favour. But honestly, you still need to shade stuff.
  3. On a third point, games have been leveraging some ray tracing here and there for different things. In the case of Gears, RT shadows are only used at a distance, which is much lighter and less intensive than doing models up close, due to the amount of geometry you have to account for.
If you had to bet on which of the Xboxes had RT hardware, I wouldn't bet on the OG one. The entire system was developed with Kinect in mind; they were lucky to fit in backwards compatibility.

The Xbox One X is much more forward-looking; if I had to back that particular conspiracy, I would have dumped my chips on the X1X. Unfortunately, he doubled down on stupidity.
 
Those theories of hidden silicon are beyond moronic. What the fuck kind of megacorporation would put powerful processing hardware in a console, giving them a selling point and a processing advantage over their rival, and then not tell people?! You have to be a special kind of stupid to believe such mindless crap founded on zero logic. You can make up whatever magical theories you like about silicon being stacked or undetectable or whatever, but the idea that the developer documents don't mention it, and the marketing materials don't mention it, because the parent company doesn't want people to know about its $xxx million investment in that hardware, has no rationale. Not even a stupid rationale like the X-ray bollocks.
 
Select Meme:
Secrets: Why Microsoft doesn't want you to buy this console.

You won't believe how the Sony fanboys reacted when they found out about this.
 
https://boards.greenhouse.io/sonyinteractiveentertainmentplaystation/jobs/1929657

A job ad from Sony; they seem a bit overconfident.

PlayStation is growing rapidly and needs your help to build next generation cloud infrastructure and build awesome tools for our team. In this position, you will be part of a top-notch engineering team focused on delivering our container orchestration solution (Kubernetes) to the organization.

You will be managing distributed systems that are powering 100+ million PS4 consoles that deliver immersive gaming experiences. You will also be one of the leaders of an elite team that is super excited to launch the upcoming world's fastest console (PS5) in 2020. You will love working at PlayStation if you have a strong passion for systems, availability, and resiliency.
 