Next Generation Hardware Speculation with a Technical Spin [post E3 2019, pre GDC 2020] [XBSX, PS5]

The ISA is a significant part of what is considered the architecture. The term is perhaps more crowded with GPUs, in particular AMD's GPUs, because AMD lumped virtually everything in the SoC together as being part of the architecture. Again.

Perhaps they're talking about a situation similar to the transition from VLIW4 to GCN, where Southern Islands had hardware similarities to the prior architecture outside of the CU array and ISA.
Sea Islands would implement more of the front-end features that would be most recognizable today, and some additional changes to the ISA as well.

Even if this happens in the case of RDNA 2, I would need to see how big a departure it would be. Even within the compute portion of the architecture, there are elements or themes that are still reminiscent of GCN. Some of the particulars may stem from inherited elements, such as the number of wavefronts and barriers per CU, or, for example, similar rules for wait counts and the same general pipeline organization for the instruction types.
RDNA's reveal definitely came across as a mixed message as they tried to grapple with the nomenclature for differentiating an architecture from its ISA.

The only comment I can add is that perhaps the advent of CDNA will leave RDNA less constrained going forward.


He's on Discord, you should hop on to see all the juicy dumps from real developers!

Currently, DXR only defines fixed-function ray-triangle intersection. To do intersection tests against fancier geometric representations such as spheres, voxels, signed distance fields, or really any type of procedural geometry, you would have to 'emulate' them using ray-AABB intersection tests and intersection shaders. The patent describing the "ray intersection engine" mentions the capability to do those "ray-box intersection tests" along with the ray-triangle intersection tests. Highly useful for doing voxel rendering.
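For reference, here is a minimal sketch of the kind of ray-box (AABB) test being discussed, using the standard slab method. The struct and function names are made up for illustration; they are not taken from the patent or the DXR spec. The point is simply that a voxel or procedural-geometry tracer ends up running enormous numbers of these small tests, which is why having them in fixed-function hardware next to the triangle test matters.

```cpp
// Illustrative slab-method ray vs. axis-aligned bounding box test.
// All names/structures are hypothetical, not from any API or patent.
#include <algorithm>
#include <utility>
#include <cfloat>

struct Ray  { float ox, oy, oz; float dx, dy, dz; };   // origin and direction
struct AABB { float min[3], max[3]; };

// Returns true if the ray hits the box within [tMin, tMax].
bool intersectRayAABB(const Ray& r, const AABB& b, float tMin = 0.0f, float tMax = FLT_MAX)
{
    const float origin[3] = { r.ox, r.oy, r.oz };
    const float dir[3]    = { r.dx, r.dy, r.dz };

    for (int axis = 0; axis < 3; ++axis)
    {
        // Distances to the two slab planes on this axis
        // (a zero direction component yields +/-inf, which still works here).
        const float invDir = 1.0f / dir[axis];
        float t0 = (b.min[axis] - origin[axis]) * invDir;
        float t1 = (b.max[axis] - origin[axis]) * invDir;
        if (invDir < 0.0f) std::swap(t0, t1);

        // Shrink the valid interval; if it becomes empty, the ray misses the box.
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMax < tMin) return false;
    }
    return true;
}
```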
Sounds like he explained information that was already there versus unveiling new info.
 
Well, forget about ERA, GAF, and pastebin. Discord is the new source of real leaks! :D

I wonder how much of this is compute utilizing the triangle-intersection hardware, versus a kind of HW-accelerated general traversal interface that is compatible with custom data structures.
If true, I'm excited. And I wonder which platforms will have this feature...

Maybe it was a guy talking on AnandTech.

https://forums.anandtech.com/threads/ray-tracing-is-in-all-next-gen-consoles.2571546/#post-39954331

https://forums.anandtech.com/threads/ray-tracing-is-in-all-next-gen-consoles.2571546/#post-39954869

I can't say too much about this, but the next step will be the custom BVHs.

It reminds me of something here; it seems the PS5 has it, and probably the XSX too.
 
... all this gives me hope RT could be fixed soon.
I gave up, and just accepted being stuck in the middle ages for a long time.
But there's hope...
:)

Unfortunately this also means losing hope that 'consoles forcing RT into the minimum PC spec' could quickly end the current situation of double work and uncertainty.
It will take a long time until vendors, APIs, and devs settle on something everybody agrees upon. On PC, RT remains a curse.
 

I hope it will improve everywhere. And I think you are onto something with your surfel idea. I am not a dev, but if I understand correctly, the worst random memory accesses in raytracing come from diffuse GI and glossy reflections, and it seems surfels solve that problem. A great solution to combine with raytracing for specular, for example.

https://graphics.pixar.com/library/PointBasedColorBleeding/paper.pdf
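To make the surfel idea a bit more concrete, here is a very rough sketch of the brute-force gather step in the spirit of the linked point-based color bleeding paper: surfels are disk-shaped samples carrying position, normal, area and radiance, and a shading point sums their approximate form-factor-weighted contributions. Real implementations cluster surfels in an octree and account for occlusion; all names below are illustrative and not taken from the paper.

```cpp
// Very rough sketch of gathering diffuse bounce light from a list of surfels
// (disk-shaped point samples). Brute force, no occlusion, no clustering.
#include <vector>
#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

struct Surfel {
    Vec3  position;
    Vec3  normal;
    float area;       // disk area of the surfel
    Vec3  radiance;   // outgoing (direct) radiance stored on the surfel
};

// Approximate irradiance arriving at a shading point from all surfels.
Vec3 gatherSurfelIrradiance(Vec3 point, Vec3 normal, const std::vector<Surfel>& surfels)
{
    Vec3 irradiance = { 0.0f, 0.0f, 0.0f };
    for (const Surfel& s : surfels)
    {
        Vec3  toSurfel = sub(s.position, point);
        float dist2    = std::max(dot(toSurfel, toSurfel), 1e-6f);
        float dist     = std::sqrt(dist2);
        Vec3  dir      = scale(toSurfel, 1.0f / dist);

        // Cosine terms at the receiving point and at the emitting surfel.
        float cosReceiver = std::max(dot(normal, dir), 0.0f);
        float cosEmitter  = std::max(-dot(s.normal, dir), 0.0f);
        if (cosReceiver <= 0.0f || cosEmitter <= 0.0f) continue;

        // Disk-to-point form factor approximation (no visibility test).
        float formFactor = (s.area * cosReceiver * cosEmitter)
                         / (3.14159265f * dist2 + s.area);

        irradiance.x += s.radiance.x * formFactor;
        irradiance.y += s.radiance.y * formFactor;
        irradiance.z += s.radiance.z * formFactor;
    }
    return irradiance;
}
```

Because the surfel list is a compact, coherent array rather than scattered scene geometry, this gather avoids most of the random memory access that makes diffuse GI rays so expensive.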
 
What's the criterion for a full departure?
RDNA1 made the effort to move parts of its ISA back to console-era encodings, when the nearest GPU to depart from was Vega. AMD still seems concerned with some level of compatibility with GCN, so I haven't seen the claim that Navi 2x is changing this.
I'm not sure a full departure is an optimal solution in the face of backwards compatibility.
Sony would have the most examples of struggling with significant architectural discontinuities, such as the Cell to x86 transition, which I'd argue is at a minimum a large if not full departure.



New?


Perhaps it is time...
https://forum.beyond3d.com/posts/711822/
Your post/link was worth a like just for being able to see this old post:
There just aren't enough games out there where you get to fight Hitler riding a T-Rex... :(

Brilliant!!
 
John Carmack not a fan of dedicated chips for audio or physics in nextgen consoles


I heard exactly the opposite from an audio engineer, meaning that when compute is centralized everything goes to graphics and audio is treated as a second-class citizen. Same for physics on the GPU: outside of visual effects (particles, cloth simulation, maybe fluids) it was never used in AAA games, and because of the weak Jaguar, in-game physics were better on PS3/360.

EDIT:

And GPUs are not good at certain audio tasks.

EDIT: presentation about HSA and audio

https://fr.slideshare.net/DevCentralAMD/mm-4085-laurentbetbeder


[Slide: "Designing a Game Audio Engine for HSA" by Laurent Betbeder]
 
Christophe Balestra (ex-technical director of ND) said at an IGDA event in Paris, which Quaz51 (a B3D forumer) attended, that if he could change one thing about the PS3, it would be adding an audio chip so the SPUs weren't used for audio.
 
Why? SPUs should be ideal for audio - despite the number of lame gags and ill-informed descriptions considering them to be just DSPs.
 

Because it took resources from graphics. For AAA games, graphics is too important; better to use the CPU (x86 cores) and a DSP for audio, or the CPU for physics. It was the Naughty Dog technical director saying this; he knows it worked well, but more SPU time for graphics was the better solution for him.

I am not surprised the audio engineer at Ninja Theory is hyping the audio chip.

Never fight for resources with graphics; you will always lose, outside of audio for VR.
 
With the Jaguar cores, that doesn't surprise anyone. With Zen 2 cores, 8 of them with HT, audio could be less of an issue.
Also, a dedicated audio module on the GPU from AMD?
 
Carmack hasn't developed a game since Rage, which wasn't even in this console generation. I don't think he deserves to be held in the same technical esteem when it comes to games as he once was. There are other reasons I wish he would drop out of the spotlight, but that's beyond the scope of this thread.
 

Is Carmack in some kind of spotlight? Last I checked, his Twitter feed was full of the AI stuff he is working on. Before that it was all about VR and Oculus. Or are we at a point where even one opinion a year is too much and people have to be silenced?
 
Who said anything about anyone being silenced?
 
Audio hardware is more of a no-brainer to me than RT hardware. 90% of games are guaranteed to have at least half a dozen audio channels going off simultaneously 90% of the time. And very few devs, if any at all, are about to come up with some new way of doing audio. Game audio has been a pretty standardised, "solved" problem for decades. I mean, there are different levels of fanciness in how a game decides which samples to play and their properties (volume, pitch modulation, reverb, speed, etc.), but the actual playback of the samples is what I think can easily be HW accelerated with very little wasted silicon. I just don't think we need hundreds of voices. If a game wants to go that far, then it can of course do some of the audio in software and feed that software mix to one of the HW channels.
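As a concrete illustration of that last point, here is a tiny, hypothetical software pre-mix: sum the active voices into one buffer (volume only, no pitch shifting, resampling or effects) and hand that single mix to a hardware channel. Names and structure are made up for the example.

```cpp
// Tiny sketch of the "mix many software voices, feed the result to one HW channel"
// idea described above. All names are hypothetical.
#include <vector>
#include <cstddef>
#include <algorithm>

struct Voice {
    const float* samples;   // mono source data, normalized to [-1, 1]
    std::size_t  length;    // number of samples
    std::size_t  cursor;    // current playback position
    float        volume;    // linear gain
};

// Mix all active voices into 'out' (frameCount mono samples); the caller would
// then submit 'out' to a single hardware-accelerated channel.
void mixVoices(std::vector<Voice>& voices, float* out, std::size_t frameCount)
{
    std::fill(out, out + frameCount, 0.0f);

    for (Voice& v : voices)
    {
        std::size_t remaining = (v.cursor < v.length) ? v.length - v.cursor : 0;
        std::size_t count     = std::min(frameCount, remaining);
        for (std::size_t i = 0; i < count; ++i)
            out[i] += v.samples[v.cursor + i] * v.volume;
        v.cursor += count;
    }

    // Simple hard clip so the summed mix stays in range for the HW channel.
    for (std::size_t i = 0; i < frameCount; ++i)
        out[i] = std::clamp(out[i], -1.0f, 1.0f);
}
```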
 
Is Carmack in some kind of spotlight? Last I checked, his Twitter feed was full of the AI stuff he is working on. Before that it was all about VR and Oculus. Or are we at a point where even one opinion a year is too much and people have to be silenced?

No one said silenced. They don't have to be silenced; it's just that everyone who reads it should be well informed about how out of touch they are with the topic at hand.
 
Audio hardware is more of a no-brainer to me than RT hardware. 90% of games are guaranteed to have at least half a dozen audio channels going off simultaneously 90% of the time. And very few devs, if any at all, are about to come up with some new way of doing audio. Game audio has been a pretty standardised, "solved" problem for decades. I mean, there are different levels of fanciness in how a game decides which samples to play and their properties (volume, pitch modulation, reverb, speed, etc.), but the actual playback of the samples is what I think can easily be HW accelerated with very little wasted silicon. I just don't think we need hundreds of voices. If a game wants to go that far, then it can of course do some of the audio in software and feed that software mix to one of the HW channels.
The expectation of Atmos audio will probably be leveled at a lot of games this time around. A 5.1 mix isn't enough anymore.
 
Who said anything about anyone being silenced?

So what exactly did you mean when you wrote:

There are other reasons I wish he would drop out of the spotlight

Is Carmack somehow in the spotlight, tweeting/arguing/... actively about gaming? If he tweets once a year about some gaming-related topic and that gets some people upset, isn't that pretty much wanting to silence him, so the tweets go from 1 to zero?
 