Next Generation Hardware Speculation with a Technical Spin [post E3 2019, pre GDC 2020] [XBSX, PS5]

Status
Not open for further replies.
I'm more inclined to believe this is the case, just shape 2.0, beefed up.
It would make BC simpler and any critiques devs had could be addressed.
Unless AMD's solution is a whole lot better, I don't see why they would pay for it given all the in-house knowledge they have, etc.
AMD's TrueAudio was using Tensilica DSPs too ;) TrueAudio Next on the other hand uses shader units.
 
Pretty sure it wasn't an off-the-shelf part.
By expanding on SHAPE they could maintain BC and add other features.
I'm sure there are good reasons to go with AMD's shader route, just pointing out I can see good reasons not to.
 
It was based on one Tensilica HiFi EP and two HiFi 2 DSPs, off the shelf IP-blocks from Cadence.
Here's a good summary on it: https://web.archive.org/web/2014071...now_about_amd’s_new_trueaudio_technology_2013
From the article:
MPC: I'm guessing Xbox One will not have TrueAudio as Microsoft has its SHAPE audio engine but what about PS4?

AMD: Sony and Microsoft would be in a better position to comment on the functionality of their audio hardware. I wouldn’t want to answer on their behalf.
I thought SHAPE had in-house work done on it and included Kinect functionality too.
Are you saying SHAPE was just AMD's TrueAudio implementation rebranded? Rebranding wouldn't surprise me, but I don't ever remember hearing that. So even though this may be an interesting article on TrueAudio, it doesn't confirm that to me.

In fact I swear I remember hearing that wasn't the case. But that was a while ago now.
Oh, and thanks for the article, I'll have to read it a bit more thoroughly as I only had time to skim it.

Edit:
Oh, I see you're saying it's Tensilica based; yeah, I knew that. But is it the same as what TrueAudio was?
Functionality, inputs, outputs, etc.
The benefit of going that route again would be maintaining BC and improving on it, compared to going the shader route.
 
IIRC SHAPE is just beefier
Sorry if the posts are a little incoherent, my head doesn't seem to function at its full capacity at the moment for reasons.
My point was that both MS's previous solution and AMD's dedicated solutions have been using Tensilica DSPs, even if the exact configuration is different and even if MS has had some custom hardware on top of that (unconfirmed AFAIK). So I don't see any reason they would suddenly jump to something else, especially when there haven't been any recent breakthroughs on the DSP IP front as far as I'm aware (not saying I couldn't have missed one if there was).
 
TrueAudio was two Tensilica DSPs, as appeared in PS4. SHAPE was four DSPs, with two dedicated to voice work, IIRC. When Kinect was dropped, there was talk here hoping MS would repurpose those DSPs for audio processing...
Yeah, I don't think MS opened it up for general use, which was a shame.

So I guess my question is, would it still be better to go the DSP/SHAPE 2.0 route or AMD's shader route?
I think a DSP doesn't take up much space, but it's space nonetheless.
Or even some parts done on SHAPE and others via shader, if they need flexibility and are less latency-sensitive?
 
So I guess my question is, would it still be better to go the DSP/SHAPE 2.0 route or AMD's shader route?
The quote says it's dedicated hardware acceleration. CUs for audio would be repurposed hardware rather than dedicated, IMO.

I think a DSP doesn't take up much space, but it's space nonetheless.
As per the same discussion for PS5's audio, if the efficiency per area is greater than TrueAudio Next doing the same job, then it's a win for the consoles, which 1) have a tighter budget than PC and 2) can guarantee it'll be used, unlike DSPs on graphics cards that'll largely go overlooked.

Indeed, AMD may revisit TrueAudio hardware in their GPUs if they see TrueAudio Next being used in games, and then ship drivers to use their DSPs once again on cards that have them.
 
I've read, but not really understood, that RT's data structures can be used to accelerate sound propagation, so isn't it better to keep it all on the same CU instead of on a separate DSP?
 

My understanding is that ray tracing is not part of AMD's TrueAudio Next. TrueAudio Next is an SDK that deals purely in calculating computationally expensive effects like convolution reverb. Occlusion, partial occlusion, propagation, and calculating impulse responses can be done with ray casting, but those rays can be cast on the GPU or CPU. AMD typically recommends their Radeon Rays library, but the two are not tightly coupled. I don't think there's much need to do the ray casting and the audio calculation on the same CU, vs doing some of it "off chip."
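To make the split concrete, here's a minimal NumPy sketch of convolution reverb, the kind of computationally expensive effect described above. This is an illustration of the general FFT-convolution technique, not AMD's actual TrueAudio Next implementation; the function name and structure are my own.

```python
import numpy as np

def convolution_reverb(dry, impulse_response):
    """Apply a room's impulse response to a dry signal via FFT convolution.

    This is the expensive step a GPU/DSP would accelerate; the ray
    casting that *produces* the impulse response is a separate concern
    and can run on the CPU or GPU independently.
    """
    n = len(dry) + len(impulse_response) - 1
    # FFT convolution: O(n log n) instead of O(n * m) direct convolution.
    size = 1 << (n - 1).bit_length()  # next power of two >= output length
    wet = np.fft.irfft(
        np.fft.rfft(dry, size) * np.fft.rfft(impulse_response, size), size
    )
    return wet[:n]  # trim zero-padding back to the linear-convolution length
```

The result is identical to `np.convolve(dry, impulse_response)`, but for the long impulse responses used in reverb (tens of thousands of samples) the FFT route is dramatically cheaper, which is why it's the usual target for hardware offload.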
 
As per the same discussion for PS5's audio, if the efficiency per area is greater than TrueAudio Next doing the same job, then it's a win for the consoles, which 1) have a tighter budget than PC and 2) can guarantee it'll be used, unlike DSPs on graphics cards that'll largely go overlooked.
Sorry, I've not been following. I only come back and dip in now and again; all the noise about insiders and the value of leaks puts me off coming here in general.

For me the difference between PS4 to PS5, compared to XO to XSX, and this discussion, is that the XO had a comparatively beefy sound processor, and that moving it forward with tweaks may mitigate BC issues and build on what they have.

If I remember correctly, SHAPE got that beefy due to 'spare' silicon space that got used.

So I could easily see them not going the AMD route, and using fixed function and shaders/RT together.
So not the general AMD solution that was mentioned here, but bespoke.
 
I don't imagine the audio hardware was programmed without the use of an API. I don't see any reason why they shouldn't be able to change the hardware behind the API without issue.
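The point above is essentially the classic backend-abstraction pattern: keep the API stable and BC survives a hardware swap. A minimal sketch, with entirely hypothetical names (no console audio API actually looks like this):

```python
from abc import ABC, abstractmethod

class AudioBackend(ABC):
    """Hardware-facing side: could be a DSP block, reserved CUs, or CPU."""
    @abstractmethod
    def submit(self, voices: list) -> str:
        ...

class DSPBackend(AudioBackend):
    def submit(self, voices: list) -> str:
        return f"DSP mixed {len(voices)} voices"

class ShaderBackend(AudioBackend):
    def submit(self, voices: list) -> str:
        return f"reserved CUs mixed {len(voices)} voices"

class AudioAPI:
    """The stable surface game code targets. As long as this contract
    holds, the backend can change between hardware generations without
    breaking old titles."""
    def __init__(self, backend: AudioBackend):
        self._backend = backend

    def play(self, voices: list) -> str:
        return self._backend.submit(voices)
```

So a BC title calling `AudioAPI.play()` never knows whether a SHAPE-style DSP or a CU reservation is doing the mixing, which is the crux of the "change the hardware behind the API" argument.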
 
So I could easily see them not going the AMD route, and using fixed function and shaders/RT together.
So not the general AMD solution that was mentioned here, but bespoke.
There is no general AMD solution that uses dedicated hardware. There's speculation that if both MS and Sony are using dedicated audio hardware (which they say they are), then perhaps that'll become an AMD solution. Both PS4 and XB1's audio solutions were effectively AMD's TrueAudio, and using the same or similar DSPs would support BC as you suggest, but that could be bespoke, or could be a more general solution on offer. Perhaps AMD offers audio DSPs to all their semi-custom clients?
 
I don't imagine the audio hardware was programmed without the use of an API. I don't see any reason why they shouldn't be able to change the hardware behind the API without issue.
Was there a comparison between the fixed function offerings and compute units? Just wondering what sort of hardware devs need/want for game audio and whether it makes sense to reserve something as functional as a CU vs the die space for beefing up a more dedicated solution.

There is no general AMD solution that uses dedicated hardware. There's speculation that if both MS and Sony are using dedicated audio hardware (which they say they are), then perhaps that'll become an AMD solution.

What if "reserving" CUs is just market-speak for "dedicated". xD


PS5 = 40+4 (+4 disabled) CUs -> 6 WGP (12 CUs) per array * 4 arrays = 48 CUs
Series X = 56+4 (+4 disabled) CUs -> 8 WGP (16 CUs) per array * 4 arrays = 64 CUs

:runaway::runaway::runaway::runaway::runaway:

Oh not the baseless section. :|
 
What if "reserving" CUs is just market-speak for "dedicated". xD
That is a possibility. Unlike Sony, who claim audio hardware, this snippet states dedicated acceleration, which could be CUs dedicated to audio. A bit weird to present it that way, though, and to reserve CUs for audio in games that won't want it, where AMD's TrueAudio Next solution can scale.
 
Was there a comparison between the fixed function offerings and compute units? Just wondering what sort of hardware devs need/want for game audio and whether it makes sense to reserve something as functional as a CU vs the die space for beefing up a more dedicated solution.

What if "reserving" CUs is just market-speak for "dedicated". xD

I haven't seen anything directly comparing TrueAudio vs TrueAudio Next, but I think the use case may actually be slightly different. TrueAudio Next allows a CU reservation that is fed with a "real-time queue" from the GPU scheduler. I believe you can now have two reservations up to a total of 20% of the GPU.

The use case is essentially targeting VR, where audio is more important than it would be in a typical 3D game, so effects like accurate reverb become far more important for modeling the physical environment and distance in an accurate way. I'm by no means an audio expert, and just relied on whatever documentation AMD has available, but it seems like the intent is to do computationally expensive processing that is not typically done in non-VR games.

@3dilettante made a nice post where he worked out that you could maybe build a processor for DSP that would be roughly 40% of the size and not have the issue a GPU would have with large wave sizes. It was a good speculative post. Comes down to how many reverbs need to be calculated, and how many audio sources would share the same impulse response in the same environment so they could be executed in parallel on the GPU.

Edit: I personally think the TrueAudio Next solution is interesting because you can use the processing power for graphics in the cases where games do not require such complex audio work, instead of having an audio processor sitting idle. But I guess we'll see which way it goes.
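The "sources sharing an impulse response" point above can be sketched quickly. This is a NumPy stand-in for the GPU batching idea, not real TrueAudio Next code; the function name and shapes are my own assumptions. Each row maps naturally onto a wide SIMD wave, which is also why, with only a few sources, most lanes of a 32/64-wide wave would sit idle: the underutilization argument for a narrower dedicated DSP.

```python
import numpy as np

def batch_reverb(sources, impulse_response):
    """Convolve many audio sources with one shared impulse response.

    Sources in the same room share an impulse response, so the IR is
    transformed once and the per-source FFTs are batched along axis 0,
    standing in for parallel execution across GPU waves.
    """
    sources = np.atleast_2d(sources)              # (num_sources, n_samples)
    n = sources.shape[1] + len(impulse_response) - 1
    size = 1 << (n - 1).bit_length()              # next power of two
    ir_f = np.fft.rfft(impulse_response, size)    # transform the IR once
    wet = np.fft.irfft(
        np.fft.rfft(sources, size, axis=1) * ir_f, size, axis=1
    )
    return wet[:, :n]                             # (num_sources, n)
```

The more sources that can be batched like this, the better the reserved-CU route amortizes; a single dialogue line in a quiet scene would leave most of the hardware idle either way.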
 
hm... so worthwhile to just beef up the existing DSP solutions?

I suppose in the context of sharing with graphics, it'd be an optional sacrifice for dynamic resolution from a developer point of view. Maybe? :confused: I suppose RT for graphics is optional in itself too. :p

Better dynamic audio systems would help the cinematic experience. :3 The Gears method with baked audio (Triton) was neat, but it's still baked (I suppose not incompatible with a linear experience).

-
I digress.

That is a possibility. Unlike Sony, who claim audio hardware, this snippet states dedicated acceleration, which could be CUs dedicated to audio. A bit weird to present it that way, though, and to reserve CUs for audio in games that won't want it, where AMD's TrueAudio Next solution can scale.
Back to hybrid solution, I guess. hrm.... anyways. :p
 
Edit: I personally think the TrueAudio Next solution is interesting because you can use the processing power for graphics in the cases where games do not require such complex audio work, instead of having an audio processor sitting idle. But I guess we'll see which way it goes.
Both solutions have their pros and cons - same as the dedicated RT hardware debate, versus doing it all on compute. And the reason for the inclusion is the same. Not every game will need RTRT performance, so for those, RTRT hardware is a waste. But games that do need RTRT need it to be fast, and these will be the landmark, core console experiences, so for those you make the choice to include the RTRT hardware. Same I think with audio. Looking at Cerny's description, he wants audio to become immersive. Sure, not all games need that, but for those who do, you don't want a compromised experience undermining that target of being truly immersive. Committing to dedicated audio processors ensures your top-tier, landmark experiences provide the audio experience you want for your console. For games that don't need that, they'll just have a bit less compute to play with as a result.
 