Next Generation Hardware Speculation with a Technical Spin [post E3 2019, pre GDC 2020] [XBSX, PS5]

To scale performance from the low-end to the high-end, different GPUs can increase the number of shader arrays and also alter the balance of resources within each shader array.

hm.... so I guess they can do 3 arrays per shader engine? (e.g. 15 WGPs per SE * 2 SEs * 2 CUs per WGP = 60 CUs)

bugger. :p

(and up to 16 ROPs per shader array)
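Back-of-the-envelope, assuming RDNA's 2 CUs per WGP and Navi 10's 5 WGPs per array (the 3-arrays-per-SE part is just my guess):

```python
# RDNA building blocks: 1 WGP (workgroup processor) = 2 CUs.
# Total CUs = shader engines * shader arrays per engine * WGPs per array * 2.

def total_cus(shader_engines, arrays_per_engine, wgps_per_array):
    return shader_engines * arrays_per_engine * wgps_per_array * 2

# Navi 10 for reference: 2 SEs, 2 arrays per SE, 5 WGPs per array.
print(total_cus(2, 2, 5))  # 40 CUs

# The 3-arrays-per-SE guess: 2 SEs * 3 arrays * 5 WGPs = 30 WGPs.
print(total_cus(2, 3, 5))  # 60 CUs
```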
 
What if "reserving" CUs is just market-speak for "dedicated". xD


PS5 = 40+4 (+4 disabled) CUs -> 6 WGP per array * 4 arrays = 48 CU
Series X = 56+4 (+4 disabled) CUs -> 8 WGP per array * 4 arrays = 64 CU
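Checking those numbers (pure speculation, obviously; the reserved/disabled split is just the rumour above):

```python
# Speculated configs: 4 shader arrays, N WGPs per array, 2 CUs per WGP.
for name, wgps, game, reserved, disabled in [("PS5", 6, 40, 4, 4),
                                             ("Series X", 8, 56, 4, 4)]:
    physical = 4 * wgps * 2
    assert physical == game + reserved + disabled
    print(f"{name}: {physical} physical CUs = {game} for games "
          f"+ {reserved} reserved + {disabled} disabled")
```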

:runaway::runaway::runaway::runaway::runaway:

Oh not the baseless section. :|

/facepalm

You must like running through burning buildings with gasoline drawers on.
 
The summary of this patent is really obtuse. Can anyone explain exactly what they’re getting at here, other than conserving frame buffer space?

http://www.freepatentsonline.com/WO2020036214A1.html

Provided are an image generation device, an image generation method and a program with which it is possible to conserve the storage capacity of a frame buffer for storing the pixel values of pixels for which it has been predetermined that prescribed values are to be set as the pixel values thereof. An acquisition management data storage unit (30) stores acquisition management data. A pixel value storage unit (32) stores pixel values of pixels for which it is determined that the pixel values thereof are to be acquired from the pixel value storage unit (32). An acquisition determination unit (34) determines whether to acquire pixel values from the pixel value storage unit (32) for each pixel on the basis of the acquisition management data. For the pixels for which it was determined that the pixel values thereof are to be acquired from the pixel value storage unit (32), a pixel value determination unit (38) determines the pixel values to be acquired from the pixel value storage unit (32) as the pixel values of these pixels. For the pixels for which it was determined that the pixel values thereof are not to be acquired from the pixel value storage unit (32), the pixel value determination unit (38) determines prescribed values as the pixel values of these pixels.
 
From your quoted text, it seems to be describing selectively rendering pixels, or fetching precomputed (temporal?) data. You store values for some pixels, and an 'acquisition determination unit', some selection algorithm, chooses whether to fetch the pre-existing/precomputed data or not.

The three parts are:
Pixel value storage > holds the predetermined values
Acquisition determination > do we want to fetch some predetermined pixel data?
Value determination unit > what value to return; could be derived from several values within the storage, such as bilinear filtering.

Sounds like CLUT for the framebuffer??
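If I'm reading the abstract right, a minimal sketch of the mechanism might look like this (the names and the 1-bit-per-pixel mask layout are my guesses, not the patent's):

```python
# Sketch of the patent's resolve step: per-pixel "acquisition management"
# data says whether a pixel's value lives in the (compacted) pixel value
# store, or should just be filled with a prescribed constant (e.g. the
# clear colour). Pixels that always resolve to the prescribed value never
# occupy frame buffer storage, which is where the capacity saving comes in.

PRESCRIBED_VALUE = (0, 0, 0, 255)  # assumed: opaque black / clear colour

def resolve_frame(width, height, acquire_mask, pixel_value_store):
    """acquire_mask: 1 flag per pixel; pixel_value_store: values only for
    pixels whose flag is set, in scanline order."""
    frame = []
    store_index = 0
    for i in range(width * height):
        if acquire_mask[i]:                 # acquisition determination unit
            frame.append(pixel_value_store[store_index])
            store_index += 1                # value fetched from the store
        else:
            frame.append(PRESCRIBED_VALUE)  # value was never stored at all
    return frame

# 4x1 "image": only pixels 1 and 3 have stored values.
mask = [0, 1, 0, 1]
store = [(255, 0, 0, 255), (0, 255, 0, 255)]
print(resolve_frame(4, 1, mask, store))
```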
 
Latencies?
I think, in the grand scheme of processing and the nature of things, audio can be more latent than graphics.

As in, audio can probably be acceptable at 16 ms, but some gamers will want graphical refresh in the 120+ Hz range.

30 fps audio might be fine for a 60 fps game. That sort of thing.
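Rough numbers, assuming a typical 48 kHz output (the block sizes are just illustrative):

```python
# Frame time vs. audio block latency, back of the envelope.
SAMPLE_RATE = 48_000  # Hz, a common output rate

for fps in (30, 60, 120):
    print(f"{fps} fps -> {1000 / fps:.1f} ms per video frame")

for block in (256, 512, 1024):  # samples per audio processing block
    ms = 1000 * block / SAMPLE_RATE
    print(f"{block}-sample block @ 48 kHz -> {ms:.1f} ms per block")
```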
 
I was under the impression audio can be very latency sensitive. Wasn't that part of the problem with just using shaders to begin with, hence hiving off a section to reduce it?
 
Hmm. I think you could be right.
 
OK thanks: https://www.theverge.com/2019/10/15/20915250/sony-360-reality-audio-release-date-amazon-partners
Quite why they are pushing the music aspect of it escapes me ('recreates the concert experience'!). When we listen to a concert, all the sound generally comes from the same direction,
unlike a film/game.
Though apparently this will work with any headphones, so I must give it a listen just for curiosity's sake.
Just a quick note here:
The room acoustics of a concert are what make it quite a bit more impactful than just listening on your headphones. There are a ton of sound dynamics at play in big music halls.
 

Opening the PDF and the Japanese patent, it seems to reference camera patents*. This is probably CLUT for the framebuffer, or maybe linked to a PS5 camera?

*One Fujifilm Microdevices patent from 1995,
one Seiko Epson patent from 1998,
and one Casio Computer patent from 2013.
 

I remember AMD allowing CU reservation so that audio wouldn't be disruptive to the graphics pipeline. Audio work is typically done near the completion of development, and CU reservation allowed you to add audio while avoiding sudden issues close to launch.
 
This doesn't sound right to me.
If we're talking about the PC side, games have endless graphical configuration options, and a range of hardware performance levels too, so why would audio impact graphics in that way?
I would also expect that the different parts would be given budgets anyway.

Not sure if TrueAudio/shaders was used on the PS4.

Pretty sure the hiving off was more to do with latency, and to support the audio rather than the graphics side.

Could probably do with someone who knows more here.
 
We originally had sound acceleration because it was taxing for the CPU to perform, and back then that made a lot of sense. As the complexity got higher (take A3D's partial wave-tracing solution), that alone was going to eat tons of cycles.

But the importance of sound started to drop when graphics picked up. CPUs became powerful, and we've generally never looked back.
However:
a) the audio is not compressed on PC (IIRC)
b) it's not all that complex either

The need to have silicon for audio acceleration may only be a hat tip to the desire for wave tracing again.
 

https://gpuopen.com/amd-trueaudio-next-and-cu-reservation/
 
The idea of CU reservation is two-fold: first, to provide a compute resource for critical real-time audio on the GPU that is isolated from the highly variable, competing graphics and compute workloads running alongside; and secondly, to provide a deterministic method to carve-out a fixed set of compute resources for audio that can help mitigate the threat of last-minute surprises during game development; in other words, to make the tradeoff of using the GPU for audio more manageable in a real-world development context.
So even though, yes, the issue you highlighted is mentioned, I feel that since on PC you scale the graphics settings or throw hardware at the problem anyway, audio having a dynamic impact on graphics would be a lower issue and more manageable, as PC gamers need to manage that regardless.

But that could be my bias.
To be honest, there are too many things in the article that could be used to support both views.

https://community.amd.com/community/gaming/blog/2016/08/17/amd-trueaudio-next-is-bringing-realistic

It is a conventionally-held belief that using a GPU to render audio can cause too much latency, while also interfering with graphics performance. However, TrueAudio Next has the ability to leverage the powerful resources of GPU Compute, safely allowing the GPU to accelerate audio rendering. This is mainly thanks to a core element of this technology: Compute Unit (CU) Reservation.
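Conceptually, the carve-out works like this toy model (purely illustrative on my part, not the actual TrueAudio Next API):

```python
# Toy model of CU reservation: carve a fixed set of CUs out of the GPU's
# pool so real-time audio never competes with graphics/compute for them.
# The point is determinism: audio gets the same budget every frame,
# regardless of how heavy the graphics workload happens to be.

TOTAL_CUS = 36          # e.g. a PS4 Pro class GPU
RESERVED_FOR_AUDIO = 4  # fixed carve-out, decided up front

def frame_budget(graphics_demand_cus):
    audio_cus = RESERVED_FOR_AUDIO                 # constant, isolated
    graphics_cus = TOTAL_CUS - RESERVED_FOR_AUDIO  # everything else
    shortfall = max(0, graphics_demand_cus - graphics_cus)
    return audio_cus, graphics_cus, shortfall

# Even when graphics wants more than it can get, audio's slice is untouched.
for demand in (20, 32, 40):
    audio, gfx, short = frame_budget(demand)
    print(f"gfx demand {demand} CUs: audio={audio}, gfx pool={gfx}, "
          f"gfx shortfall={short}")
```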
 
Creative Labs' SXFi combined with object-based sound is something that should have happened in gaming years ago.

I'm secretly hoping at least one of the console makers decided to pick up their tech.
 
Both solutions have their pros and cons - same as the dedicated RT hardware debate, versus doing it all on compute. And the reason for the inclusion is the same. Not every game will need RTRT performance, so for those, RTRT hardware is a waste. But games that do need RTRT need it to be fast, and these will be the landmark, core console experiences, so for those you make the choice to include the RTRT hardware. Same I think with audio. Looking at Cerny's description, he wants audio to become immersive. Sure, not all games need that, but for those who do, you don't want a compromised experience undermining that target of being truly immersive. Committing to dedicated audio processors ensures your top-tier, landmark experiences provide the audio experience you want for your console. For games that don't need that, they'll just have a bit less compute to play with as a result.

Mark Cerny and the Xbox architects are designing their consoles with developer feedback, including from audio teams. I don't think audio is unimportant, but like physics on the GPU (remember the Havok demo), I suppose it took a hit because, outside of virtual reality, graphics come above everything else. If we get a Havok demo, I suppose it will run on the Zen 2 CPU, and if we get 3D audio, it will run on the dedicated audio DSPs of the Xbox and PS5. Apart from fluids, cloth, or hair, I think rigid-body physics will stay on the CPU like this generation.

Don't treat audio as a second-class citizen. I'm very curious to see an in-depth Q&A or conference about the PS5 or Xbox Series X, like Mark Cerny's Gamelab 2013 talk.
 
They are owned by Creative Labs now
That doesn't stop the patents from expiring. A3D is a bit older than one might think; it was originally created by NASA spinoff Crystal River and rebranded to A3D by Aureal when they acquired Crystal River.
 