"This advanced audio seems something for MP games; headphone users have an advantage then (they already do somewhat). Maybe Halo Infinite will use HRTF if it has MP."

Useful for everything. It's just another form of immersiveness.

Many/all first-person games. Horror games will benefit greatly. Stealth games (think Thief). Some third-person adventure games, where you could hear where the dragon was coming from.

For other genres like party games, puzzlers, strategy, etc., the HRTF will just be nicer audio, something those games could live without. But it's certainly not limited to MP games by any stretch.
Back when Half-Life became king, having a sound card with EAX and 3D sound dramatically changed the immersion of that title.
"I'm really happy to see the return of audio, I'm really happy Sony is going this route, hoping that MS announces more on this front."

They really don't have a ton they would need to announce on the spatial audio front, besides the nitty-gritty details of things like how many dynamic objects the new Xbox hardware can support, etc.
"I never used the feature, but imagine TLOU2 where the audio allows you to pinpoint the enemy rather than using the vision thing. This audio could be the next direction for gaming as visuals are hitting diminishing returns."

Yeah, you're definitely going to want amazing 3D audio for a title like TLOU2.
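A quick aside on why headphones make that pinpointing possible: the brain localizes sound largely from interaural time and level differences, which HRTF rendering reproduces per listener. Here's a minimal numpy sketch of just the time/level-difference part; the function, constants, and gains are illustrative assumptions, not anything from Sony's Tempest API.

```python
# Minimal binaural panning via interaural time difference (ITD) and
# interaural level difference (ILD). Illustrative only; real HRTF
# rendering convolves measured per-ear impulse responses instead.
import numpy as np

SAMPLE_RATE = 48_000     # Hz, typical game audio rate
HEAD_RADIUS = 0.0875     # m, average head radius
SPEED_OF_SOUND = 343.0   # m/s

def binaural_pan(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Place a mono signal at an azimuth (0 = ahead, +90 = hard right)."""
    az = np.radians(azimuth_deg)
    # Woodworth's ITD approximation: r/c * (angle + sin(angle))
    itd = HEAD_RADIUS / SPEED_OF_SOUND * (abs(az) + np.sin(abs(az)))
    delay = int(round(itd * SAMPLE_RATE))                    # far-ear delay, samples
    near_gain, far_gain = 1.0, 1.0 - 0.6 * abs(np.sin(az))   # crude ILD
    near = np.concatenate([mono, np.zeros(delay)]) * near_gain
    far = np.concatenate([np.zeros(delay), mono]) * far_gain
    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.stack([left, right], axis=1)

# A noise burst placed 60 degrees to the right; over headphones the
# ~0.5 ms delay plus the level drop reads as a direction.
n = SAMPLE_RATE // 10
stereo = binaural_pan(np.random.randn(n) * np.hanning(n), 60.0)
print(stereo.shape)
```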
"I just recall the XBO audio processor that is connected to the CPU-coherent bus through an IOMMU and AXI bridge, with an audio-related DMA on it."

We don't know yet where the custom CU is, so it's possible. My working assumption with the information I have is that having something like the IOMMU involved usually implies IO coherence or something else short of full CPU cache coherence.
"Theoretical limit and realistic impact in real-world games? 20 GB/s is a LOT of audio data. Uncompressed audio at 48,000 Hz, 16-bit is about 96 KB/s per stream, so 20 GB/s would be roughly 208,000 concurrent sound streams. Tempest can only mix 512 sounds, which would be a peak audio data rate of about 49 MB/s assuming uncompressed. At that rate you can go fricking 24-bit 96 kHz and hardly touch the bus. The bandwidth must be there for 3D data or something. I think 20 GB/s is loads."

The video also hinted at the possibility of developers getting access to at least some of the engine's compute capability, for whatever use they can find for a portion of an SPU-like CU's throughput. It's likely lower-latency than the GPU, at least.
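For what it's worth, the quoted arithmetic works out once everything is in bytes; a quick sanity check in Python, using only the numbers from the post (not official specs):

```python
# Sanity check of the bandwidth math quoted above.
SAMPLE_RATE = 48_000      # Hz
BYTES_PER_SAMPLE = 2      # 16-bit mono
BUS_BANDWIDTH = 20e9      # the 20 GB/s figure, in bytes/s
MAX_VOICES = 512          # sounds Tempest reportedly mixes at once

stream_rate = SAMPLE_RATE * BYTES_PER_SAMPLE       # 96,000 B/s per stream
raw_streams = BUS_BANDWIDTH / stream_rate          # ~208,333 streams
mix_feed = MAX_VOICES * stream_rate / 1e6          # ~49 MB/s for 512 voices

print(f"{stream_rate:,} B/s per stream")
print(f"{raw_streams:,.0f} uncompressed streams fit in 20 GB/s")
print(f"{mix_feed:.1f} MB/s to feed all 512 voices")
```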
"So if the PSVR has a 32 GFLOPs audio processor together with ~6 GB/s bandwidth, then assuming the bandwidth proportion is kept, the PS5's ~282 GFLOPs audio processor (one RDNA2 CU @ 2.2 GHz) would require (282/32)*6 ≈ 53 GB/s."

They only mentioned less than half that vector throughput, which I haven't been able to reconcile. The video mentioned that, of the wavefronts, half went to the OS and half to the game. The more significant departure is that "half" in this case is apparently one wavefront each. This could point to even more pervasive design changes. Were the ~100 GFLOP figures meant to cover only what was offered to the game? That still doesn't mesh with the GPU's top boost.
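The quoted extrapolation is easy to reproduce. A sketch, with the CU figure derived from 64 lanes each doing one FMA (two FLOPs) per cycle; every input here is the post's assumption, not a confirmed spec:

```python
# Scaling PSVR's audio bandwidth ratio up to one RDNA2 CU.
PSVR_GFLOPS = 32       # PSVR audio processor throughput (per the post)
PSVR_BW = 6.0          # GB/s paired with it (per the post)
CU_LANES = 64          # ALUs in one RDNA2 CU
FLOPS_PER_LANE = 2     # an FMA counts as two FLOPs
CLOCK_GHZ = 2.2        # assumed fixed clock

cu_gflops = CU_LANES * FLOPS_PER_LANE * CLOCK_GHZ    # ~281.6 GFLOPs
scaled_bw = cu_gflops / PSVR_GFLOPS * PSVR_BW        # ~52.8 GB/s

print(f"One CU @ {CLOCK_GHZ} GHz: {cu_gflops:.0f} GFLOPs")
print(f"Bandwidth at PSVR's ratio: {scaled_bw:.0f} GB/s")
```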
I'm pretty sure a phone app or the PS Camera could do this well enough. I bet you could even get quite far with a good 2D picture of your ear. Or a built-in audio calibration tool based on your feedback.
"Yup that's the dream and I can't wait."

I know I sound like a Creative rep at this point, but that's exactly what Super X-Fi is about. You use an app on your phone to take pictures of your head and ears. The feedback from consumers has been pretty great so far.
Yeah a photo of each ear might be enough in the future thanks to machine learning.
Maybe even some head measurements.
"He actually mentioned processing key sounds ("Hero sounds") in higher quality and less important sounds in lower quality."

Perhaps this kind of flexibility is what justifies having a dedicated GPGPU compute unit instead of an array of DSPs like they had on the 2013 consoles. The experience with PSVR might have driven them that way.
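That "hero sound" tiering maps naturally onto a software mixer running on a general-purpose CU: rank the voices, then spend the expensive per-voice path only on the top few. Purely as an illustration of the idea; none of these names, numbers, or thresholds come from Cerny's talk.

```python
# Hypothetical tiered mixer: full HRTF for "hero" voices, cheap
# panning for the rest. Names and slot counts are made up.
from dataclasses import dataclass

@dataclass
class Voice:
    name: str
    importance: float  # e.g. gameplay- and distance-weighted priority

def assign_quality(voices: list[Voice], hero_slots: int = 8) -> dict[str, str]:
    """Top-priority voices get full HRTF convolution; the rest get a
    cheaper panning path."""
    ranked = sorted(voices, key=lambda v: v.importance, reverse=True)
    return {v.name: ("hero: full HRTF" if i < hero_slots else "ambient: cheap pan")
            for i, v in enumerate(ranked)}

print(assign_quality(
    [Voice("enemy footsteps", 0.95), Voice("dialogue", 0.90),
     Voice("rain loop", 0.20), Voice("distant traffic", 0.10)],
    hero_slots=2))
```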
I don't understand how it can work outside of headphones. Most living rooms have nasty reflections messing everything up, and it's already difficult for most people to get good stereo reproduction for music and film. Maybe they'll recommend sound treatment to get the most out of it, and then people will realize how amazing everything sounds with proper room treatment.
For the purposes of consistency in a very latency-sensitive domain, maybe this CU doesn't boost?
"Or perhaps they're just clocking this CU at a lower, fixed clock rate."

Probably, although the video also indicated that the CU worked at the GPU's clock. I would think for audio purposes that the usual variable clocking may not be acceptable.
"I know I sound like a Creative rep at this point, but that's exactly what Super X-Fi is about. You use an app on your phone to take pictures of your head and ears. The feedback from consumers has been pretty great so far."

Creative released it at the beginning of 2019; Sony followed at the beginning of this year. Now, how good it can be will come down to how much data it can collect and match with profiles in the cloud.
Interestingly, at CES they announced a Dolby Atmos soundbar that uses SX-Fi to modulate the soundbar's output for the users' own hearing profile, which is exactly what Cerny proposed for the PS5 down the road.
"Probably, although the video also indicated that the CU worked at the GPU's clock. I would think for audio purposes that the usual variable clocking may not be acceptable."

If the audio CU is one of the 4 deactivated CUs, they might not have the choice of a stable clock just for this one.
A GPU-linked clock does suggest a floor the hardware cannot go below, though.
The mentioned 64 operations per cycle gives a ~1.6 GHz floor, though there's no single obvious interpretation of how to get 64 operations on hardware that usually offers 128.
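That floor is just the quoted throughput divided by the ops-per-cycle figure (both numbers from the posts above, not official):

```python
GFLOPS = 100e9       # the ~100 GFLOP figure from the video
OPS_PER_CYCLE = 64   # stated operations per cycle

print(f"{GFLOPS / OPS_PER_CYCLE / 1e9:.2f} GHz floor")  # ~1.56 GHz
```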
It can't be one of the deactivated CUs, because those are deactivated. Deactivating for yield means it's not available!
"Bro, don't you have a pencil? Just reactivate it!"

Ahh, the good old days.
I guess we shouldn't expect the actual console to be revealed any time soon...
"We look forward to sharing more information about PS5, including the console design, in the coming months."
Dammit!
Though the controller's design might give us some clues about the console's shape.
For example, maybe we should expect the PS5 to be two-toned like the DualSense.