Sony VR Headset/Project Morpheus/PlayStation VR

Expensive, authentic Japanese rice cookers.

I have a Tiger rice cooker that cost $120 and still no magnets. I wish Zojirushi weren't so stupidly expensive outside of Japan; the good induction models are like $400 imports. :LOL:
 
Nice, it looks like the previous PSVR breakout box. They added the "social screen" but not audio processing since a PC CPU is powerful enough.

Surely the Tensilica DSPs in the PS4 are more than capable of handling positional audio and whatever other fancy effects are needed. I understood them to be the same as the TrueAudio block in AMD PC GPUs, and that's exactly what AMD advertises it for.
 
I do think it's a bit strange that there's some sort of processor in the box that needs active cooling. I would've thought that some sort of passively cooled mobile APU would be more than enough for the interpolation/post-processing?

It's obviously not going to be helping with the game rendering, but it must be doing more than is currently expected... I just can't begin to imagine what that is. I guess the PS4 needs plenty of support, since it's not exactly a powerhouse.

Maybe it's a very advanced interpolation processor?
 
Surely the Tensilica DSPs in the PS4 are more than capable of handling positional audio and whatever other fancy effects are needed. I understood them to be the same as the TrueAudio block in AMD PC GPUs, and that's exactly what AMD advertises it for.
We never got as far as confirming it was 100% TrueAudio. I believe the Tensilica cores are used for codecs etc, and aren't capable of super fancy audio. Otherwise positional audio in stereo headphones would be possible on the base unit.
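For illustration, here's a minimal sketch of what positional audio for stereo headphones involves at its simplest: interaural time and level differences. This is not Sony's or AMD's implementation; a real solution would convolve with HRTFs, and all constants and function names below are made up for the example.

```python
# Toy sketch (not Sony's implementation): approximate positional audio for
# stereo headphones using interaural time and level differences (ITD/ILD).
# A real solution would use HRTF convolution; this just illustrates the idea.
import numpy as np

SAMPLE_RATE = 48000     # Hz, assumed
HEAD_RADIUS = 0.09      # metres, rough average
SPEED_OF_SOUND = 343.0  # m/s

def spatialize(mono, azimuth_deg):
    """Pan a mono signal to stereo for a source at the given azimuth.

    azimuth_deg: 0 = straight ahead, +90 = hard right, -90 = hard left.
    """
    az = np.radians(azimuth_deg)

    # Interaural time difference: delay the far ear by up to ~0.26 ms.
    itd_seconds = HEAD_RADIUS * np.sin(az) / SPEED_OF_SOUND
    delay_samples = int(round(abs(itd_seconds) * SAMPLE_RATE))

    # Interaural level difference: simple constant-power pan law.
    pan = (az / (np.pi / 2) + 1.0) / 2.0          # 0 (left) .. 1 (right)
    gain_l = np.cos(pan * np.pi / 2)
    gain_r = np.sin(pan * np.pi / 2)

    left = mono * gain_l
    right = mono * gain_r
    if itd_seconds > 0:    # source on the right -> delay the left ear
        left = np.concatenate([np.zeros(delay_samples), left])[:len(mono)]
    elif itd_seconds < 0:  # source on the left -> delay the right ear
        right = np.concatenate([np.zeros(delay_samples), right])[:len(mono)]
    return np.stack([left, right], axis=1)

# Usage: a 1 kHz tone positioned 45 degrees to the right.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
stereo = spatialize(np.sin(2 * np.pi * 1000 * t), azimuth_deg=45)
```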
 
It sounds like there are people developing GPU-accelerated frame interpolation on PCs:
...
I guess it's possible that Sony has developed some sort of processor that performs a similar function.
Motion interpolation of that sort has existed in TVs for years. It certainly doesn't require a powerful GPU or CPU. However, you don't want to create a custom ASIC if there isn't the market for it, which suggests using an off-the-shelf part, at least in version 1 of PSVR, while the waters are tested.
 
Motion interpolation of that sort has existed in TVs for years....

Yep, and it more or less sucks. I bought this year's Samsung TV and the motion interpolation is good enough, but it introduces stutter or judder in motion every time the scene changes, and this inconsistency makes it useless for me. Native rendering at high framerates will always be superior to any interpolation.
 
I am curious about how the interpolation works in detail, especially with respect to the ordering in time?

When using temporal interpolation for a TV signal, do you take two images at time t1 and time t2 and then interpolate an image between the times t1 and t2?
 
I am curious about how the interpolation works in detail, especially with respect to the ordering in time?

When using temporal interpolation for a TV signal, do you take two images at time t1 and time t2 and then interpolate an image between the times t1 and t2?
I think that's how it works, though the TV may wait for additional frames to better assess motion over a couple of frames. That's why scene changes may cause it to fall back to being juddery. Some sets don't seem to be so good and you can see them fall back to the original frame rate after every scene change. I like what my LG is doing, though, although it's a set with high input latency even in game mode :( I used to be a cinema purist, but the interpolation works just too well with camera pans and zooms to turn it off for me. So I'm now used to the "soap opera effect", and I even prefer it on my LG... Cinema enthusiasts would probably hate me.
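A rough sketch of the scene-change fallback being described, assuming a simple per-frame-pair difference test; the threshold and function are purely illustrative and not any particular TV's algorithm:

```python
# Sketch of the scene-cut fallback behaviour described above (illustrative
# only): if two consecutive source frames differ too much, motion estimation
# is unreliable, so repeat the previous frame instead of interpolating --
# which is exactly what shows up as a momentary return to the source frame
# rate (judder) at scene changes.
import numpy as np

CUT_THRESHOLD = 30.0  # mean absolute pixel difference; hand-picked value

def frames_for_interval(prev, nxt, n_intermediate):
    """Return the frames to display between two source frames."""
    diff = np.mean(np.abs(nxt.astype(np.float32) - prev.astype(np.float32)))
    if diff > CUT_THRESHOLD:
        # Likely a scene cut: don't invent motion, just hold the old frame.
        return [prev] * n_intermediate
    # Otherwise blend linearly (real sets use motion-compensated warping).
    return [
        ((1 - a) * prev + a * nxt).astype(prev.dtype)
        for a in np.linspace(0, 1, n_intermediate + 2)[1:-1]
    ]
```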
 
I am curious about how the interpolation works in detail, especially with respect to the ordering in time?

When using temporal interpolation for a TV signal, do you take two images at time t1 and time t2 and then interpolate an image between the times t1 and t2?

If you have a 30 fps source and a 120 Hz refresh, you need to add more than one frame in between.

I think that's how it works, though the TV may wait for additional frames to better assess motion over a couple of frames. That's why scene changes may cause it to fall back to being juddery. Some sets don't seem to be so good and you can see them fall back to the original frame rate after every scene change. I like what my LG is doing, though, although it's a set with high input latency even in game mode :( I used to be a cinema purist, but the interpolation works just too well with camera pans and zooms to turn it off for me. So I'm now used to the "soap opera effect", and I even prefer it on my LG... Cinema enthusiasts would probably hate me.

To be fair to Samsung, there is one interpolation setting called "Clear" which doesn't produce judder/stutter after a scene change. And I'm in the same boat as you: I like the high-framerate look, but I would prefer a high native framerate instead of interpolation. Maybe someday the whole world can agree on one format, something like 4K/100 fps. I don't know if you are in a 30 or 25 fps region, but I'm in the EU and every show from the US is more or less crippled by framerate conversion (jerky motion).
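For the 30 fps source on a 120 Hz panel case mentioned above, the arithmetic for how many frames have to be generated per source interval is just this (a worked example, nothing more):

```python
# Quick arithmetic for the 30 fps -> 120 Hz case: each source interval must
# be filled with several generated frames, not just one.
source_fps, panel_hz = 30, 120
frames_per_interval = panel_hz // source_fps           # 4 displayed frames
n_interpolated = frames_per_interval - 1                # 3 of them are new
fractions = [i / frames_per_interval for i in range(1, frames_per_interval)]
print(n_interpolated, fractions)                        # 3 [0.25, 0.5, 0.75]
```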
 
I am curious about how the interpolation works in detail, especially with respect to the ordering in time?

When using temporal interpolation for a TV signal, do you take two images at time t1 and time t2 and then interpolate an image between the times t1 and t2?
The basic idea is as follows.
You find object motion vectors, move pieces of the image according to those motion vectors, and blend between the source images at t1 and t2.
There is no limit to how many images you can create between the source images.
One of the big problems is creating the motion vectors, as you only have the images as a source.

In rendering you can create motion vectors easily, so the quality can be a lot better.
Pretty much the same method is also used for temporal AA to get correct samples from the previous frame.

I'm quite sure that Sony doesn't actually interpolate the content of the image in time, but just reprojects the image to the most recent location and orientation of the head.
Here's some information on their implementation.
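A naive sketch of the warp-and-blend scheme described in the post above, using brute-force block matching on single-channel frames. Real TV/GPU implementations are far more sophisticated; gaps and overlaps left by moving blocks are simply ignored here, and all constants are made up.

```python
# Naive motion-compensated frame interpolation: estimate a per-block motion
# vector between the two source frames, shift each block part-way along its
# vector, and blend. Frames are 2-D (grayscale) numpy arrays.
import numpy as np

BLOCK = 16   # block size in pixels
SEARCH = 8   # +/- search range for block matching

def estimate_motion(f0, f1):
    """Per-block motion vectors (dy, dx) taking blocks of f0 to f1."""
    h, w = f0.shape
    vecs = np.zeros((h // BLOCK, w // BLOCK, 2), dtype=np.int32)
    for by in range(h // BLOCK):
        for bx in range(w // BLOCK):
            y, x = by * BLOCK, bx * BLOCK
            block = f0[y:y + BLOCK, x:x + BLOCK].astype(np.float32)
            best, best_v = np.inf, (0, 0)
            for dy in range(-SEARCH, SEARCH + 1):
                for dx in range(-SEARCH, SEARCH + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy and yy + BLOCK <= h and 0 <= xx and xx + BLOCK <= w:
                        cand = f1[yy:yy + BLOCK, xx:xx + BLOCK].astype(np.float32)
                        err = np.sum(np.abs(block - cand))
                        if err < best:
                            best, best_v = err, (dy, dx)
            vecs[by, bx] = best_v
    return vecs

def interpolate(f0, f1, t):
    """Create an in-between frame at fraction t (0 < t < 1) of the way from f0 to f1."""
    vecs = estimate_motion(f0, f1)
    out = np.zeros_like(f0, dtype=np.float32)
    h, w = f0.shape
    for by in range(h // BLOCK):
        for bx in range(w // BLOCK):
            y, x = by * BLOCK, bx * BLOCK
            dy, dx = vecs[by, bx]
            # Move the block part of the way along its motion vector...
            ty = min(max(y + int(round(dy * t)), 0), h - BLOCK)
            tx = min(max(x + int(round(dx * t)), 0), w - BLOCK)
            # ...and blend the source block with its destination in f1.
            src = f0[y:y + BLOCK, x:x + BLOCK].astype(np.float32)
            dst = f1[y + dy:y + dy + BLOCK, x + dx:x + dx + BLOCK].astype(np.float32)
            out[ty:ty + BLOCK, tx:tx + BLOCK] = (1 - t) * src + t * dst
    return out.astype(f0.dtype)

# Usage: mid = interpolate(frame_t1, frame_t2, 0.5)
```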
 
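For contrast, a toy illustration of reprojection rather than interpolation, in the spirit of what the post above describes: the already rendered frame is simply shifted to account for the head rotation that happened after rendering, under a small-angle pinhole assumption. The field of view and resolution below are made-up numbers, and this is not Sony's actual algorithm.

```python
# Toy reprojection sketch: no new content is synthesised; edges that rotate
# into view are simply left black.
import numpy as np

FOV_DEG = 100.0      # assumed horizontal field of view of the display
WIDTH = 960          # assumed pixels per eye (illustrative)

def reproject(frame, d_yaw_deg, d_pitch_deg):
    """Shift a rendered frame by the head-rotation delta since render time."""
    px_per_degree = WIDTH / FOV_DEG
    dx = int(round(-d_yaw_deg * px_per_degree))    # yaw right -> image moves left
    dy = int(round(d_pitch_deg * px_per_degree))   # pitch up -> image moves down
    out = np.zeros_like(frame)
    h, w = frame.shape[:2]
    ys0, ys1 = max(0, dy), min(h, h + dy)
    xs0, xs1 = max(0, dx), min(w, w + dx)
    out[ys0:ys1, xs0:xs1] = frame[ys0 - dy:ys1 - dy, xs0 - dx:xs1 - dx]
    return out
```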
Surely the Tensilica DSPs in the PS4 are more than capable of handling positional audio and whatever other fancy effects are needed. I understood them to be the same as the TrueAudio block in AMD PC GPUs, and that's exactly what AMD advertises it for.
The impression I get is that there's a generic concept of a customizable block that can contain various configurations of Tensilica cores (or whatever new name they might get in the future), scratchpad, interconnect, custom IP, and hooks into AMD's IO-coherent memory subsystem.
Console customers have added various IP blocks for their functions, and have encapsulated the hardware to varying degrees for system purposes.
Microsoft has kept resources reserved, and Sony wraps its ACP behind an asynchronous interface via secure API.
AMD's TrueAudio looks to be the base idea: a non-specialized DSP block with no particular customer or purpose, so allow access through middleware and hope somebody uses the checkbox item.

Sony already uses the limited computing power of its onboard DSPs for codec duty and other system purposes, and at least when it was discussed a few years back there was no middleware access to it.
 