Wii U hardware discussion and investigation

But there's a huge difference. The Wii U can't use a pre-compressed video stream for games, it has to compress frame by frame in real time in order to get reasonable latency.
Latency would be my guess (other than Nintendo just being bizarrely different as is their MO ;)), but there really can't be much in it. 2 frames latency would be nigh imperceptible and way better than everyone's experiencing with their TVs anyhow. There may also be an issue of quality, which someone like Richard Leadbetter or maybe Laa-Yosh would need to weigh in on.
 
Just wanted to say I totally agree with your statement. In fact so much, I actually made an account to say so. Multi threads, complex gpu rendering... none of this is new.
 
All I know about compression is that we all hate it here when our movies get their first showing in quite suboptimal quality... ;)
 
Latency would be my guess (other than Nintendo just being bizarrely different as is their MO ;)), but there really can't be much in it. 2 frames latency would be nigh imperceptible and way better than everyone's experiencing with their TVs anyhow. There may also be an issue of quality, which someone like Richard Leadbetter or maybe Laa-Yosh would need to weigh in on.
Latency is more noticeable on a touch screen. Furthermore, 2 or 3 frames of latency doesn't get you very far with a temporal compression scheme like H.264. In order to produce the final data for frame N, you may need frame N + 15.
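The point about needing frame N + 15 can be made concrete with a small sketch. With B-frames, the stream is emitted in decode order rather than display order, so a B-frame cannot leave the encoder until its future anchor frame has been captured. The GOP pattern below is illustrative, not any specific codec's default:

```python
# Sketch (not any shipping codec's exact behaviour): why temporal
# compression adds latency. B-frames reference a *future* I/P frame,
# so the stream is sent in "decode order", not display order.

display_order = ["I0", "B1", "B2", "P3", "B4", "B5", "P6"]

# A typical reordering: each B-frame depends on the next anchor (I/P)
# frame, so it cannot be encoded until that anchor has been captured.
decode_order = ["I0", "P3", "B1", "B2", "P6", "B4", "B5"]

for frame in display_order:
    captured_at = display_order.index(frame)   # when the GPU produced it
    sent_slot = decode_order.index(frame)      # position in the bitstream
    # Slot `sent_slot` can only be emitted once every frame earlier in
    # decode order has been captured:
    ready_after = max(display_order.index(f)
                      for f in decode_order[: sent_slot + 1])
    print(f"{frame}: captured at t={captured_at}, sendable at t={ready_after}")
```

For example, B1 is captured at t=1 but cannot be sent until P3 exists at t=3; deeper B-pyramids push that gap further out, which is the latency being described.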
 
Maybe OnLive etc. use an H.264 profile with independent frames? i.e. like "editing grade" MPEG-2 and MPEG-4.
Nintendo could use anything, as long as it can be compressed by a DSP. Big rectangular blocks of pixels can be used rather than full frames, too. Video compression like in movie files, which computes a frame based on the content of the preceding frame, is absurd here and won't get you down to a 2-frame latency.
 
Latency is more noticeable on a touch screen. Furthermore, 2 or 3 frames of latency doesn't get you very far with a temporal compression scheme like H.264. In order to produce the final data for frame N, you may need frame N + 15.
If that's a limit then you're right, but OnLive proves otherwise. I also point again to PS3's Remote Play. Anyone know what tech that's using?
 
If that's a limit then you're right, but OnLive proves otherwise.
Onlive video quality is reportedly fairly bad, compared to local rendering. I don't think it would go unnoticed by the general public, and video gaming press in particular, if the wuublet gave similar video quality to onlive. In fact, I can easily imagine the uproar it would generate... ;)
 
Here's my theory: games can target the main TV screen and the GamePad's screen at the same time. That means they each have their own framebuffer.

We know that the way the image is sent to the tablet is independent of whether the frame is being drawn. The compression does not wait for a full frame to be rendered, but compresses and then immediately sends the image block by block (as discussed in the Iwata Asks article), using some variant of MPEG-2 from the looks of things (with the stereotypical red-color deformation as shown by DF).

I think this system can get away with using a single framebuffer. I'm guessing that mirroring the main screen to the GamePad means the new frame is always treated as a third framebuffer that devs simply copy from whatever the main screen's new framebuffer would be getting, instead of drawing GamePad-specific content into it.

This could be why games that support the GamePad behave a lot (but not exactly) like triple buffered games on other platforms, but I'm not sure. Enforcing Vsync could be helping with the image consistency, but could also just be something legacy from Nintendo.

Just a guess though. Could be lots of other options.
 
OnLive is only bad with poor bandwidth; on high-BW connections it's high quality. Local WiFi and only 480p mean OnLive's tech and the H.264 codec should be able to produce excellent results, better than DVD (MPEG-2 at 40 Mbps was considered as good as uncompressed in HD screen tests).
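A quick back-of-the-envelope calculation shows why even a local link still needs real compression. The usable-throughput figure below is an assumption for illustration, not a measured number for the GamePad link:

```python
# Back-of-the-envelope: raw bandwidth of an uncompressed 480p/60 stream
# versus an assumed usable local-WiFi budget, to show the compression
# ratio required. The 100 Mbps budget is an assumption, not a spec.

width, height = 854, 480        # 16:9 480p image
bits_per_pixel = 24             # uncompressed RGB
fps = 60

raw_bps = width * height * bits_per_pixel * fps
print(f"raw stream: {raw_bps / 1e6:.0f} Mbps")       # ~590 Mbps

wifi_budget_bps = 100e6         # assumed usable 802.11n throughput
print(f"required ratio: {raw_bps / wifi_budget_bps:.1f}:1")
```

Even at that modest ratio the codec has huge headroom compared to internet streaming, which is the point being made: local 480p is an easy target for H.264-class compression.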
 
We know that the way the image is sent to the tablet is independent of whether the frame is being drawn. The compression does not wait for a full frame to be rendered, but compresses and then immediately sends the image block by block (as discussed in the Iwata Asks article).
Which article is that? If so, we could see 'screen tearing' on Wuublet, as the framebuffer is replaced part-way through signalling, requiring a v-sync to solve. And if you're going to have a v-sync, may as well store the FB complete and lock it prior to transmission. With the 1GB system RAM, there's plenty enough space for a double buffered Wuublet framebuffer. Copy to backbuffer, swap with front for sending.
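The copy-to-backbuffer/swap scheme suggested above can be sketched in a few lines. All names here are hypothetical; this is just the classic double-buffering pattern applied to transmission:

```python
# Minimal sketch of the double-buffered send scheme described above:
# the renderer writes into a back buffer; once a frame is complete it
# is swapped with the front buffer, which the transmitter reads from.
# The transmitter therefore only ever sees complete frames (no tearing).

class DoubleBufferedSender:
    def __init__(self, size):
        self.front = bytearray(size)   # locked, being transmitted
        self.back = bytearray(size)    # being written by the renderer

    def render_into_back(self, frame_bytes):
        self.back[:] = frame_bytes

    def swap(self):
        # From the transmitter's point of view this is atomic: the
        # buffer it reads is never half-updated mid-transmission.
        self.front, self.back = self.back, self.front

    def transmit(self, send):
        send(bytes(self.front))

buf = DoubleBufferedSender(4)
buf.render_into_back(b"\x01\x02\x03\x04")
buf.swap()
buf.transmit(lambda data: print(data.hex()))   # → 01020304
```

At 854x480x24bpp a buffer pair is only ~2.5 MB, so the "plenty of space in 1GB" point holds easily.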
 
Which article is that? If so, we could see 'screen tearing' on Wuublet, as the framebuffer is replaced part-way through signalling, requiring a v-sync to solve. And if you're going to have a v-sync, may as well store the FB complete and lock it prior to transmission. With the 1GB system RAM, there's plenty enough space for a double buffered Wuublet framebuffer. Copy to backbuffer, swap with front for sending.

Yes, but they do it that way to get the latency as low as possible.

Source is one of my favorite things about Nintendo. I rarely want their systems, almost all their games bore me to death, but I love reading these :D :

http://iwataasks.nintendo.com/interviews/#/wiiu/gamepad/0/0

single frame of image data has been put into the IC. Then it is sent wirelessly and decompressed at receiving end. The image is sent to the LCD monitor after decompression is finished.

But since that method would cause latency, this time, we thought of a way to take one image and break it down into pieces of smaller images. We thought that maybe we could reduce the amount of delay in sending one screen if we dealt in those smaller images from output from the Wii U console GPU on through compression, wireless transfer, and display on the LCD monitor.
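The pipeline Iwata describes can be sketched as slicing the frame into strips and handing each one to the radio as soon as it is ready. `zlib` stands in for the real (unknown) codec, and the strip size is invented:

```python
# Sketch of the block pipeline from the Iwata Asks quote: instead of
# compressing a whole frame, cut it into strips and compress/send each
# strip as soon as it completes. zlib is a stand-in for the real codec.

import zlib

def stream_frame(frame_rows, strip_height, send):
    """Compress and send `strip_height`-row strips as they complete."""
    for top in range(0, len(frame_rows), strip_height):
        strip = b"".join(frame_rows[top : top + strip_height])
        send(zlib.compress(strip))   # this strip leaves the console
                                     # before later rows are even drawn

# Toy usage: an 8-row "frame" of 4-byte scanlines, sent in 2-row strips.
rows = [bytes([y] * 4) for y in range(8)]
packets = []
stream_frame(rows, strip_height=2, send=packets.append)
print(len(packets))   # → 4
```

The first packet is on the air after only two rows exist, which is exactly the latency win the interview is describing.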
 
All I know about compression is that we all hate it here when our movies get their first showing in quite suboptimal quality... ;)

This is one small reason I don't bother with TV anymore. I even liked the quality and instant channel switching of non-NTSC analog television, next to the blocking and banding fest that replaced it.
 
But there's a huge difference. The Wii U can't use a pre-compressed video stream for games, it has to compress frame by frame in real time in order to get reasonable latency.

Which is exactly what AirPlay Mirroring does, it compresses on the fly and sends to the target device.

AirPlay Mirroring is precisely like that from all the iDevices. I'm just saying there is dedicated hardware that sips power for those tasks instead of using valuable processor cycles.

I would be surprised if the Wii U did anything else when sending out to the Wii U GamePad.
 
Re-reading the Iwata article, I think Nintendo have done this...

With MPEG:
- you receive the entire frame.
- if there's any problems with the connection, you pause and wait until you have the frame.
- when you have the frame, you decode the data, and render it.
The frame is delayed by 'time taken to render the entire screen + total transfer time', memory used is relatively high etc.

With Nintendo-tablet-vision:
- you receive a block, if it's complete you render it instantly.
- if there's any problems, you ask the Wii-U to resend that block later in the stream.
- when the final block is received, you render it and flip the framebuffer.
The frame is delayed by 'time taken to render a single block + total transfer time', the memory used is approximately equal to the framebuffer size x2.

I'm not sure whether some form of MPEG can do that, but it looks like a very low latency system with low processing/memory requirements on the tablet. (doesn't appear to help the Wii-U itself in any manner that I can see)
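The latency difference between the two schemes above can be put into rough numbers. Every figure below is an assumption for illustration, not a measurement of the actual hardware:

```python
# Rough latency comparison for the two schemes above (numbers assumed):
# whole-frame: wait for the full frame, then transmit all of it;
# block-based: rendering and transmission overlap per block, so only
# the final block's air time is added after the frame finishes.

frame_time_ms = 16.7      # one 60 Hz frame of rendering
transfer_ms = 8.0         # assumed total air time for one frame's data
blocks = 16               # assumed blocks per frame

whole_frame_latency = frame_time_ms + transfer_ms

# Block pipeline: the last block finishes rendering at frame_time_ms,
# and only that one block (1/16 of the data) still has to cross the air.
block_latency = frame_time_ms + transfer_ms / blocks

print(f"whole frame: {whole_frame_latency:.1f} ms")
print(f"per-block:   {block_latency:.1f} ms")
```

With these assumed numbers the block scheme shaves most of the transfer time off the end-to-end latency, matching the 'single block + transfer' formula above.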
 
Does it really matter if it's H.264 vs. MJPEG?

MJPEG is effectively all I-frames, and I believe what OnLive does to minimize latency with H.264 is to send only I-frames and, I think, P-frames.

H.264 with all I-frames is extremely similar to MJPEG.

The Wii U is a totally different problem from OnLive: it's got an order of magnitude more bandwidth to send the signal, so you can trade a lot of compression ratio for speed and memory footprint.

The only issue with compressing and sending block by block is the entropy coding step, and there are several ways they could address that.

But back to my original point, does it matter?
It compresses video at one end, decompresses and displays it at the other, it's not exactly rocket science.
 
I'm personally more interested in the exact connection being used.

Considering 802.11 b/g/n are pretty much out of the picture due to possible interference issues messing with the bandwidth (collision-avoidance-type problems), what exact format and spectrum is it using? Anyone with ideas?

Re-reading the Iwata article, I think Nintendo have done this...

With MPEG:
- you receive the entire frame.
- if there's any problems with the connection, you pause and wait until you have the frame.
- when you have the frame, you decode the data, and render it.
The frame is delayed by 'time taken to render the entire screen + total transfer time', memory used is relatively high etc.

With Nintendo-tablet-vision:
- you receive a block, if it's complete you render it instantly.
- if there's any problems, you ask the Wii-U to resend that block later in the stream.
- when the final block is received, you render it and flip the framebuffer.
The frame is delayed by 'time taken to render a single block + total transfer time', the memory used is approximately equal to the framebuffer size x2.

I'm not sure whether some form of MPEG can do that, but it looks like a very low latency system with low processing/memory requirements on the tablet. (doesn't appear to help the Wii-U itself in any manner that I can see)

I don't think in these types of streams you would want the Wii U to resend that block, especially if you have very frequent (or all) I-frames. If you lose some data, big deal.
You would rather spend the bandwidth on the next frame than on fixing the current one.
 
I'm personally more interested in the exact connection being used.

Considering 802.11 b/g/n are pretty much out of the picture due to possible interference issues, what exact format and spectrum is it using?
802.11n at 5GHz is very clean and interference free.

According to wikipedia:
"Gamepad wireless transmission is using 5150-5250MHz indoor frequency band and based on IEEE 802.11n using custom proprietary protocol and software co-developed by Broadcom and Nintendo"

So... it's not exactly 802.11n, it's custom?
 
802.11n at 5GHz is very clean and interference free.

According to wikipedia:
"Gamepad wireless transmission is using 5150-5250MHz indoor frequency band and based on IEEE 802.11n using custom proprietary protocol and software co-developed by Broadcom and Nintendo"

So... it's not exactly 802.11n, it's custom?


Ya it has to be. You wouldn't want Wii-U wireless frames to collide with existing wi-fi frames.
 
So... it's not exactly 802.11n, it's custom?
It's custom at the protocol level of the network connection (instead of standard TCP/IP you have something else; "nintendovision" or whatever), but not at the lower link level, as you want to remain compatible with other wireless networking gear to avoid problems, and maybe regulatory issues as well.
 