Xbox One (Durango) Technical hardware investigation

Status
Not open for further replies.
Maybe he doesn't have a clue about what he is talking about? English is his second language and his company isn't working on an XB1 game by his own admission.
 
Maybe he doesn't have a clue about what he is talking about? English is his second language and his company isn't working on an XB1 game by his own admission.

I find this strange too. What's next, are we going to get power comparisons of the PS4 from an Xbox One dev who's never touched a PS4 kit?
 
It's curious then how it is measured on a "per second" basis...

It's okay, like you, most people don't understand the difference.

Being able to deliver 2 pizzas every hour is not the same as being able to deliver 1 pizza every 30 minutes.
 
It's okay, like you, most people don't understand the difference.

Being able to deliver 2 pizzas every hour is not the same as being able to deliver 1 pizza every 30 minutes.

So tell me how this changes the fact that 176GB/s is roughly 160% faster than 68GB/s? I'm far from stupid & I understand.
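A quick check of the percentage itself (pure arithmetic, saying nothing about real-world utilization):

```python
ddr3, gddr5 = 68, 176  # peak bandwidth in GB/s

pct_faster = (gddr5 / ddr3 - 1) * 100
print(round(pct_faster))  # ~159, commonly rounded up to "160% faster"
```

So the figure being argued about is really ~159%, rounded to 160% in the discussion.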
 
Maybe he doesn't have a clue about what he is talking about? English is his second language and his company isn't working on an XB1 game by his own admission.

Maybe he is talking about the fact that GDDR5 achieves its 176GB/s by moving smaller packets of data really fast, vs the DDR3 that achieves its 68GB/s by moving larger packets of data at a slower rate.


So the DDR3 might be moving more data per trip because it's driving a bigger truck, while the GDDR5 is moving less data per trip in a smaller truck but moving a lot faster & making more trips back & forth.

which is probably what he meant by this


but you can’t just write to the memory, you need to read sometimes
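Whatever he meant, the headline numbers themselves fall out of bus width times transfer rate. A quick sanity check, assuming the 256-bit bus both consoles are said to use, with DDR3-2133 and 5.5 Gbps GDDR5:

```python
def bandwidth_gb_s(bus_bits, transfers_per_s):
    """Peak bandwidth = bus width in bytes * transfer rate."""
    return bus_bits / 8 * transfers_per_s / 1e9

print(bandwidth_gb_s(256, 2.133e9))  # DDR3-2133 on 256-bit -> ~68 GB/s
print(bandwidth_gb_s(256, 5.5e9))    # GDDR5 @ 5.5 Gbps on 256-bit -> 176 GB/s
```

Same bus width in both cases; the entire difference in the headline figures comes from the transfer rate.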
 
So tell me how this changes the fact that 176GB/s is roughly 160% faster than 68GB/s? I'm far from stupid & I understand.

I actually have no question about your math.

But if you must ask, your comparison is basically pointless, because of the over-simplification of the model, and because the utilization cases for the 176GB/s and the 68GB/s setups are completely different.

in simplistic terms:

case 1:
you have a one-way road with 32 lanes; 5500 cars can drive through each lane every time unit, except that the road needs to be used in both directions. You can't split the lanes so that some go one way and the rest go the other, and there's a cost every time you switch direction.

case 2:
the same one-way road with 32 lanes, but only 2133 cars driving through each lane per time unit; however, you also have another road with 256 lanes that can take 800 cars in each direction at the same time.

Now you have to do the same set of tasks in each time unit, 30 or even 60 times a second. Which is "faster"? Can anyone answer?

It's not as simple as throwing out some elementary math and saying 176 > 68; everyone knows that.
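The two road cases can be turned into a toy model. This is purely illustrative: the lane and car counts are the analogy's numbers, not real memory timings, and the direction-switch penalty is an assumed constant, since real bus turnaround costs vary:

```python
# Toy model of the two "road" cases. All numbers come from the
# analogy above, not from real hardware; the switch cost is assumed.

def case1_cars_moved(time_units, switches, switch_cost=2):
    """One shared 32-lane road, 5500 cars/lane/unit, where every
    direction switch wastes `switch_cost` time units."""
    usable = max(time_units - switches * switch_cost, 0)
    return usable * 32 * 5500

def case2_cars_moved(time_units):
    """32-lane main road at 2133 cars/lane/unit, plus a 256-lane
    side road moving 800 cars each way simultaneously (no switch cost)."""
    main = time_units * 32 * 2133
    side = time_units * 256 * 800 * 2  # both directions at once
    return main + side

# With zero direction switches, case 1's single fat road does well;
# the more often the workload forces it to turn around, the more it loses.
print(case1_cars_moved(100, switches=0))
print(case1_cars_moved(100, switches=20))
print(case2_cars_moved(100))
```

The point the analogy is making survives the model: peak numbers only hold for a workload that never pays the turnaround cost, which real frames (mixed reads and writes) do.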
 
Maybe he is talking about the fact that GDDR5 achieves its 176GB/s by moving smaller packets of data really fast, vs the DDR3 that achieves its 68GB/s by moving larger packets of data at a slower rate.


So the DDR3 might be moving more data per trip because it's driving a bigger truck, while the GDDR5 is moving less data per trip in a smaller truck but moving a lot faster & making more trips back & forth.

which is probably what he meant by this

Both PS4 and Xbox One sport a 256 bit bus, so.....


Actually, I won't read too much into the details of his comments.
He's the CEO, so he may not be completely clear on the details needed to give us a precise understanding.
 
It does seem that this was designed with not just Kinect in mind, then. It looks like they added special texture formats and tiling instructions.

The format listed seems to be a 'match' for the type of image that the kinect unit will be outputting. (compression on the visual output shouldn't be an issue for the xb1 - nor should latency in this optional feed).

Also note that the move engine that is shared between the game and the system isn't the one with the jpeg decode.

JPEG decode would only work on the visual portion of kinect, AFAICT this isn't something that will normally be decompressed by the system. The always-on skeletal tracking would want the precise IR bits, and 'near zero latency'.

(so the developer would be able to 'opt in' to sharing the relevant DME if they wanted 2d or 3d video)
 
The Kinect camera stream isn't compressed, it's in YUY2 colour space. The JPEG pipeline might be used for colour space conversion, but I doubt it, since the GPU can read YUY2 format textures directly.

Cheers
 
It's okay, like you, most people don't understand the difference.

Being able to deliver 2 pizzas every hour is not the same as being able to deliver 1 pizza every 30 minutes.
They may be different services, but they're the same mathematical speed. You've basically said

2 pz/hr != 1 pz/0.5 hr

which of course is mathematical garbage. We've already had the whole food, bandwidth, delivery service discussion. I'm not going to let it happen again in this thread. Agree to disagree.
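For what it's worth, the two delivery rates really do reduce to the same number:

```python
from fractions import Fraction

rate_a = Fraction(2, 1)                    # 2 pizzas per 1 hour
rate_b = Fraction(1, 1) / Fraction(1, 2)   # 1 pizza per 0.5 hours

print(rate_a == rate_b)  # True: both are 2 pizzas/hour
```

Whatever the two services differ in (latency, burstiness), it isn't the rate.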
 
The Kinect camera stream isn't compressed, it's in YUY2 colour space. The JPEG pipeline might be used for colour space conversion, but I doubt it, since the GPU can read YUY2 format textures directly.

The PS4-eye spec mentions uncompressed, but I can't find a statement that kinect's video stream is uncompressed (does YUY2 inherently imply uncompressed?).

AFAICT the video stream of kinect will (in most games) be read from the unit and shoved into the RAM every N frames... and almost never read (I assume it will 'automatically take a photo' when you get an achievement and/or transparent video chat etc). If the feed can be compressed it might save quite a bit of bandwidth?
 
AFAICT the video stream of kinect will (in most games) be read from the unit and shoved into the RAM every N frames... and almost never read (I assume it will 'automatically take a photo' when you get an achievement and/or transparent video chat etc). If the feed can be compressed it might save quite a bit of bandwidth?

On-screen video chat (snapped in Skype or in-game) would require it anyway.

1920x1080 @ 30Hz YUY2 (2 bytes/pixel) is 0.18% of total DDR3 bandwidth, i.e. in the noise.
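The arithmetic checks out, taking the 68 GB/s DDR3 figure as the baseline:

```python
width, height, fps, bytes_per_pixel = 1920, 1080, 30, 2  # YUY2 is 2 B/px
stream = width * height * bytes_per_pixel * fps          # bytes per second
ddr3 = 68e9                                              # 68 GB/s peak

print(stream / 1e6)          # ~124.4 MB/s for the camera stream
print(100 * stream / ddr3)   # ~0.18% of DDR3 bandwidth
```

So an uncompressed 1080p camera feed is indeed lost in the noise next to the main memory bus.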

Cheers
 
For the last time, bandwidth is not speed.

if people want to be asses....

For the last time, bandwidth is not THROUGHPUT. Bandwidth is the frequency range used, i.e. ADSL uses ~1 MHz of bandwidth to deliver up to 9 Mbit/s of throughput.

edit:
also, speed isn't latency; the propagation speed will be ~0.7c, the latency will be ??????
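The Hz-vs-bit/s distinction being drawn here is exactly what the Shannon capacity formula connects. A quick illustration (the SNR value is made up to hit the ADSL-like figure, not a real line measurement):

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """Channel capacity C = B * log2(1 + SNR): bandwidth in Hz in,
    achievable throughput in bit/s out."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# ~1 MHz of bandwidth with an assumed 511:1 SNR supports ~9 Mbit/s:
print(shannon_capacity_bps(1e6, 511) / 1e6)  # 9.0
```

Same bandwidth in Hz, wildly different bit/s depending on signal quality, which is why the two terms aren't interchangeable in the strict sense.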
 
On-screen video chat (snapped in Skype or in-game) would require it anyway.

1920x1080 @ 30Hz YUY2 (2 bytes/pixel) is 0.18% of total DDR3 bandwidth, i.e. in the noise.

The JPEG decoder is there for "something" and the interview claimed it has some form of kinect-related usage. I can't see what else they would be using it for.

There's also 2 other (uncompressed) image streams from kinect, and a compressed/encrypted stream from the HDMI input that will also be heading into RAM along with the 'snap' thing. I get the impression that dataflow inside the XB1 was very tightly controlled.

But I'm not sure if we'll ever know for certain what it's used for :(.
 
if people want to be asses....

For the last time, bandwidth is not THROUGHPUT. Bandwidth is the frequency range used, i.e. ADSL uses ~1 MHz of bandwidth to deliver up to 9 Mbit/s of throughput.

edit:
also, speed isn't latency; the propagation speed will be ~0.7c, the latency will be ??????

Bandwidth and throughput are used interchangeably, even by engineers and professionals. I have a presentation by Herb Sutter in front of me that quotes bandwidth in bits/sec.
 