Xbox One (Durango) Technical hardware investigation

Status
Not open for further replies.
So I'm under the impression that dynamic GI has to have at least one bounce, but given both your answers I'm feeling that my definition may be wrong.

In general the term "GI" refers to simulating light bouncing off of scene geometry, so yes it pretty much implies having at least one bounce.

In particular, when I look at Fable Legends, Drive Club, and Tomorrow Children: is that just very well-done VPLs, or are those games leveraging some sort of path tracing algorithm? Because there is an inherent difference in quality between something like Dirt 2 and those unreleased titles. The only reason I asked is that, as a concept, path tracing combined with tile-based rendering sounds more taxing to me than a non-tiled variant.

Fable Legends is using light propagation volumes, which basically injects direct lighting into a camera-aligned grid of sample points and then "propagates" the lighting through 3D space to simulate the light traveling after bouncing off a surface. Tomorrow Children is using voxel cone tracing, which isn't exactly "path tracing" but does compute indirect lighting by tracing cones through a voxel-based representation of the game world. I'm not familiar with what Drive Club is doing, so I can't comment on that.
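As a rough illustration of the cone tracing idea (this is not the actual Tomorrow Children implementation; the grid size, cone angle, and step sizes below are invented for the sketch, and a point sample with a growing footprint stands in for a real mip-mapped voxel fetch):

```python
import numpy as np

def cone_trace(density, origin, direction, max_dist=8.0, cone_angle=0.35):
    """Accumulate occlusion/radiance along a cone through a voxel grid.

    The cone is approximated by point samples whose footprint (and thus the
    mip level we'd sample in a real renderer) grows with distance.
    """
    direction = direction / np.linalg.norm(direction)
    accum, transmittance = 0.0, 1.0
    t = 0.5
    while t < max_dist and transmittance > 0.01:
        radius = max(1.0, t * cone_angle)       # cone footprint widens with distance
        p = origin + t * direction
        idx = tuple(np.clip(p.astype(int), 0, np.array(density.shape) - 1))
        d = density[idx]                        # stand-in for a mip-mapped voxel fetch
        accum += transmittance * d              # front-to-back accumulation
        transmittance *= (1.0 - min(d, 1.0))
        t += radius                             # step size grows with the footprint
    return accum

# Toy 16^3 "voxelized scene" with one dense blocker in the middle.
grid = np.zeros((16, 16, 16))
grid[8:10, 8:10, 8:10] = 0.8
occ = cone_trace(grid, origin=np.array([2.0, 8.5, 8.5]),
                 direction=np.array([1.0, 0.0, 0.0]))
```

A real implementation traces a handful of such cones per pixel over the hemisphere and samples progressively coarser mips of a voxelized radiance volume; this sketch only shows the widening-footprint march that makes the technique cheaper than path tracing.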
 
Ran into some interesting patents from Microsoft.

Optimizing data transfers between heterogeneous memory arenas

http://www.google.com/patents/US20140223131

When data is copied between CPU and GPU memory, a GPU direct memory access (DMA) copy engine may be used. The GPU DMA copy engine may require that the CPU's memory be properly aligned before data can be transferred. In such cases, data transfers may involve an extra step of copying between the actual CPU source/destination memory and a temporary memory allocation on the CPU that meets the DMA copy engine's constraints. Such data transfer operations may result in the overall latency of the transfer being the sum of each step in the process.

In embodiments described herein, the overall transfer operation may be broken into a series of transfers of smaller chunks. Then, the data chunks may be pipelined so that the execution of different steps in the transfer operation overlap by concurrently executing the different steps in the pipeline with each different step being performed on a different chunk of the larger transfer. The pipelining technique may be applied to any data transfer operation that involves multiple steps as part of the transfer operation. The different steps may be performed by any hardware or software resources that can function concurrently.

This pipelining technique may be applied, for example, when copying a large amount of data between the memory arenas of two different accelerator devices. In such cases, the transfer is to be routed through CPU memory, and thus includes two steps: 1) copy data from source accelerator memory to CPU memory, and 2) copy the data from CPU memory to destination accelerator memory. Since step 1 and step 2 in the above transfer operation are performed by independent DMA engines (step 1 by the source accelerator DMA engine and step 2 by the destination accelerator's DMA copy engine), the two engines can work concurrently to speed up the overall transfer time by pipelining the data copying steps.
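A toy model of that two-engine overlap, with Python threads standing in for the two independent DMA engines and plain lists standing in for the memory arenas (the function names, chunk count, and queue-based staging are all invented for the sketch):

```python
import threading, queue

def pipelined_copy(src, n_chunks=4):
    """Two-stage copy (source -> CPU staging -> destination) where the two
    stages overlap on different chunks, like two independent DMA engines."""
    size = len(src)
    step = (size + n_chunks - 1) // n_chunks
    chunks = [src[i:i + step] for i in range(0, size, step)]

    staging = queue.Queue()       # the "CPU memory" between the two engines
    dst = []

    def engine1():                # source accelerator's DMA engine
        for c in chunks:
            staging.put(list(c))  # copy a chunk into CPU staging memory
        staging.put(None)         # end-of-transfer marker

    def engine2():                # destination accelerator's DMA engine
        while (c := staging.get()) is not None:
            dst.extend(c)         # copy a chunk out of CPU staging memory

    t1 = threading.Thread(target=engine1)
    t2 = threading.Thread(target=engine2)
    t1.start(); t2.start(); t1.join(); t2.join()
    return dst

data = list(range(100))
out = pipelined_copy(data)
```

The point is only the structure: once the first chunk lands in staging memory, both "engines" are busy at the same time, which is what shrinks the end-to-end latency in the patent's scheme.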

In some cases, data may be transferred between a first memory arena and a fourth memory arena. In such cases, transfer determining module 115 may determine that for the data 131 to be transferred from the first memory arena to a fourth arena, the data chunk is to be transferred from the first memory arena to a second memory arena, from a second memory arena to a third memory arena, and from the third memory arena to the fourth memory arena. In response to the determination, the data copying module 125 may perform the following in parallel: copy a third data portion (not shown) from the first memory arena 135 to the second memory arena 140, copy the second data portion 137 from the second memory arena to the third memory arena 145 and copy the first data portion 136 from the third memory arena to the fourth memory arena. As shown in FIG. 5, the total transfer time for such a data transfer is t+t/n seconds, where n is the number of data portions (i.e. chunks). The copying is performed concurrently at each stage, once the pipeline is loaded. This concurrent data transfer among heterogeneous memory arenas allows data to be quickly transferred and accessed by the destination. Instead of serially sending a large data chunk from arena to arena, the chunks are broken down into smaller pieces and transferred concurrently, thus greatly reducing transfer time.
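Working out the figure's t + t/n claim: assuming each full-size stage takes t seconds, splitting the data into n chunks makes each per-chunk stage take t/n, and a pipeline of s overlapping stages drains in (s + n − 1) chunk-times:

```python
def pipelined_time(t, n_chunks, n_stages):
    """Total time for n_chunks flowing through n_stages, assuming each stage
    takes t seconds on the full data (so t / n_chunks per chunk)."""
    return (n_stages + n_chunks - 1) * (t / n_chunks)

# Two-stage transfer (source accelerator -> CPU -> destination accelerator):
# pipelined time works out to t + t/n, versus 2t done serially.
t, n = 1.0, 8
two_stage = pipelined_time(t, n, 2)    # = t + t/n
serial    = 2 * t
```

Under the same assumption, the three-stage (four-arena) case comes out to t + 2t/n, so the patent's t + t/n figure matches the two-stage case; the saving grows with the chunk count n.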

ENGINE FOR STREAMING VIRTUAL TEXTURES

http://www.google.com/patents/WO2014113335A1?cl=en

The engine may include at least an integrated circuit having a controller, read/write buffers and LZ and/or JPEG decoders. In an embodiment, the engine may also have LZ and/or JPEG encoders to compress texture data. The engine may be included in a computing device having at least one processor and volatile memory in a system on a chip (SoC). In an embodiment, the at least one processor may be a graphics processor and/or central processor.

The application may be a video game software application and the computing device may be a video game console...
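A minimal sketch of what the LZ decode path of such a streaming engine does, using zlib (a DEFLATE/LZ-family codec) as a stand-in for whatever LZ variant the patent's hardware block implements; the tile size and tiling scheme here are invented for the sketch:

```python
import zlib

TILE = 64 * 64 * 4  # hypothetical 64x64 RGBA tile, 4 bytes per texel

def compress_tiles(texture: bytes):
    """Split a texture into fixed-size tiles and LZ-compress each one,
    so individual tiles can later be streamed and decoded on demand."""
    return [zlib.compress(texture[i:i + TILE]) for i in range(0, len(texture), TILE)]

def stream_tile(compressed_tiles, index: int) -> bytes:
    """Decode just the requested tile, as a streaming engine would,
    instead of decompressing the whole texture up front."""
    return zlib.decompress(compressed_tiles[index])

# Synthetic texture: 4 tiles' worth of repeating byte patterns.
texture = bytes(range(256)) * (TILE // 256) * 4
tiles = compress_tiles(texture)
tile0 = stream_tile(tiles, 0)
```

The appeal of doing this in a fixed-function block rather than on the CPU/GPU is that pages of a virtual texture can be decompressed on demand without burning shader or CPU cycles.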

In one example, an accelerator DMA engine copies the first data portion 136 from the second memory arena 140 to the third memory arena 145, and a destination accelerator copies the second data portion 137 from the first memory arena to the second memory arena. In this example, once the first portion of data has been transferred, each subsequent portion is processed by the independent source and destination DMA engines concurrently.

DYNAMIC MANAGEMENT OF HETEROGENEOUS MEMORY

https://www.google.com/patents/WO20...a=X&ei=vWcTVOy8A8S-8gGdvIDAAw&ved=0CCIQ6AEwAA

A method of operating a computing device includes dynamically managing at least two types of memory based on workloads, or requests from different types of applications. A first type of memory may be high performance memory that may have a higher bandwidth, lower memory latency and/or lower power consumption than a second type of memory in the computing device. In an embodiment, the computing device includes a system on a chip (SoC) that includes Wide I/O DRAM positioned with one or more processor cores. A Low Power Double Data Rate 3 dynamic random access memory (LPDDR3 DRAM) memory is externally connected to the SoC or is an embedded part of the SoC. In embodiments, the computing device may be included in at least a cell phone, mobile device, embedded system, video game, media console, laptop computer, desktop computer, server and/or datacenter.
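One way to picture the "dynamic management" claim: an allocator that places hot, bandwidth-sensitive allocations in the fast pool (the Wide I/O DRAM) and spills everything else to the slower pool (the LPDDR3). The pool sizes, class names, and placement policy below are all made up for the sketch:

```python
FAST_POOL_BYTES = 32 * 2**20   # stand-in for on-package Wide I/O DRAM
SLOW_POOL_BYTES = 2 * 2**30    # stand-in for external LPDDR3

class HeterogeneousAllocator:
    def __init__(self):
        self.fast_free = FAST_POOL_BYTES
        self.slow_free = SLOW_POOL_BYTES

    def place(self, size: int, bandwidth_sensitive: bool) -> str:
        """Pick a pool based on the workload hint, falling back to the
        slow pool when the fast one is exhausted."""
        if bandwidth_sensitive and self.fast_free >= size:
            self.fast_free -= size
            return "fast"
        if self.slow_free >= size:
            self.slow_free -= size
            return "slow"
        raise MemoryError("out of memory in both pools")

alloc = HeterogeneousAllocator()
framebuffer = alloc.place(16 * 2**20, bandwidth_sensitive=True)   # fits in fast pool
audio_cache = alloc.place(64 * 2**20, bandwidth_sensitive=False)  # goes to slow pool
spill       = alloc.place(32 * 2**20, bandwidth_sensitive=True)   # fast pool now too small
```

The patent's claims are broader (migration based on workload over time, power considerations), but the core idea is this kind of policy decision between two physically different memories behind one address space.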
 
it is validation of the power architecture. Many things get validated. I think even ARM cores got validated for the Xbox One. Something that gets validated doesn't necessarily make it into the final product.

Even Nvidia products got validated for the Xbox One, but they decided to go with AMD hardware after the validation. Validation could mean many things, e.g.:
- Is something cost-effective or not?
- Does it work or not?
- Can it be done or not?
...
 
it is validation of the power architecture. Many things get validated. I think even ARM cores got validated for the Xbox One. Something that gets validated doesn't necessarily make it into the final product.

Yeah, they were planning on including backwards compatibility in the Xbox One early in its conception, but they decided to drop it.
 
it is validation of the power architecture. Many things get validated. I think even ARM cores got validated for the Xbox One. Something that gets validated doesn't necessarily make it into the final product.

Even Nvidia products got validated for the Xbox One, but they decided to go with AMD hardware after the validation. Validation could mean many things, e.g.:
- Is something cost-effective or not?
- Does it work or not?
- Can it be done or not?
...

Seems you don't know how SoC design works...
A validated design is the final design.
It wouldn't be carrying the Xbox One label if it wasn't validated for Xbox One!

Why on Earth would Nvidia be validated for Xbox One?

Some validation process examples:
http://www.arl.wustl.edu/projects/fpx/cse535/resources/Wipro_System_on_Chip.pdf
http://www.altera.com/literature/wp/wp-01168-safe-industrial-soc.pdf
 
it is validation of the power architecture. Many things get validated. I think even ARM cores got validated for the Xbox One. Something that gets validated doesn't necessarily make it into the final product.

Even Nvidia products got validated for the Xbox One, but they decided to go with AMD hardware after the validation. Validation could mean many things, e.g.:
- Is something cost-effective or not?
- Does it work or not?
- Can it be done or not?
...

Very true. We know they had a build-off at one point with a few different silicon options; that was in one of the early videos they shared, I think.
 
Seems you don't know how SoC design works...
A validated design is the final design.
It wouldn't be carrying the Xbox One label if it wasn't validated for Xbox One!

Why on Earth would Nvidia be validated for Xbox One?

Some validation process examples:
http://www.arl.wustl.edu/projects/fpx/cse535/resources/Wipro_System_on_Chip.pdf
http://www.altera.com/literature/wp/wp-01168-safe-industrial-soc.pdf

I don't know how you work, but if I validate something (at work), it doesn't say anything about the final product.
It is the result of the validation, and what's done with that result, that counts.
The result can be "not working" or "working", ..... but then someone decides "that does not fit into the budget" (power budget, dollar budget, or whatever), so it is validated but not used, cancelled, or whatever you call it.

And as for Nvidia: yes, that got validated too (Intel also), but because of other aspects (I think it was not as cost-effective; AMD is really cheap) it was not done. Let's just call it paper validation (not every validation goes through real silicon) ;)
 
You mean he may be referring to power gating or actual consumption?

Possibly the design and validation of the power delivery system. It's a non-trivial task to supply power to high-power components that can wildly swing in demand very quickly, in part thanks to the power gating and DVFS, along with platform-level switches in mode. There are PCB requirements, active components, one or more microcontrollers, etc.

There are skills listed that go very well with that kind of design and validation, whereas there are no POWER ISA and x86 ISA flavored electrons.
 
If the main SoC is an AMD design, it is AMD's responsibility to test the chip.
MS doesn't have to create super-expensive space-age tech to simulate those things, unless there is ....... :LOL:

Plus, we know the Xbox silicon group is headed by a former IBM VP of POWER architecture,
and we know the Xbox director of SoC is ex-IBM and one of the smart guys who designed Xenon, Cell, and the A2.

I can't fathom the reasoning of people like you. What is your theory? That the XB1 has extra silicon power inside it, but it's just not being used and games are performing worse than on the PS4?

Don't worry about replying...
 
I can't fathom the reasoning of people like you. What is your theory? That the XB1 has extra silicon power inside it, but it's just not being used and games are performing worse than on the PS4?

Don't worry about replying...

Dat sekrit saus doh! Seriously, are we likely to see this being dredged up until next gen? The interpretation that he is doing nothing more than power validation is the only one that makes sense. Allandor is absolutely correct that validation != implementation. Validation is often used to answer the question "could this work", not just "does this work"; I've heard of engineers in my company validating all sorts of exciting things that never made it to the test bench, let alone the market.
 
People are looking for something that doesn't exist. I think the general population of Xbox owners has largely moved on; the graphical difference being only resolution appears to matter less as time goes on, and will likely matter even less as more games adopt dynamic resolution.

Technically speaking, there is very little left we don't know about the Xbox hardware, except how the hardware is utilized, how it can be optimized, and how many of the DX12 hardware-based features actually exist in the Xbox One. And how cloud-based processing fits into this picture as an actual, proper solution.
 
This is extremely new: pre-emptive console swapping?

http://imgur.com/YLC2HNr

The story from reddit is that this user received this email from Xbox Support. I'll file this under rumor until more people start receiving these, but if it's true, and MS is actively replacing Xboxes that can no longer be upgraded, this is the weirdest thing ever; I've never seen such pre-emptive action.

edit: so people are saying it is legit. In this reddit user's case, their optical disc drive was broken

edit2: this is both cool and scary at the same time.
 