Xbox One (Durango) Technical hardware investigation

Status
Not open for further replies.
@XpiderMX

Yeah pal I know. (In case you haven't noticed, I linked it as well.)
I am confused precisely because that article pretty much confirms what VGleaks and DF said, and yet SuperDaE spent weeks pointing out that the DF & VGleaks info was dated.
 
It has never been clear whether the GPU used by the 720 will be Southern Islands or Sea Islands based.

http://www.google.com/patents/US20050243094

Systems and Methods for Providing an Enhanced Graphics Pipeline

As described in the background and overview of the invention, the present invention improves upon the state of the graphics processing arts by introducing systems and methods for a graphics pipeline that optimize the use of resources, balance the workload in the graphics pipeline, allow access to calculated information with IEEE compliant integer or floating point values, and provide additional programmability. The invention is directed to geometry processing and programmable shaders. The enhanced graphics pipeline includes the introduction of a common core for all shaders, a stream output for computations that take place in the pipeline and a geometry shader which allows for programming primitives as well as the generation of new geometry.

Common Shader Core for Shaders in the Pipeline

As mentioned in the background, a resource contention issue sometimes occurs due to the demands of different components of the graphics pipeline where specific components are required for a specific task, which can lead to a bottleneck in that area of the pipeline. Current graphics pipelines require optimization by the programmer in order to use the resources most effectively in the pipeline. Even with the best optimization schemes, however, prior art hardware itself is fixed, and thus there are times when the pixel shader or vertex shader remain idle, for instance, when the computations involve heavy matrix manipulation and the vertex shader is unable to compute such values fast enough. Thus, the pipeline may have bottlenecks due to excessive processing by the vertex shaders while the pixel shaders sit idle, or vice versa. In accordance with the invention, since shaders are able to function as one another, where a pixel shader can be reconfigured to function as a vertex shader, the underlying resources of the graphics chip can be optimized for the tasks being asked of the graphics chip. Moreover, as mentioned, the invention introduces a new class of shader referred to herein as a geometry shader, that provides another specialized set of processing capabilities. The common core provided by the invention can thus be configured as any of a vertex shader, a pixel shader and a geometry shader.

In an exemplary non limiting embodiment, the GPU contains a farm of units 184'-1a, which can thus be scheduled to different stages on demand. This load balancing means programmers do not have to worry about utilizing every stage. Any stage may be dynamically enabled or disabled, and configured or reconfigured, thereby freeing and respecializing resources for stages that are active. The common core is able to perform the vertex shader stage, the geometry shader stage and the pixel shader stage, depending on its configuration.
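The load-balancing idea in the quoted passage can be sketched in a few lines. The scheduler below is purely illustrative (the stage names and queue depths are made up, and the patent does not specify any allocation policy): it redistributes a pool of identical "common core" units toward whichever stages have the most pending work, so no unit sits idle while another stage is backlogged.

```python
STAGES = ["vertex", "geometry", "pixel"]

def assign_units(num_units, queue_depths):
    """Distribute identical shader units across stages in proportion to
    each stage's pending work. Remainder units go to the busiest stages."""
    total = sum(queue_depths.values())
    if total == 0:
        return {s: 0 for s in STAGES}
    # Proportional allocation (integer division), then hand out leftovers.
    alloc = {s: (num_units * queue_depths[s]) // total for s in STAGES}
    leftover = num_units - sum(alloc.values())
    for s in sorted(STAGES, key=lambda s: -queue_depths[s]):
        if leftover == 0:
            break
        alloc[s] += 1
        leftover -= 1
    return alloc

# A pixel-heavy frame pulls most units to the pixel stage:
print(assign_units(8, {"vertex": 10, "geometry": 0, "pixel": 70}))
# → {'vertex': 1, 'geometry': 0, 'pixel': 7}
```

The point is only that a unified pool lets the allocation change per workload, which fixed vertex/pixel hardware cannot do.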

Geometry Shader to Operate on Primitives

Today, as graphics programmers develop graphics applications via a set of available graphics APIs, the programmer generally indicates a set of vertices to be operated upon by a set of algorithmic elements. Having specified the information, the data is sent into the graphics pipeline, and each vertex is streamed through a vertex shader and a pixel shader, as illustrated in FIG. 1A. Although any data that fits the input structure for the vertex and pixel shaders may be specified, vertex shaders are generally suited and used for operation upon vertices and pixel shaders are generally suited and used for operation upon pixels.

In this regard, a geometry shader in accordance with the invention is a new type of shader for a GPU in a graphics subsystem that is capable of taking different types of "primitive" input including any of a vertex, a line or a triangle, whereas prior art shaders (namely vertex shaders) are limited to being able to input, operate on and output vertices. In distinction, in addition to operation on a stream of vertices, the geometry shader of the invention can operate on primitives that define lines (sets of two vertices) and triangles (sets of three vertices), receiving such primitives as input inside the pipeline and outputting primitives inside the pipeline for further operation in accordance with the graphics architecture of the invention.

One further aspect of the primitive processing in accordance with the invention is that the geometry shader enables operations on the entire primitive not just by itself, but also in the context of some additional nearby vertices. One line segment in a polyline, for example, may be processed with the ability to read the vertices before and after that segment. Although the mechanism is general (graphics data need not be "graphics" data, it can be any data defined for processing by the GPU), a frequent use for this capability to process adjacent vertices of a primitive is that the geometry shader is capable of taking information about neighboring points in 3-D geometric space into account in current calculations.
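As a rough illustration of the adjacency idea (not the patent's actual mechanism), here is a polyline pass in which each segment's "shader invocation" can also read the vertices immediately before and after it, mirroring the line-with-adjacency primitive described above:

```python
def shade_polyline(vertices):
    """For each segment (v[i], v[i+1]), also expose the adjacent
    vertices v[i-1] and v[i+2] (None at the ends of the polyline)."""
    out = []
    for i in range(len(vertices) - 1):
        prev_v = vertices[i - 1] if i > 0 else None
        next_v = vertices[i + 2] if i + 2 < len(vertices) else None
        # A real geometry shader might use the neighbors to miter joins;
        # here we just record what each invocation can see.
        out.append((prev_v, vertices[i], vertices[i + 1], next_v))
    return out

segments = shade_polyline([(0, 0), (1, 0), (1, 1), (2, 1)])
print(segments[1])  # the middle segment sees both of its neighbors
# → ((0, 0), (1, 0), (1, 1), (2, 1))
```

Per-segment access to neighbors is what lets such a stage compute things like smoothed normals or join geometry that a plain per-vertex shader cannot.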

Also included in the patent,

Stream Output of Memory before Frame Buffer Rasterization

Generation of Geometry inside the Pipeline
 

Looks like you've re-discovered DX10. :p (Nintendo thread is the other way : D )
 
But it is a 2005 patent; I don't know if it is still relevant now.

My bad. I googled the patent to get an easier-to-read version, to make the patent more accessible. I am referring specifically to the patent below. It was filed in 6/2011 and is a continuation of the previously linked patent. I don't know, maybe the complexity was too much for mass-produced silicon in 2005.

http://appft1.uspto.gov/netacgi/nph...ND+contention&RS=(AN/Microsoft+AND+contention)

This application is a continuation of U.S. patent application Ser. No. 10/933,850, filed Sep. 3, 2004, which claims the benefit of U.S. Provisional Application No. 60/567,490 filed May 3, 2004; the contents of both are incorporated by reference herein in their entirety.


From wikipedia.

A "continuation application" is a patent application filed by an applicant who wants to pursue additional claims to an invention disclosed in an earlier application of the applicant (the "parent" application) that has not yet been issued or abandoned. The continuation uses the same specification as the pending parent application, claims the priority based on the filing date of the parent, and must name at least one of the inventors named in the parent application. This type of application is useful when a patent examiner allowed some, but rejected other claims in an application, or where an applicant may not have exhausted all useful ways of claiming different embodiments of the invention.

Looks like you've re-discovered DX10. :p (Nintendo thread is the other way : D )

DX10 cards allow you to reconfigure shaders on the fly?
 
Kinect 2.0:

Durango Sensor

The next generation sensor improves the current sensor in many areas:

  • Improved field of view results in much larger play space.
  • RGB stream is higher quality and higher resolution.
  • Depth stream is much higher resolution and able to resolve much smaller objects.
  • Higher depth stream accuracy enables separating objects in close depth proximity.
  • Higher depth stream accuracy captures depth curvature around edges better.
  • Active infrared (IR) stream permits lighting independent processing and feature recognition.
  • End to end pipeline latency is improved by 33 ms.
More here: http://www.vgleaks.com/durango-next-generation-kinect-sensor/
 
Kinect 2.0:

Durango Sensor

The next generation sensor improves the current sensor in many areas:

  • Improved field of view results in much larger play space.
  • RGB stream is higher quality and higher resolution.
  • Depth stream is much higher resolution and able to resolve much smaller objects.
  • Higher depth stream accuracy enables separating objects in close depth proximity.
  • Higher depth stream accuracy captures depth curvature around edges better.
  • Active infrared (IR) stream permits lighting independent processing and feature recognition.
  • End to end pipeline latency is improved by 33 ms.
More here: http://www.vgleaks.com/durango-next-generation-kinect-sensor/

Thanks..mmm interesting.

Shaving off a third of the latency is definitely a bonus and nothing to sniff at... but I fear it will not be enough for the hardcore enthusiasts.
 
Shaving off a third of the latency is definitely a bonus and nothing to sniff at... but I fear it will not be enough for the hardcore enthusiasts.

Yep.

Not enough improvement in latency.

Not sure if it is an interface issue (USB), or again, skimping on processing ... Either way, if this is supposed to be the "killer app" of the system, and all they managed was to shave the delay by a third ... :oops:
 
Yep.

Not enough improvement in latency.

Not sure if it is an interface issue (USB), or again, skimping on processing ... Either way, if this is supposed to be the "killer app" of the system, and all they managed was to shave the delay by a third ... :oops:

My thoughts entirely ... this is their must-have, stand-out feature. To be honest I expected full finger tracking and latency dropped by 60% or more!! Maybe even the ability to read emotions or something, I don't know. I bet it will be awesome no doubt... it just smacks of budget control rather than going to the moon and back.
 
My thoughts entirely ... this is their must-have, stand-out feature. To be honest I expected full finger tracking and latency dropped by 60% or more!! Maybe even the ability to read emotions or something, I don't know. I bet it will be awesome no doubt... it just smacks of budget control rather than going to the moon and back.

So much fail going on at MS ... sad thing is, it is/was entirely avoidable.

#firebalmer

Perhaps that is a typo and it is supposed to read:
"End to end pipeline latency is improved TO 33 ms."
 
So much fail going on at MS ... sad thing is, it is/was entirely avoidable.

#firebalmer

Perhaps that is a typo and it is supposed to read:
"End to end pipeline latency is improved TO 33 ms."

The absolute best it could possibly be with a 60Hz camera is worse than 33ms.

It has to capture the frame (1/60th), transmit it over whatever link is there, process the data, and hand the data to the game; the game then has to render the frame and swap the buffer (1/60th), plus that front buffer has to be transferred to your TV and displayed (1/60th best case) before you see the result.

The first 1/60th is really only half that on average.
If you have an analog connection to the TV and it's not buffering the data (an old CRT), you technically get to see the image as it's drawn, so the last 1/60th is also only about half a frame on average.

The processing probably isn't a significant part of the time. The best places to save time are a higher camera framerate and a faster link to the console, but you pretty quickly hit the limitations there.
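The arithmetic above can be checked directly. The figures below assume a 60 Hz camera, renderer, and display, with link and processing time optimistically set to zero; even then the chain lands above 33 ms:

```python
FRAME_MS = 1000.0 / 60.0        # ~16.7 ms per 60 Hz frame

capture = FRAME_MS / 2          # motion lands mid-frame on average
render = FRAME_MS               # game renders the frame and swaps buffers
display_buffered = FRAME_MS     # TV buffers a full frame before showing it
display_crt = FRAME_MS / 2      # analog CRT: seen as drawn, half frame avg

typical = capture + render + display_buffered
crt_best = capture + render + display_crt
print(f"buffered display: {typical:.1f} ms, CRT best case: {crt_best:.1f} ms")
# → buffered display: 41.7 ms, CRT best case: 33.3 ms
```

So even the CRT best case just exceeds 33 ms, which is why "improved TO 33 ms" can't be right for an end-to-end figure with a 60 Hz camera.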
 
All sounds pretty cool to me, especially the Identity system. I can see some great uses for that. Say you simply walk into the room (the camera's field of view) and your avatar pops up and asks if you want to join the game. So, signing in, etc. Or maybe your avatar could change clothes to match what you're wearing each time it sees you, lol. I'm definitely on board with this anyway; this is the kind of thing I was hoping for with Durango. I'm not overly concerned with high gaming performance from this platform.
 
All sounds pretty cool to me, especially the Identity system. I can see some great uses for that. Say you simply walk into the room (the camera's field of view) and your avatar pops up and asks if you want to join the game. So, signing in, etc. Or maybe your avatar could change clothes to match what you're wearing each time it sees you, lol. I'm definitely on board with this anyway; this is the kind of thing I was hoping for with Durango. I'm not overly concerned with high gaming performance from this platform.

The problem for MS, judging by Kinect software sales, is that Xbox users instead want high gaming performance ;)
MS seems to me to be in a bit of trouble with the next Xbox. Kinect 2 will never be a factor in sales, just as the tablet pad is not for the Wii U.
 