News & Rumors: Xbox One (codename Durango)

Status
Not open for further replies.
Thread Title Key English words: News, Rumours, Xbox

Things Not in thread title: Business, strategies, MyLife purchases, Sony
 
Thread Title Key English words: News, Rumours, Xbox

Things Not in thread title: Business, strategies, MyLife purchases, Sony

Well, now that we've had a complete reversal of the situation, going from a glut of next-gen threads to only two, the business strategy / big-picture stuff has to go somewhere

Why did you close the NGGP thread in the first place? It was serving its purpose well.
 
Well, now that we've had a complete reversal of the situation, going from a glut of next-gen threads to only two, the business strategy / big-picture stuff has to go somewhere

I quite agree. It does have to go somewhere, but NOT HERE. Perhaps it should go over here?

Why did you close the NGGP thread in the first place? It was serving its purpose well.

The mod staff respectfully disagrees.
 
Sorry, will try to keep things more on topic. I assume that, since news about Durango is hard to come by, talking about the specs as we know them, and speculating on those specs and on what (if anything) might change when we get an official reveal, is fair game for this thread?

If the ESRAM is 6T-SRAM at the kind of speeds some are expecting, what would that do for latency between it and the L2 cache? How many ns or cycles should we expect from the ESRAM if, say, it were going to have a major impact on performance? What are the typical latencies for traditional VRAM, and how much better would 6T-SRAM be?
 
And even though the L2 on GCN operates as a write-back cache, where it doesn't copy modified data back to the ESRAM until it's absolutely necessary, would the penalty for the write operation be nearly as bad going to 32 MB of 6T-SRAM?

Trying to get an understanding of why low-latency 6T-SRAM would benefit the overall operation of the GPU, if the specs are precisely what they are expected to be.
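As a toy illustration of the write-back behaviour in question (hypothetical sizes, not Durango's actual cache controller): modified lines stay dirty in the cache and only cost a write to the backing store, such as the ESRAM, when they are evicted, so repeated writes to the same working set generate no backing-store traffic at all.

```python
# Toy write-back cache model: dirty lines are written to the backing
# store (e.g. the ESRAM in this speculation) only on eviction.

class WriteBackCache:
    def __init__(self, capacity_lines):
        self.capacity = capacity_lines
        self.lines = {}          # address -> dirty flag (insertion-ordered)
        self.writebacks = 0      # writes that actually reach the backing store

    def write(self, addr):
        if addr not in self.lines and len(self.lines) >= self.capacity:
            # Evict the oldest line; only a dirty line costs a write-back.
            victim, dirty = next(iter(self.lines.items()))
            del self.lines[victim]
            if dirty:
                self.writebacks += 1
        self.lines[addr] = True  # mark dirty; no immediate write-back

cache = WriteBackCache(capacity_lines=4)
for addr in [0, 1, 0, 1, 0, 1]:  # repeated writes hit the same two lines
    cache.write(addr)
print(cache.writebacks)  # 0 -> six writes, zero backing-store traffic
```

The penalty the write-back policy defers only shows up when the working set overflows the cache and dirty lines start getting evicted.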
 
Sorry, will try to keep things more on topic. I assume that, since news about Durango is hard to come by, talking about the specs as we know them, and speculating on those specs and on what (if anything) might change when we get an official reveal, is fair game for this thread?

If the ESRAM is 6T-SRAM at the kind of speeds some are expecting, what would that do for latency between it and the L2 cache? How many ns or cycles should we expect from the ESRAM if, say, it were going to have a major impact on performance? What are the typical latencies for traditional VRAM, and how much better would 6T-SRAM be?

There isn't enough information about how the system is arranged to say what the latencies might be.
For one, we don't know how the ESRAM is subdivided, or how it is accessed compared to the rest of the cache hierarchy. That can sway latencies by tens of cycles in either direction, depending on how accesses are generated, and it can lead to variations based on physical locality.

Significantly smaller L3 SRAM pools can take over 10 ns, possibly more (especially with AMD's L3).
There are EDRAM L3s of similar size to Durango's memory, and they have access latencies that can fall in the same range.
The (edit: latency) benefits of ESRAM for pools this size are being overblown, since other factors, such as raw physical distance and intervening layers of arbitration, add latency, whereas the penalty for using EDRAM can be reduced so that it is only mildly worse.
In the case of Power7, by some metrics it's slightly better.

RAM latency is measured in hundreds of cycles, and could be between 50 and 100 ns (edit: assuming we're talking about Durango's DDR3).
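For a back-of-the-envelope sense of those figures in cycle terms, here is a quick conversion, assuming the rumored 1.6 GHz Jaguar CPU clock (an assumption from the leaks, not anything confirmed):

```python
# Convert latency in ns to clock cycles; 1.6 GHz is the *rumored*
# Durango CPU clock, used here purely as an illustrative assumption.
def ns_to_cycles(latency_ns, clock_hz=1.6e9):
    return latency_ns * clock_hz / 1e9

for label, ns in [("SRAM pool (~10 ns)", 10),
                  ("DDR3, low end (50 ns)", 50),
                  ("DDR3, high end (100 ns)", 100)]:
    print(f"{label}: {ns_to_cycles(ns):.0f} cycles")
# 10 ns is 16 cycles; 50-100 ns is 80-160 cycles at this clock.
```

So even a ~10 ns SRAM pool is on the order of tens of cycles away, while DRAM sits at low hundreds, which is where the write-back policy and arbitration details matter more than the raw cell technology.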
 
Back on topic...

Microsoft to reveal EA partnership at Next Xbox event
Exclusive add-on content deal expected; Publisher exec says prepare for big announcements

http://www.computerandvideogames.com/393418/microsoft-to-reveal-ea-partnership-at-next-xbox-event/

That _might_ even out any perceived tech deficiencies compared to the competition, but only slightly.

Can we fast forward to April already?

Tommy McClain

Am I the only one who sees that and immediately thinks of the exclusive game Crytek is developing for the Xbox platform? Codename Kingdoms, was it?

It makes a lot more sense to launch it on the next Xbox rather than the X360.

Regards,
SB
 
Am I the only one who sees that and immediately thinks of the exclusive game Crytek is developing for the Xbox platform? Codename Kingdoms, was it?

It makes a lot more sense to launch it on the next Xbox rather than the X360.

Regards,
SB

That is Ryse, correct? I guess they could have moved it off of the 360 and onto the next console, and I am sure Crytek had valuable input into development needs for the console. I wonder how much Kinect still remains in that project, since rumors suggested it was scaled back from "made for Kinect" to "better with Kinect".
 
There isn't enough information about how the system is arranged to say what the latencies might be.
For one, we don't know how the ESRAM is subdivided, or how it is accessed compared to the rest of the cache hierarchy. That can sway latencies by tens of cycles in either direction, depending on how accesses are generated, and it can lead to variations based on physical locality.

Significantly smaller L3 SRAM pools can take over 10 ns, possibly more (especially with AMD's L3).
There are EDRAM L3s of similar size to Durango's memory, and they have access latencies that can fall in the same range.
The (edit: latency) benefits of ESRAM for pools this size are being overblown, since other factors, such as raw physical distance and intervening layers of arbitration, add latency, whereas the penalty for using EDRAM can be reduced so that it is only mildly worse.
In the case of Power7, by some metrics it's slightly better.

RAM latency is measured in hundreds of cycles, and could be between 50 and 100 ns (edit: assuming we're talking about Durango's DDR3).

Pretty interesting. Yeah, I realized that physical distance would come into play, but, uninformed as I am, I had the strange assumption that the SRAM would sit in the same ideal physical location where more traditional VRAM (such as GDDR5) sits relative to a modern GPU, and thus avoid physical-location penalties of the magnitude other memory options suffer as they move further away from where the L1 cache is located on chip.

Guess it really does come down to the gritty details of how Microsoft and AMD have decided to implement it; merely implementing it won't somehow produce incredible results on its own.

If it had been implemented in the most ideal or impressive way possible, where you would expect very positive gains, is it common for that to be something Microsoft wouldn't highlight to developers? Or is it the kind of thing they wouldn't need to mention, since it just works anyway, and whether or not devs are aware of how it accomplishes its task wouldn't make them any more or less capable of taking advantage of it?

In short: if it works, it just works, and there's no need for anyone to be made aware of how it was accomplished. Would Microsoft, or anyone, ever take that approach?
 
From the look of this, I am not sure AMD is designing the Xbox Next's SoC at all. Microsoft has hired so many senior engineers that it looks like they are designing their own chip, like Apple. If they use an all-AMD design like Sony, I am not sure why MS would need so many hardware engineers from the beginning.

Hmm... We might be totally surprised after a couple of months :)

Backward Compatibility SoC.
 
From the look of this, I am not sure AMD is designing the Xbox Next's SoC at all. Microsoft has hired so many senior engineers that it looks like they are designing their own chip, like Apple. If they use an all-AMD design like Sony, I am not sure why MS would need so many hardware engineers from the beginning.

Hmm... We might be totally surprised after a couple of months :)

One would assume that, if the next Xbox is really meant to be a media-entertainment all-in-one device, there is some kind of always-on CPU or SoC, separate from the 8-core CPU, that remains active in a super-low-power state.
 
According to VGLeaks, this information about the CPU is from 2013, so it's not the stuff they got from SuperDaE.
 
According to VGLeaks, this information about the CPU is from 2013, so it's not the stuff they got from SuperDaE.

Yeah, sure; they probably mean that this news story is from 2013, lol.
I won't believe VGLeaks. They got all the info and docs from DaE, and that's a fact.
We should simply wait until April.
 
http://www.microsoft-careers.com/jo...-Startup-Business-Group-Job-WA-98052/2378328/

Job Category: Hardware Engineering
Location: Redmond, WA, US
Job ID: 822470-101448
Division: Corporate Research & Development

Do you want to be part of a team that builds the next big, game-changing entertainment and communication experiences? We are developing a cutting-edge new video and depth-sensor based technology that goes beyond anything on the market today.

This is a key position in the productization of a V.1 game-changing technology currently under wraps at Microsoft. This is a newly created, high-impact role in Microsoft’s Startup Business Group (SBG) within a development team that is developing multiple products, based on this new technology that will be shipped with the XBOX and Lync/Skype divisions. A critical part of this role is the implementation of complex, cutting-edge image processing and computer vision algorithms in hardware.

The mission of SBG is to drive new business value for Microsoft by identifying new business and technology opportunities, building new products and taking them to market successfully.

The Principal Hardware Engineer will reside within a product team inside of SBG, and be tasked with building new products in a highly dynamic space that crosses computer vision, image processing, video, networking and graphics. This is a small group with little hierarchy that has much interaction with business and technical leaders across the company. This group is uniquely situated outside of existing product groups, allowing it to pursue innovative, product-worthy ideas and solutions that may be unfeasible to develop inside the product groups.

Responsibilities
•Provide technical leadership and individual contribution in the design, architecture, implementation, testing and release of FPGA and ASIC implementations of image processing and computer vision algorithms
•Participate hands-on in all aspects of the hardware engineering process from the design and architecture, all the way through to simulation, implementation, testing, debugging and release
•Play a key role in technical communication, coordination and alignment of the hardware design and implementation plan up and down the organizational structure (from executives to individual contributors), across product line teams (with Architects across organizational boundaries) and research teams (with collaborating teams in Microsoft Research)

Requirements
The successful candidate will bring 15+ years of engineering experience architecting and implementing relevant hardware systems.

In particular, the SBG team seeks demonstrated world-class technical expertise and accomplishments in the following areas:
•FPGA and/or ASIC architecture and design of complex image processing and/or computer vision solutions
•Excellent mathematical ability
•Experience with camera interface and PCIe strongly desirable
•Strong development discipline and attention to detail that incorporates rigorous design, development and testing into all aspects of the product development cycle
•Reputation for being a thought leader in particular area of specialty, prior experience setting architectural direction, vision, roadmap for products
•Ability to operate effectively and efficiently under high uncertainty and ambiguity in a very dynamic environment
•Strong academic background in Engineering and/or Computer Science (Bachelors, Masters or Ph.D. or related is preferred)


CR:SBG


Nearest Major Market: Seattle
Nearest Secondary Market: Bellevue
Job Segments: Engineer, Product Development, Hardware Engineer, Computer Science, Game Designer, Engineering, Research, Technology


Kinect is not a version one product.

Hmmm... it says V.1 technology, not product. If they are switching the way Kinect 2.0 does tracking, maybe that is enough to consider it V.1? I underlined another interesting part: it says this tech will be shipping with Xbox. That would be the first direct semi-confirmation from MS that Kinect of some sort will ship with future Xbox consoles. This has Kinect 2.0 written all over it.
 
^ As is the team in Israel. What I find interesting in the posting you listed is that it is not limited to just the Xbox division. Tinfoil hat is on tight: what other MS products could benefit from integration with this new sensor? Tablet versions, or phones, to better allow for game use (streaming) on those devices.

Taking my hat off. Between these two job listings, they are actively working on more tech. :)
 
Hmmm... it says V.1 technology, not product. If they are switching the way Kinect 2.0 does tracking, maybe that is enough to consider it V.1? I underlined another interesting part: it says this tech will be shipping with Xbox. That would be the first direct semi-confirmation from MS that Kinect of some sort will ship with future Xbox consoles. This has Kinect 2.0 written all over it.

I personally think it may be gaming glasses that provide 3D ability with any display. I can imagine one display plane for the display, with an additional display plane for each eye.

I could see it using Kinect to determine your position, and then using that position to render the graphics in the correct position for your eyes.
 
I personally think it may be gaming glasses that provide 3D ability with any display. I can imagine one display plane for the display, with an additional display plane for each eye.

I could see it using Kinect to determine your position, and then using that position to render the graphics in the correct position for your eyes.

Why not just do it Johnny Chung Lee style?


Tommy McClain
 
Why not just do it Johnny Chung Lee style?


Tommy McClain

Here is an MS patent:

http://appft1.uspto.gov/netacgi/nph...AN/Microsoft+And+eye&RS=(AN/Microsoft+AND+eye)

WIDE FIELD-OF-VIEW VIRTUAL IMAGE PROJECTOR

[0017] Head-mounted display device 102 includes processor(s) 108 and computer-readable media 110, which includes memory media 112 and storage media 114. Computer-readable media 110 also includes spatial light modulator controller (herein a "controller") 116. How controller 116 is implemented and used varies, and is described as part of the methods discussed below.

[0018] Head-mounted display device 102 also includes virtual image projector 118, which generates a wide field-of-view virtual image that can be viewed by a wearer of the head-mounted display, referred to as "viewer" herein. For example, virtual image projector 118 may be coupled to the lens of eyeglasses 104 to generate a virtual image of infinitely distant objects directly in front of the viewer's eye to cause a lens of the viewer's eye to adjust to an infinite or near-infinite focal length to focus on the objects. Virtual image projector 118 may be at least partially transparent so that the viewer can see external objects as well as virtual images when looking through a lens of head-mounted display device 102. In addition, it is to be appreciated that virtual image projector 118 may be small enough to fit onto the lens of eyeglasses 104 without being noticeable to a viewer wearing the eyeglasses.

[0019] In some cases, virtual image projector 118 can be implemented as two projectors to generate a virtual image in front of each of the viewer's eyes. When two projectors are used, each virtual image projector 118 can project the same virtual image concurrently so that the viewer's right eye and left eye receive the same image at the same time. Alternately, the projectors may project slightly different images concurrently, so that the viewer receives a stereoscopic image (e.g., a three-dimensional image). For purposes of this discussion, however, virtual image projector 118 will be described as a single projector that generates a single virtual image in front of a single one of the viewer's eyes.

http://appft1.uspto.gov/netacgi/nph...AN/Microsoft+And+eye&RS=(AN/Microsoft+AND+eye)

CHANGING BETWEEN DISPLAY DEVICE VIEWING MODES

[0019] In addition to or alternatively to displaying a virtual image behind the display screen, as a user moves the mobile device 100 closer to an eye, the mobile device may change to another type of virtual display mode, such as a near-eye viewing mode, as illustrated at 130 in FIG. 1. Examples of suitable near-eye imaging modes include, but are not limited to, Maxwellian view modes, retinal scanning display modes, and retinal holographic projection modes as well as eyepiece magnifiers such as discussed below.

[0020] Maxwellian view optics focus an image through an area of the pupil of a user's eye to form an image directly on the retina, such that the user perceives an image to be in focus regardless of the distance of the user from the device. The large effective depth of field of Maxwellian systems may benefit users with visual limitations such as presbyopia, and also reduce the adjustment needed for viewing. Maxwellian view optical systems are also very efficient in their use of light, resulting in much lower power consumption for a given level of brightness.

[0021] In a near-eye virtual display system, the user perceives a much larger, more immersive image compared to a real image displayed at the display screen 102. In this manner, a near-eye viewing mode may function similarly to a head-mounted display, but may offer various advantages compared to a head-mounted display, including portability, superior ergonomics for short-term usage, intuitive adjustment for a particular user's preferences, and integration into a commonly carried portable device. In addition to the examples above, it will be understood that any other suitable mechanism may be used to present a near-eye viewing mode, including but not limited to a virtual retinal display mechanism in which a laser is scanned onto the retina, a holographic retinal projection mechanism, an eyepiece such as the one described below, etc. Likewise, it will be understood that a mobile device may be configured to switch between any suitable number of viewing modes, and that an image plane of an image displayed by such a mobile device may vary within a continuous range of image locations and/or between two or more discrete locations.

http://appft1.uspto.gov/netacgi/nph...AN/Microsoft+And+eye&RS=(AN/Microsoft+AND+eye)

VIRTUAL IMAGE DISPLAY DEVICE

[0023] Microlens array 126 is an array of small lenses that is placed between display 124 and a viewing surface (e.g., a screen) of display 124. Thus, microlens array is positioned between display 124 and a viewer of display 124. Microlens array 126 is configured to receive images from display 124 and to generate virtual images of distant objects placed behind the viewing surface of display 124.

[0024] Pupil tracker 128 is configured to locate positions of pupils of a viewer that is viewing display 124. Pupil tracker 128 provides these positions to controller 122 to enable controller 122 to control virtual image display device 102 to render the virtual image based on the positions of the pupils of the viewer. Pupil tracker 128 may be separate or integral with virtual image display device 102. Integral examples include pupil tracker 128-1 of television device 108 and pupil tracker 128-2 of tablet computer 112, whereas separate examples include stand-alone pupil trackers, such as pupil trackers operably coupled with virtual image display device 102, a set-top box, or a gaming device.

[0025] FIG. 2 illustrates a detailed example 200 of virtual image display device 102 configured as a hand-held display device that is typically operated close to the eyes of a viewer 202. In this detailed example, virtual image display device 102 generates a 2D virtual image 204 behind a viewing surface 206 of virtual image display device 102. Note that the size of virtual image 204 is larger than the size of viewing surface 206 and that the plane of virtual image 204 is behind viewing surface 206. This enables viewer 202 to focus on virtual image 204 when the display is operated close to the viewer's eyes (e.g., as a mobile device) even if the viewer suffers from presbyopia, farsightedness, or some other condition causing a lack of focal accommodation.

[0026] FIG. 3 illustrates a detailed example 300 of virtual image display device 102 configured as a 3D display device that is viewed by a viewer 302. In this detailed example, virtual image display device 102 generates a 3D virtual image 304 behind a viewing surface 306 of virtual image display device 102. By generating 3D virtual image 304 behind viewing surface 306 of virtual image display device 102, the 3D virtual image appears realistic because the parallax is rendered to make virtual image 304 appear as though it is behind viewing surface 306 in real space.

[0027] In both FIG. 2 and FIG. 3, virtual image display device 102 includes pupil tracker 128 which locates positions of pupils of viewer 202 or viewer 302, respectively. Pupil tracker 128 provides these positions to controller 122 (not pictured) to enable controller 122 to control virtual image display device 102 to render the virtual image based on the positions of the pupils of viewer 202. For example, by locating the positions of the pupils, virtual image display device 102 can adjust a focus, parallax, and/or perspective of the virtual image in reference to the positions of the pupils so that the virtual image is viewable by viewer 202 or viewer 302.
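The pupil-driven parallax adjustment in [0027] can be sketched with simple ray geometry (all coordinates below are hypothetical, in centimetres, and not taken from the patent): given a tracked pupil and a virtual point behind the screen, the renderer draws the point where the pupil-to-virtual-point ray crosses the display plane, giving each eye a slightly different position and hence the stereoscopic disparity.

```python
# Head-coupled parallax sketch: a virtual point sits behind the screen
# (z < 0); find where to draw it on the screen plane (z = 0) for a pupil
# tracked in front of the screen (z > 0). Units are hypothetical cm.

def screen_point(pupil, virtual):
    """Intersect the ray from the pupil through the virtual point with z = 0."""
    px, py, pz = pupil
    vx, vy, vz = virtual
    t = pz / (pz - vz)                   # fraction of the ray at z = 0
    return (px + t * (vx - px), py + t * (vy - py))

virtual_point = (0.0, 0.0, -20.0)        # 20 cm behind the display
left_eye  = (-3.0, 0.0, 40.0)            # eyes ~6 cm apart, 40 cm away
right_eye = ( 3.0, 0.0, 40.0)

# Each eye gets the point at a different horizontal screen position;
# that disparity is the stereoscopic depth cue the patent relies on.
for eye in (left_eye, right_eye):
    x, y = screen_point(eye, virtual_point)
    print(round(x, 6), round(y, 6))
```

As the tracked pupils move, the intersection points shift, which is how the device could keep the virtual image anchored "behind" the viewing surface.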


Among the patents that became available today, another one, about determining depth using lasers, referenced what MS calls a "three-dimensional GPU". This is my first time seeing MS refer to a GPU in such a fashion.

To be honest, if MS releases a console with 3D lenses that projects an IllumiRoom-like wide view while projecting 3D images that appear to be within your living space, and it works well, it's a wrap, and the lack of GPU performance won't matter.

That's a big "if", a real big "if", because the moment we are able to inject convincing 3D images onto a person's retina, and use something like Kinect to allow pseudo-physical interaction with those images, we have moved beyond Minority Report, and I am buying a bunch of MS stock.
 