News & Rumors: Xbox One (codename Durango)

If you can do it with a low enough overhead, it allows the game a single consistent view of the system regardless of what the system is doing with the game.

Suspend/Resume being the obvious case, but it's equally true for reducing the resources available to a game transparently while something else is in the foreground.

An example might be if you are acting as the server in an online game and swap to a browser mid-gameplay: rather than pausing the game, you could reduce the resources available to it and make the browser more responsive while it's in use. The game doesn't even know it's been shuffled onto fewer cores, though you probably would inform it that it no longer needs to render.
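To make that concrete, here is a minimal C++ sketch of the kind of arbitration being described; the Partition/Arbiter shape, the names, and the core counts are invented for illustration and aren't anything from the actual Durango hypervisor.

```cpp
#include <cstdio>
#include <vector>

// One guest partition (game, browser, system OS): it always sees the same
// number of virtual cores, but the physical cores behind them can change.
struct Partition {
    const char* name;
    int virtualCores;
    std::vector<int> physicalCores;
};

// Re-divide the physical cores between the game and whatever holds the
// foreground. The guests' virtual-core counts never change, so from inside
// the game the "view of the system" stays consistent.
void rebalance(Partition& game, Partition& foreground,
               int totalPhysical, int coresForForeground) {
    game.physicalCores.clear();
    foreground.physicalCores.clear();
    for (int c = 0; c < totalPhysical; ++c) {
        if (c < totalPhysical - coresForForeground)
            game.physicalCores.push_back(c);
        else
            foreground.physicalCores.push_back(c);
    }
}

int main() {
    Partition game{"game", 6, {}};
    Partition browser{"browser", 2, {}};

    rebalance(game, browser, 8, 1);   // normal play: browser mostly idle
    std::printf("%s sees %d cores, runs on %zu physical cores\n",
                game.name, game.virtualCores, game.physicalCores.size());

    rebalance(game, browser, 8, 4);   // browser swapped to the foreground
    std::printf("%s still sees %d cores, now runs on %zu physical cores\n",
                game.name, game.virtualCores, game.physicalCores.size());
}
```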

I do wonder if games have to deal with device-lost-like events on Durango though. In a VM-like world the hypervisor can probably reset the GPU. You wouldn't have the resource recreation issue you have on PC, but any incomplete frames and longer-running compute jobs would have to be restarted.
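A rough sketch of what resubmitting in-flight work after such a reset could look like; the types and the reset flow here are assumptions for illustration only, not a real driver interface.

```cpp
#include <deque>

// A GPU job as the hypervisor might track it: anything not yet completed
// when the reset happens loses its progress and must be queued again.
struct GpuJob {
    int id;
    bool completed;
};

// Called after the hypervisor has reset the GPU underneath the guest.
// Unlike a PC device-lost, resources are assumed to survive; only the
// partially executed frames / compute jobs are resubmitted from scratch.
void resubmitAfterReset(std::deque<GpuJob>& inFlight) {
    std::deque<GpuJob> requeue;
    for (const GpuJob& job : inFlight)
        if (!job.completed)
            requeue.push_back(job);   // restart from the beginning
    inFlight.swap(requeue);
}

int main() {
    std::deque<GpuJob> frameWork = {{1, true}, {2, false}, {3, false}};
    resubmitAfterReset(frameWork);    // jobs 2 and 3 remain, to be re-run
}
```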

Maybe this MS patent is applicable. Maybe it's more applicable to Yukon or the hardware configs in the QoS patent.

Operating system decoupled heterogeneous computing
http://www.google.com/patents/US201...a=X&ei=egyZUf6-L4WY9QSP_4HABQ&ved=0CDkQ6AEwAA

A heterogeneous processing system is described herein that provides a software hypervisor to autonomously control operating system thread scheduling across big and little cores without the operating system's awareness or involvement to improve energy efficiency or meet other processing goals. The system presents a finite set of virtualized compute cores to the operating system to which the system schedules threads for execution. Subsequently, underneath the surface, the hypervisor intelligently controls the physical assignment and selection of which core(s)—big or little—execute each thread to manage energy use or other processing requirements. By using a software hypervisor to abstract the underlying big and little computer architecture, the performance and power operating differences between the cores remain opaque to the operating system. The inherent indirection also decouples the release of hardware with new capabilities from the operating system release schedule. A hardware vendor can release an updated hypervisor, and allow new hardware to work with any operating system version the vendor chooses.


In some embodiments, the big and little cores may have architecture equivalence, micro-architecture equivalence, a global interrupt controller, coherency, and virtualization. Architecture equivalence may include the same Instruction Set Architecture (ISA), Single Instruction Multiple Data (SIMD), Floating Point (FP), co-processor availability, and ISA extensions. Micro-architecture equivalence may include difference in performance but the same configurable features (e.g. cache line length). A global interrupt controller provides the ability to manage, handle, and forward interrupts to all cores. Coherency means all cores can access (cache) data from other cores with forwarding as needed. Virtualization is for switching/migrating workloads from/to cores.

In some embodiments, the heterogeneous processing system allows some processing tasks to be migrated to a cloud computing facility. The system can present the cloud computing facility as just another processing core to which tasks can be scheduled. For appropriate tasks, the system may be able to offload the tasks from the computing device entirely and later return the output of the task to the guest operating system. This may allow the system to enter a lower power state on the computing device or to transition work from a datacenter at peak electricity cost to one of lower electricity cost.
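Purely as an illustration of the scheme the abstract describes, here is a hedged C++ sketch of a hypervisor-side rebalance step that backs a fixed set of virtual cores with big cores, little cores, or a cloud offload target. Every name and threshold below is invented, not taken from the patent or any hardware.

```cpp
#include <cstdio>
#include <vector>

// Where the hypervisor is currently running each virtual core's work.
enum class Backing { LittleCore, BigCore, CloudOffload };

// The guest OS schedules threads onto these virtual cores and never sees
// the `backing` field.
struct VirtualCore {
    int id;
    double recentLoad;   // 0.0 .. 1.0, measured by the hypervisor
    Backing backing;
};

// Hypervisor-side step run on its own tick: pick the physical backing for
// each virtual core to meet energy / performance goals, invisibly to the OS.
void rebalance(std::vector<VirtualCore>& cores, bool offloadAllowed) {
    for (VirtualCore& vc : cores) {
        if (offloadAllowed && vc.recentLoad < 0.05)
            vc.backing = Backing::CloudOffload;   // the "cloud as a core" idea
        else if (vc.recentLoad < 0.40)
            vc.backing = Backing::LittleCore;     // light work: save energy
        else
            vc.backing = Backing::BigCore;        // heavy work: need the speed
    }
}

int main() {
    std::vector<VirtualCore> cores = {
        {0, 0.90, Backing::BigCore},
        {1, 0.20, Backing::BigCore},
        {2, 0.02, Backing::BigCore},
    };
    rebalance(cores, true);
    for (const VirtualCore& vc : cores)
        std::printf("vcore %d -> backing %d\n", vc.id, static_cast<int>(vc.backing));
}
```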
 
Sigh,


What do they mean by 'Alpha Specifications'?

He isn't saying the specs are the same. Read the sentence again. He says the architecture is the same.

Capitalising it like that suggests it's an actual technical term, maybe one MS is using in documentation somewhere. So by that do they mean alpha kit specifications?
Because if they do, then it's wrong to say that the architecture design is the same but beefed up (since the alpha kits are quite different from the VGLeaks specs).
The alpha kits from 2012 were using the same architecture as VGLeaks iirc. From what everyone has been saying beta kits only went out in early 2013.

Yield rates for what chip? The original one or the beefed-up one? You'd assume they mean the beefed-up one, yet that's clearly false, since the chips being produced then were for the beta kits, which went out at the end of the year and were 1.2 TF.
It is saying the new chips have some moderate yields, which is an improvement over the yields from last year. I don't see why you are having trouble understanding what you are reading here.

Again...I don't believe any of this. There is nothing that seems implausible there to me, but that doesn't mean anything without evidence suggesting it is actually true. So don't misunderstand me here...just noting errors in the logic you are presenting. Very little reason to get too worked up at this point anyhow. We will know the specs in some detail in 2 days by the sounds of it.
 
Such changes would require a total redesign of the system and practically chucking out everything they have already done. This doesn't happen quickly.

Neither you nor anyone else on this board has any idea about this. The position you'd have to be in to have an informed position on this issue disqualifies you from being able to post about it openly.

I see ppl routinely asserting it left and right, here and elsewhere. Such assertions aren't warranted and rely heavily on assumptions that you wouldn't have any way of verifying. We heard the exact same rhetoric leading into the PS4 reveal regarding the radical architecture changes that would surely be necessary to increase the RAM amount and the imminent year-plus delay that such a move would require. Afterwards ppl all decided it was totally plausible in hindsight, since they'd just beefed up the density of the RAM chips, and ppl conveniently walked back their prior assertions. The same thing can happen with MS's design(s).

We don't know what flexibility these designs have. I've worked on many competitive engineering projects. I am telling you from my own experience that these types of projects include significant flexibility within the various moving parts and parameter selections. Do changes cost money? Yes. Does any design change that notably improves performance require a significant redesign? NO. That is NOT necessarily true, as one would have hoped would sink in after the PS4/RAM announcement. You can keep the exact same architecture and simply beef up certain components without exceeding the leeway that has been engineered into the design for exactly that reason in the first place.

This notion that notable improvements to the VGLeaks specs automatically mean year long delay and drastic redesign of the entire platform's architecture is very, very misinformed. Armchair engineers ought not be making assertions like you have here without being in a position to establish all the assumptions such chatter invokes.
 
The alpha kits from 2012 were using the same architecture as VGLeaks iirc. From what everyone has been saying beta kits only went out in early 2013

I disagree. The VGLeaks material covered the specs communicated to developers as being the final configuration of the machine, the target design if you will; they weren't alpha specs to my knowledge.
 
If you can do it with a low enough overhead, it allows the game a single consistent view of the system regardless of what the system is doing with the game.

Suspend/Resume being the obvious case, but it's equally true for reducing the resources available to a game transparently while something else is in the foreground.

An example might be if you are acting as the server in an online game and swap to a browser mid-gameplay: rather than pausing the game, you could reduce the resources available to it and make the browser more responsive while it's in use. The game doesn't even know it's been shuffled onto fewer cores, though you probably would inform it that it no longer needs to render.

I do wonder if games have to deal with device-lost-like events on Durango though. In a VM-like world the hypervisor can probably reset the GPU. You wouldn't have the resource recreation issue you have on PC, but any incomplete frames and longer-running compute jobs would have to be restarted.

I had the notion that the main point of consoles is having a consistent pool of resources available for games, regardless of what the OS/other stuff is doing, so that you get a consistent experience, and that transparently reducing things like CPU time and RAM is actually a bad thing rather than a benefit. Of course this remains open to debate, so I would like to hear some more opinions.

I understand opening up VMs to run stuff like "dedicated servers" (yeah, dedicated) for games (which is actually probably a good idea) using up reserved OS resources, but I still don't see clear benefits from using it for games themselves.

My gut feeling is that the strength of consoles is precisely in having consistent hardware, one of the major problems VMs are designed to solve, and that VMs solve it at a small cost to absolute performance and to the ability to control the hardware almost down to the lowest level.
 
Need no...just what seems more plausible. I have no hard evidence, but if I was going to build two platforms, 5x and 20x processing target increases seem reasonable. And when I hear $500 and 5x over an 8-year-old console, that doesn't jibe. I wouldn't go 2x and 5x because of all the competition from smartphones, tablets, and Google.

Jen-Hsun Huang predicts that by 2019 a game console will be at 40 TF. Rewind the clock back 5 years and that would be a 4 TF console in 2014. So 4 or 5 TF seems aggressive but reasonable. The speculation is going to be academic soon enough.

Nvidia should make a console next gen. I am sure it will be the most powerful and only $999. Considering these days their top-end card is $999 :p
 
My gut feeling is that the strength of consoles is precisely in having consistent hardware, one of the major problems VMs are designed to solve, and that VMs solve it at a small cost to absolute performance and to the ability to control the hardware almost down to the lowest level.

The VM doesn't remove this.
I would imagine that when a game is the "foreground" app, it gets a guaranteed set of resources, and the abstraction can be very thin.
Most of the optimizations are at the resource-balancing level; there shouldn't be much to gain from going around a well-designed API at this point.

It's an interesting approach, and it will be interesting to see how MS utilizes it.
I heard recently that, for example, they support suspending multiple games, so a player can swap between the running states quickly. They'll still have to load that state, but I wonder if the box has some amount of fast flash for this purpose.
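If they do something like that, a minimal sketch of the mechanism might look like the following; the SuspendManager, the flash-backed path, and the snapshot format are all guesses for illustration, not anything confirmed about the box.

```cpp
#include <cstdio>
#include <map>
#include <string>
#include <utility>
#include <vector>

// A suspended title: a frozen memory image plus where it was parked.
struct Snapshot {
    std::vector<unsigned char> guestMemory;
    std::string storagePath;   // e.g. a fast flash-backed cache partition
};

class SuspendManager {
    std::map<std::string, Snapshot> suspended_;
public:
    // Freeze a running title. A real system would compress and stream the
    // image out; here it is simply parked under the title's name.
    void suspend(const std::string& title, std::vector<unsigned char> memory) {
        suspended_[title] = Snapshot{std::move(memory), "/cache/" + title + ".vmstate"};
    }

    // Bring a previously suspended title back; returns false if it needs a
    // cold start instead.
    bool resume(const std::string& title, std::vector<unsigned char>& memoryOut) {
        auto it = suspended_.find(title);
        if (it == suspended_.end()) return false;
        memoryOut = std::move(it->second.guestMemory);
        suspended_.erase(it);
        return true;
    }
};

int main() {
    SuspendManager mgr;
    mgr.suspend("titleA", std::vector<unsigned char>(64, 0));  // park title A
    std::vector<unsigned char> restored;
    std::printf("swap back in: %s\n", mgr.resume("titleA", restored) ? "ok" : "cold start");
}
```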
 
Nvidia should make a console next gen. I am sure it will be the most powerful and only $999. Considering these days their top-end card is $999 :p

lol The funny thing is that $1k would be a relatively reasonable price tag for a console with a 5-year life span, while $1k for a video card is utterly ridiculous given that the enthusiasts who go for it will most probably throw another $1k at it a year or two later.
 
from the IGN board:

The whole thing has derailed out of control. This seems like a joke to me, Markberry/misterxteam on IGN too? But in 2 days we will see what's impossible and what is not.
The rumour comes from here.

http://www.360crunch.net/forum/threads/4520-Xbox-Now!-leaks-just-ahead-of-its-May-21st-event?p=47642#post47642

Unless you want to pay more than 1000€ for a console and increase your wattage outlays, paying a sky-high monthly electricity bill, I hope it isn't true.
 
That's it.
The Xbox name is Xbox Pie.
I thought maybe it would be called Xbox Σ or Xbox Sigma since the Σ looks like a backwards 3, but then I found this & thought maybe it's just Xbox since the sum of every number equals zero. LOL

Only if it turns out to be a tablet or comes with one.

Tommy McClain
It's Infinity, guys, trust me on this one. I am no insider though; this is just conjecture on my part, with info to support the idea.

I have been thinking about this... ;) and... wouldn't you like to have the games named after the console?

For example:

Halo Infinity

Forza Infinity

Banjo & Kazooie Infinity

League of Legends Infinity

Gears of War Infinity

Battlefield Infinity

Project Gotham Racing Infinity

Crysis Infinity

Hawken Infinity
 
It's an interesting approach, and it will be interesting to see how MS utilizes it.

There seem to be three main benefits to this. First, it frees them from the shackles of typical console development, as they are no longer bound to any one manufacturer for any one part; they can source any part from anyone they want so long as it's as quick as or quicker than the old part it replaces, even doing this mid-generation. They can potentially hop to different CPU, GPU, RAM, etc. manufacturers and not care anymore. Second, it frees them from the typical console cycles, as they can now release new hardware anytime they want, on any cycle they want, based on market conditions, thanks to now having forward compatibility. Finally, it will allow all the apps/games built for this platform to be ported elsewhere far more easily: basically, any other hardware that Microsoft makes just has to support the Durango VM. This is a boon to publishers, as all their apps/games written to the Durango VM will be easily reusable elsewhere. It seems like a great and long overdue idea to me, win-win-win all around.
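A tiny C++ sketch of that forward-compatibility argument: if titles only ever talk to a stable VM interface, new silicon just has to provide another backend for the same interface. All the class names below are invented for illustration; nothing here is a real Durango API.

```cpp
#include <cstdio>

// The stable contract a title is written against. It doesn't change when
// the parts underneath do.
class IDurangoVm {
public:
    virtual ~IDurangoVm() = default;
    virtual const char* hardwareName() const = 0;
    virtual void submitFrame() = 0;
};

// Launch silicon implements the contract...
class LaunchHardware : public IDurangoVm {
public:
    const char* hardwareName() const override { return "launch APU"; }
    void submitFrame() override { std::printf("frame on launch silicon\n"); }
};

// ...and a later, faster revision only has to implement the same contract.
class RefreshHardware : public IDurangoVm {
public:
    const char* hardwareName() const override { return "refresh APU"; }
    void submitFrame() override { std::printf("frame on refresh silicon\n"); }
};

// The title only ever sees the VM interface, never the physical part list.
void runTitle(IDurangoVm& vm) {
    std::printf("running on %s\n", vm.hardwareName());
    vm.submitFrame();
}

int main() {
    LaunchHardware launch;
    RefreshHardware refresh;
    runTitle(launch);    // same title code...
    runTitle(refresh);   // ...runs unchanged on the newer backend
}
```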
 
What would be the benefit of doubling the ESRAM if it's just used as a "scratchpad" anyway? Is there anything about upping the compute power or the RAM that requires a corresponding bump to the ESRAM?
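For a sense of scale, here is a back-of-the-envelope calculation (all numbers are illustrative assumptions, not leaked specs): what mainly pressures a scratchpad is render-target footprint, which scales with resolution and format rather than directly with extra compute or extra main RAM.

```cpp
#include <cstdio>

int main() {
    const double bytesPerMB = 1024.0 * 1024.0;

    // Assumed render-target set for a deferred renderer: four 32-bit
    // G-buffer targets plus a 32-bit depth/stencil buffer = 20 bytes/pixel.
    const int bytesPerPixel = 4 * 4 + 4;

    double fullHD      = 1920.0 * 1080.0 * bytesPerPixel / bytesPerMB;
    double nineHundred = 1600.0 *  900.0 * bytesPerPixel / bytesPerMB;

    std::printf("1080p G-buffer + depth: %.1f MB\n", fullHD);       // ~39.6 MB
    std::printf(" 900p G-buffer + depth: %.1f MB\n", nineHundred);  // ~27.5 MB
    // The footprint tracks resolution and format, not CPU/GPU compute or the
    // size of main RAM, which is why an ESRAM bump wouldn't automatically
    // follow from beefing up those other parts.
}
```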
 