Even doing it in software would only cost around half of one percent of the Orbis GPU's total processing power. So unless the Orbis version inexplicably needs 100+ HUDs composited at a time, it should be fine.
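For what it's worth, a quick back-of-the-envelope lands in the same ballpark (assuming the rumored 32 ROPs at 800 MHz for Orbis and a full-screen 1080p alpha blend every frame at 60 Hz; both figures are just the leaked numbers, and this ignores bandwidth and cache effects):

```c
#include <stdio.h>

int main(void)
{
    /* Rumored Orbis fill rate: 32 ROPs x 800 MHz = 25.6 Gpixels/s (leaked figure, not confirmed). */
    double fillrate = 32.0 * 800e6;

    /* Full-screen 1080p HUD alpha-blended over the game every frame at 60 Hz. */
    double hud_pixels_per_sec = 1920.0 * 1080.0 * 60.0;

    /* Fraction of peak fill rate spent on the composite. */
    printf("HUD composite cost: %.2f%% of peak fill rate\n",
           100.0 * hud_pixels_per_sec / fillrate);
    return 0;
}
```

That prints roughly 0.49%, which is where the "half of one percent" comes from.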
That last little bit also made me think it could give a graphical advantage in some respects: if a game, for example, only ran at 720p or some other sub-1080p resolution, Durango's game UI could still be rendered at 1080p, while Orbis's game UI would be at the same resolution as the game.
In a side-by-side comparison like Digital Foundry's you'd be able to pick it apart. Your average consumer, though, may think the game with the sharp native-res UI is actually better looking than the other one, even if the other one had, say, better shadows and lighting.
Of course, if that were the case, it would force Orbis developers to also use a native-res UI. But if there's no hardware support, that may require more GPU resources, thus leveling the playing field regardless.
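To make that concrete, here's a minimal sketch of what the software fallback looks like, assuming a 720p game buffer and a native 1080p HUD (written as plain C for clarity; in practice it would be a full-screen shader pass, and the nearest-neighbour scale is just a stand-in for a proper filter):

```c
#include <stdint.h>

/* Toy software composite: upscale a 1280x720 game frame to 1920x1080 and
 * alpha-blend a native 1080p HUD on top. Names and formats are illustrative,
 * not anything from the leaks. */
typedef struct { uint8_t r, g, b, a; } Pixel;

void composite_frame(const Pixel *game720, const Pixel *hud1080, Pixel *out1080)
{
    for (int y = 0; y < 1080; ++y) {
        for (int x = 0; x < 1920; ++x) {
            Pixel src = game720[(y * 720 / 1080) * 1280 + (x * 1280 / 1920)];
            Pixel hud = hud1080[y * 1920 + x];
            Pixel *dst = &out1080[y * 1920 + x];
            /* Straight alpha blend: HUD over the upscaled game. */
            dst->r = (uint8_t)((hud.r * hud.a + src.r * (255 - hud.a)) / 255);
            dst->g = (uint8_t)((hud.g * hud.a + src.g * (255 - hud.a)) / 255);
            dst->b = (uint8_t)((hud.b * hud.a + src.b * (255 - hud.a)) / 255);
            dst->a = 255;
        }
    }
}
```

On Durango the rumored display planes would do the scale and blend at scan-out instead, which is where the "for free" part of this discussion comes from.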
Regards,
SB
The DME is an inherent part of the system. It will be used for "everything".
Where dynamic resolution is concerned, the QoS is guaranteed by sacrificing something, as the patent put it. I didn't say it's not a component of optimization.
I pointed out there are other software optimization techniques developers may use to avoid sacrifices in a standalone game. The developers will/should focus on those techniques first.
The display planes are helpful in compositing (for free). But within a game, I doubt the saving is great. They can control the update rate, timing, quality, quantity, and region of different elements by software.
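As a minimal sketch of that kind of software control (all structure and field names here are made up for illustration), a game can already give every HUD element its own region and refresh rate and only re-render what changed:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-element HUD bookkeeping: each widget owns a screen region
 * and its own refresh rate, so only dirty regions get re-rendered and
 * re-composited each frame. */
typedef struct {
    int x, y, w, h;          /* screen region the element occupies        */
    int update_interval;     /* re-render every N frames (e.g. 1, 2, 30)  */
    bool dirty;              /* set when the underlying value changed     */
} HudElement;

void update_hud(HudElement *elems, int count, uint64_t frame)
{
    for (int i = 0; i < count; ++i) {
        HudElement *e = &elems[i];
        if (e->dirty || (frame % e->update_interval) == 0) {
            /* Re-render just this element into the HUD buffer and mark its
             * x/y/w/h rectangle for compositing; everything else is reused. */
            e->dirty = false;
        }
    }
}
```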
It seems that the display planes are more useful and their benefits more apparent when you have layers of information from different sources. e.g., Custom software HUD over a dedicated video decoder output, OS mouse cursor over an app window, miniaturized game output in OS screen, AR over Kinect video or Blu-ray. It may mean Durango can compose all the different output together while ensuring the responsiveness of the OS. These are part of the "new" experiences I expect in the Durango OS.
As others have pointed out, even without display planes it is not uncommon to have color conversion, scaling, and compositing done as part of the display engine (many GPUs have simple hardware overlays for this). The Durango display planes seem more elaborate in that you can divide them into 4 quadrants to mask obscured output.
As such, they may be there because Durango's OS relies on them for a new and consistent experience.
The PS2 also had a small army of processors and did quite well compared to the relatively more straightforward Dreamcast. That one turned out differently.
Regards,
SB
Why utilize a software solution that requires additional processing first and foremost when hardware does it for free? I can see doing both, but why prioritize the software approach ahead of the hardware approach instead of the other way around? Just by the sound of it wouldn't the hardware approach be objectively smarter to leverage first and then use software if need be?
If it wasn't useful we wouldn't be having a conversation about which is the better way to implement it, would we?
I'm not prioritizing it based on h/w vs s/w.
I'm saying developers will look at the whole "picture" first and see if they can do it without compromises.
You said devs should start with a software approach first. That kinda statement makes it sound like you were prioritizing sw over hw.
I pointed out there are other software optimization techniques developers may use to avoid sacrifices in a standalone game. The developers will/should focus on those techniques first.
Right...and this gives those devs one more option to tweak/manage while trying to decide how to maximize the meaningful detail displayed on screen. This is in addition to any software approach they may want to take.
So just to ask, does anyone see anything either in the patent or the VGLeaks article that suggests devs couldn't use one plane to display a foreground in a game at one res and the background of the game world in another, with DoF or whatnot applied to it? It seems to me that if the 2 application/game planes are the same in their operation you could do something like that, leaving the lower-LOD background at dynamic res/HDR, etc., and potentially save tons on the fillrate. Or no?

Yes, but it's not as exciting as you think. Firstly, DOF applied to a plane is just a background blur; planes won't help in creating high-quality DOF effects. Secondly, the game engine still needs to render out two separate passes into two separate buffers and combine them. The rendering output resolution and refresh are developer controlled. Any game can choose to separate out the background, render it at a lower resolution (and not just the background, but particles and other 'layers', which happens frequently), and then composite. The difference with Durango is that the compositing happens in a hardware video-out device. I still expect games to use software multi-resolution buffers and composite just as they do now. Particles and reflections will be rendered in a separate pass at a lower resolution, blurred, upscaled, and composited with the main geometry and lighting. If Durango isn't a hardware deferred renderer, it'll have no advantage in any of that. And with deferred rendering, you'll have lots of buffers where the ability to alpha blend two in hardware isn't going to help.
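A rough sketch of that software multi-resolution flow, with placeholder function names standing in for engine code (the point is the ordering of the passes, not any particular API):

```c
/* Sketch of the software multi-resolution pass structure described above.
 * Every function and type here is a placeholder for engine-specific code. */
typedef struct RenderTarget RenderTarget;

void render_opaque_geometry(RenderTarget *dst);   /* full-res geometry + lighting    */
void render_particles(RenderTarget *dst);         /* fill-rate-heavy transparencies  */
void blur(RenderTarget *rt);                      /* soften the low-res buffer       */
void upsample_and_blend(RenderTarget *src, RenderTarget *dst);
void render_hud(RenderTarget *dst);               /* UI last, at native resolution   */

void render_frame(RenderTarget *scene_full,   /* e.g. 1920x1080 main target          */
                  RenderTarget *particles_lo) /* e.g. 960x540 quarter-area target    */
{
    render_opaque_geometry(scene_full);
    render_particles(particles_lo);               /* rendered at reduced resolution  */
    blur(particles_lo);                           /* hides the lower resolution      */
    upsample_and_blend(particles_lo, scene_full); /* composite back in a shader pass */
    render_hud(scene_full);
}
```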
What are the average latency differences between:
- 6T SRAM
- 1T SRAM
- DRAM
This article about SiSoft Sandra's GPU cache and memory latency measurements is very good for understanding how a low-latency main memory pool could radically improve a GPU's efficiency:
http://www.sisoftware.net/?d=qa&f=gpu_mem_latency
The latency of the main memory directly influences the efficiency of the GPU, thus its performance: reducing wait time can be more important than increasing execution speed. Unfortunately, memory has huge latency (today, by a factor of 100 or more): A GPU waiting for 100 clocks for data would run at 1/100 efficiency, i.e. 1% of theoretical performance!
Damn that latency!! If Durango has a very low latency SRAM... then that looks to be a VERY smart thing to do... interesting how that article says improved latency would increase GPU performance more than faster execution resources would. ...mmm.
If latency is 20 cycles or less we could see the 800 MHz CUs behave like 3 x 800 = 2400 MHz CUs, and so get the rumored 680 GTX performance.
What if it is called Kryptos because the ESRAM is the system's kryptonite?
So that's why Microsoft put an outdated GPU inside the 720, because of the ESRAM... maybe it will more than make up for any limitations... that's how you get the "680 GTX performance".
That's amazing.
Well, if the ESRAM really is 6T SRAM it will be very big, but the TDP it adds will be very small. And yields, well, will be very low. But if the performance/TDP is that good then MS could be onto something.
The size of the embedded RAM array won't affect yield at all. It is trivial to build redundancy into the array to counter any yield issues.
The low latency won't help rendering much, but it might very well boost GP compute.
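To illustrate why with a toy occupancy model (purely illustrative numbers, not anything from the leaks): ordinary shader work usually has enough wavefronts in flight to cover most of the memory latency, whereas latency-sensitive, random-access compute does not, which is where lower latency pays off.

```c
#include <stdio.h>

/* Toy latency-hiding model: if each wavefront can issue `work` cycles of ALU
 * work before stalling on a `latency`-cycle memory access, then with `waves`
 * wavefronts resident per SIMD the achievable ALU utilisation is roughly
 * min(1, waves * work / (work + latency)). Purely illustrative numbers. */
static double utilisation(int waves, int work, int latency)
{
    double u = (double)(waves * work) / (double)(work + latency);
    return u > 1.0 ? 1.0 : u;
}

int main(void)
{
    printf("1 wave,   1 cycle of work, 100-cycle latency: %.0f%%\n",
           100.0 * utilisation(1, 1, 100));   /* ~1%: the article's worst case */
    printf("10 waves, 4 cycles of work, 100-cycle latency: %.0f%%\n",
           100.0 * utilisation(10, 4, 100));  /* ~38%: latency partly hidden   */
    printf("10 waves, 4 cycles of work,  20-cycle latency: %.0f%%\n",
           100.0 * utilisation(10, 4, 20));   /* 100%: low latency fully hidden */
    return 0;
}
```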
The extra bandwidth will help both, the alternative is to cut capacity and spend more money doing so.
Cheers
When you say rendering, do you also include avoiding ALU stalls? Because in the article I posted you can see that the more random the accesses to main memory are, the more cycles the ALUs spend waiting for data. So the boost would not be restricted to GPGPU ops only, but would also apply to ordinary GPU shader ops.
The article you linked to specifically mentioned SiSoft Sandra's cryptography benchmark, a GPGPU benchmark.
Cheers