Hey, you like those!! I have to stop reading these threads. Any random person can jump in and start a rumour, and then it devolves into comparisons between the Sega Saturn and PS1.
Doubling the putative Durango spec to 16 cores, or going for a dual-APU/mainboard/GPU strategy, is pure pie-in-the-sky nonsense. Any of these changes would have required communication with devs well in advance of now for the software to work right, or at all. A dual-SKU strategy fragments your user base from day one and is even worse than the situation devs complained about with the HDD in the 360. Imagine having to code my game to work in a scenario where I may have 50% of the CPU budget (let alone GPU)? Madness, pure madness.
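To make the fragmentation point concrete, here's a minimal sketch of an engine sizing its job system at boot. Everything here is illustrative, not from any real SDK. With one fixed SKU the core count is a compile-time promise; with two SKUs, every budget tuned against it becomes a runtime branch:

```cpp
#include <algorithm>
#include <cstdio>
#include <thread>

int main() {
    // On a fixed console target this value is a design-time constant;
    // with two SKUs it becomes a runtime question the whole engine must answer.
    unsigned hw = std::max(1u, std::thread::hardware_concurrency());
    unsigned reserved_for_os = 2;  // assumed OS reservation, purely illustrative
    unsigned workers = hw > reserved_for_os ? hw - reserved_for_os : 1;

    // Thread count is the easy part. Budgets like "AI gets 2 ms per frame"
    // are tuned against a known core count and clock; a 50%-CPU SKU forces
    // either a second set of budgets or a lowest-common-denominator design.
    std::printf("worker threads: %u\n", workers);
}
```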
I think this thread should stick to discussing a single Durango box until we hear about multiple SKUs; otherwise there's an awful lot of non-hardware business speculation needed.
Two separate GPUs is an awfully big risk to the overall platform. GPU vendors have never found a universally satisfactory, reliable, or consistent implementation of multi-GPU rendering, and their implementations have always treated it as an afterthought.
Perhaps Microsoft invested serious money into making that a non-problem, but there are obvious demands for inter-APU communication and GPU coordination, and serious questions as to how reliable such a setup would be. It might save a few pennies once all the packaging and board-routing complexity is taken into account, but at the cost of making the 720 wildly inconsistent or faulty.
"At the same time, multi-GPU setups are really only found in PCs, not closed-box platforms. A console with two GPUs could get high levels of synergy by coding for what works well with two GPUs and avoiding what doesn't."

There's a need for high-bandwidth communication between the APUs. The coordination required between two separate graphics systems is going to add complexity, and latencies at the level of workflow control are harder to hide.
"I'm also not sure why the 720 would be inconsistent and faulty."

The same reason one game might scale 70% with an SLI rig while the next has zero scaling, or why there are multiple game patches or driver releases for stability problems.
"Instead of one huge chip that is more power hungry and produces more heat."

In the absence of interposer integration, and even then, it is not more power-efficient to pass scads of data over external wires. Some of that data is going to be command-processor or front-end communication between the GPUs, which is traffic that does not exist in a unified design. Or there is minimal coordination, and the platform is inconsistent or unreliable.
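Back-of-the-envelope, with assumed energy figures (roughly 10 pJ/bit for an off-package link is a commonly cited ballpark, not anything from a leak), even a modest inter-APU stream costs real watts before any rendering gets done:

$$P = B \cdot E_{\text{bit}} = 50\ \text{GB/s} \times 8\ \text{bit/B} \times 10\ \text{pJ/bit} = 4\ \text{W}$$

And that is power spent purely on moving data that a unified chip would keep on-die.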
"As time goes on you phase out the single-APU unit, and perhaps you're able to combine the dual-APU unit into a single one."

Only if the dual GPUs are abstracted away. If the state and command systems of the graphics chips are exposed to software, it will be a source of trouble for existing code if you suddenly take half of it away.
Wrong three-letter-named GPU provider, unfortunately. IMG does fine with multi-GPU rendering. They're on the same chip, but that doesn't necessarily matter.
I agree that 16 cores would be real overkill, but why should that be relevant in this scenario? Even 8 is already overinflated, so why would they all need to be available to the game? They could be disabled or used for the OS.
There has already been speculation here about forward compatibility and a fast hardware-update strategy. What would be the difference between that and two SKUs where one just has the dual-GPU setup?
Lalaland said: http://arstechnica.com/gaming/2011/0...snes-emulator/
Here's an article discussing the challenges of emulation: the SNES's 25 MHz CPU requires a 3.0 GHz x86 core to emulate correctly. And correct emulation matters for game code. I suspect that if they do Classics releases this gen, the code will be based off the PC builds (if they exist) rather than attempting to emulate PPC.
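For anyone wondering why a MHz-era machine needs gigahertz to emulate, the Ars piece's point is lock-step accuracy. A hedged sketch of the shape of such a core loop (illustrative only, not the real bsnes code):

```cpp
// Illustrative only: the cost is the bookkeeping, since every emulated
// master-clock tick pays host-side work, so a machine clocked in the
// tens of MHz can saturate a 3 GHz host core.
struct CPU { void step() { /* one cycle of the 65816 */ } };
struct PPU { void step() { /* one cycle of video */ } };
struct APU { void step() { /* one cycle of audio */ } };

void run_frame(CPU& cpu, PPU& ppu, APU& apu, long cycles) {
    for (long c = 0; c < cycles; ++c) {
        cpu.step();  // interleaving at cycle granularity preserves the
        ppu.step();  // exact timings games rely on; batching thousands of
        apu.step();  // cycles is far faster but breaks edge cases
    }
}

int main() {
    CPU cpu; PPU ppu; APU apu;
    run_frame(cpu, ppu, apu, 357366);  // roughly one SNES frame of master clocks
}
```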
The problem with dual anything being 'transparent' to developers is that it has two outcomes: either a) variable benefits (witness SLI and the unending wait for profiles every time a title launches) or b) no benefit. It's not possible to create a compiler that can statically analyse a body of code and decide 'OK, these functions are fine for the slow CPU, the rest goes on the daddy'. If there are multiple chips then developers have to be exposed to that explicitly so they can exploit the benefits. Part of the problem with Cell was that simply setting your compiler's output flag to PS3 produced awful, unoptimised results. It was only when developers started to code specifically for the odd heterogeneous PPC + SPE architecture that the PS3 started to look good in comparisons.
If MS have a compiler that allows developers to exploit unannounced hardware by 'auto-magically' re-aligning code, then they are wasting that technology on game code.
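A sketch of what being exposed to the hardware explicitly looks like in practice. Everything here is hypothetical, but it shows the decision no static analysis can make for you: the programmer, not the compiler, tags the work for the processor it suits.

```cpp
#include <functional>
#include <vector>

enum class Target { BigCore, SmallCore };  // or PPU vs SPE on Cell

struct Job { Target target; std::function<void()> run; };

void submit(std::vector<Job>& queue, Target t, std::function<void()> fn) {
    queue.push_back({t, std::move(fn)});
}

int main() {
    std::vector<Job> queue;
    // Only the programmer knows that branchy gameplay code belongs on the
    // big core while streamable DSP-style work suits the small ones.
    submit(queue, Target::BigCore,   []{ /* AI, script, game logic */ });
    submit(queue, Target::SmallCore, []{ /* audio mix, particle update */ });
    for (auto& j : queue) j.run();  // stand-in for per-target schedulers
}
```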
A second GPU alone would be nothing like the complexity of a second CPU, but it comes back to what I referred to as scenario A: the results are too inconsistent. Developers couldn't code to the metal for SLI (e.g. splitting the rendering of objects and terrain and then compositing later) because they couldn't be sure it would be available on any given user's system. More likely they would focus on delivering the best experience possible for the single-card scenario and leave the gains from SLI to driver/system updates, much the same way PC developers leave that to AMD/Nvidia. Anything more complex from a design perspective comes back to the problem of spending resources on only a subset of your userbase, resources that could deliver a better overall result if focused on things that benefit both groups of users.
Extra RAM is a much simpler thing to exploit, as it just lets you do 'more': if your engine loads 50 km² of terrain with 4 GB, maybe it can load 100-110 km² with 8 GB. It doesn't alter your code much beyond that; any extra functional hardware such as CPU or GPU resources has to be specifically coded for. The exception is clock speed, which really does just go faster without any extra work, unless you've coded up something that exploits timing tricks based on the slower clock.
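A toy illustration of that point, with made-up numbers: the terrain budget is one constant, and doubling RAM just moves it; no new code paths are needed.

```cpp
#include <cstdio>

int main() {
    // Assumed cost per square km of streamed terrain; purely illustrative.
    double bytes_per_km2 = 40.0 * 1024 * 1024;
    double budget_4gb = 2.0e9;  // terrain's share of a 4 GB machine
    double budget_8gb = 4.5e9;  // a larger share of an 8 GB machine

    // Same code path either way; only the constant changes.
    std::printf("4 GB SKU: %.0f km2\n", budget_4gb / bytes_per_km2);  // ~48
    std::printf("8 GB SKU: %.0f km2\n", budget_8gb / bytes_per_km2);  // ~107
}
```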
I don't know enough to say whether a second smaller, slower GPU could really benefit a larger, faster one. In the PC space the only benefit I've seen is using an old card as a PhysX device; the attempts to SLI different cards, or to SLI an APU with a discrete GPU, have never really seemed to deliver (I'm thinking of VirtX, was it?, that motherboard chip). The disparity between dissimilar architectures and clock speeds really hurts any attempt to use the same code for both, so we're back to having to consciously exploit it, making the system more complex. If it was in every box and MS had a great API that could exploit it 'automagically' then it would be quite cool, but from what little I've learnt about compilers it just sounds far-fetched to me.
There is at least some precedent for having a high-powered application APU and a lower-powered system APU (the Yukon leak), but really none for having two high-powered APUs in "Crossfire" mode. So as cool as the idea of Durango being twice as powerful as imagined is, I just don't see it.
What really are the issues with an OS APU and an application one? If the display planes composite the two outputs together and each APU has its own DDR3 memory, is there really that much crosstalk needed between the two?
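For what it's worth, display planes imply exactly this kind of scanout-time merge. A minimal sketch of an alpha-over composite (purely illustrative; real hardware would do this in the display pipe, not in code): each chip renders into its own plane out of its own memory, and the only shared point is the final blend.

```cpp
#include <cstdint>

struct Pixel { uint8_t r, g, b, a; };

// Straight alpha-over: OS plane (src) composited onto game plane (dst).
Pixel over(Pixel src, Pixel dst) {
    auto blend = [&](uint8_t s, uint8_t d) {
        return static_cast<uint8_t>((s * src.a + d * (255 - src.a)) / 255);
    };
    return { blend(src.r, dst.r), blend(src.g, dst.g),
             blend(src.b, dst.b), 255 };
}

int main() {
    Pixel game = {10, 120, 30, 255};   // game framebuffer pixel
    Pixel os   = {255, 255, 255, 64};  // translucent OS overlay pixel
    Pixel out  = over(os, game);       // no GPU-to-GPU traffic needed:
    (void)out;                         // each APU writes only its own plane
}
```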
This is all I'm saying. They would optimise to their heart's content on the main APU's GPU, and then leave the rest of the gains, gains they would be able to ascertain for themselves without ever shipping their game, up to the secondary GPU, much like they do in PC game development.
That's all I'm saying. This makes it very possible for console game development without the need for extra complexity.