The 360 isn't DX9 though. It's DX10.1 + some - some.
You really think that a 2009 GPU that learned from a 2005 GPU (and built upon it), and that also has many more transistors and more units of each kind, is actually slower than or equal to the 2005 GPU?
Citation needed! I get that this is the console forums, but remarks like this aren't helpful to anyone. If you're going to make a blunt and controversial claim, you better back it up.
You don't know where the transistors went, and yes, a 2005 GPU could easily be capable of matching a 2009 GPU. You see, the GPU doesn't actually care what year it is.
Once again, could you point to where on the GPU die shot all these "much more units of each thing" are supposed to be, and which games show all of them in action? Because no-one else can.*
*Including developers who've worked with the platform, lol, etc.
pc999 said: Also IIRC one of the advantages of a DX10 gpu (besides higher quality) is that it should be able to do what a DX9 gpu does but a little faster, at least according to what they said at the time.
function said: The 360 isn't DX9 though. It's DX10.1 + some - some.
sebbbi from the post you linked said: Efficiency is of course harder to estimate.
The argument was whether the WiiU's gpu (d3d 10.1) can be more efficient at various tasks than the 360's gpu. While the 360 is of course more flexible than a typical d3d 9 gpu (which I think was sebbbi's point), that doesn't mean it does those tasks as efficiently. I'm still waiting on why you believe that is the case.
That's not even a subtle backpedal.
No, that wasn't the argument.
Your post makes it seem like the 360's gpu might as well be classified as a d3d 10.1 gpu, which clearly isn't the case.
While it has features that allow it to sometimes achieve an equivalent effect, that doesn't mean 1) it's as efficient, or 2) it's "DX10.1 + some - some". Your post was nothing more than a gross oversimplification. I asked you to elaborate and you responded with "I'm busy". Great discussion!
Actually it would be quite efficient, because CPUs are good at data processing and have highly-featured instruction sets; it just would not be as fast, since they aren't nearly as wide/parallel as GPUs.

XGPU doesn't suddenly become DX10.1+/- because you can achieve many similar effects in other ways; to get DX10.1 compatibility, you need to be able to achieve it THE SAME way. Otherwise you just aren't compatible: if you can't run the required shader code (because the features are missing), then your shader won't work. The shader can't magically grow a workaround branch to emulate a desired result in another way.
Also, please calm yourself the fuck down. I don't think willard wrote anything to deserve such a spittle-frothing response in return.
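To make the shader-compatibility point a couple of sentences up concrete, here is a minimal C++ sketch (mine, not from anyone in the thread) that feeds one made-up HLSL pixel shader to D3DCompile twice. SV_PrimitiveID exists in Shader Model 4.0 (the DX10 model) but not in Shader Model 3.0 (the DX9 model), so the ps_3_0 compile is rejected outright; the compiler doesn't invent a workaround for the missing feature.

```cpp
// Hypothetical example: the same HLSL source compiles for SM4.0 (DX10) but is
// rejected for SM3.0 (DX9), because SV_PrimitiveID doesn't exist there.
#include <windows.h>
#include <d3dcompiler.h>
#include <cstdio>
#pragma comment(lib, "d3dcompiler.lib")

static const char kShader[] =
    "float4 main(uint prim : SV_PrimitiveID) : SV_Target\n"
    "{\n"
    "    // Colour primitives by ID, relying on an SM4.0-only system value.\n"
    "    return float4((prim & 1) ? 1.0 : 0.0, 0.0, 0.0, 1.0);\n"
    "}\n";

static void TryCompile(const char* target)
{
    ID3DBlob* code = nullptr;
    ID3DBlob* errors = nullptr;
    HRESULT hr = D3DCompile(kShader, sizeof(kShader) - 1, nullptr, nullptr,
                            nullptr, "main", target, 0, 0, &code, &errors);
    std::printf("%s: %s\n", target, SUCCEEDED(hr) ? "compiled" : "rejected");
    if (errors) { std::printf("%s", (const char*)errors->GetBufferPointer()); errors->Release(); }
    if (code) code->Release();
}

int main()
{
    TryCompile("ps_4_0");  // succeeds: SM4.0 has SV_PrimitiveID
    TryCompile("ps_3_0");  // fails: the feature isn't part of the SM3.0 model
    return 0;
}
```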
Look, when someone says it's DX10 level it has nothing to do with how efficient it is. You could be running on a software renderer. That won't be efficient, but it would still be DX10.
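And for the "software renderer would still be DX10" point just above, a small sketch (again mine, assuming the D3D10.1 headers and the SDK's reference rasterizer are installed): the REF device runs entirely on the CPU and is far slower than any GPU, yet it exposes the complete 10.1 feature set, which is exactly the sense in which feature level says nothing about efficiency.

```cpp
// Hypothetical example: create a Direct3D 10.1 device on the reference
// (software) rasterizer: slow, but still a full DX10.1 implementation.
#include <windows.h>
#include <d3d10_1.h>
#include <cstdio>
#pragma comment(lib, "d3d10_1.lib")

int main()
{
    ID3D10Device1* device = nullptr;
    HRESULT hr = D3D10CreateDevice1(nullptr,                       // default adapter
                                    D3D10_DRIVER_TYPE_REFERENCE,   // CPU rasterizer
                                    nullptr, 0,
                                    D3D10_FEATURE_LEVEL_10_1,
                                    D3D10_1_SDK_VERSION, &device);
    if (SUCCEEDED(hr))
    {
        std::printf("Software (REF) device created at feature level 10.1\n");
        device->Release();
    }
    else
    {
        // The REF rasterizer ships with the SDK, not with end-user Windows.
        std::printf("Reference device unavailable (hr=0x%08lx)\n", (unsigned long)hr);
    }
    return 0;
}
```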
sebbbi said: DirectX 10 API has the following efficiency improving features (in order of importance): constant buffers, state blocks, command buffers, resource views and geometry shaders. It also mandated support for vertex texturing and geometry instancing (both optional in DX9 SM 3.0).

Sebbi said it included a similar feature set, but then in that same post said he wasn't sure about efficiency. No one is disagreeing that the 360 included a larger feature set than d3d 9 and that it was probably closer to 10/10.1 in that regard.
You have failed to demonstrate that the 360 not only has the same general feature set as a generic d3d 10.1 card (in this case the wiiu), but can also execute that feature set at the same general efficiency. I have never backpedaled/changed/etc. from this position.
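As a rough illustration of why constant buffers top that list, here's a hedged D3D10 sketch (the struct layout and helper names are made up, not anyone's real code): the constants live in a buffer object that is filled once and bound with a single call, instead of the stream of per-register SetVertexShaderConstantF uploads a D3D9 title repeats whenever values change.

```cpp
// Hypothetical example of a D3D10 constant buffer shared by many draws.
#include <windows.h>
#include <d3d10.h>
#include <cstring>
#pragma comment(lib, "d3d10.lib")

struct PerFrameConstants           // must match the cbuffer layout in HLSL
{
    float viewProj[16];            // 4x4 matrix
    float lightDir[4];
};

ID3D10Buffer* CreatePerFrameCB(ID3D10Device* dev)
{
    D3D10_BUFFER_DESC desc = {};
    desc.ByteWidth      = sizeof(PerFrameConstants);
    desc.Usage          = D3D10_USAGE_DYNAMIC;            // CPU writes, GPU reads
    desc.BindFlags      = D3D10_BIND_CONSTANT_BUFFER;
    desc.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;

    ID3D10Buffer* cb = nullptr;
    dev->CreateBuffer(&desc, nullptr, &cb);
    return cb;
}

void UpdatePerFrameConstants(ID3D10Device* dev, ID3D10Buffer* cb,
                             const PerFrameConstants& data)
{
    void* mapped = nullptr;
    if (SUCCEEDED(cb->Map(D3D10_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        std::memcpy(mapped, &data, sizeof(data));          // one upload per frame...
        cb->Unmap();
    }
    dev->VSSetConstantBuffers(0, 1, &cb);                  // ...one bind to slot b0
    // The D3D9 equivalent is repeated SetVertexShaderConstantF calls, one per
    // register range, every time any of these values changes.
}
```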
sebbbi said: Geometry shaders in all existing DX10 hardware are designed for small geometry amplifications only (performance falls off a cliff after that). So in general, the efficiency boosting features of the API are pretty much tied.

This shouldn't be the case for AMD DX10 hardware, especially if there's not a lot of data per vertex. High amplification cases should actually perform better. Synthetic tests have shown this, though maybe you have shaders that perform otherwise.
sebbbi said: Geometry shaders and unlimited shader length support certainly required major hardware changes, and laid an important base for hardware capability for the future (compute shaders and programmable tessellation).

As a hardware engineer, my viewpoint of Xenos is DX9+. For the hardware, geometry shaders and the unlimited shader lengths made possible by an instruction cache are two of the defining features of DX10, and Xenos lacks them. So really this discussion is semantics, though your explanation of your reasoning is educational.
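To pin down what the geometry "amplification" being argued about above actually is, a small made-up sketch of a DX10 geometry shader: it declares a fixed maxvertexcount and may emit extra primitives for each input triangle, and that declared bound is the amplification factor in question. There is no gs_* profile at all below Shader Model 4.0, which is part of why it's treated as a defining DX10 feature.

```cpp
// Hypothetical example: a DX10 geometry shader with a small (2x triangle)
// amplification factor, compiled for gs_4_0 with D3DCompile.
#include <windows.h>
#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

static const char kGeometryShader[] = R"(
struct VSOut { float4 pos : SV_Position; };

// Emits up to 4 strip vertices (two triangles) per input triangle; raising
// maxvertexcount raises the amplification the shader is allowed to do.
[maxvertexcount(4)]
void main(triangle VSOut input[3], inout TriangleStream<VSOut> stream)
{
    for (int i = 0; i < 3; ++i)
        stream.Append(input[i]);

    VSOut extra = input[0];
    extra.pos.x += 0.01f;          // trivially extruded fourth vertex
    stream.Append(extra);
    stream.RestartStrip();
}
)";

bool CompileGS(ID3DBlob** outCode)
{
    ID3DBlob* errors = nullptr;
    HRESULT hr = D3DCompile(kGeometryShader, sizeof(kGeometryShader) - 1,
                            nullptr, nullptr, nullptr, "main", "gs_4_0",
                            0, 0, outCode, &errors);
    if (errors) errors->Release();
    return SUCCEEDED(hr);
}
```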
sebbbi said: PC game development goes through a fat API that hides most of the interesting hardware specifics. There might be features in the Radeon 4600 series that DirectX doesn't expose that could improve the performance of some algorithms, or allow some new algorithms. A console developer would of course take advantage of these features, but on PC, you cannot directly optimize for every single GPU model. I have never directly optimized for 4600 series Radeons, or even had one installed in my workstation. So it's impossible for me to say what the exact bottlenecks of that specific GPU model are, or how it would compare against Xenos.

Venture your opinion: given XGPU or a 4650 (not the WiiU gpu), which one would you prefer to work with for a console???