_xxx_ said:Ahem, and that would be where exactly?
Well, in the context of the quoted text (if you're talking about where the battle should be taken to) I was referring to DX10.
Chalnoth said:But I don't see any reason a priori why nVidia's approach won't be better for games for some time to come.
I think it's almost a certainty that ATI would feel their approach is better for games as well, otherwise they wouldn't do it - this is, after all, where they make their money. On the other hand, you have to wonder why NVIDIA are now saying the opposite and suggesting that unified architectures probably will become a necessity.
_xxx_ said:But whatever, I'm pretty sure that the unified parts will take some time to establish and to get enough software to show off their capabilities.
I don't see that as being the point of unified at all; I'd see it as benefiting any shader title, as the point of its design should be to be able to better balance whatever workload is given to it.
_xxx_ said:It may be playing "older" (non-SM4) games much slower than the traditional architectures and actually hurt in the beginning.
It'll be of benefit to anything that uses the programmable pipeline - given that every architecture will be dedicating most of its silicon to the programmable pipeline, they are all likely to be in similar boats where the fixed function elements are leant on most of the time.
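Dave's load-balancing argument can be sketched with a toy model. The unit counts and per-frame workloads below are invented purely for illustration, not real hardware figures:

```python
# Toy model: dedicated VS/PS pools vs one unified pool of the same total size.
# All numbers are illustrative, not real hardware figures.

def frames_dedicated(vs_work, ps_work, vs_units=8, ps_units=24):
    """Completion time when each pool can only run its own kind of work."""
    return max(vs_work / vs_units, ps_work / ps_units)

def frames_unified(vs_work, ps_work, total_units=32):
    """Completion time when any unit can run either kind of work."""
    return (vs_work + ps_work) / total_units

# A vertex-heavy phase: the dedicated PS units sit mostly idle.
vertex_heavy = (160, 40)
# A pixel-heavy phase: the dedicated VS units sit mostly idle.
pixel_heavy = (16, 480)

for work in (vertex_heavy, pixel_heavy):
    d = frames_dedicated(*work)
    u = frames_unified(*work)
    print(f"workload {work}: dedicated={d:.2f}, unified={u:.2f}")
```

Whatever the mix, the unified pool (with an equal number of equally capable units) is never slower in this idealised model, because no unit is ever barred from the work that remains; the real-world question the thread debates is how much scheduling hardware that flexibility costs.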
Dave Baumann said:I think it's almost a certainty that ATI would feel their approach is better for games as well, otherwise they wouldn't do it - this is, after all, where they make their money. On the other hand, you have to wonder why NVIDIA are now saying the opposite and suggesting that unified architectures probably will become a necessity.
Oh, I agree. And the second part has me wondering if we'll see a true unified architecture from nVidia sooner than expected.
PaulS said:I suspect this has more to do with behind the scenes politics and long term projections than any short term performance considerations. NVIDIA already knew ballpark performance figures for a non-unified DX10 architecture when they made their initial comments, so it seems pretty unlikely that they've now done a u-turn because they think they'll be significantly behind any time soon.
Oh, absolutely, of course it is to do with their own internal roadmap - if they have started down a unified approach already then it's nothing to do with competitive performance, as performance figures for a unified platform would only just be getting back to them now. It's got to do with their own internal projections, analysis and testing, but if they really were set on their initial statements then that should probably still be the case even as the unified API matured.
Jawed said:The kinds of algorithms that benefit from XB360's unified architecture are going to make an awful lot of developers biased towards the first PC GPU that also implements a unified architecture.
Well, since nVidia's first DX10 architecture will at least be unified in software, the worst case scenario would probably be one in which the pixel pipelines are used exclusively for the "unified" shading, so I don't expect that it should be that bad at such algorithms.
_xxx_ said:You say it yourself, it's a big question mark. I'm pretty sure nVidia is doing EVERYTHING they can to kill any competition. And I also see nV as the more aggressive party. They also openly stated some time ago that they're about to kill ATI. I can't remember where it was, though.
But whatever, I'm pretty sure that the unified parts will take some time to establish and to get enough software to show off their capabilities. It may be playing "older" (non-SM4) games much slower than the traditional architectures and actually hurt in the beginning.
Just a few stupid guesses; all I want to say is that such claims are totally off base.
Chalnoth said:Well, since nVidia's first DX10 architecture will at least be unified in software, the worst case scenario would probably be one in which the pixel pipelines are used exclusively for the "unified" shading, so I don't expect that it should be that bad at such algorithms.
? There is no such thing as "unified shading" - in DX10 you would have VS, PS and now GS operations, and at the very least the VS and PS will have the same programmable capabilities and the same instruction set; on a non-unified hardware platform the VS will be performed on the VS units and the PS on the PS units, but on a unified hardware platform the same hardware is shared for the operations, that's it.
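Dave's point that the VS and PS share the same instruction set can be illustrated with the workhorse multiply-add: the same ALU operation serves a "vertex" transform and a "pixel" colour modulation alike. The vectors below are made-up example data:

```python
# One ALU op, two shader roles. Illustrative only - the point is that the
# hardware executing a MADD doesn't care whether the operands are a
# position or a colour.

def madd(a, b, c):
    """Component-wise multiply-add: a * b + c."""
    return [ai * bi + ci for ai, bi, ci in zip(a, b, c)]

# "Vertex" work: scale a position and translate it.
pos = madd([1.0, 2.0, 3.0], [2.0, 2.0, 2.0], [0.5, 0.5, 0.5])

# "Pixel" work: modulate a texel colour and add an ambient term.
col = madd([0.2, 0.4, 0.6], [0.5, 0.5, 0.5], [0.1, 0.1, 0.1])

print(pos, col)
```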
Chalnoth said:As a side comment, though, while I still don't yet see any reason for games to really benefit dramatically from a unified architecture, it would have tremendous benefits for GPGPU type stuff.
So efficient utilisation of the available execution units is not of use to games? Again, take my prior example of a GS unit in a hypothetical non-unified architecture - how often will that just sit on the die unused, when it would be used (for non-GS operations) in a unified platform?
Chalnoth said:Well, since nVidia's first DX10 architecture will at least be unified in software, the worst case scenario would probably be one in which the pixel pipelines are used exclusively for the "unified" shading, so I don't expect that it should be that bad at such algorithms.
I'm thinking of algorithms that are heavy on vertex shading. I assume it's going to lead to a sophisticated, phased rendering of each frame, with some phases being solely vertex shader work, and it's also going to obviate techniques in which pixel shaders are used to generate data for the vertex shaders.
Jawed said:Xenos does the complex scheduling that you're alluding to, in hardware.
I realize that. The real question is: does this 'complex' scheduling take so much die space that it becomes more efficient to have dedicated shader units?
Sigma said:It occurred to me that NVIDIA may have another solution to the unified shader scenario, and part of that can be seen in the multiple clocks of the G70. For example: one VS running at 1GHz and 16 PS running at 700MHz. With this solution, they can maintain dedicated units for the vertex and pixel shaders and yet save some space in the units they use... Whether this is possible or not, I don't know...
Generally, I don't think it makes sense. The complexity of the hardware to execute a MADD in a vertex shader is roughly the same as the hardware to execute a MADD in a pixel shader - certainly looking forwards to DX10.
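For a rough sense of what Sigma's two-clock split buys in raw numbers, here is a peak-throughput comparison. Unit counts, clocks and ops-per-clock are all hypothetical; the peaks come out similar, and the objection in the thread is that the dedicated peak is only reachable when the workload mix matches the fixed split:

```python
# Rough peak-throughput comparison for a multi-clock dedicated design
# vs a single-clock unified pool. All figures are hypothetical.

def gflops(units, clock_ghz, madds_per_clock=1, flops_per_madd=2):
    """Peak GFLOPs for one pool of identical shader units."""
    return units * clock_ghz * madds_per_clock * flops_per_madd

# Sigma's suggestion: one fast VS plus sixteen slower PS units.
dedicated = gflops(1, 1.0) + gflops(16, 0.7)

# A unified pool of seventeen identical units at the lower clock.
unified = gflops(17, 0.7)

print(f"dedicated pools: {dedicated:.1f} GFLOPs, unified: {unified:.1f} GFLOPs")
```

The headline peaks are within a few percent of each other in this sketch, which is why the discussion turns on achievable utilisation and scheduling cost rather than raw peak.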
Chalnoth said:ATI, on the other hand, will only be about one year into the R5xx architecture, and will be keen on milking that architecture as much as they can.
However, unlike NVIDIA, who basically only have one current high end architecture (G70~RSX), ATI also have their Xenos architecture in addition to their R520 architecture, and I imagine they would like to get the technology developed for Xenos out into the PC space as soon as possible, and take further advantage of the R&D they spent on developing that part. I think this may especially be the case if Xbox 360 does well and 'unified shaders' receives positive hype.