Next gen graphics and Vista.

_xxx_ said:
Ahem, and that would be where exactly? :???:

Well, in the context of the quoted text (if you're talking about where the battle should be taken to) I was referring to DX10.
 
The kinds of algorithms that benefit from XB360's unified architecture are going to make an awful lot of developers biased towards the first PC GPU that also implements a unified architecture.

In that race, I just don't see any advantage for NVidia.

The only saving grace is that these games are prolly 2 years away from the PC platform.

Jawed
 
Chalnoth said:
But I don't see any reason a priori why nVidia's approach won't be better for games for some time to come.
I think it's almost a certainty that ATI would feel their approach is better for games as well, otherwise they wouldn't do it - this is, after all, where they make their money. On the other hand, you have to wonder why NVIDIA are now saying the opposite and suggesting that unified architectures probably will become a necessity.
 
xbdestroya said:
Well, in the context of the quoted text (if you're talking about where the battle should be taken to) I was referring to DX10.

You say it yourself, it's a big question mark. I'm pretty sure nVidia is doing EVERYTHING they can to kill any competition. And I also see nV as the more aggressive party. They also openly stated some time ago that they're about to kill ATI. I can't remember where it was, though.

But whatever, I'm pretty sure that the unified parts will take some time to establish and to get enough software to show off its capabilities. It may play "older" (non-SM4) games much slower than the traditional architectures and actually hurt in the beginning.

Just a few stupid guesses; all I want to say is that such claims are totally off base.
 
Dave Baumann said:
On the other hand, you have to wonder why NVIDIA are now saying the opposite and suggesting that unified architectures probably will become a necessity.

Maybe they've been reading DX10 spec? :oops:


;)
 
_xxx_ said:
But whatever, I'm pretty sure that the unified parts will take some time to establish and to get enough software to show off its capabilities.
I don't see that as being the point of unified at all; I'd see it as benefiting any shader title, since the point of its design should be the ability to better balance whatever workload is given to it.

It may be playing "older" (non-SM4) games much slower than the traditional architectures and actually hurt in the beginning.
It'll be of benefit to anything that uses the programmable pipeline - given that every architecture will be dedicating most of its silicon to the programmable pipeline, they are all likely to be in similar boats where the fixed-function elements are leant on most of the time.

I think your argument can be flipped around entirely for DX10, in fact. Let's take the case of a unified architecture that unifies the VS, PS and GS, and a non-unified architecture that has discrete VS, PS and GS units; load balancing between VS and PS operations aside, most apps are not going to be using that GS unit for a long time, so that's an element of the die that goes unused - which is not necessarily the case for a unified architecture.

The biggest gamble on the unified front is not "some application using it", because that's transparent; it's the cost of the control mechanism in relation to the scalability and effectiveness. We've seen that ATI have implemented more ALUs with the unified control structure in a non-enormous chip, but we've still got to get more of an indication as to its payoff.
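The load-balancing point above can be made concrete with a toy throughput model. A minimal sketch - the unit counts and per-frame workloads below are invented purely for illustration, not a claim about any real chip:

```python
# Toy model: time to finish a frame on a fixed VS/PS split vs. a
# unified pool of the same total size. Work is in arbitrary units.

def fixed_time(vs_work, ps_work, vs_units, ps_units):
    """Fixed split: the frame waits on whichever pool finishes last."""
    return max(vs_work / vs_units, ps_work / ps_units)

def unified_time(vs_work, ps_work, total_units):
    """Unified pool: every unit can chew on the combined workload."""
    return (vs_work + ps_work) / total_units

# Two frames with opposite balance: geometry-heavy, then fill-heavy.
workloads = [(90, 30), (10, 110)]

for vs, ps in workloads:
    fixed = fixed_time(vs, ps, vs_units=8, ps_units=16)
    uni = unified_time(vs, ps, total_units=24)
    print(f"VS={vs:3} PS={ps:3}  fixed={fixed:.2f}  unified={uni:.2f}")
```

Whenever a frame's balance doesn't match the hardwired split, some units sit idle while the other pool becomes the bottleneck; the unified pool's finish time depends only on total work. The open question, as noted above, is how much the scheduling hardware costs in exchange.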
 
Dave Baumann said:
On the other hand, you have to wonder why NVIDIA are now saying the opposite and suggesting that unified architectures probably will become a necessity.

I suspect this has more to do with behind the scenes politics and long term projections than any short term performance considerations. NVIDIA already knew ballpark performance figures for a non-unified DX10 architecture when they made their initial comments, so it seems pretty unlikely that they've now done a u-turn because they think they'll be significantly behind any time soon. Perhaps it's been made clear to them by MS that they're not going to be able to continue to follow a non-unified approach beyond DX10, and so they're changing their stance? It won't have been a change to DX10 itself that's forced their hand, because we're way too far down the road for Microsoft to be making those kinds of changes to the spec.
 
Dave Baumann said:
I think it's almost a certainty that ATI would feel their approach is better for games as well, otherwise they wouldn't do it - this is, after all, where they make their money. On the other hand, you have to wonder why NVIDIA are now saying the opposite and suggesting that unified architectures probably will become a necessity.
Oh, I agree. And the second part has me wondering if we'll see a true unified architecture from nVidia sooner than expected.
 
PaulS said:
I suspect this has more to do with behind the scenes politics and long term projections than any short term performance considerations. NVIDIA already knew ballpark performance figures for a non-unified DX10 architecture when they made their initial comments, so it seems pretty unlikely that they've now done a u-turn because they think they'll be significantly behind any time soon.
Oh, absolutely, of course it is to do with their own internal roadmap - if they have started down a unified approach already then it's nothing to do with competitive performance, as performance figures for a unified platform would only just be getting back to them now. It's got to do with their own internal projections, analysis and testing - but if they really were set on their initial statements then that should probably still be the case even as the unified API matured.
 
Jawed said:
The kinds of algorithms that benefit from XB360's unified architecture are going to make an awful lot of developers biased towards the first PC GPU that also implements a unified architecture.
Well, since nVidia's first DX10 architecture will at least be unified in software, the worst case scenario would probably be one in which the pixel pipelines are used exclusively for the "unified" shading, so I don't expect that it should be that bad at such algorithms.

As a side comment, though, while I still don't yet see any reason for games to really benefit dramatically from a unified architecture, it would have tremendous benefits for GPGPU type stuff. For example, I have some code running now whose most intensive algorithm is one that uses log/exp instructions and does 1-D integration via a recursive algorithm (simply dividing the integration region into N parts takes 100x more time for similar accuracy).

So it'd probably be really fun to get my hands on some DX10 hardware to see if I can make the code that much faster. It'd be especially nice for GPU implementation because the number of inputs is very small, and there is just one output per integration.
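The actual code isn't given above, but a generic adaptive-quadrature sketch in Python shows the idea behind recursive 1-D integration: subdivide only where the local error estimate is large, instead of splitting the whole region into N equal parts. The integrand here is a made-up example using log/exp, in the spirit of the description:

```python
# Adaptive Simpson quadrature: recursive subdivision driven by a
# local error estimate, with a Richardson correction term.
import math

def adaptive_simpson(f, a, b, eps=1e-9):
    """Integrate f over [a, b], refining only where needed."""
    def simpson(lo, hi):
        mid = (lo + hi) / 2
        return (hi - lo) / 6 * (f(lo) + 4 * f(mid) + f(hi))

    def recurse(lo, hi, whole, tol):
        mid = (lo + hi) / 2
        left, right = simpson(lo, mid), simpson(mid, hi)
        # Standard adaptive-Simpson error test with 15x safety factor.
        if abs(left + right - whole) <= 15 * tol:
            return left + right + (left + right - whole) / 15
        return recurse(lo, mid, left, tol / 2) + recurse(mid, hi, right, tol / 2)

    return recurse(a, b, simpson(a, b), eps)

# Illustrative integrand with log/exp work per evaluation.
val = adaptive_simpson(lambda x: math.exp(-x) * math.log(1 + x), 0.0, 10.0)
print(val)
```

The recursion concentrates function evaluations where the integrand misbehaves, which is why a uniform N-way split needs far more evaluations for the same accuracy. The deep, data-dependent recursion is also exactly the kind of irregular control flow that's awkward to map onto fixed GPU pipelines.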
 
_xxx_ said:
You say it yourself, it's a big question mark. I'm pretty sure nVidia is doing EVERYTHING they can to kill any competition. And I also see nV as the more aggressive party. They also openly stated some time ago that they're about to kill ATI. I can't remember where it was, though.

But whatever, I'm pretty sure that the unified parts will take some time to establish and to get enough software to show off its capabilities. It may play "older" (non-SM4) games much slower than the traditional architectures and actually hurt in the beginning.

Just a few stupid guesses; all I want to say is that such claims are totally off base.

Well, NVidia is certainly the more brash company, that's for sure. They may say they want to end ATI's existence, but I think it's going to take more than words and one generation beyond the current one to achieve it. To tell you the truth, I'm a big NVidia fan (all my cards have been NVidia), but I've always found ATI's solutions to be the more elegant ones (architecturally, since R300) - and I think that the unified shaders follow brilliantly in that tradition.

And as for NVidia now growing 'warmer' to the unified shader idea, I truthfully just think they were dead set against it - publicly mind you - while they knew ATI was ahead on the DX10 front, and now that they're feeling a little more comfortable with their current situation, I think they feel they can afford to be a little more honest about the whole thing. It behooves a company to talk down on the architectural landscape their competitor is talking up, regardless of what one truly feels about it.
 
Chalnoth said:
Well, since nVidia's first DX10 architecture will at least be unified in software, the worst case scenario would probably be one in which the pixel pipelines are used exclusively for the "unified" shading, so I don't expect that it should be that bad at such algorithms.
? There is no such thing as "unified shading" - in DX10 you have VS, PS and now GS operations, and at the very least the VS and PS will have the same programmable capabilities and the same instruction set. In a non-unified hardware platform the VS will be performed on the VS units and the PS on the PS units, but on a unified hardware platform the same hardware is shared for the operations; that's it.

As a side comment, though, while I still don't yet see any reason for games to really benefit dramatically from a unified architecture, it would have tremendous benefits for GPGPU type stuff.
So efficient utilisation of the available execution units is not of use to games? Again, take my prior example of a GS unit in a hypothetical non-unified architecture - how often will that just sit as unused die, when it would be used (for non-GS operations) on a unified platform?
 
Chalnoth said:
Well, since nVidia's first DX10 architecture will at least be unified in software, the worst case scenario would probably be one in which the pixel pipelines are used exclusively for the "unified" shading, so I don't expect that it should be that bad at such algorithms.
I'm thinking of algorithms that are heavy on vertex shading. I assume it's going to lead to a sophisticated, phased rendering of each frame, with some phases being solely vertex shader work and it's also going to obviate techniques in which pixel shaders are used to generate data for the vertex shaders.

Sure, I hardly know about this stuff since I'm not a developer. That's what I've picked up, though.

Jawed
 
Jawed said:
Xenos does the complex scheduling that you're alluding to, in hardware.
I realize that. The real question is, does this 'complex' scheduling take so much die space that it becomes more efficient to have dedicated shader units?
 
Given PowerVR have a unified platform for handheld devices, I would guess the costs aren't that big.
 
It occurred to me that NVIDIA may have another solution to the unified shader scenario, and part of that can be seen in the multiple clocks of the G70. For example: one VS running at 1GHz and 16 PS running at 700MHz. With this solution, they can maintain dedicated units for the vertex and pixel shaders and yet save some space in the units they use...
If this is possible or not, I don't know... :)
 
Sigma said:
It occurred to me that NVIDIA may have another solution to the unified shader scenario, and part of that can be seen in the multiple clocks of the G70. For example: one VS running at 1GHz and 16 PS running at 700MHz. With this solution, they can maintain dedicated units for the vertex and pixel shaders and yet save some space in the units they use...
If this is possible or not, I don't know... :)
Generally, I don't think it makes sense. The complexity of the hardware to execute a MADD in a vertex shader is roughly the same as the hardware to execute a MADD in a pixel shader - certainly looking forwards to DX10.

Although to be fair there are significant differences in numbers of objects processed by each half (way more pixels than vertices, per frame).

VS is MIMD, whereas PS can get away with being SIMD, too.

Jawed
 
I wonder if future unified architectures are going to continue to be SIMD or if MIMDs are on their way instead in the foreseeable future.

By the way, I think it's a good time to point out that this demo here also contains geometry textures:

http://www.pvrdev.com/pub/PC/eg/h/Cloth.htm

I can see at least twice the performance on G70 compared to NV40 in that one.
 
Chalnoth said:
ATI, on the other hand, will only be about one year into the R5xx architecture, and will be keen on milking that architecture as much as they can.
However, unlike NVIDIA, who basically only have one current high-end architecture (G70~RSX), ATI also have their Xenos architecture in addition to their R520 architecture, and I imagine they would like to get the technology developed for Xenos out into the PC space as soon as possible, to take further advantage of the R&D they spent developing that part. I think this may especially be the case if Xbox 360 does well and 'unified shaders' receives positive hype.
 