WaltC said:Humus said:I've heard the NV50 eats babies.
Actually, I believe the correct phrasing was nv50 is eaten by a baby.
So they are finally going to make a small card again eh?
DaveBaumann said:stevem said:I kinda wondered why Ati would bother with a re-write of their OGL ICD for current parts (apart from the obvious), if a new architecture was scheduled for R520. Xbox2 & Longhorn probably indicate the timeframe for their next core based on R400 type tech.
The best I can fathom, as I mentioned here, is that the next core is actually an extension of the R300 line. This also corresponds with the comment I heard that the OGL rewrite wouldn't be available "for a couple of product releases". It seems like the rewrite is actually targeting R600, which, if I'm correct, will be the first PC product to be based on the unified shader architecture. They should be able to prototype the drivers on similar silicon as they will have the XBox chips back, which (again, if I'm correct) will be the platform R600 is developed from.
Architectural Lines:
R300 --> R420 --> R520
R400 (not released) --> R500 (XBox) --> R600 (PC / Longhorn)
kenneth9265_3 said:So which is the more powerful chip, the PC's R520 or the XBox2's R500?
Humus said:chavvdarrr said:no, its great
I don't see what's so great about killing innovation and competition, because that's what it's going to do if they end up doing it that way. The IHVs will have no choice but to implement the exact functionality that Microsoft thinks is good, rather than thinking for themselves. MS becomes the industry dictator, while IHVs end up building products so similar that basically only the brand name will differ.
Lezmaka said:Does Microsoft take input from companies like Nvidia and ATI and take that into consideration when plotting the course DirectX will take?
ET said:Lezmaka said:On the other hand, I agree with Humus that not allowing new options will prevent companies from differentiating their products except by performance. Perhaps future versions of DX will allow more differentiation after the baseline is set, assuming it's actually required. It's possible that a general enough programming model will make further development of new features rather less needed than it is now. The way I see it, that's what DX10 is aiming at.
I see this as an excellent opportunity for OpenGL to get back some market share. If they can pull the finger out...
ET said:Yes -- you can see it in the way DX has been developed until now. However, they also force the industry's hand somewhat, which is probably why NVIDIA even supports shaders 1.4. Seems to me like Microsoft wants to do some more forcing, because IHVs have traditionally deliberately avoided each other's features.
Just a nitpick, but I really think that NVidia's support for PS1.4 is simply a side-effect of supporting PS2.0. If you support the latter, it's probably not all that much work to emulate the former.
Ostsol said:Just a nitpick, but I really think that NVidia's support for PS1.4 is simply a side-effect of supporting PS2.0. If you support the latter, it's probably not all that much work to emulate the former.
But I still believe they wouldn't have done it if they didn't need to. Any little bit to help kill PS1.4, you know... (I'm curious as to whether NVIDIA implemented the PS1.4 modifiers in hardware. That'd show that it did special work for that. If performance with modifiers is the same as without them, then likely NVIDIA did some PS1.4 work.)
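(A quick illustration of why the modifier question is interesting - none of this assembly is from the thread, it's just a hypothetical sketch. In ps_1_4 a result modifier like _x2 rides along with the instruction for free, whereas hardware with no native modifier path has to spend an extra instruction for the same result, which is exactly what a with/without-modifier benchmark would expose.)

```cpp
// Hypothetical shader assembly, embedded as C++ strings purely for illustration.

// ps_1_4 version: the _x2 modifier doubles the result within the mul itself.
const char* ps14_with_modifier =
    "ps_1_4            \n"
    "texld  r0, t0     \n"
    "mul_x2 r0, r0, v0 \n";   // modulate-2x in one instruction

// Without a native modifier, the same math costs an extra instruction.
const char* ps20_emulation =
    "ps_2_0               \n"
    "dcl t0.xy            \n"
    "dcl v0               \n"
    "dcl_2d s0            \n"
    "texld r0, t0, s0     \n"
    "mul   r0, r0, v0     \n"
    "add   r0, r0, r0     \n"   // the _x2 now takes its own instruction slot
    "mov   oC0, r0        \n";
```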
Scott C said:Um, yeah how would this kill competition?
The details of the architectures and how they achieve a more unified, consistent set of features will differ markedly.
Is a Pentium 4 the same as an Athlon 64 just because they both run x86 with MMX and SSE2?
Humus said:You end up selecting between two cards with more or less the exact same featureset, rather than having a choice between a card that does FP32 slow but FP16 fast, or a card that only does FP24 but does it fast.
DaveBaumann said:As far as DirectX is concerned, developers need only care about default or partial precision - they need not know the underlying internal precisions.
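(To make that concrete, a minimal sketch - the shader text is mine, not from the thread: in HLSL the developer only states intent with float vs. half, and the driver maps those onto whatever internal precisions the hardware actually has, be it FP32, FP24 or FP16.)

```cpp
// Illustrative only: the developer expresses "default" vs. "partial" precision;
// the driver decides what that means internally on a given chip.
const char* g_examplePixelShader =
    "sampler2D baseMap : register(s0);                                    \n"
    "float4 main(float2 uv : TEXCOORD0) : COLOR                           \n"
    "{                                                                    \n"
    "    half4  base = tex2D(baseMap, uv);    // partial precision is fine\n"
    "    float4 tint = float4(0.5, 0.5, 0.5, 1.0); // default precision   \n"
    "    return base * tint;                                              \n"
    "}                                                                    \n";
// Compiled against the ps_2_0 profile, the half becomes FP16 on hardware that
// supports it and simply runs at the card's full precision everywhere else.
```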
jimmyjames123 said:Also, I don't really think that most consumers really know much about the intricacies of FP32/FP16 vs FP24.
Most consumers don't know much more than "it has a higher number so it must be better" (I still meet people who think that the GeForce4 MX is better than the GeForce3 -- I suspect they're the majority). They also don't buy the high end, usually. We're talking about enthusiasts, who drive the market. For them the market will become less interesting.
euan said:I see this as an excellent opportunity for OpenGL to get back some market share. If they can pull the finger out...
I don't see the differentiation of hardware vs. a monolithic DX10 as the thing that will help get OpenGL back. That'd only happen if there'd be truly significant hardware advances that will not be in DX10. Considering the feature set of DX10, I find that unlikely.
jimmyjames123 said:The problem with having FP32/FP16 vs FP24 precision from the two major IHV's is that it makes developers' lives more complicated.
Lezmaka said:And what about OpenGL? Nvidia and ATI would still be able to add other features not supported by DirectX and make them available via extensions, with Nvidia's UltraShadow being a current example (unless of course I'm mistaken and it can be used in DirectX)
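(Roughly how that looks on the GL side - a sketch assuming UltraShadow's depth bounds test is exposed as GL_EXT_depth_bounds_test, which is my understanding; the helper names here are made up: probe the extension string and only use the feature if it's present.)

```cpp
#include <GL/gl.h>
#include <cstring>

// EXT_depth_bounds_test enum, guarded in case the local GL headers lack it.
#ifndef GL_DEPTH_BOUNDS_TEST_EXT
#define GL_DEPTH_BOUNDS_TEST_EXT 0x8890
#endif

// Entry point type; the pointer itself comes from wglGetProcAddress/glXGetProcAddress.
typedef void (*DepthBoundsFunc)(GLclampd zmin, GLclampd zmax);

static bool hasExtension(const char* name)
{
    const char* exts = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return exts && std::strstr(exts, name) != 0;
}

void setupShadowPass(DepthBoundsFunc glDepthBoundsEXT)
{
    if (hasExtension("GL_EXT_depth_bounds_test") && glDepthBoundsEXT)
    {
        glEnable(GL_DEPTH_BOUNDS_TEST_EXT);
        glDepthBoundsEXT(0.1, 0.9);   // skip fragments outside the light's depth range
    }
    // else: fall back to the plain stencil shadow path
}
```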
jimmyjames123 said:True. However, this is ultimately dependent on a definition of "full" and "partial" precision. Under DirectX 9.0c, full precision is FP32; through DirectX 9.0b, full precision is FP24. The general problem for some developers using differing types of hardware is that they have had to code separate paths to take advantage of certain characteristics of each respective hardware type. This situation could be even worse if there were more competition, and possibly even more product differentiation, in the GPU market.
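(That per-hardware branching usually ends up looking something like the sketch below - path names made up, but the caps query is the standard D3D9 one: branch on the reported shader version rather than on the vendor.)

```cpp
#include <d3d9.h>

// Pick a rendering path from the device caps instead of hard-coding a vendor.
enum ShaderPath { PATH_PS_1_1, PATH_PS_1_4, PATH_PS_2_0 };

ShaderPath selectPath(IDirect3DDevice9* device)
{
    D3DCAPS9 caps;
    device->GetDeviceCaps(&caps);

    if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0))
        return PATH_PS_2_0;   // R3x0 / NV3x class hardware
    if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 4))
        return PATH_PS_1_4;   // R200 class hardware
    return PATH_PS_1_1;       // anything older that still has pixel shaders
}
```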
DaveBaumann said:Again, the precision is not really a consideration in these cases, since the precisions were chosen around the most likely usage scenarios for the instruction counts of the different shader models.
Carbonated Vodka said:The NV50 is supposed to have native PCI-X support.