Deathlike2 said: It's fascinating how this seems to be a key thing ATI continues to work on... efficiency.
It's, er, a key thing that everybody continues to work on.
Quick example from the nVidia side of the fence: multisampling AA.
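To put a rough number on why multisampling is the more efficient approach, here's a back-of-the-envelope sketch in C++ (the resolution, sample count, and cost split are illustrative assumptions, not figures for any particular GPU): under MSAA the pixel shader runs once per pixel while only coverage/depth are taken per sample, whereas supersampling shades every sample.

```cpp
#include <cstdio>

// Illustrative only: compares shader invocation counts for 4x supersampling
// versus 4x multisampling at a fixed resolution.
int main() {
    const long long width = 1024, height = 768;
    const long long pixels = width * height;
    const int samplesPerPixel = 4;

    long long ssaaShaderRuns = pixels * samplesPerPixel;     // shade every sample
    long long msaaShaderRuns = pixels;                       // shade once per pixel
    long long msaaSampleTests = pixels * samplesPerPixel;    // cheap per-sample coverage/depth work

    std::printf("4x SSAA shader invocations: %lld\n", ssaaShaderRuns);
    std::printf("4x MSAA shader invocations: %lld (plus %lld coverage/depth tests)\n",
                msaaShaderRuns, msaaSampleTests);
    return 0;
}
```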
DemoCoder said: Isn't the danger of opening up low-level GPU access a reduction of abstraction, and therefore less freedom of GPU implementation technique in the future? Do we really want developers depending on low-level details of GPU implementation that should be subject to change, and will not always be relevant to rendering?
Rather than expose internal GPU workings, I think the better approach is to expose high-level APIs for physics and certain problems in the GPGPU space, and then let the driver do the translation work if it can. But exposing the GPU as a general-purpose computation device and promoting GPGPU performance in PR is, I think, dangerous. GPGPU performance should be secondary to rendering performance, and should not come at its expense.
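To illustrate the kind of separation being argued for here, a hypothetical C++ sketch (none of these names correspond to any real API; they only show callers stating what they want, while the driver keeps the freedom to change how, or whether, the hardware does it):

```cpp
#include <vector>

// Hypothetical high-level physics interface. The caller describes the work;
// the implementation is free to run it on the GPU, the CPU, or both, and to
// change strategy between driver releases.
struct Particle { float pos[3]; float vel[3]; };

class PhysicsDevice {
public:
    virtual ~PhysicsDevice() = default;
    virtual void integrate(std::vector<Particle>& particles, float dt) = 0;
};

// A trivial CPU fallback the "driver" could choose when the GPU is busy rendering.
class CpuPhysicsDevice : public PhysicsDevice {
public:
    void integrate(std::vector<Particle>& particles, float dt) override {
        for (auto& p : particles)
            for (int i = 0; i < 3; ++i)
                p.pos[i] += p.vel[i] * dt;
    }
};

int main() {
    std::vector<Particle> cloth(1000, Particle{{0, 0, 0}, {0, -9.8f, 0}});
    CpuPhysicsDevice device;          // a driver could just as well pick a GPU path
    device.integrate(cloth, 0.016f);  // caller only says "step the simulation"
    return 0;
}
```

The point of the sketch is that integrate() is a contract about results, not about GPU registers, so a future architecture can satisfy it however it likes.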
BRiT said: The X800/R420 series has a programmable memory controller, but it's not as programmable or expansive as the X1000 series' programmable memory controller.
Deathlike2 said: There's a lot more "forward thinking" going on with ATI.
Except these two things do not follow. Firstly, I really don't see how you can characterize ATI as doing more "forward thinking." It was, after all, nVidia that was the first to implement a large number of the technologies we take for granted in 3D graphics now, including anisotropic filtering, FSAA, MSAA, programmable shaders, and hardware geometry processing.
There obviously was a reason to have a programmable memory controller.
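As a sketch of what "programmable" could mean in practice (purely hypothetical; the actual R5xx interface isn't publicly documented), picture the driver loading per-client arbitration settings instead of having them baked into silicon:

```cpp
#include <cstdint>
#include <cstdio>
#include <map>
#include <string>

// Hypothetical arbitration parameters a driver might set per memory client
// (texture fetch, colour write, z/stencil, ...). Illustrative names only.
struct ClientArbitration {
    uint8_t  priority;            // higher value = served sooner under contention
    uint16_t maxBurstBytes;       // how much data to move per arbitration win
    uint16_t latencyToleranceNs;  // how long this client can wait before stalling
};

struct MemoryControllerConfig {
    std::map<std::string, ClientArbitration> clients;
};

// A driver could ship different configurations per application profile.
MemoryControllerConfig makeDefaultConfig() {
    MemoryControllerConfig cfg;
    cfg.clients["texture_fetch"] = {3, 256, 400};
    cfg.clients["colour_write"]  = {2, 128, 600};
    cfg.clients["z_stencil"]     = {4, 128, 300};
    return cfg;
}

int main() {
    MemoryControllerConfig cfg = makeDefaultConfig();
    std::printf("memory clients configured: %zu\n", cfg.clients.size());
    return 0;
}
```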
Chalnoth said: It was, after all, nVidia that was the first to implement a large number of the technologies we take for granted in 3D graphics now, including ... FSAA, MSAA ...
BRiT said: 3dfx would beg to differ on that one....
nVidia was the first company to allow driver-forceable FSAA.
Or did you mean all of them together as a collective whole? And if you mean as a collective whole, many would say the ATI R300 was the first one to truly offer it, where it wasn't just a gimmick and could all be used together.
Well, I know I wouldn't. With my GeForce4 Ti 4200, my standard play settings became 1024x768 with 2x MSAA and 8-degree anisotropic filtering.
Skrying said: LMAO! Oh noes, and ATi hasn't ever done anything that's forward thinking.... give me a break, Chalnoth.
I thought my post was long enough as it was. I was attempting to argue a very specific, focused point.
ChrisRay said: The G70 is NV4x based. Actually an NV47. I don't see the problem saying the NV4x line is competing with the R5xx, because it is. Don't forget the 6800/6600 line, which at the moment is competing with ATI's lower-end SM 3.0 hardware. I don't think Chalnoth's comments are out of line in that regard at all.
geo said: NV picked their own codename, Chris. No one forced it on them. What does that "7" stand for in G70?
What NVidia has done seems to fall along the lines of the natural evolution of 3D graphics. It was inevitable to include such things as MSAA and anisotropic filtering, simply because they were the next step up (and the extra power enabled people to actually use such features).
Deathlike2 said: The whole unified driver architecture is more of a marketing feature and for driver-writer convenience. (It would be tons of fun to have 50 different drivers for 50 different generations of video cards.) It was NVidia that first emphasized this. Personally, you're falling into the marketing speak of the concept, if not just using, word for word, what they have said.
No more so than ATI's programmable memory controller. That is to say, a memory controller that isn't as programmable but is just very highly optimized could, in principle, be every bit as good. There is no user front-end that allows programming this memory controller, so it's no more than an implementation detail.
Also, you've made the most flawed response, Chalnoth. First off, ATI hasn't released any official drivers supporting the X1000 series.
I don't see how that's relevant to my discussion at all.
Second, the situation with the drivers released for the hardware's debut is very similar to that of the GeForce 3 when it was released.
Every new architecture has relatively immature drivers at release. But that doesn't mean you should ever purchase a product with the expectation that newer drivers will boost performance significantly. If, for example, you don't think that an X1800 XL's performance warrants a purchase right now, but you think that drivers may improve performance, you should wait until new drivers that do improve performance are released.
Maybe it's just me, but I consider ATI's philosophical decisions on what they have done to be better than the diverging opinion that NVidia has. It shows a lot... and the best part is seeing how ATI engineers contribute to the discussion. It would be nicer if Beyond3D had more NVidia engineers discussing different aspects and thoughts... even if they differ in general.
Philosophical? How is anything ATI is doing philosophical?
Chalnoth said: Philosophical? How is anything ATI is doing philosophical?
AlphaWolf said: I'd say what sireric is talking about wrt a lower-level API is philosophical.
I don't see how. Philosophy is a discipline which, by its very definition, is disconnected from reality. Talking about the possibility of a new programming interface doesn't seem philosophical at all.
Chalnoth said: nVidia was the first company to allow driver-forceable FSAA.