Two new (conflicting) Rumors

The .13 micron R300 is no rumour: it is in the making and will be a refresh of the current .15 micron part. Also, the 9700 is DDR II ready, so they can plug that onto the card...
 
DemoCoder said:
Either

#1 NVidia is going to be thoroughly beaten this round
#2 NVidia has a 256-bit external bus
#3 NVidia has some other exotic solution (deferred tiling, embedded dram, etc)

#2 seems to be the most conservative, predictable, approach if you were designing a new chip and wanted the highest probability of success.

I guess that we can all agree that there is just no way in hell they would design a chip with 8 pixel pipelines, a 128-bit external bus and a mildly refined LMA II architecture.

I really feel confident that they will have the bandwidth they need (which I base on hints from, especially, SA). But how? I think that they could go the 128-bit DDR II high-speed route if they also have an exotic solution like a semi-HSR (although deferred tiling is a no-go for them).
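Just to put rough numbers on that, here's a back-of-the-envelope sketch in C. The 9700 Pro figures (256-bit bus, 310 MHz DDR, i.e. 620 MHz effective) are the known specs; the 500 MHz DDR II clock for a 128-bit part is purely my own hypothetical, not anything confirmed about NV30:

```c
#include <stdio.h>

/* Peak bandwidth in GB/s for a given bus width (bits) and effective
 * (double-pumped) memory clock in MHz. */
static double peak_gbps(int bus_bits, double effective_mhz)
{
    return (bus_bits / 8.0) * effective_mhz * 1e6 / 1e9;
}

int main(void)
{
    /* Radeon 9700 Pro: 256-bit bus, 310 MHz DDR (620 MHz effective). */
    printf("R300  256-bit DDR:      %.1f GB/s\n", peak_gbps(256, 620.0));

    /* Hypothetical 128-bit DDR II part at 500 MHz (1000 MHz effective). */
    printf("128-bit DDR II @ 500:   %.1f GB/s\n", peak_gbps(128, 1000.0));
    return 0;
}
```

That works out to roughly 19.8 GB/s versus 16 GB/s, so even a very fast 128-bit DDR II setup lands a few GB/s short of the 9700 Pro's 256-bit bus, which is why some bandwidth-saving trick would have to come along for the ride.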

Now, regarding the Creative quote about NV30 being much faster than the R9700: it's part marketing, part based on the assumed fact that the NV30 will be clocked higher. I don't put much stock in this second-TMU-per-pipeline talk, but a brilliant semi-HSR architecture could make a big difference.

BTW: I was kind of disappointed when I found out about nVidia keeping their ol' register combiners in NV30 as a parallel unit to the Fragment Program shader (and the old texture fetching unit along with a Texture Shader).

Why didn't they make the full jump like ATI and use that silicon real estate to make the shaders even more powerful, so they could handle both DX8 and DX9 very well?
 
LeStoffer said:
Why didn't they make the full jump like ATI and use that silicon real estate to make the shaders even more powerful, so they could handle both DX8 and DX9 very well?
If the NV30 beats the R300 hands down on 'older' software, then you will know why Nvidia made that choice.

ciao,
Marco
 
nAo said:
If the NV30 beats the R300 hands down on 'older' software, then you will know why Nvidia made that choice.

ciao,
Marco

Yeah, but from the reviews of the R9700 it seems that ATI has been able to make a DX9 pixel pipeline that is already very fast on ol' software...

It's not that I think they are idiots, of course; it's really me that is the stupid one here. ;) I was wondering out loud if anybody might have an idea why they would feel the ol' software cannot run fast enough on a DX9 pixel pipeline... Beats me.
 
duncan36 said:
They managed to convince everyone to buy a GF SDR and dump their TnT2 Ultra to get this mythical T&L which was going to revolutionize gaming down the road.

Of course this was total marketing.

If the NV30 isn't much faster than, or is even slower than, the R300, I expect to see a blitz of information about features the NV30 has that will revolutionize gaming.

GF SDR was still noticeably faster than the TNT2 Ultra, so T&L wasn't the only selling point. T&L has revolutionized gaming, but of course owners of the original GF never saw it when they owned the card. ;)
 
LeStoffer said:
I guess that we can all agree that there is just no way in hell they would design a chip with 8 pixel pipelines, a 128-bit external bus and a mildly refined LMA II architecture.

I hate to say it but we probably can't all agree on that. ;)

I'm not sure I agree conclusively: It does make sense, but then again everyone has been known to screw up sometimes.
 
GF SDR was still noticeably faster than the TNT2 Ultra, so T&L wasn't the only selling point

Actually you're incorrect. Any advantages would in no way justify someone spending the $250+ they were charging for the Geforce SDR when they'd just popped $200 on a TnT2Ultra.

On an Athlon 650:
---------------------
TnT2Ultra Q3 Demo001-1024x768@32bit- 43fps
GeforceSDR Q3 Demo001-1024x768@32bit- 41.6fps

Before you say it's CPU limited:

TnT2Ultra Q3 Demo001-1280x1024@32bit- 24.6fps
GeforceSDR Q3 Demo001- 1280x1024@32bit- 23.7fps

As you can see the GeforceSDR was totally not worth the money, but because of the T&L hype people bought the card.
Either you weren't surfing the hardware boards at the time or you have a selective memory, because the PR push for T&L (no, let's call it a propaganda push) by Nvidia was massive.
This was at the time that the Nvidia stock was sky-rocketing, so the boards were infested with morons pushing the 'party line' claiming that T&L on the GeforceSDR was the second coming.

If Nvidia tries to push features over speed this time because the NV30 underperforms, I think they'll have a harder time of it; once bitten, twice shy, as the saying goes.
 
duncan36 said:
GF SDR was still noticeably faster than the TNT2 Ultra, so T&L wasn't the only selling point

Actually you're incorrect. Any advantages would in no way justify someone spending the $250+ they were charging for the Geforce SDR when they'd just popped $200 on a TnT2Ultra.

But it certainly was compelling enough to get instead of a TNT2 Ultra, especially when the DDR came out. Not everybody (in fact, hardly anybody) is upgrading from the previous latest and greatest.
 
Here we go again.

If no one pushes features, no one will buy features, and hence no developers will support those features.

Thus, in a hypothetical race between a card that dedicates all of its silicon to pure performance and a card that adds new features (without any marketing to convince people to buy features instead of performance), consumers will choose the performance card.

We end up with a world in which T&L, pixel shaders, and all of the other neat things don't exist, because no developers are supporting them and no one is buying anything except "faster pixels".


NVidia took a big risk trying to push T&L on the market. It was an uphill battle that took several years to get developers onboard, just like it took years to get developers onboard the 3D acceleration bandwagon when 3dfx started.

So do you think NVidia's first card should have been a GF3 equivalent? These things are evolutionary. The first generation of everything always sucks as people learn about the implementation and improve upon it. DX9 hardware is way better than DX8. Perhaps we shouldn't have even gone through the DX8 pixel shader step, since it didn't really offer anything totally new and was mostly marketing.


Why aren't you complaining about Voodoo2SLI vs Voodoo3 as well?


Some people buy cards based on feature sets with the hope that they will one day be used. No one can predict how the market will go, so it is a risk. Can you predict that DX8 games will sweep the market? Will it be DX9 that takes off? Or, by the time DX9 cards hit the $50 level, will DX10 be here?

NVidia is hardly evil for evangelizing every feature their card has.
 
So Nvidia flat out lying about the T&L capabilities of the Geforce SDR and the number of games that would use T&L was justified because it pushed features along?
And all for the good of the games industry, what saints Nvidia are.

:LOL:
 
duncan36 said:
GF SDR was still noticeably faster than the TNT2 Ultra, so T&L wasn't the only selling point

Actually you're incorrect. Any advantages would in no way justify someone spending the $250+ they were charging for the Geforce SDR when they'd just popped $200 on a TnT2Ultra.

On an Athlon 650:
---------------------
TnT2Ultra Q3 Demo001-1024x768@32bit- 43fps
GeforceSDR Q3 Demo001-1024x768@32bit- 41.6fps

Before you say it's CPU limited:

TnT2Ultra Q3 Demo001-1280x1024@32bit- 24.6fps
GeforceSDR Q3 Demo001- 1280x1024@32bit- 23.7fps

As you can see the GeforceSDR was totally not worth the money, but because of the T&L hype people bought the card.
Either you weren't surfing the hardware boards at the time or you have a selective memory, because the PR push for T&L (no, let's call it a propaganda push) by Nvidia was massive.
This was at the time that the Nvidia stock was sky-rocketing, so the boards were infested with morons pushing the 'party line' claiming that T&L on the GeforceSDR was the second coming.

If Nvidia tries to push features over speed this time because the NV30 underperforms, I think they'll have a harder time of it; once bitten, twice shy, as the saying goes.

Well, I'm not sure where exactly you're getting your numbers, but a CPU limitation doesn't mean the graphics card is slower. Besides, I really doubt Quake 3 is CPU limited on a 650 MHz Athlon.

Refer to these benchmarks here; it's actually a review of a GF2 MX, which is really the same speed as the GF SDR. As you can see, the GF2 MX/GF SDR are both the same speed and much faster than the TNT2 Ultra. Granted, this was after driver tweaks, but performance always increases over time with new cards.

http://www.anandtech.com/showdoc.html?i=1266&p=10

Ok, Quake 3 does use hardware T&L, you say? Well, the card is also faster in UT despite the fact that the game is heavily CPU limited.

Even on a P3 550 the GF SDR is noticeably faster than a TNT2 Ultra. Although I wouldn't want either card at this time, I'd definitely prefer the GF SDR.

So basically the card is faster; it didn't just feature a new T&L unit onboard. And with DDR memory it blew the TNT2 away.
 
You really are a pest; now swallow your medicine and admit you're wrong.

In Tom's very recent review of VGA cards.
http://www6.tomshardware.com/graphic/02q2/020418/vgacharts-01.html

In Aquanox, a supposed T&L game, the TnT2 Ultra is within 3 FPS of the Geforce SDR. In Max Payne, another T&L game, the TnT2 Ultra beats the Geforce SDR. His Quake 3 scores for the TnT2 Ultra seem a bit off, considering Reactor Critical's benchmarks of the same Demo001 are much higher, so the Geforce SDR is ahead in Quake 3 and Jedi Knight.
Then the TnT2 Ultra comes steaming back and takes the 3dmark crown from the Geforce SDR.

My point all along being that there's not much difference between the TnT2 Ultra and the Geforce SDR in games.
You say that the Geforce SDR was the natural upgrade for those buying a 3d card at the time. But you must not have been around during the time of the Geforce SDR.

Obviously the benchmarks for a supposed new technology in the Geforce SDR were awful, and as I've shown the TnT2 Ultra often outpaced it.
Without some angle no one in their right mind would upgrade from a TnT2 Ultra to a Geforce SDR. But if you were around the hardware scene in those days you know that many many people did upgrade from the TnT2 Ultra to a Geforce SDR.

Why? Because Nvidia was happy to perpetuate the myth that T&L on the Geforce SDR was going to revolutionize gaming, and that hundreds of games requiring T&L would be coming out in less than a year.
Of course all of which was a total fabrication. And all the people who bought a Geforce SDR while owning a TnT2 Ultra got ripped off, because as I've shown the card often benches worse than the TnT2 Ultra even in T&L games.
 
duncan, writing a game that really takes advantage of T&L isn't easy. That's why both ATI and NVidia have to hold developer workshops and browbeat developers into writing optimal code.

At ATI Mojo Day, there were still people *today* who weren't using stripification, optimal mesh APIs, or triangle batching. The problem with T&L isn't the hardware, it's using the hardware optimally. As Richard Huddy said, if you aren't using batching and strips, you're lucky if you can achieve even TEN PERCENT of the Radeon 9700's true speed. And this is exactly what happened to T&L games hack-ported to the GF1.
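To make the batching point concrete, here's a minimal sketch in C using period-appropriate OpenGL 1.1 calls. The function names and data layout are my own illustration, not taken from any actual engine:

```c
#include <GL/gl.h>

/* Worst case: one triangle per glBegin/glEnd pair.  Every vertex crosses
 * the driver boundary individually, so the hardware T&L unit spends most
 * of its time waiting instead of transforming. */
static void draw_unbatched(const GLfloat *verts, const GLushort *tris, int tri_count)
{
    int i;
    for (i = 0; i < tri_count; ++i) {
        glBegin(GL_TRIANGLES);
        glVertex3fv(&verts[3 * tris[3 * i + 0]]);
        glVertex3fv(&verts[3 * tris[3 * i + 1]]);
        glVertex3fv(&verts[3 * tris[3 * i + 2]]);
        glEnd();
    }
}

/* Better: submit the whole mesh as one indexed triangle strip.  A single
 * call, adjacent triangles share vertices, and the driver can stream the
 * batch straight at the T&L hardware. */
static void draw_batched(const GLfloat *verts, const GLushort *strip, int strip_len)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glDrawElements(GL_TRIANGLE_STRIP, strip_len, GL_UNSIGNED_SHORT, strip);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```

Same geometry, same card; the only thing that changes is the submission pattern, and that is roughly the gap Huddy was describing.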


Before T&L, developers only had to worry about the rasterization pipeline. Now they had two pipelines to worry about. Just like with the Sega Saturn or the PS2, it has been a LEARNING PROCESS. What works fast on a Pentium CPU has to be re-architected to work efficiently when you have to juggle AGP memory, local video memory, and a separate GPU interacting with a CPU.


When NVidia introduced T&L, they made a lot of claims of what T&L would make possible. There was nothing mythological or counterfactual to what they were claiming. Where things fell down was when developers tried to port their existing games in development to T&L.

You bring up Max Payne, which is yet another example of an engine that doesn't use T&L hardware properly (among other things, it also thrashes system memory with too many millions of small mallocs(), IIRC). AquaMark? Another immature, poorly scaling engine.

What are you going to choose next as your benchmark, Giants Citizen Kabuto?

Quake, Unreal, and Lithtech are the only 3 engines that really scale, and UT2k3 is the first game engine to really take advantage of T&L and make good use of it. Tim Sweeney has been boasting about his prototypes of UT2k3 on the GF2 for a long time.


What is your proposal? That NVidia never have introduced T&L on any graphic cards and simply waited for developers to learn how to use it, on what, a simulator?!?


NVidia never lied about T&L, they simply mispredicted the market. There is a difference between lying, and underestimating. NVidia thought "if we build it, they will come", and that somehow, everyone would produce T&L games, because it is patently obvious that T&L is a great feature.

The reality was, developers had learned everything they knew about Hardware 3D engine programming on the Voodoo, TNT, and Rage128. It was not as simple as changing a few API calls and magically enhancing every game for T&L. Moreover, many developers simply didn't want to add support for the extra features because there was no money in it.


Everything you have said about T&L could equally apply to bump mapping, for example.


NVidia underestimated the complexity of educating developers and getting them to support new features. The same learning curve will apply to DX9.

The first DX9 games will be DX7 games where the developer sprinkled in a few special effect shaders, or used displacement mapping or FP pipelines sparingly. Only with second or third generation DX9 titles will anyone who bought a Radeon 9700 today be able to see what this card is truly capable of.


Your rhetoric seems to indicate you harbor negative feelings toward NVidia. Are you a 3dfx user still sad that they got put out of business and that we aren't all using Voodoo5 6000s today?
 
Obviously the benchmarks for a supposed new technology in the Geforce SDR were awful, and as I've shown the TnT2 Ultra often outpaced it.

Ok, there's a few games where the TNT2 Ultra is about as fast as the GF256 SDR. Maybe even 2-3% faster. But there's also a lot of games where the GF256 SDR is 50%+ faster.

So, how can you draw the conclusion that the cards are equal from these benchmarks?
 
Ok, there's a few games where the TNT2 Ultra is about as fast as the GF256 SDR. Maybe even 2-3% faster. But there's also a lot of games where the GF256 SDR is 50%+ faster.

So, how can you draw the conclusion that the cards are equal from these benchmarks?

Isn't that obvious?
He just chose the benchmarks where the cards were performing pretty much the same to prove his point, which is plain wrong!
 
I imagine the T&L engine took a large amount of transistor real estate on the GF1, especially with the older fabrication technology, etc...

If NVIDIA's goal was to dream up a feature (that nobody else had) simply to market their way into all our wallets, don't you think they would have used a less costly feature? Or even an assortment of cheap features to market...
 
Nvidia's goal was to get into the workstation market... and to do that they needed T&L... To fund it, they needed to sell it to the mainstream market.

Whether you like how they accomplished the task... They did manage to do both.
 
Isn't that obvious?
He just chose the benchmarks where the cards were performing pretty much the same to prove his point, which is plain wrong

What a ludicrous statement.

I correctly stated that Tom's review showed the Geforce SDR outperforming the TnT2 Ultra in 3 benchmarks and the TnT2 Ultra outperforming the Geforce SDR in 2 benchmarks.

I also quoted the Reactor Critical review, which showed the TnT2 Ultra outperforming a Geforce SDR in Quake 3, and Anand's review was quoted showing the Geforce SDR outperforming the TnT2 Ultra in Quake 3.

Typical net knuckleheads arguing for the sake of arguing.
 