A Few 9700 Screenshots

My 9700 PRO order is expected to arrive in Hong Kong next week, so it should be at most a week away, although the price is over US$400 (~$43x) for the first batch. :)
 
Joe DeFuria said:
Eh...whatever, Chalnoth. What's so difficult about "ship comparable product first by at least 3 months = right decision for that product?"

Oh, and ATI has shipped to consumers who ordered directly from them. People have shipping tracking numbers and everything.

It's shipping, OK? Get over it. :rolleyes:

Shipping first isn't everything, not by far. While everything about the Radeon 9700's performance and image quality seems to be good, we have yet to hear anything about compatibility, stability, problems with power supply, heat dissipation issues, etc.

And from a business standpoint, even with the premiums that these boards are going to demand, I seriously doubt that ATI will be able to turn a significant profit on their .15 micron product. Given the monstrous die size, I do feel that nVidia made the better decision, especially when you consider that ATI will have to redesign their part for the .13 micron die shrink (Presumably for 1Q'03).

Since nVidia will almost certainly regain the performance/feature crown (the only questions remaining will be with respect to anisotropic and AA) in about three months, and on a design that will last nVidia longer than ATI's current design, I feel that using the .13 micron process will prove to be a better move by nVidia in the end (Whether or not this is true should be apparent by about this time next year).

Additionally, building the NV30 on a .13 micron process will almost certainly put nVidia in a better position to optimize their refresh part even further than ATI can.
 
And for the record, I don't think ATI's decision to use .15um is any better than 3dfx's decision to use SDR memory and older processes. ATI really pushed the limits of .15um. Their design is a huge, customized die and power hungry beast.

So customized and power hungry that it's able to reach 400MHz with regular stock air cooling :rolleyes:

http://www.anandtech.com/video/showdoc.html?i=1686&p=1


I for one am going to enjoy this card more than any other since my voodoo5 :D
 
Couple of humble questions...

compatibility, stability, problems with power supply

Did any of the reviews mention anything? The only thing I remember is "we didn't connect the external power, but the system booted up nevertheless". Not much of a problem, is it?

And from a business standpoint

You're a businessman? ;)

I seriously doubt that ATI will be able to turn a significant profit on their .15 micron product.

The same person said that he would never use anything that goes by the name of RenderMonkey and "I mostly just skimmed it". ;)
I call into question the reasoning behind both these statements... but the wonderful thing about doubts and beliefs is that you don't actually have to know anything to believe or doubt... weren't you told that "you shouldn't talk about things you don't really seem to be knowledgeable in"?

Given the monstrous die size, I do feel that nVidia made the better decision

I don't. Who's right? ;)

ATI will have to redesign their part for the .13 micron die shrink

Yes, there will be an R350 with a smaller die and DDR-II support. But tell me - if DX9 indeed incorporates two versions of shaders, 2.0 and 3.0, and if NV30 supports something in between 2.0 and 3.0 - is that good or bad? How do you feel - does NVidia have to redesign their chip for higher clock rates and complete DX9 compliance?

I feel that using the .13 micron process will prove to be a better move by nVidia in the end

When do you feel is the "end"? When R350 comes out? Or NV31/2/5/...? After 10 years?

optimize their refresh part even further

How do you feel about this refresh happening next summer/fall? When R350 (refresh for R300) comes out Q1/2 next year, when does ATI do their next refresh? Or do you feel that R400 is coming next fall?


Any ideas/guesses/feelings on when the UT2003 demo comes out, everyone? 8) We have seen benchmarks for about 3...4 months already :LOL:
 
Chalnoth said:
Shipping first isn't everything, not by far. While everything about the Radeon 9700's performance and image quality seems to be good, we have yet to hear anything about compatibility, stability, problems with power supply, heat dissipation issues, etc.

I suppose early adopters are prepared for issues that may need to be solved later on, and the 9700 PRO is hardly alone on all the issues that you've mentioned.
 
DemoCoder said:
And for the record, I don't think ATI's decision to use .15um is any better than 3dfx's decision to use SDR memory and older processes. ATI really pushed the limits of .15um. Their design is a huge, customized die and power hungry beast. Clearly, they are going to ship a .13um R300 and follow on products that use less power, have higher clock, and are cheaper.

I wouldn't be so sure that there's no further headroom for R300

http://www.beyond3d.com/forum/viewtopic.php?p=33873#33873
 
There's still a lot of headroom in .18um, as Intel and AMD showed. But you're gonna get diminishing returns.

I hesitate to debate this any further until the NV30 actually ships. I'm actually going to buy an R300 for development purposes. But I loathe these "company X has a 3 month lead on company Y. They are doomed" threads. The number of people who actually bought GF2 Ultras, GF4 4600's, and will buy R300 Pros and NV30s pales in comparison to the people who buy crappy cut-down chipsets. The only thing the NV30/R300 affects right now is prestige and bragging rights. But in 3-4 months, NVidia will probably trump ATI, and then 3 months after that, ATI will probably trump NVidia. Round and round we go..

Whether or not ATI or NVidia's bottom line is improved by these products is way more complex than benchmarks.
 
SvP said:
Did any of the reviews mention anything? The only thing I remember is "we didn't connect the external power, but the system booted up nevertheless". Not much of a problem, is it?

Got this from HotHardware, first page of their review:

Since the R300 VPU draws so much power, it may actually stress the capability of some systems to deliver enough power through the AGP connector. This additional power source to the board ensures cross-platform compatibility. We did try powering up with the connector unplugged and the system indeed would not post.

And I really don't feel like responding to the rest of your post, SvP. I don't like to respond to personal attacks.
 
9700

FWIW, Silver BulletPC are listing the ETA on the 9700 as Tomorrow!!

So I'd say, with or without volume yields, it's real and it's shipping, unlike Nvidia's (currently) mythical NV30. Besides, who honestly expects a premium product to be 'in volume' at the start of its release?

As for the NV30 regaining the performance crown etc etc etc blah blah blah. Maybe, maybe not. The R300 certainly seems to clock well. [Sarcasm mode]Oh sorry, I forgot Chalnoth, whatever the R300 clocks to, the NV30 will beat it because it's from Nvidia.[/Sarcasm mode]
 
I have a feeling that the NV30's performance on old games will be about the same as the R300. R300 may even retain the crown in running 1600x1200 6XFSAA 16X anisotropic (but as long as both are above 60fps, most people won't care). But I'm not buying a $400 card so Counter-Strike or Q3 can run with slightly better resolution. Those games run fine on my current hardware and the improvement is so marginal as to not justify another $400 solely for a small improvement.

(but since I am a developer, I have other reasons for buying top end hardware)

<speculation mode on>
The NV30 seems to be designed around the idea that there will be a paradigm shift to increasing computation per pixel, away from raw pixel fillrate. (e.g. bandwidth/fillrate now "good enough"). NVidia may be eyeing the workstation/renderfarm market as a side, but in general, they think the future is in shader performance.

Therefore, the NV30 might have traded off some silicon (vertex shader function units, mpeg/video stuff) in favor of adding a bunch of units to each pixel pipeline so that the NV30 may be able to issue many more instructions per cycle than the R300.

Even being able to co-issue twice as many vector ops per cycle would yield a massive benefit for pixel shaders, as you'd effectively double the "effective" fillrate. This means that a 10-instruction shader that ran in 10 cycles on one card would run in 5 cycles on the other. At 300MHz and 2.4Gpix/sec of raw fillrate, this yields 240Mpix/sec vs 480Mpix/sec.
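For what it's worth, here's a quick sketch of that arithmetic (the 8-pipeline, 300MHz, 10-instruction figures are just the hypothetical numbers above, not either chip's confirmed specs):

```python
# Back-of-the-envelope co-issue arithmetic. All figures are the
# hypothetical ones from the post, not real specs for R300 or NV30.

def effective_fillrate(pipelines, clock_hz, shader_instructions, ops_per_cycle):
    """Shaded pixels per second when every pixel runs the same shader."""
    cycles_per_pixel = shader_instructions / ops_per_cycle
    return pipelines * clock_hz / cycles_per_pixel

raw_fill = 8 * 300e6                              # 2.4 Gpix/s raw fillrate
single   = effective_fillrate(8, 300e6, 10, 1)    # 240 Mpix/s, single issue
dual     = effective_fillrate(8, 300e6, 10, 2)    # 480 Mpix/s, dual issue
print(raw_fill / 1e6, single / 1e6, dual / 1e6)   # 2400.0 240.0 480.0
```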

This means it would be computationally more efficient per cycle and, potentially, clocked higher given the process. A 400MHz NV30 would yield additional shader execution gains. NVidia may have gone as far as to design the pipelines to be highly clockable a la the Pentium 4, scaling the NV30 "core" into the future as yields increase.

Or not.

But that's one scenario. If the company did realize that the future is in execution speed, and if the NV30 is designed around this paradigm, then it may show little to no gain on old multitexture games, but show huge gains when games start using complex shading (e.g. Doom3 and beyond).


</speculation mode off>
 
Joe DeFuria said:
But I digress as my initial response to Joe was to make a point that you've replicated in your reply to me.

"...this was a new product but was bareley edging out a TNT Ultra"

That statement is misleading

Let me refine that point, and maybe we'll all be happy: ;)

The measurable difference on "today's games" between Ti 4600 and the 9700 is much larger than the measurable difference between the GeForce SDR and the TNT-2 Ultra....
And the non-measurable differences are even larger than that (quality).
 
Sorry for bringing this thread back to topic. 8)

These screenshots don't impress me one bit; they seem to look just the same as on earlier-generation h/w. I guess that's not a fault of the 9700, but it obviously can't do anything about the fundamental flaws of current games.
Just look at the shadows of the cars, look at the blocky wheels, see how distant objects perpendicular to the field of view are sharp while closer objects more in line with the viewing angle get blurred, and so on. I really hope coming games and sims (and I don't care one squat about Doom3 and other FPSes) will look that much better, because until they do I find no reason to buy the latest h/w.
Demos are nice but they don't justify the expense ;) (~$500 in Sweden)
 
Yep, 1600x1200 6XFSAA 16x aniso @ 60fps makes flat lit monotextured boxy scenes look really nice. But it fails to impress. ATI's SIGGRAPH demos are lightyears more impressive.
 
I don't think we'll see superscalar pixel processors (shaders) for some time to come. Given the embarrassingly parallel nature of rasterization, I'd think we would be better off doubling the number of single-issue shaders instead.

Cheers
Gubbi
 
ATI's decision to stick with a .15u process for now looks like the best call. Why?

It puts them in front as the leader which MAY bring in more OEM contracts for the 9000/9000Pro cards and COULD help win some market share. That alone gives them a huge PR boost. It gives them a product that can be in stores by volume for the xmas buying season. It generates some revenue for them now. It gives them the flexibility to wait a few months until the design of the .13u process has matured.

If I get one, it would be so I can run today's games at a higher res, with better AA and better filtering options than my current 8500 or GF4 Ti4200 can provide.
 
Well, the GF3/4 can already "issue" two vector ops per cycle per pipeline. (two register combiners take 1 cycle). I'd expect the NV30 to surpass this.

The problem with increasing the # of pipelines instead of execution units is that pipelines eat up way more silicon than an extra functional unit, and as triangle size goes down, so does the efficiency. The R300 already has 8 pipelines. The NV30 probably will too, and the die is already huge. I doubt anyone will put more than 8 pipelines on a chip in the near term (next 1-2 years). Remember, you're not rendering more pixels, just more complex ones.

Doing superscalar execution for GPUs is almost trivial. No branching. SIMD instructions. Very little data-dependency. No aliasing/pointers. The algorithms are already highly parallelized. Programming model is "pure-functional", no mutation of external state allowed, etc. Reordering the instructions for maximum co-issue is relatively easy.
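As a toy illustration of that "relatively easy" claim, here's a sketch of greedy pairing for dual issue. The instruction encoding and register names are made up for illustration, and a real scheduler would also have to respect ordering constraints this toy ignores:

```python
# Toy sketch of reordering straight-line shader code for dual issue:
# greedily pair instructions whose registers don't conflict.

def can_pair(a, b):
    """True if neither instruction reads or writes the other's destination."""
    dst_a, srcs_a = a[1], a[2]
    dst_b, srcs_b = b[1], b[2]
    return dst_a != dst_b and dst_a not in srcs_b and dst_b not in srcs_a

def schedule(instructions):
    """Greedily fill dual-issue slots; anything unpaired issues alone."""
    slots, pending = [], list(instructions)
    while pending:
        first = pending.pop(0)
        partner = next((i for i in pending if can_pair(first, i)), None)
        if partner is not None:
            pending.remove(partner)
            slots.append((first, partner))
        else:
            slots.append((first,))
    return slots

# (op, destination, source registers) -- hypothetical shader snippet
prog = [("mul", "r0", ("t0", "c0")),
        ("mul", "r1", ("t1", "c1")),
        ("add", "r2", ("r0", "r1")),
        ("mov", "r3", ("c2",))]
print(schedule(prog))   # 4 instructions fit into 2 dual-issue slots here
```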
 
jb said:
ATI's decision to stick with a .15u process for now looks like the best call. Why?

It puts them in front as the leader which MAY bring in more OEM contracts for the 9000/9000Pro cards and COULD help win some market share. That alone gives them a huge PR boost. It gives them a product that can be in stores by volume for the xmas buying season. It generates some revenue for them now. It gives them the flexibility to wait a few months until the design of the .13u process has matured.

It may look good now, but what if retooling the R300 for .13um takes a significant design effort? Then in spring/summer, when the mainstream refreshes are shipping, NVidia has NV31/35/38/Mobile chips with significantly ramped-up clocks and low power consumption, while ATI is delayed retooling or forced to ship .15um versions (think Mobile R300 @ .15um vs Mobile NV30 @ .13um). Since the bread and butter of these companies is their mainstream parts, a delayed or poorly performing mainstream part next Spring/Summer could hurt ATI far more than NV30 delays hurt a high-end part few people buy.


My point is, no one can predict what is going to happen. I don't think a 3 month window is going to mean much in the long run, but who knows? But just remember, there are arguments to be made on both sides, and we've had them before. (e.g. 3dfx going with "proven memory" and "proven process", rumors of NVidia's problems going from .22 to .18um, rumors of X-Box problems, rumors of .15um problems)
 
My point is, no one can predict what is going to happen.

You're right - which is why your own argument is pretty pointless. All there is is the present, and at present ATI have executed and the competition is apparently months behind; EOA. So, what's your point?
 
You still need to issue two instructions, you need twice as many ports on your register file, and even though instruction-to-instruction data dependencies are low, you still need a scoreboard to resolve them.

Unless of course you go VLIW (which makes more sense the more I think about it).
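A toy version of that VLIW idea, for illustration only (the instruction encoding is invented): the compiler packs independent ops into fixed-width words up front and pads with NOPs, so the hardware never needs a scoreboard or dynamic dependency checks:

```python
# Toy sketch of static VLIW packing for a straight-line shader.
NOP = ("nop", None, ())

def pack_vliw(instructions, width=2):
    """Pack (op, dest, sources) tuples into 'width'-wide instruction words."""
    words, current = [], []
    for inst in instructions:
        dst, srcs = inst[1], inst[2]
        # Start a new word if this op conflicts with anything already in it.
        conflict = any(dst == d or dst in s or d in srcs
                       for _, d, s in current if d is not None)
        if conflict or len(current) == width:
            words.append(current + [NOP] * (width - len(current)))
            current = []
        current.append(inst)
    if current:
        words.append(current + [NOP] * (width - len(current)))
    return words

prog = [("mul", "r0", ("t0", "c0")),
        ("add", "r1", ("r0", "c1")),   # depends on r0, so it opens a new word
        ("mov", "r2", ("c2",))]
for word in pack_vliw(prog):
    print(word)                        # word 1: mul + NOP, word 2: add + mov
```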

On vertex shading: are multiple vertex shaders used on the *same* vertex? Again, I would think that each vertex shader did all the work on one vertex. Throughput would still be double (for a two-VS GPU vs a single-VS GPU).

Cheers
Gubbi
 
Gubbi said:
On vertex shading: are multiple vertex shaders used on the *same* vertex? Again, I would think that each vertex shader did all the work on one vertex. Throughput would still be double (for a two-VS GPU vs a single-VS GPU).
Generally speaking the answer is no. Each vertex shader pipeline is dedicated to one vertex (or more..)

ciao,
Marco
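To picture that arrangement, here's a trivial sketch of why two units that each own a whole vertex roughly double throughput without any coordination between them. The shader and vertex data are placeholders, not any real API:

```python
# Minimal sketch of per-vertex parallelism: each shader unit runs the
# entire vertex program on its own vertices.

def run_vertex_shader(vertex):
    """Stand-in for a full vertex program executed on one vertex."""
    x, y, z = vertex
    return (x * 2.0, y * 2.0, z * 2.0)   # pretend transform

def dispatch(vertices, num_units=2):
    """Each unit gets every num_units-th vertex and processes it independently."""
    out = [None] * len(vertices)
    for unit in range(num_units):
        # On real hardware each unit works in parallel on its own stream,
        # so total cycles ~= ceil(len(vertices) / num_units) * program length.
        for i in range(unit, len(vertices), num_units):
            out[i] = run_vertex_shader(vertices[i])
    return out

print(dispatch([(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]))
```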
 