ATI RV740 review/preview

2) And yes, to boost sales, why not? The 9800GTX+ is a perfect candidate for the GTS250 position. It lacks no features, it lacks no performance, and it's cheap. If consumers are irrationally shying away from buying a 9xxx card simply because it sounds "last generation", then I see no problem with "re-educating" them with this name change. After all, for all intents and purposes, the 9800GTX+ IS a midrange GT2xx.

They should have named it GTS250 right from the start. The GF9800GTX+ wasn't launched until after the GTX280 and GTX260, so that would have been the perfect time to reposition the G92(b). I'm guessing a lot more people would have 'accepted' the naming scheme if it had happened then, since nV was moving to a new process (55nm), using a new cooler, and running the GPU at a higher speed than the 9800GTX.

Now it's just a name change for the sake of changing names, hoping the line-up becomes clearer to consumers. Instead they're making it even more confusing, since for a certain amount of time the 9800GTX+ and GTS250 will both be on the same store shelves.

How do you figure that? There are considerable architectural differences between the two, and that results in functional differences as well.

Yes, dude... didn't you know they were exactly the same? Shame on you! And did you also know that "whatever review you take GTX+/GTS250 will be faster than 4850. It's just the way it is"?
 
They should have named it GTS250 right from the start. [...] Instead they're making it even more confusing, since for a certain amount of time the 9800GTX+ and GTS250 will both be on the same store shelves.

Yep, that I can agree with. Naming it the 250 from the start would have been a much better option for the consumer. Hell, I think it would have even been a better option for NV. I wonder if that was a simple mistake, or if they thought (at the time) that they would be coming out with a true GT2xx-based midrange GPU to take up the GTS 250 mantle?
 
You said it right: "in its 1GB form". The new board design is required only for the 1GB model; on the 512MB model the old design still works.
Isn't that just so board partners can get rid of old boards (selling them under the new name)? I'd expect the newer version to be cheaper to produce, and hence everybody to switch to the new board.
 
I wouldn't expect you to. However, for the vast majority of people on this planet, GTS vs GTX is much harder to distinguish than Ti vs MX (and I knew plenty who couldn't get that right).



Since the new extension theme Nvidia seems to be going with has only existed for one generation and has just started, I think it's a bit premature to say that. The "MX" lasted from the GeForce 2 to the GeForce 4 generation. Honestly, MX/Ti wouldn't mean anything to me if I didn't already know what they meant back then.

This is for the retail chain, so retailers can clearly understand how to sell the differences between the extensions.
 
This is for the retail chain, so retailers can clearly understand how to sell the differences between the extensions.
So the previous naming scheme was bad enough that even the people selling the items had difficulty understanding where they fit... comforting.

I think it's a bit premature to say that.
It isn't premature. If the naming scheme is intended to help those with no prior knowledge, MX vs Ti is a much better option. At least the lettering is different enough to signal to the uneducated consumer that it might be worth their time to ask about the differences between the products.
 
Before you know it, we're back at XT, Pro, and LE for AMD.

Radeon XT 490 instead of HD4890 doesn't sound too bad... XT 470 for the HD4870 doesn't sound bad either... how about Radeon Pro 450 for the RV740XT and Radeon LE 430 for the HD4350? Could even throw in a Radeon GTO or Radeon XL in there somewhere if needed.. :p

/runs and hides
 
So the previous naming scheme was bad enough that even the people selling the items had difficulty understanding where they fit... comforting.

It isn't premature. If the naming scheme is intended to help those with no prior knowledge, MX vs Ti is a much better option. At least the lettering is different enough to signal to the uneducated consumer that it might be worth their time to ask about the differences between the products.


MX/Ti would not work because there are now three classes of performance. The problem isn't in the lettering; it's simply in the consistency. If Nvidia can maintain GTX, GTS, and GT for the next three generations or so, it will simply become as "accepted" as the Ti/MX situation was back then.

Back in the Ti/MX days there was a much larger distinction between the high end and the low end, and most midrange hardware didn't exist; the GeForce4 Ti 4200 was the closest thing we had to it. When they moved away from that system, we already saw people confusing the new numbering system with the old.

I.e. the FX 5200 versus the Ti 4200/4400/4600. Like I said before, the problem lies with Nvidia's consistency. If they can stick with a similar extension moniker for a good amount of time, I think they'll be set.

Chris
 
[...]
The problem lies with Nvidia's consistency. If they can stick with a similar extension moniker for a good amount of time, I think they'll be set.

Chris

This makes the most sense. Not to bash nVidia, but I think they are avoiding this on purpose to get the extra sale out of the confused customer.
 
How do you figure that? There are considerable architectural differences between the two, and that results in functional differences as well.

How so? Except for MSAA performance, I don't know of a functional difference between R6xx and R7xx, unless I'm missing some tessellator stuff that will never be used.

There's the switch away from the ring bus and all, towards more classical stuff (I'm thinking now of the X1300, which didn't have the ring bus, though I don't know about its TMU setup versus the bigger R5xx parts).
 
Look at a good architecture article; there are significant differences in ALU capabilities (also concerning GPGPU).
 
Different ALUs (the small SPs are beefier), different TMUs (half-speed FP16, removed samplers), different ROPs (removed fog ALU, doubled Z performance, fully functional resolve), a different MC (ring bus replaced by a combination of a hub, ROPs hardwired to the MC, and an internal crossbar for the texture cache) and different internal ordering (separated quads; renders in tiles, not like R6xx but in R5xx style; texels are not shared throughout the whole engine, only for tile borders, if I'm not mistaken) - is that too little? ;)
 
But all that's beside the point anyway. The point is, what advantages would a 250 based on GT2xx have had over the current one? Aside from supporting a later version of CUDA, which is meaningless to the vast majority of consumers, I see none. Yet it would have had to cost more to recoup its R&D; hence a net loss for the consumer.

The real midrange GT2xx should be the GT215, which shouldn't be scrapped from the plans the way it probably happened with GT212. ;)
 
no-X's brief overview gives some insight into the architectural differences, and like AnarchX mentions, it's worthwhile reading some of the early reviews / architectural articles to see the differences and changes. Abstracting a level, though, it does relate to differences from a (games) developer perspective and also results in a different level of GPGPU support. Additionally, R7xx has functionality not found in R6xx, such as 7.1 HDMI audio support, UVD2 with hardware PiP decode, and DisplayPort audio on all RV7xx parts other than RV770.
 
The first number denotes the generation of the chip and the second two numbers denote its performance within that generation.
Should be getting myself a Radeon 9800 then - cheap as hell, too. Only confusing thing: which is better, "XT" or "Pro"? ;)

Sorry mate, but someone who isn't into the details of GPU or VGA card development, just walking into a store and deciding from naming schemes, will be lost anyway.
 
The 9800GTX+ is a perfect candidate for the GTS250 position.

[...]

After all, for all intents and purposes, the 9800GTX+ IS a midrange GT2xx.

No, it is not... I might not be speaking for many people here, but GT2xx is Compute 1.3 (CUDA-wise), while the 9800GTX+ is Compute 1.1 (besides not supporting doubles, you have fewer registers available and more restrictions when dealing with transfers to and from global/device memory).

This might not matter much for people who only play games, but considering the inroads CUDA is making in various applications (Ahead Nero being one of the latest to add CUDA support), I think that paying attention to the CUDA/OpenCL capabilities of your GPU is not a bad idea...
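
For anyone who wants to see what their own card reports, here's a minimal sketch using the CUDA runtime API (device index 0 and the printed fields are just illustrative; an app shipping CUDA support would do a similar query at startup):

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        // Query the first CUDA device; a real app would loop over all devices.
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
            printf("no CUDA device found\n");
            return 1;
        }

        // Compute capability is reported as major.minor: 1.1 for G92
        // (9800GTX+/GTS250), 1.3 for GT200 (GTX260/280).
        printf("%s: compute %d.%d, %d registers per block\n",
               prop.name, prop.major, prop.minor, prop.regsPerBlock);

        // Double precision only exists from compute capability 1.3 up.
        bool hasDoubles = prop.major > 1 || (prop.major == 1 && prop.minor >= 3);
        printf("double precision: %s\n", hasDoubles ? "yes" : "no");
        return 0;
    }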
 
Also, DX11 will incorporate DX10.1 and tessellation, so by DX11 time this ATI GPU can enable features of DX11.
If I am not mistaken, that's not what it means.

In DX11 there are going to be techlevels - AMD is clearly able to support 10.1, but not 11 since their feature set would not allow that. Nvidia is only going to be able to support 10.0 - just the way it is right now.

For API consistency reasons there are - AFAIK - no driver hacks allowed that enable just one feature of a higher techlevel. It's like DX10: all or nothing.

You can, of course, do driver hacks with application detection and such, but no API-level exposure.
See also: http://www.pcgameshardware.de/aid,6...en-in-Windows-7-und-Vista/Technologie/Wissen/

Google-Translate:
http://translate.google.de/translat...chnologie/Wissen/&sl=de&tl=en&history_state0=
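
For the curious, here's roughly what those techlevels look like from the application side, going by the public D3D11 preview documentation (a sketch only; treat the exact names as assumptions until the final SDK ships). The runtime hands back the highest level in your list that the hardware supports - never a mix of individual features:

    #include <d3d11.h>

    // Ask the runtime for the best techlevel the card supports,
    // falling back through 10.1 and 10.0.
    D3D_FEATURE_LEVEL HighestTechlevel() {
        const D3D_FEATURE_LEVEL wanted[] = {
            D3D_FEATURE_LEVEL_11_0,  // full DX11: tessellation, CS 5.0, ...
            D3D_FEATURE_LEVEL_10_1,  // what today's Radeons could expose
            D3D_FEATURE_LEVEL_10_0,  // what today's GeForces could expose
        };
        ID3D11Device* device = NULL;
        ID3D11DeviceContext* context = NULL;
        D3D_FEATURE_LEVEL got = D3D_FEATURE_LEVEL_10_0;

        D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
                          wanted, sizeof(wanted) / sizeof(wanted[0]),
                          D3D11_SDK_VERSION, &device, &got, &context);

        // 'got' is all or nothing: a 10.1 part reports 10_1 and gets the
        // whole 10.1 feature set - it can't expose single 11-level features.
        if (context) context->Release();
        if (device) device->Release();
        return got;
    }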
 
No, it is not... GT2xx is Compute 1.3 (CUDA-wise), while the 9800GTX+ is Compute 1.1 [...] I think that paying attention to the CUDA/OpenCL capabilities of your GPU is not a bad idea...

I already mentioned the CUDA differences in previous posts. IMO they simply have no bearing whatsoever for the average consumer this generation. There are currently no practical benefits that I'm aware of for a consumer in having a Compute 1.3-compliant GPU. Even DX10.1, with its currently questionable worth, is hugely more valuable to the consumer, because at least it does demonstrate some minor practical benefits.

Also, anyone with the knowledge to understand the differences between CUDA versions sure as hell isn't going to be fooled by this name change.
 
I really hope CUDA dies soon. Time to get something more open and widely supported out there.
 