AMD: R7xx Speculation

AFAIR, they had $1 billion in cash before the takeover, so I don't think they were absolutely struggling to be bought out; there was no reason to be. Looking from the outside in, it seems AMD set out on a very ambitious goal to merge the GPU with the CPU, as well as to have a stronger chipset family and their own IGP.
 

No immediate reason, perhaps...but I seem to recall even Orton admitting, between the lines, that their longer term outlook wasn't all that great and that they were in search of solutions.
 

The thing is, if you look at where the chip industry is going, the way forward is multi-core chips. Intel and AMD have got to find workloads to fill all those extra cores, and one of those new workloads is going to be graphics (in fact, to an extent there is a convergence of GPU and CPU workloads going on).

I think with Intel looking to fill those extra cores with Larrabee-style workloads, AMD took the view that ATI could give them the same advantages, in the form of Fusion, for all their extra cores. It also gave them a lucrative entry into the chipset market and a complete platform for OEMs. In fact, if it weren't for the sucky Phenom, we'd all be saying how impressive the whole Spider platform is and what a great advantage it is for AMD.

ATI in turn wanted to find a sugar daddy to see them through the same industry reorganisation, and to give them a way to sidestep the crushing weight of Nvidia (and eventually Intel).

Now the music has stopped, and Nvidia is left standing without a chair as Intel and AMD gear up to provide complete GPU/CPU/chipset platforms in the next couple of years.
 

That's a fairly optimistic outlook on what AMD and Intel will actually provide in terms of GPU/CPU mixes. I'd rather be a tad more circumspect, if you don't mind. ;)
 
This might easily be the dumbest thing I ever read :LOL:
Ok... I definitely worded that wrong.
They weren't bought because they were doing so well; rather, at the time they were bought they happened to be doing quite well.

Yeah, sure, it's economically sound to buy companies which are doing great. It gets you a wonderful price. And their silly investors... they have the golden goose, but they sell it because they need the cash, and everyone knows it's better to have cash once than to have a constant cash cow.

Do you think that if they were actually doing so great and had a super-duper outlook on the future, they would have agreed to be TAKEN OVER by AMD? Really?
Well, look what happened...
For some reason, AMD was quite happy to spend ~$5 billion on them, which they now consider to have been too much.
 
So we had NV1 - complete failure although interesting.
Riva 128 - Better, but still a failure when compared to the competition.
TNT - Nvidia finally gets it right... still not one of my favorites...
GeForce 256 to 4 - Improves even more... image quality still not what I'd like, but it was fast.
FX Series - Again a relative failure compared to the competition. Probably would have been good on its own.
6xxx series - Nvidia hits its stride again but remains "generally" slower, with lower image quality than ATI.
7xxx series - Still "generally" slower than ATI and still lower IQ.
8xxx series - Absolutely blows away ATI in speed and marginally better IQ, although I prefer R600 AA quality (speed is another matter entirely).

If I were personally looking for major "milestones" in GPU history, I'd vote for G80 and R300 in the most recent past, mostly because both came with a healthy increase in performance and IQ at the same time.

However, that list above is still a tad too lopsided for my taste; IMHO, ATI started to take the mainstream gaming market more seriously with the dawn of the initial Radeon (R100), on the hardware as well as on the software level. If you had compared R100 to NV1x, I don't see any IQ winner between the two. Both had OGSS (whereby R100's AA performance wasn't exactly breathtaking at the time), while AF was either limited to 2xAF yet unoptimized, or something with supposedly more samples that caused far more side effects than I could have tolerated.

R200 vs. NV2x was supersampling vs. multisampling; the fact that NV2x was restricted to 2xRGMS/4xOGMS wasn't much of a problem, since R200 was capable of some RGSS only under specific preconditions. AF on R200 wasn't in the realm of superior quality either; apart from the weird angle dependency, it was limited to bilinear AF only, like R100.

R300 took the market by storm because NV3x came with a huge delay, was vastly inefficient in the arithmetic department and was still limited to 2x samples for RGMS. AF still wasn't better on R300, though, yet considering the much longer list of advantages it had over its competitor, I considered it nitpicking to even mention it. ATI's 6x sparse MSAA was a complete gem for CPU-bound scenarios back then.

R420 vs. NV40: NV finally went to 4xRGMS; ATI still had the 6x sparse MSAA advantage in terms of IQ. No major advantages in the AF realm for either side, with the difference that NV introduced a similar angle dependency to R3x0's. However, NV had offered their hybrid MSAA/SSAA modes since NV2x, which might have been (and still are) of limited usability, but for cases where someone is either resolution- or CPU-bound they can have their uses too. Remember, we didn't have transparency AA back then, and for scenes overloaded with alpha textures the only other solution was, unfortunately, full-scene supersampling.

G7x vs. R5x0: ATI implemented a far less angle-dependent algorithm for AF, which was a significant advantage back then; AA remained nearly the same for both sides, with the exception that we saw the first signs of transparency AA (adaptive supersampling for alpha textures).

G8x vs. R6x0: NV bounced back to lower angle dependency (as found up to NV3x), inserted a shitload of TF/TAs in order to supply the chip with insane bilerp fillrates, and introduced coverage mask AA alongside MSAA. ATI kept roughly the same less angle-dependent AF as in R5x0 and replaced 6x sparse with 8x sparse MSAA. They also introduced custom filter AA, which, with the exception of the edge-detect mode (which looks outstanding IMHO, yet costs quite a bit of performance), isn't something that knocked my socks off either.

Pardon the rather long list, but "huge" differences in IQ are always debatable. For one, IQ is far more of a subjective matter, and secondly, for all those years the analysis in that department has left a lot to be desired. To be honest, it's the most difficult area, because a reviewer would have to do a lot more than run a series of benchmarks and write down the results, and since it's a rather subjective matter there's always some risk involved.

The bare truth is that both IHVs have a long track record of transistor-saving implementations for AF (higher angle dependency) and various performance optimizations. If one sat down and compared the percentage of a scene that multisampling affects vs. the percentage of data touched by AF, the differences are like night and day. Logically, for me personally, there's more weight on AF than on AA quality; it's the first thing I notice anyway. Both of them are at least equally guilty of fooling around with that quality for one reason or another. It was simply a rather tragic irony for NV that when ATI finally removed their high angle dependency, NV went back to mimicking R3x0's angle dependency.

That said, I wouldn't be in the least surprised if either, or even both, end up sacrificing quality again in the future should their transistor budgets for a given target get tight again. While it might be understandable, I say let the innocent cast the first stone.

So, yeah, they've certainly been successful, but I count at least two failures prior to the FX failure. And while successful, other than having better marketing they still trailed ATI in speed and IQ until the 8xxx series.

Did you count the failures and successes of ATI prior to the R300 too?


Basically, when it comes right down to it, Nvidia isn't all that different from any other tech company. They have hits and misses with regard to their hardware.

Nvidia, however, has certainly excelled when it comes to marketing. I don't think anyone would argue that point.

That's most certainly true. The point where I disagree is that any of their past successes were mostly due to marketing. It's not as if anything ATI touched turned into gold and anything NV touched was a steaming pile of poo (or vice versa); both did the very best possible within their own capabilities in each timeframe. If NV's successes were due to marketing alone, then I wonder where terms like execution, developer support and many others would fit into such a picture.

I don't recall who said in the past that one should let ATI design the hardware and NV execute; while it's of course a wild exaggeration for both, there's still some truth hidden within it.

And by the way, these types of discussions have been recycling for years now. Despite each and everyone's opinion on issue A or B, it remains an indisputable truth that it's to everyone's benefit that the desktop graphics market has, and will continue to have, at least two contenders. Without any significant competition it'll get utterly boring, and I'm afraid that in somewhat monopolistic scenarios we'd face far less evolution than we've seen up to now.
 
Riva 128 - Better, but still a failure when compared to the competition.
I wouldn't call the Riva 128 a failure -- it was the best 2D/3D accelerator of its time. Voodoo 2 was better speed-wise, but it was a 3D add-in card, and many people (and OEMs) preferred the Riva 128 as a simpler solution.

GeForce 256 to 4 - Improves even more... image quality still not what I'd like, but it was fast.
Any examples of better image quality during the GF256-4 era? The Radeon 7xxx line was crap overall, and the 8xxx line was crap from a rendering-quality POV. What else?

FX Series - Again a relative failure compared to the competition. Probably would have been good on its own.
Relative to R300, yes.
It was quite bad in speed and quality even compared to GF4 actually.

6xxx series - Nvidia hits its stride again but remains "generally" slower, with lower image quality than ATI.
You need to seriously check your data: the 6xxx series was never 'generally' slower, nor did it have worse image quality than the R4x0 line.

7xxx series - Still "generally" slower than ATI and still lower IQ.
Oh yeah, the 7800 GTX was definitely slower than the X850 XT, no doubt about that. I mean, WTF?

8xxx series - Absolutely blows away ATI in speed and marginally better IQ, although I prefer R600 AA quality (speed is another matter entirely).
No.
Today's 8x/9x vs. 6x0 fight is much the same as the fight between the 4x and 4x0 lines, only the sides have changed, and ATI failed to provide a good enough top-end solution (it somewhat redeemed itself with the 3870 X2, however).

So, yeah, they've certainly been successful, but I count at least two failures prior to the FX failure. And while successful, other than having better marketing they still trailed ATI in speed and IQ until the 8xxx series.
TNT quality was WAY better than Rage 128 quality, speeds were better too.
GF256-4 cards were WAY better quality-wise than the Radeon 256/7xxx/8xxx lines, and speeds were better too.
GFFX was worse than R3x0 in quality and speed.
GF6 was comparable to R4x0 in quality and speed, but had SM3 support.
GF7 was WAY faster than R4x0 and had approx. the same quality level.
GF7 was comparable to R5x0 in speed and worse in quality.
GF8 is better than R5x0/6x0 in speed and mostly the same in quality.
I wouldn't call that 'trailed ATI in speed and IQ until the 8xxx series'. NV was mostly faster and better/the same in IQ. The only era that really stands out from this pattern is the R3x0 vs. NV3x era.

Nvidia, however, has certainly excelled when it comes to marketing. I don't think anyone would argue that point.
No amount of marketing can help you if you don't have a good product. The GFFX fiasco and R3x0's rise to heaven are a good example of that.
So if NV is where it is today, it's because NV gets the hardware side right first, and only on top of the good hardware do they add a certain amount of good marketing.
 
Any chance of getting back to tasty/untasty rumors of the config/technical details of R700 yet?
 
GF256-4 cards were WAY better quality-wise than the Radeon 256/7xxx/8xxx lines, and speeds were better too.
Where exactly were the GF cards better in rendering quality? True, they had an advantage with anisotropic filtering (at least GF3/GF4), with the R100 and R200 chips being limited to 16x AF that was bilinear-only and very angle-dependent. GF3/GF4 also had more usable AA (faster, but often still too slow to be useful). But internal precision was lower on the GF (though probably not really visible usually). Don't forget the horrible DXT1 banding problems either.
And not to mention that the Radeons of that time actually had VGA outputs which were usable above 1024x768 - strictly speaking not the fault of the GF chips, but nonetheless an important quality criterion (you could have found a GF card with good output quality, if you'd invested some time, of course).
 
Where exactly were the GF cards better in rendering quality? True, they had an advantage with anisotropic filtering (at least GF3/GF4), with the R100 and R200 chips being limited to 16x AF that was bilinear-only and very angle-dependent. GF3/GF4 also had more usable AA (faster, but often still too slow to be useful).
Why ask me if you can answer yourself? =)

But internal precision was lower on GF (though probably not really visible usually).
It didn't matter in the same way the low precision of bilinear filtering didn't matter for all the Radeons up until R5x0 cores.

Don't forget the horrible DXT1 banding problems either.
It was fixable by patching the driver to use DXT5 instead of DXT1 -) Not that big of an issue anyway.

And not to mention that the Radeons of that time actually had VGA outputs which were usable above 1024x768 - strictly speaking not the fault of the GF chips, but nonetheless an important quality criterion (you could have found a GF card with good output quality, if you'd invested some time, of course).
We're talking about 3D IQ, no?
I personally never had any problems with GF's 2D quality, and I always thought that the Radeon's 2D wasn't that much better -- the Matrox Millennium's was, but not the Radeon's =)
 
I agree with hoom. It seems like any ATI/AMD topic gets pulled away from technical speculation and turned into a historical fanboy dissertation. Those should be kept out of these speculation threads.

How the GF1/R300 worked or did not work does not add to the topic at hand in an R700 speculation thread.
 
RV770 and R700 to use GDDR5

ATI's new flagship performance and high-end chips will have the ability to use GDDR5 memory. This is the next big thing that ATI plans to embrace.

It looks like GDDR4 didn't really take off, as it doesn't bring a lot of advantages over GDDR3, but GDDR5 promises some crazy frequencies all the way up to 2.5GHz / 5GHz effectively, which is more than two times faster than what you can achieve today with GDDR4.

This means that the new memory will have more bandwidth, and that is something graphics cards always enjoy.

At this time, we are not aware that GT200 has GDDR5 support.

News Source: http://www.fudzilla.com/index.php?option=com_content&task=view&id=5675&Itemid=1

GDDR5 - Wow! I wonder if Nvidia will do anything regarding that rumour.

US
 
Just a minor nitpick: you could get 1.6GHz GDDR4 (Hynix - Samsung has 1.4GHz parts) today easily (actually more like yesterday...), so GDDR5 at 2.5GHz isn't exactly more than twice as fast. It should be more than twice as fast compared to what's used in currently shipping graphics cards, however (if it actually ships at those announced frequencies, which would be surprising considering the announcement history of older GDDR parts).
 

I've read somewhere before that GDDR5 is to start at 2.5GHz, but the grain-of-salt feeling remains until proven otherwise. Either way, the jump in frequency looks a lot healthier than the one from GDDR3 to GDDR4, and it will hopefully be more conservative in terms of power consumption than the other two RAM types.

By the way, a minor nitpick for the author of that news blurb:

2.5GHz / 5GHz

That should read 2.5GHz, or 5.0 effective due to DDR; 5.0 is not a frequency, for crying out loud.
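
For perspective, here's a quick back-of-envelope bandwidth sketch in Python; the 256-bit bus width is purely an illustrative assumption (not a confirmed R700 spec), and it follows the blurb's convention of treating 2.5GHz as double-pumped to 5.0 effective:

```python
# Peak theoretical memory bandwidth: effective transfer rate (GT/s) times
# the bus width in bytes. Figures are illustrative, not confirmed specs.

def bandwidth_gb_s(base_clock_ghz: float, transfers_per_clock: int, bus_width_bits: int) -> float:
    effective_gt_s = base_clock_ghz * transfers_per_clock  # giga-transfers per second
    return effective_gt_s * (bus_width_bits / 8)           # bytes moved per transfer

# Assumed 256-bit bus; clocks taken from the posts above.
print(bandwidth_gb_s(1.6, 2, 256))  # ~102 GB/s for 1.6GHz GDDR4 (3.2 effective)
print(bandwidth_gb_s(2.5, 2, 256))  # ~160 GB/s for 2.5GHz GDDR5 (5.0 effective)
```

Whether GDDR5 actually ships at those clocks is, of course, the bigger question.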
 
One thing I just thought of: since the R680/R670 were just a half-step on the way to AMD's new strategy, I think it's quite possible that R770 will be fabbed in AMD's fabs. They are looking to include it in their Fusion chip, so they are going to have to fab it there eventually. Furthermore, I wonder what architectural bonuses they would get from doing so?


Perhaps that is why they are making the chip dual-core? AMD has experience making dual-core chips in their fabs, so it seems like a no-brainer to transfer that knowledge base to making GPUs. Plus, they could get higher margins on their parts, and I'm sure that AMD has some extra fab capacity available.
 

Nope, still TSMC AFAIK. AMD doesn't really have available fab capacity right now, not to mention that they only have a rather mediocre 65nm process in any kind of mature form.

And the second paragraph is quite wrong really.
 
Perhaps that is why they are making the chip dual-core? AMD has experience making dual-core chips in their fabs, so it seems like a no-brainer to transfer that knowledge base to making GPUs. Plus, they could get higher margins on their parts, and I'm sure that AMD has some extra fab capacity available.

GPUs have been multi-core forever, in the same sense that CPUs are now dual/quad-core.
 
CPUs have multiple subunits, yet they can still be called dual-core. What I mean is that I've heard it's the same architecture doubled up rather than one large chip (for flexibility).

Actually, about fab capacity: I heard that they had a new fab up near Canada, in New York state, and I heard that this fab could be used to make GPUs.

Wouldn't it be expected that they'd start making them in their own fabs now, as a prelude to making the Fusion chip with an R770 core?

:)
 
Actually, about fab capacity: I heard that they had a new fab up near Canada, in New York state, and I heard that this fab could be used to make GPUs.

They might have a New York fab in the future. It might be available around 2012/2013; I don't remember off-hand.
 