AMD: R8xx Speculation

How soon will Nvidia respond with GT300 to the upcoming ATI RV870 lineup of GPUs?

  • Within 1 or 2 weeks

    Votes: 1 0.6%
  • Within a month

    Votes: 5 3.2%
  • Within couple months

    Votes: 28 18.1%
  • Very late this year

    Votes: 52 33.5%
  • Not until next year

    Votes: 69 44.5%

  • Total voters
    155
  • Poll closed.
I don't care about gaming performance right now... I hope they'll allow some technology/architecture previews to be published. But if they leak some Crysis 2 videos, running in full HD with DX11, smoothly on a single Cypress... well, that would be great :D

Has anybody posted the rumored Vantage scores here? Somebody on an Italian forum seems to be in touch with Andrea Yang, who claims to own the cards and says they score pretty high in Vantage (P preset).
4x 5870 = 60k
1x 5870 = 21k, with the CPU running at stock.

Well, let's just wait. :D
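For what it's worth, those two rumored scores imply a particular multi-GPU scaling factor. A quick back-of-envelope check, assuming (which isn't quite true) that the P-score were purely GPU-bound — the CPU tests weigh in too, so this understates real GPU scaling:

```python
# Rough CrossFire scaling check on the rumored Vantage P-scores above
# (60k for 4x 5870, 21k for a single card).

def scaling_efficiency(multi_score, single_score, gpu_count):
    """Fraction of ideal linear scaling achieved by the multi-GPU setup."""
    return multi_score / (single_score * gpu_count)

eff = scaling_efficiency(60_000, 21_000, 4)
print(f"4-way efficiency vs. linear scaling: {eff:.0%}")  # ~71%
```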
 
CJ already posted more "sane" scores, with ~P16K for a single Cypress.

oh, i see. :D

I'm wondering how come neither ChipHell nor Expreview leaked significant information this time. Did ATI spot their sources?
I don't think there has ever been a time when we were this close to the actual launch while knowing almost nothing about the cards. RV770's specs were out around a week before launch, I believe, and G80's about a month before.
I mean, we still don't know how many SPs these things have: 1280? 1600? 2000?
 
ChipHell has been saying 1600 for a while, and Charlie confirmed it recently. So we have some information, but very little. AMD's done a great job keeping everything secret this time.
 
Alternatively, the launch is not so close after all. R600 also came a lot later than expected...
The difference here is that AMD has actually set and confirmed the dates for these briefings, which wasn't the case with R600.
 
OT

"Il Dominati" doesn't make sense in Italian, because "dominati" is plural. You could say "Il Dominato"... which is phonetically quite close to "Illuminato".
And "dominato/dominati" means to be dominated, not to dominate.
"Il Dominatore"/"I Dominatori" are the ones who dominate.
"Dominati" phonetically is not too far from "Don Minati"... which sounds like a mafia name. :LOL:

End OT

By the way, when can we expect the first leaked information?
If the London conference is being held today, I'd expect something by this evening!
OT

LOL
That was almost like the centurion in Monty Python explaining why "Romanes eunt domus", or whatever it was Brian had written on the palace wall, is incorrect :D

mea culpa, I found it
[Brian is writing graffiti on the palace wall. The Centurion catches him in the act]
Centurion: What's this, then? "Romanes eunt domus"? People called Romanes, they go, the house?
Brian: It says, "Romans go home. "
Centurion: No it doesn't ! What's the latin for "Roman"? Come on, come on !
Brian: Er, "Romanus" !
Centurion: Vocative plural of "Romanus" is?
Brian: Er, er, "Romani" !
Centurion: [Writes "Romani" over Brian's graffiti] "Eunt"? What is "eunt"? Conjugate the verb, "to go" !
Brian: Er, "Ire". Er, "eo", "is", "it", "imus", "itis", "eunt".
Centurion: So, "eunt" is...?
Brian: Third person plural present indicative, "they go".
Centurion: But, "Romans, go home" is an order. So you must use...?
[He twists Brian's ear]
Brian: Aaagh ! The imperative !
Centurion: Which is...?
Brian: Aaaagh ! Er, er, "i" !
Centurion: How many Romans?
Brian: Aaaaagh ! Plural, plural, er, "ite" !
Centurion: [Writes "ite"] "Domus"? Nominative? "Go home" is motion towards, isn't it?
Brian: Dative !
[the Centurion holds a sword to his throat]
Brian: Aaagh ! Not the dative, not the dative ! Er, er, accusative, "Domum" !
Centurion: But "Domus" takes the locative, which is...?
Brian: Er, "Domum" !
Centurion: [Writes "Domum"] Understand? Now, write it out a hundred times.
Brian: Yes sir. Thank you, sir. Hail Caesar, sir.
Centurion: Hail Caesar ! And if it's not done by sunrise, I'll cut your balls off.
 
DC = display controller. You need as many display controllers as independent outputs. It's possible that development of R200 began earlier than development of RV200 or RV100. Maybe it was a design decision: videophiles bought the cheaper products, not the most expensive parts... Anyway, both reference boards (R8500/R8500LE) had the secondary DAC present. The last possibility is a bug: R600 was said to be the buggiest design since R200, so R200 is possibly even buggier.

I must admit I don't really know, beyond guessing, how the DC actually works (I know this isn't the thread for it, but we really need a B3D wiki like diyaudio has :D). I more or less figured out how TMDS and the RAMDAC fit into the scheme, but I never gave a thought to there being something else along the line (beyond the HDCP fuss nowadays). Is it just an interface to the DAC/TMDS, or does it do more than that?

I gave a day's thought to those R200 problems you mention. And yep, that might be a main reason: R200 was 180 nm based (0.18 µm in those days :LOL:) and it was mostly a paper launch, with many LE and LELE editions coming around to fill the gap left by the missing real parts. But they probably designed it, as you say, much earlier than the VE editions. RV100 was a prototype of a lightweight R100 with video-enthusiast features and a DDR memory controller, and all they did was cut out the properly working parts of R100 and copy-paste the memory controller and video parts (dual DACs, built-in TMDS TV-out) from RV100 to build RV200 ... another DX7+ part with a misleading R200-series name (I always called it RV150, as that fits it better).
I believe mczak said R200 even had the advanced 400 MHz DACs, which I had thought was an RV250/R300-and-later feature.

As for not implementing the second DAC in R200: most probably it was a huge design (buggy, as you said), and in those days the needed redesign would probably have taken more than the 10-12 weeks it took by R500 times. So they cut down R&D on the old project. And since R200 yields were so low, they expunged anything that would burden this 180 nm design and its die price, so they could sell as many working salvaged R200-based products as possible and make some money.

By the time of R300, and especially after it, they had a huge cash inflow (of "laundered money" :D), so until the R600 design they didn't have such problems. R300 was in fact the successful apex of their past R&D and of the market failures of the R100/R200 incarnations. R600 was an ingenious design, but without AMD's huge cash injection we would probably never have seen the success story of the many buggy RV770 cards (kept hidden from the general public for nine months until RV790 came out), and definitely not now a polished-up product like the DX11 RV870 ... probably something that could have been done easily from R600 with a few tweaks and a 4x smaller die :D

That single DAC was already 400 MHz, though.
Might be right, but only if you're not relying on Everset's info sheet ;)

Hmm, in fact I think the GF2MX was released a bit earlier than the RV100-based cards (Radeon VE), though it might have been close...

AFAIR the GF3 and GF2MX were released pretty much simultaneously, and after ATI had introduced its first DX7-flavored product in the Radeon incarnation (a 256-bit chip internally, which was a big deal at the time). The first ATI chip to go beyond that was the R580+, which had a 512-bit (2x256-bit) ring bus six years later, as a proving ground for the delayed R600.

I'm not arguing about output (TV) quality, that's another issue, but really the presence of two display controllers is key: there were cards with several outputs before, they just couldn't be used at the same time (at least not independently). In any case, it looks like both ATI and Nvidia decided it was a feature they needed around the same time (after Matrox did it).
Though of course Nvidia missed this feature for some reason on the (later-released) GF3.

I really didn't see any GF2MX-based cards with two outputs; in fact, even MX420 products didn't have that feature, only full GF4MX products like the 440/460 did. Maybe some Quadro-based GF2MX had it, so that's the reason for the two DCs. Softmodding a GF2MX into a Quadro was really popular back then, AFAIR :p
 
RV770 worked well enough on its initial version that that's what was released. That's reasonably rare.

Nobody said that; it worked out extremely well given the lack of real competition. And as for maturity and processes: I really don't want to chase your mind three jumps behind when you don't even try to follow me one jump behind in this neverending thread, so excuse me if I miss something.
You were trying to point out that RV740 doesn't have salvage parts, while turtle explained it pretty well for you.

Anyway, as for RV770's maturity: we saw pretty well how mature it was not even six months later, when hardware enthusiasts reported RV770's low-PCIe-bandwidth bug. Or, even worse for gamers and OCers, that the chip sucks more juice than board manufacturers provisioned for in their poorly equipped "reference" PWM and PCB designs. That was my point about your claimed RV770 maturity.

The other, older stuff is self-explanatory: when there were no buyers left for the old R600 (RV670 was in production), they expunged a pretty damn good product as the HD2900Pro, with a 256-bit PCB only, which could run at XT clocks with almost the same performance, without any real impact from its halved memory bandwidth. Why, I don't even hope you'll try to understand; maybe because you read my last reply with too much of that sarcasm of yours.

6.0Gbps doesn't seem like a worthwhile tradeoff for the ~25% extra power it'd use compared to 5.5Gbps RAM.

It's only 14% extra power over the 1.5V part ;) But in terms of extra performance, enthusiasts always pay a premium and ask no questions at all about power consumption. So it has its buyers, as the XTX always did.
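The "~25%" and "14%" figures being traded here fall out of which terms of the usual CMOS dynamic-power model you include. A rough sketch, assuming a hypothetical 1.6 V rail for the faster part (only the 1.5 V figure appears in the posts above):

```python
# Back-of-envelope dynamic power scaling for GDDR5.
# Assumes P ∝ f * V^2 (CMOS switching power) and a hypothetical
# 1.6 V rail for the 6.0 Gbps part vs the 1.5 V part mentioned above.

def power_ratio(f_new, v_new, f_old, v_old):
    """Relative dynamic power under the P ∝ f * V^2 model."""
    return (f_new / f_old) * (v_new / v_old) ** 2

# 6.0 Gbps @ 1.6 V vs 5.5 Gbps @ 1.5 V: frequency and voltage together
full = power_ratio(6.0, 1.6, 5.5, 1.5)       # ≈ 1.24 → the "~25% extra" figure

# Voltage term alone, same data rate: (1.6 / 1.5)^2
volt_only = power_ratio(1.0, 1.6, 1.0, 1.5)  # ≈ 1.14 → the "14% extra" figure

print(f"f*V^2 scaling: {full:.2f}x, V^2 only: {volt_only:.2f}x")
```

So the two posters may simply be counting different terms: voltage alone gives ~14%, voltage plus the higher data rate gives ~24%.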

Bandwidth isn't the bottleneck for the HD4870. It has 125% more bandwidth than the HD4770, but only 20% more performance with 4x MSAA.

Well, you can look at it that way. I'd say the HD4770 is memory starved, and with more memory bandwidth it would close the gap even more :LOL: But is that all? The original G80 flavor raised the performance bar to at least 8x AA and 16x AF, and mainstream cards are simply not meant to do that. I even believe RV740 could have pretty much embarrassed ATI if one of their partners had put faster 1.2 GHz memory and more decent coolers on it, using the later chip batches, on proper HD4770 review-class cards with all those capacitors and a more stable PWM design. But forget it now. It's time for the DX11 parts to claim the realm.
 
What I was pointing out is that RV740 launched at 750 MHz, not 850. So, at that point the process didn't seem mature enough to allow good yields (which are also affected by the target clock).

Something many of us have been pointing out to Jawed, trying to explain to him that the whole RV740 expedition was itself based on salvage parts. That's the main reason the HD4770 didn't get any market siblings until now, when yields have accordingly risen over 60% and newer parts have so little leakage that they're sold as the cheaper HD4750 flavor, without the need for an additional PCIe molex connector, while still reaching the projected 850 MHz or even more. Of course, ATI won't miss the opportunity to mix old batches into shipments of new chips if they need to sell them: as in the past (Jawed: HD2900), first they get rid of the poorly clocking parts, then they produce some underclocked version of the product that clocks like hell even without the 2x PCIe connectors needed by the zealously priced XT parts. Heh.

More on TSMC and their ill-fated process ... some f*king interesting news ... where was T2+ (Terminator 2+ :LOL:) produced? Dresden?
http://cens.com/cens/html/en/news/news_inner_29124.html

If they can produce CPUs, which are at least twice as complicated as GPUs, with 85% yields within a six-month timeframe, I'd say they can deliver some hot-selling, Niagara-leaking products like GPUs right now.

Also, RV770->RV790 took at least 8 months; RV740->Cypress, four and a half. And the process problems we've heard about on 40 nm seem to be much worse than those encountered on the 55 nm node. So, are very high clocks possible? Yes. Are they likely? Given the history, I would not say so.

I don't get you. Do you deliberately ignore that RV770 wasn't the first 55 nm part ATI put on the market? Six months earlier, RV670 parts based on the same 55 nm process had hit the market. And all they did with RV790 was fix some bugs in the current chip to compete with elephantiasis-infected Nvidia and their new 55 nm GT200b chip. Maybe if Nvidia had responded better, we'd have seen something more of R700 than just a bug-fixed chip with a huge ring of decap .... or ditch defence.

*Sigh*
1+ GHz is needed with 1200 SPs to achieve 2 teraflops (5850) plus 20+% (5870). If there are 1280, it will be different, but you'd still need more than 900 MHz.
The 55 nm chips represent a manufacturing process that didn't have such big problems.
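The clock figures in this argument follow from simple arithmetic; a quick sketch, assuming each SP retires one MAD (2 FLOPs) per clock, as on R700-class hardware:

```python
# Sanity check of the clock-vs-SP-count argument: the clock needed to
# reach a FLOPS target, assuming 2 FLOPs (one MAD) per SP per clock.

def required_clock_mhz(target_gflops, sp_count, flops_per_sp_per_clock=2):
    """Clock (MHz) needed to hit target_gflops with sp_count stream processors."""
    return target_gflops * 1e9 / (sp_count * flops_per_sp_per_clock) / 1e6

# 2 TFLOPs + 20% = 2.4 TFLOPs (the rumored 5870 target)
print(required_clock_mhz(2400, 1200))   # 1000.0 MHz -> "1+ GHz" with 1200 SPs
print(required_clock_mhz(2400, 1280))   # 937.5 MHz  -> "more than 900 MHz"
print(required_clock_mhz(2400, 1600))   # 750.0 MHz  -> comfortable with 1600 SPs
```

Which is why the rumored 1600-SP configuration would make very high clocks unnecessary for the same FLOPS target.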

Why are you all so fixated on SPs? My bet, somewhere in the middle of this thread, was 1920 SPs with only 48 TMUs / 24 RBEs, before that German site spoiled it all by hitting us with their "sure spec-ula(ifica)tions" ... 1200 SPs @ 950 MHz. As for 1600 SPs on 320 mm², it seems they designed these chips around that huge leakage, as a default characteristic of TSMC's 40 nm process, more than because DX11 parts really need to spend so much die space. And those ludicrous (given ATI's historically conservative approach) 64 TMUs and an 8-quad RBE might add a lot to that size.



It has the same shader count as the 4870 (according to Charlie), and I don't see it having slower clocks, so why should it merely equal the 4870 in performance? There are no DX11 games to test it with, so why wouldn't it match (if not beat) the 4870 in DX10 games?

I believe they all tend to make simple calculations: more memory bandwidth (HD4870) means more performance. As if L2 cache or memory did the calculating. Duh. Well, for the P4 OC generation, more GHz and an enormous cache meant more performance :LOL:



Bandwidth limited?
128-bit with maybe 5 Gbps GDDR5, so 80 GB/s max; most likely it will have slower memory, though.

Shouldn't they be 192-bit parts (in PCB memory bandwidth)? Just to allow use of the slowest possible memory parts, which are the cheapest on the market at the time. That would give us 76.8 GB/s with the cheapest ~1 GHz GDDR5 parts.
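The bandwidth figures being traded here are just bus width times per-pin data rate. A quick sketch; note that, as an aside, the 76.8 GB/s figure actually works out to 3.2 Gbps per pin (an 800 MHz GDDR5 base clock, quad-pumped), not quite 1 GHz — treating that base clock as the intended assumption:

```python
# Peak-bandwidth arithmetic behind the figures quoted above.
# GDDR5 transfers 4 bits per pin per command-clock cycle, so the
# effective per-pin data rate (Gbps) is 4x the base clock (GHz).

def bandwidth_gbs(bus_width_bits, data_rate_gbps_per_pin):
    """Peak bandwidth in GB/s = (bus width in bytes) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps_per_pin

print(bandwidth_gbs(128, 5.0))   # 80.0 GB/s: the 128-bit / 5 Gbps case
print(bandwidth_gbs(192, 3.2))   # 76.8 GB/s: 192-bit / 800 MHz (3.2 Gbps) GDDR5
```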


And 4770 is bottlenecked. In 1920x1200, it does great without AA, well with 4X, but completely chokes with 8X. See here: http://www.pcworld.fr/article/radeo...as-prix/recapitulatif-des-performances/84031/ (sorry about the French, but the charts speak for themselves). Obviously it's capacity-limited as well, but bandwidth is an issue too, as evidenced by the fact that HD4770 drops behind 4830, which is far slower but has a slightly higher bandwidth, and probably shorter latencies too.

I think the latency problems are just the same as back when 128-bit cards used to be top-performing/upper-mainstream and vendors sold 128-bit cards with "doubled memory capacity" for a premium price. All they did was put more chips on the back side on the same lanes, and those cards often showed lower performance caused by latency, and sometimes even by cheaper DDR memory.

So the only issue with the HD4770 is that ATI decided, as in the Juniper 192-bit case, to use TWO 32-bit memory chips on the same lane. Those chips need some ganged/shared arrangement to work on one lane, with extra interrupts, and latencies skyrocketed for that reason. They could have used costlier 1 Gbit chips instead, but this way they hold back a great-performing budget-mainstream chip without having to pick out slower memory chips, which has become hard to arrange lately, and they can use more of the cheaper memory chips for future 1.5 GB / 3 GB configurations on future Juniper-class cards. It was easy to limit 128-bit RV250 chips by picking lower-clocked parts back when DDR memory was so expensive, but 800/900 MHz GDDR5 parts have already become obsolete. So all they can do is hamper the cheapest 1 GHz parts by bringing additional latency into play.

Nowadays they've turned to the green concept of one PCB for all parts, as in the HD4870/4850/4830/4730 case, with different memory arrangements and different core clocks. But that can hardly be done in the shorter-lived but more ROI-attractive budget mainstream, and they need something to differentiate the new cheap RV740 from the pricier high-end chips of the same, still-alive R700 series.

On the other hand, they needed to see how the 40 nm chips perform, so they couldn't really cap the core clock. Even though I think the leakage problems came out as a great excuse not to find out whether 40 nm parts could easily scale well above 1 GHz as projected. Just like in the RV570 case, when they produced a chip that could hardly reach 670 MHz even heavily OCed, while the more complex R600 could easily reach 750 MHz, and heavily OCed somewhere around 830 MHz.
 
So who will buy the 5.5 Gbps & 6.0 Gbps GDDR5 from Hynix?

Every semiconductor manufacturer has a paper launch from time to time :LOL:

Anyway, Samsung announced its 6.0 Gbps prototype (1.5 GHz GDDR5) somewhere last year (I linked it already somewhere in this thread), so it's not really big news that Hynix now has it in mass production. They're pretty neck-stabbing bastards to each other, and I believe Samsung also has something piled up in their arsenal already ;)

Also, they've had 7.0 Gbps-per-pin parts (1.75 GHz GDDR5) in production since mid-February, and they proudly announced them as LV GDDR5 at 1.35 V, down from the stock 1.4-1.575 V they used to produce with the old lithography.
 