AMD: R7xx Speculation

So, is it true that the codename for the HD 4850 card is "Makedon"?
It's hard to believe he would only get to be the third in line (behind the HD 4870 and HD 4870 X2)... ;)
 

I think he's been out of the office for a while up until the first of May, so they probably took advantage of that and quickly distributed the codenames :D
 
I wonder how much impact GDDR5 will make over GDDR3 - I'm talking about the Radeon HD4850 vs. HD4870, performance-wise.

Should we expect something similar to:

A. Radeon X1900XT 512 GDDR3 vs. Radeon X1950XTX 512 GDDR4
B. Radeon HD 2900 XT 512 GDDR3 vs. Radeon HD 2900 XT 1GB GDDR4
C. Radeon HD3850 512 GDDR3 vs. Radeon HD3870 GDDR4
D. -OR- none of the above.
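
For a rough sense of what's at stake: peak memory bandwidth is just bus width times effective data rate. A back-of-the-envelope sketch in Python - the 256-bit bus for both cards and the data rates below are only rumored/illustrative at this point, not confirmed specs:

```python
# Peak memory bandwidth = bus width (bytes) * effective data rate.
# The 256-bit bus and the data rates below are assumptions based on
# current rumors, not confirmed specs.

def bandwidth_gb_s(bus_bits: int, effective_mhz: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and effective data rate."""
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

print(bandwidth_gb_s(256, 2000))  #  64.0 GB/s - GDDR3 at 2.0 Gbps effective
print(bandwidth_gb_s(256, 3600))  # 115.2 GB/s - GDDR5 at 3.6 Gbps effective
```

If numbers in that ballpark hold, the 4870 would have nearly double the 4850's bandwidth - a much bigger gap than any of the GDDR3-vs-GDDR4 pairs above.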
 
I think it could be a good strategy. Speaking for myself, I want something that can run Crysis a lot better than my 9600GT. But I'm not going to pay $549 for it.

But at reasonable prices in the $200-349 range, as rumored for the AMD cards, I might.

If Nvidia is going to do GT200 at a very high price and nothing else, at least initially (leaving only G92 for the mid-to-low range, which should be trounced by the 4800 parts), as it looks, AMD could make a lot of hay. I think they might have a winner on their hands if 32 TMUs is true.


Given current rumors that the 4870 = 3870X2, which = one 8800GTX, I don't see how your 4870 trounces the 9800GTX/GX2. And let's not forget the G92b, which will be G92 on 55nm and will allow for higher clocks.
 

Well, in some benchmarks the HD3870X2 sucks really badly, and in some it trounces the GF8800GTX.

It may well be that in 2 out of 10 benchmarks the HD4870 trounces the GF9800GX2 - in games that are not optimized for dual-GPU and therefore run better on a single GPU.
 
I wonder if it will stay this way forever - with ATI hardware a full generation behind NVIDIA.

ATI's RV770 will finally catch up to Nvidia's G80; then, right after that happens, Nvidia will release GT200 and be ahead for about a year, before ATI catches up again with RV870 on a single chip.
 
Well, in some benchmarks the HD3870X2 sucks really badly, and in some it trounces the GF8800GTX.

It may well be that in 2 out of 10 benchmarks the HD4870 trounces the GF9800GX2 - in games that are not optimized for dual-GPU and therefore run better on a single GPU.

1. Define trounce
2. Show me one bench that isn't a flyby, a walkthrough with nothing happening, an in-game bench or a cutscene
 
1. Define trounce

This is what I mean by trounce :D


[Image: oblivionscale.png - Oblivion benchmark chart from the AnandTech article linked below]


http://www.anandtech.com/video/showdoc.aspx?i=3256&p=5
 


"Our Oblivion test takes place in the south of the Shivering Isles, running through woods over rolling hills toward a lake."

And that there is exactly what I was referring to. Gee, let's run a straight line, avoid fights and anything that might cause the cards to work harder and possibly cause them to tank.

"This benchmark is very repeatable, but the first run is the most consistent between cards, so we only run the benchmark once and take that number"

The reason for this issue in Oblivion is that the second time you load an area after having started the game, it starts to throw in enemies for you to fight. Gee, I wonder why they didn't want that?
 
Again, same kind of test run; it may as well be labeled a walkthrough. Knowing how well a card performs while nothing is happening tells me nothing, because that's not how anyone actually plays.

The whole point is that the Radeon 3870X2 does not perform equally in all benchmarks - sometimes a single-GPU solution runs better than a dual-GPU one.
 
If older chips had 30 clock domains, don't you think newer ones would as well?

Allow me to point to earlier comments in this thread.

Context is everything. ;) Lukfi was clearly talking about a separate shader clock, not clock domains in general...

I'd be very surprised if it clocked lower than the 3870. Hopefully they could just focus on improving the speed of the stream processors (and texture units if possible) and clock those higher.
Well, that's really the point of my post: if you want to increase the clock speed of a monolithic clock domain, you don't have the luxury of improving one block and not the other. It's all or nothing. Improving 'just' the shader and texture unit would imply that they are running on different clocks. Since this is not the case for RV670, it can only be done by changing the architecture.
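
To make that concrete, here's a toy model (all block names and numbers invented) of why a single clock domain makes targeted improvements pointless - the chip runs at the speed of its slowest block, so a faster shader core buys nothing unless it was the critical path:

```python
# Toy model of a monolithic clock domain: every block must meet timing at
# the same clock, so the chip clock is set by the slowest block.
# Block names and fmax values are made up for illustration.

def chip_clock_mhz(block_fmax: dict) -> int:
    """With one clock domain, the chip runs at the slowest block's fmax."""
    return min(block_fmax.values())

blocks = {"shader": 900, "texture": 850, "rop": 775}  # hypothetical MHz
print(chip_clock_mhz(blocks))   # 775 - the ROPs set the ceiling

blocks["shader"] = 1100         # speed up only the shader core
print(chip_clock_mhz(blocks))   # still 775 - no gain without a separate domain

# With a separate shader clock domain (G80-style), the gain would count:
print(blocks["shader"], min(blocks["texture"], blocks["rop"]))  # 1100 775
```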

The only thing I'm not sure I agree with/understand is your point about 45nm/40nm. I mean, that kinda goes against what I said in my latest 40G news piece - even if you optimized mostly for power efficiency, it'd still be very easy to get a 100MHz bump. Am I missing something here?
I haven't had the chance to play with 40/45nm libraries, but I'm just not holding my breath: the trend is very clear in that the speed improvement from moving to smaller processes is getting progressively smaller. The step from 90 to 65nm was really quite disappointing. Also note that fab houses (pretty much all of them) have a long history of being too optimistic about the performance of new processes. I've seen cases where initial SPICE decks were 20% faster than the final production ones. (Over the years they've been getting better at it, but it still pays to be very skeptical.)

As for stepping up 100MHz: that depends on your initial speed right? Going from 1.5GHz to 1.6GHz is going to be much easier than going from 200MHz to 300MHz, but you knew that. ;)
In the context of a hypothetical RV770 in 40nm: beats me. These days, RAM speed is particularly dicey, but I guess going from 750MHz to 850MHz is not all that unreasonable...
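
The arithmetic behind that point, for the record - the same absolute bump is a very different relative ask depending on where you start:

```python
# Same 100MHz step, very different relative increases.
for base_mhz, target_mhz in [(1500, 1600), (200, 300), (750, 850)]:
    pct = 100.0 * (target_mhz - base_mhz) / base_mhz
    print(f"{base_mhz} -> {target_mhz} MHz: +{pct:.1f}%")
# 1500 -> 1600 MHz: +6.7%
# 200 -> 300 MHz: +50.0%
# 750 -> 850 MHz: +13.3%
```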

I don't know the trade-offs or complexities involved in using multiple kinds of transistors for the same chip, but perhaps others would know better.
There are processes that support 3 types of standard cells, with different transistor threshold voltages: LVT (Low), SVT (Standard) and HVT (High), in order of decreasing leakage and decreasing speed. (See this presentation.) They can be freely mixed, but LVT cells should be avoided like the plague: their leakage can be many orders of magnitude higher than HVT cells, for only a 2x or so speed increase!
Backend tools are supposedly robust enough to upgrade cells to HVT when there's enough timing slack.
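
A minimal sketch of what such a leakage-recovery pass could look like - for each cell, swap in the highest-Vt (slowest, least leaky) variant that the available timing slack can absorb. The delay penalties and leakage numbers below are invented for illustration, not from any real library:

```python
# Vt flavors as (name, extra delay in ps vs. LVT, relative leakage).
# Numbers are illustrative only; real libraries differ widely.
VT_FLAVORS = [
    ("LVT", 0.0, 100.0),  # fastest, leakiest
    ("SVT", 15.0, 10.0),
    ("HVT", 35.0, 1.0),   # slowest, least leaky
]

def recover_leakage(cells):
    """Pick, per cell, the highest-Vt flavor that still meets timing.

    `cells` is a list of (name, slack_ps) pairs, where slack_ps is how much
    extra delay the path through that cell can tolerate.
    """
    assignment, total_leakage = {}, 0.0
    for name, slack_ps in cells:
        # try HVT first, fall back toward LVT until timing is met
        for flavor, extra_delay, leakage in reversed(VT_FLAVORS):
            if extra_delay <= slack_ps:
                assignment[name] = flavor
                total_leakage += leakage
                break
    return assignment, total_leakage

print(recover_leakage([("u1", 40.0), ("u2", 20.0), ("u3", 5.0)]))
# ({'u1': 'HVT', 'u2': 'SVT', 'u3': 'LVT'}, 111.0)
```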

There are also options to mix multiple processes on the same chip, but AFAIK this is not very common and tool flows are still immature.
 
The whole point is that the Radeon 3870X2 does not perform equally in all benchmarks - sometimes a single-GPU solution runs better than a dual-GPU one.


And I understand that. I've seen the benches where the X2 was slower than a single card because of the Crossfire hang-up. But the benches you are choosing to show it trouncing a single-card solution are nothing more than a walkthrough that doesn't really show what the card(s) can and cannot handle. Now, to be fair to Anand, I can somewhat understand why they did what they did, because Oblivion is by far the hardest game in which to get the same run on the same load, time after time after time. Each new load will bring a different scenario no matter what you try to do. Try it yourself and see.

Runs (this is what can happen with each load after the first, 1 being the first load for a save):
1. 20-second run from point A to B, no enemies, no nothing
2. this time, 1 enemy
3. 2 deer running
4. 3 enemies
5. an Oblivion gate and 2 enemies
6. 1 deer, 1 Oblivion gate and 6 enemies

With that kind of randomness, it can cause massive problems when trying to bench from a certain area - hell, any area in Oblivion - and get consistent numbers. But it would give a better feel for how well cards handle actual gameplay, as opposed to a walkthrough.
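
One way to live with that randomness instead of dodging it: repeat the run a bunch of times and report the mean and spread, rather than trusting a single "clean" pass. A sketch - `run_benchmark` here is a stand-in for an actual timed run, not a real API:

```python
# Repeat a noisy benchmark and summarize it statistically instead of
# cherry-picking one quiet run. run_benchmark() is a placeholder that
# mimics Oblivion's random encounters; the fps model is invented.
import random
import statistics

def run_benchmark() -> float:
    enemies = random.randint(0, 6)  # each load spawns 0-6 enemies
    return 60.0 - 4.5 * enemies + random.gauss(0, 1.5)  # made-up fps model

samples = [run_benchmark() for _ in range(30)]
print(f"mean {statistics.mean(samples):.1f} fps, "
      f"stdev {statistics.stdev(samples):.1f} fps")
```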
 
Given current rumors that the 4870 = 3870X2, which = one 8800GTX, I don't see how your 4870 trounces the 9800GTX/GX2. And let's not forget the G92b, which will be G92 on 55nm and will allow for higher clocks.

One current rumor. No plural.

Your logic is sound, but I really take it all with a grain of salt. If the 32 TMUs, 480 SPs and higher clocks are true, we should see a greater performance increase.

My greatest indicator is the price; they're not going to mess around with that. The rumored $349 price for the 4870 suggests a decently performing part to me. One would think somewhere above the 9800GTX, which runs about $329, at least.
 
And I understand that. I've seen the benches where the X2 was slower than a single card because of the Crossfire hang-up. But the benches you are choosing to show it trouncing a single-card solution are nothing more than a walkthrough that doesn't really show what the card(s) can and cannot handle. Now, to be fair to Anand, I can somewhat understand why they did what they did, because Oblivion is by far the hardest game in which to get the same run on the same load, time after time after time. Each new load will bring a different scenario no matter what you try to do. Try it yourself and see.

Runs (this is what can happen with each load after the first, 1 being the first load for a save):
1. 20-second run from point A to B, no enemies, no nothing
2. this time, 1 enemy
3. 2 deer running
4. 3 enemies
5. an Oblivion gate and 2 enemies
6. 1 deer, 1 Oblivion gate and 6 enemies

With that kind of randomness, it can cause massive problems when trying to bench from a certain area - hell, any area in Oblivion - and get consistent numbers. But it would give a better feel for how well cards handle actual gameplay, as opposed to a walkthrough.

There are Crysis benches from the great Nvidia-friendly HardOCP benching methods that show the same thing, at least in DX9.

Do you really not believe that, when its raw power is correctly utilized, the 3870X2 can occasionally trounce a single G80?
 

Must be a new set of benches they have done then, as I have only ever seen the DX10 ones comparing the two cards. Let me get to those new HOCP benches and get back to you - I want to see this first-hand. I no longer like looking at canned benches.

PS: Did you get what I was referring to with Oblivion?

Saw your next post after having read the article.
 
From the benches I've seen, I would conclude that the HD 3870 X2 is generally a faster card than single-chip nVidia offerings (at least the G92-based ones). There are games where CF doesn't scale very well; in those cases it's either a driver problem, which may or may not get fixed, or a game-engine (optimization?) problem that affects all multi-GPU setups.

And by the way, you can't objectively benchmark Oblivion while fighting enemies. Remember, the point is to compare the cards to each other, not to determine which one provides a playable framerate at the given settings. You can get comparable numbers by running around peacefully, not by fighting enemies.
 