AMD: R8xx Speculation

How soon will Nvidia respond with GT300 to the upcoming ATI RV870 lineup of GPUs?

  • Within 1 or 2 weeks

    Votes: 1 0.6%
  • Within a month

    Votes: 5 3.2%
  • Within a couple of months

    Votes: 28 18.1%
  • Very late this year

    Votes: 52 33.5%
  • Not until next year

    Votes: 69 44.5%

  • Total voters: 155
  • Poll closed.
No, there is a USB version as well, so you can sync to the display or the vid card.

-Charlie

From what I've heard the demonstrations had the USB ports on the display, no?

I'd have more faith in display-synced too, but in case 60Hz 3D works on my existing panel, it'd be a fun diversion to see whether the immersion is what a certain group of people promised.
 
From what I've heard the demonstrations had the USB ports on the display, no?

I'd have more faith in display-synced too, but in case 60Hz 3D works on my existing panel, it'd be a fun diversion to see whether the immersion is what a certain group of people promised.

There is a VESA stereo connector on most new TVs that just puts out a frame strobe, and possibly has a control to swap which eye is up and down. If you read my article on it, there is a pic of the connector and cable. If you have it on the PC USB port, it should be trivial to keep the glasses in sync.

The Bit Cauldron guys also said there are special versions being discussed for custom hardware, but didn't go into detail. If you think about it, it is just an 802.something (ZigBee) broadcaster that puts out a pulse on a regular interval. The custom hardware that plugs into anything should be a pretty simple thing to make. It doesn't have to transmit any real data other than setup, and that is already in the glasses and transmitter.
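
To give an idea of how little the transmitter actually has to do, here is a toy sketch of a frame-strobe broadcaster. All names and numbers are made up for illustration; this is not Bit Cauldron's actual protocol or firmware.

Code:
import time

REFRESH_HZ = 60               # panel refresh rate; one pulse per frame
FRAME_TIME = 1.0 / REFRESH_HZ

def broadcast(payload: bytes) -> None:
    # Stand-in for handing a packet to an 802.15.4/ZigBee radio.
    print(f"{time.monotonic():.4f}s -> {payload.hex()}")

def run(swap_eyes: bool = False) -> None:
    # Emit one shutter pulse per frame, alternating left/right.
    left = not swap_eyes      # the 'swap' control flips which eye leads
    next_tick = time.monotonic()
    while True:
        broadcast(b"\x01" if left else b"\x02")   # one byte: which eye is open
        left = not left
        next_tick += FRAME_TIME
        time.sleep(max(0.0, next_tick - time.monotonic()))

run()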

-Charlie
 
I see in the article you expect them to be available soon. Seen anyone selling them yet? I doubt they work on my 2-year-old Vizio, but they should work with my 4850 and that monitor. Would give it a shot.
 
Don't forget, they can increase die area by 50% and still be smaller than NV. They can increase power use by 50% and still draw less than NV. Given that they have a VERY modular architecture, neither would be much of an engineering problem. Also, ATI has demonstrated a much better ability to produce parts on TSMC's 40nm process, and has notably better yields at similar die sizes (No, I won't quantify that, but it is not a guess).

Given all of that, you could see a 2400 shader, 384-bit, 250W Evergreen that likely crushes GF100 in almost every measurable way, and a yield-salvage 2000 shader part as well. That said, I don't think it will happen, but should ATI want to, it is well within reach technically.
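
To put rough numbers on that headroom (die sizes are the commonly reported approximations, so treat this as back-of-the-envelope only):

Code:
# Commonly reported (approximate) die sizes -- illustrative, not official.
cypress_mm2 = 334            # RV870 / Cypress
gf100_mm2   = 529            # GF100 / Fermi (estimates vary)

grown = cypress_mm2 * 1.5    # the "50% bigger" scenario above
print(f"Cypress grown 50%: {grown:.0f} mm^2")            # ~501 mm^2
print(f"Still smaller than GF100? {grown < gf100_mm2}")  # True

# The shader array scales roughly with area, so a 1.5x die could plausibly
# carry the speculated 2400 ALUs (1600 * 1.5).
print(f"Scaled ALU count: {int(1600 * 1.5)}")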

On the other hand, they have a 3200 shader card now that is cheaper to produce than a single die 2400 card, and faster as well.

-Charlie

I find it a little bit funny, because your pre-Cypress prediction about transistors, die size and performance was wrong.

If NV doubles the transistor count and only keeps the clock the same, they are in deep trouble. I think 2x performance will be _VERY_ hard to hit, very hard. The ways to up that are mostly closed to them, and architecturally, they aimed wrong.

ATI, on the other hand, can effectively add in 4x the transistors should they need to, but 2x is more than enough to keep pace, so they will be at about 250mm^2 for double the performance. Power is more problematic, but if you need to throw transistors at it to control power/leakage better, ATI can do so much more readily than NV.
 
He is right, though, in that AMD has more headroom in transistors, and thus die space (and presumably power), to tackle arising problems. If Nvidia were facing some major headache now, they just couldn't afford to make the chip 50 mm² bigger to accommodate it.
 
He is right, though, in that AMD has more headroom in transistors, and thus die space (and presumably power), to tackle arising problems. If Nvidia were facing some major headache now, they just couldn't afford to make the chip 50 mm² bigger to accommodate it.

Even a simpler thing like upping the voltage for faster clockspeeds (something that factory overclocked 5870s already do) is fairly easy for ATI, whereas Fermi is already at the limits of what's possible.
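
For what it's worth, the voltage route isn't free either: to a first approximation dynamic power goes with frequency times voltage squared, so a clock bump bought with extra volts costs disproportionate power. A quick illustrative calculation (the percentages are made up, not measured on any card):

Code:
# First-order dynamic power model: P ~ C * V^2 * f
def relative_power(v_ratio: float, f_ratio: float) -> float:
    return (v_ratio ** 2) * f_ratio

# Illustrative: +10% clock bought with +6% core voltage.
print(f"{relative_power(v_ratio=1.06, f_ratio=1.10):.2f}x power")   # ~1.24x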
 
I see in the article you expect them to be available soon. Seen anyone selling them yet? I doubt they work on my 2-year-old Vizio, but they should work with my 4850 and that monitor. Would give it a shot.

I think it is going to be a couple of months. I don't know the exact date, but I seem to recall that it was March. I could be wrong, but it is no later than Q2.

-Charlie
 
Even a simpler thing like upping the voltage for faster clockspeeds (something that factory overclocked 5870s already do) is fairly easy for ATI, whereas Fermi is already at the limits of what's possible.

That is exactly my point. There are half a dozen ways that ATI could erase a 20% performance disadvantage to Fermi if they wanted to, and that is before any major architectural mucking about. I am sure they are waiting to see what NV can actually make, and then they will pull the trigger on the least costly option to slap them back.

-Charlie
 
He is right, though, in that AMD has more headroom in transistors, and thus die space (and presumably power), to tackle arising problems. If Nvidia were facing some major headache now, they just couldn't afford to make the chip 50 mm² bigger to accommodate it.

The only issue with that is they have to consider both Cypress and Hemlock. If they make a chip 50mm^2 bigger, then they will have two close-performing but slightly different top-level chips, with the larger of the two being impractical to use on their top-flight dual-chip card, and the bigger single chip may be uncomfortably close to the performance of the dual-chip card to boot. In addition, splitting production between two chip runs is a complication, and it reduces the number of 'good chips' for the Hemlock configuration.

I can understand the concept of upping the clockspeed. It doesn't complicate anything, they can slot it into their product lineup with larger margins, and it can indeed give 10-20% extra performance without going overboard.
 
Correct me if I am wrong, but aren't the HD5000 series limited to 4 outputs per GPU? Yet the 5870 (6) is a single GPU... does this mean that 2 additional DP outputs were "tacked on"? Any chance of seeing DP 1.2-compatible ports (at least from the 2 "extra" ports)?

((I know... I know... you can't talk about unreleased/unannounced products for fear of giving important information to your competitors))

The chip itself was designed with 6 outputs in mind, i.e. the chip supports up to 6 outputs.

Regards,
SB
 
The only issue with that is they have to consider both Cypress and Hemlock. If they make a chip 50mm^2 bigger, then they will have two close-performing but slightly different top-level chips, with the larger of the two being impractical to use on their top-flight dual-chip card, and the bigger single chip may be uncomfortably close to the performance of the dual-chip card to boot. In addition, splitting production between two chip runs is a complication, and it reduces the number of 'good chips' for the Hemlock configuration.

I can understand the concept of upping the clockspeed. It doesn't complicate anything, they can slot it into their product lineup with larger margins, and it can indeed give 10-20% extra performance without going overboard.

I dunno. Crossfire 5870s are faster than a single 5970; it's just that they would amount to $200 more than the 5970. Likewise, a faster 5870 will be more expensive than the 5970. And they should still be able to design a Hemlock out of it if they so choose; they would just clock it at 5870 speeds instead of 5890 speeds.
 
I dunno. Crossfire 5870s are faster than a single 5970; it's just that they would amount to $200 more than the 5970. Likewise, a faster 5870 will be more expensive than the 5970. And they should still be able to design a Hemlock out of it if they so choose; they would just clock it at 5870 speeds instead of 5890 speeds.

Faster is good, yes. However, larger is not good considering how long the HD 5970 already is, and if they redesign an expensive but limited run of boards to fit a larger chip, it may not be worthwhile in the longer term.
 
Faster is good, yes. However, larger is not good considering how long the HD 5970 already is, and if they redesign an expensive but limited run of boards to fit a larger chip, it may not be worthwhile in the longer term.

Maybe, but then again we already know Cypress chips will run at faster speeds because of the 5870. So like I said, just clock them higher; it may require a better cooler or a fixed cooler. But at least then we are in the same situation we are in now: the Hemlock won't be as fast as two of the high-end single cards, but it will be cheaper than that.

The 5870 has a 125MHz core clock advantage over Hemlock and a 200MHz advantage in RAM speed.
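
In relative terms (stock launch clocks, consistent with the 125/200MHz gaps above):

Code:
# Stock launch clocks in MHz; the gaps match the figures quoted above.
hd5870 = {"core": 850, "mem": 1200}
hd5970 = {"core": 725, "mem": 1000}

for k in ("core", "mem"):
    deficit = hd5870[k] - hd5970[k]
    print(f"{k}: 5970 trails the 5870 by {deficit} MHz "
          f"({100 * deficit / hd5870[k]:.0f}%)")
# core: 125 MHz (~15%), mem: 200 MHz (~17%)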
 
Maybe, but then again we already know Cypress chips will run at faster speeds because of the 5870. So like I said, just clock them higher; it may require a better cooler or a fixed cooler. But at least then we are in the same situation we are in now: the Hemlock won't be as fast as two of the high-end single cards, but it will be cheaper than that.

The 5870 has a 125MHz core clock advantage over Hemlock and a 200MHz advantage in RAM speed.


From Anand's 5870 review

"Ultimately with the throttling of OCCT it’s difficult to make accurate predictions about all possible cases. But from our tests with it, it looks like it’s fair to say that the 5870 has the capability to be a slightly bigger power hog than any previous single-GPU card"

I don't suspect the board's ability to supply the power; I have suspicions about the load die temperature and noise. It's almost as loud as a GTX 295 under load (although the acoustics are quite good as far as tone goes), so the limiting factor here is in fact cooling. They would probably have to use a vapour chamber design for any 'ultra', similar to the 5970, because the cards can draw a LOT of juice.

An HD 5890 'ridiculous edition' would probably have to be quite expensive, and it would likely be sold in very limited numbers. However, because of this they can really go nuts as far as installing fast GDDR5 and clocking are concerned.

http://www.overclockersclub.com/reviews/sapphire_hd5870_5750_vaporx/7.htm

The core clock speed on the 1600 processors went from an 870Mhz starting point all the way up to 986Mhz, a 116Mhz improvement, while the memory was disappointing with an increase of only 40Mhz over the 1250MHz clock speed on the Vapor-X 5870! The memory was a shocker, but it really fell into the same bucket as the reference cards memory clocks when overclocked. By using the combination of the two overclocking utilities I was able to pull the 986/1290 clocks from this card. This increase gave the HD 5870 just that much more muscle. When it came to the cooling capabilities, the Vapor-X shines - at the overclocked speeds, the temperatures were 58 degrees Celsius under load and 41 Celsius at idle with the fan at 100%. At the stock clocks, the idle temperature was much better at 34 Celsius, while the load temperature peaked at 60 Celsius, two degrees hotter. But the fan speed never hit 50%! There is no doubt the lower resolutions are a little CPU bound at the clock speeds I test at, so I ran the Extreme preset in 3DMark Vantage, while giving the CPU a small bump up to 3.6GHz, and the scoring jumped by 250 points. All in all, the Vapor-X cards do a fine job of overclocking above what the reference cards can achieve without voltmods or extreme cooling.

If Sapphire makes all the reference boards and they also make the overclocked Vapor-X cards, then they can make a reference 'ridiculous edition' card with cherry-picked chips, a ridiculous price, and faster RAM as well.
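
For scale, the headroom in that Vapor-X review works out to a very lopsided core/memory split (clocks taken from the numbers quoted above):

Code:
# Clocks in MHz from the quoted Vapor-X overclocking results.
core_stock, core_oc = 870, 986
mem_stock,  mem_oc  = 1250, 1290

print(f"Core: +{core_oc - core_stock} MHz ({100 * (core_oc / core_stock - 1):.1f}%)")
print(f"Mem:  +{mem_oc - mem_stock} MHz ({100 * (mem_oc / mem_stock - 1):.1f}%)")
# ~13% on the core but only ~3% on the memory, which is why a binned 'ultra'
# part would need faster GDDR5 as much as higher core clocks.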
 
There isn't much of a difference in geometry performance between those two cards, probably because of the triangle rates.
 