AMD: R8xx Speculation

Poll: How soon will Nvidia respond with the GT300 to the upcoming ATI RV870 GPU lineup?

  • Within 1 or 2 weeks: 1 vote (0.6%)
  • Within a month: 5 votes (3.2%)
  • Within a couple of months: 28 votes (18.1%)
  • Very late this year: 52 votes (33.5%)
  • Not until next year: 69 votes (44.5%)

Total voters: 155. Poll closed.
It's simply the fact that the thing behaves/performs like a 128-bit device...

I wonder if a configuration with 24 ROPs would be possible, for instance.
 
800 versus 850 MHz. Not that that explains the deficit, mind.
Some numbers are much worse than that (64-bit/128-bit writes). In fact, the difference is even bigger than what's suggested by the memory clock difference (1000 vs. 1200 MHz).
Really strange. Someone should overclock the core or memory and see how things change.
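For anyone who wants to sanity-check that, a quick back-of-envelope in Python. This is a sketch only: the 1000 vs. 1200 MHz clocks are from the post above, while the 256-bit/128-bit widths and GDDR5's four transfers per clock per pin are my assumptions, not measured numbers.

```python
# Back-of-envelope theoretical GDDR5 bandwidth.
# Clocks taken from the post above; widths are assumptions for illustration.

def gddr5_bandwidth_gbs(mem_clock_mhz, bus_width_bits):
    """Theoretical peak bandwidth in GB/s for a GDDR5 interface."""
    data_rate_gbps = mem_clock_mhz * 4 / 1000  # 4 transfers/clock -> Gbit/s per pin
    return data_rate_gbps * bus_width_bits / 8

print(gddr5_bandwidth_gbs(1000, 256))  # 128.0 GB/s: HD 5830 on paper
print(gddr5_bandwidth_gbs(1000, 128))  # 64.0 GB/s: if it really acts 128-bit
print(gddr5_bandwidth_gbs(1200, 128))  # 76.8 GB/s: HD 5770
```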
 
Actually, I think Dave already hinted that the chip doesn't really perform like it has a "true" 256-bit interface:

Dave Baumann said:
Rys said:
Dave, can you really scale back ROP count on Cypress without scaling back bus width?
Yes. And performance would have been worse had we done so because removing the memory IF would have also removed a bunch of cache; curiously I've seen this outperform a 5850 in a few texture bound cases because it has a higher cache bandwidth.
Naturally, you'd expect it to perform better if you scale back only the ROPs but not the bus width; however, Dave is saying it merely performs better because that way there's more cache...
This could also explain why the press samples all seemed to have a BIOS with a higher memory clock initially. That didn't seem to make sense at first (16 ROPs should have ample bandwidth), but if the card effectively acts as a 128-bit device, it suddenly does.

So disabling ROPs that way sounds more like a hack to me. It might have made more sense to just scale the chip back to 24 ROPs / 6 memory channels instead (if possible; the count potentially needs to be a power of two, and so far we only know Nvidia can do 192-bit). That should have given much better performance (and needed a possibly cheaper PCB). On the flip side, maybe 768 MB of RAM was deemed not enough, and twice that too expensive and not very logical (more RAM than the top-end cards).
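To put rough numbers on that 768 MB point, here's a small sketch assuming 32-bit memory channels and the 1 Gbit GDDR5 chips common at the time; the chip counts are illustrative, not a claim about any actual board.

```python
# Framebuffer options for a hypothetical 24-ROP / 6-channel (192-bit) cut,
# assuming 32-bit GDDR5 channels and 1 Gbit chips (assumptions, see above).

def framebuffer_mb(channels, chips_per_channel, chip_gbit):
    """Total memory in MB: channels x chips per channel x chip density."""
    total_gbit = channels * chips_per_channel * chip_gbit
    return total_gbit * 1024 // 8  # Gbit -> MB

print(framebuffer_mb(6, 1, 1))  # 768 MB: one chip per channel
print(framebuffer_mb(6, 2, 1))  # 1536 MB: doubled up, i.e. "twice that"
```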
 
It's simply the fact that the thing behaves/performs like a 128-bit device...

I wonder if a configuration with 24 ROPs would be possible, for instance.

Now if you crank AA and AF to the max, you can't expect a 16-ROP card to land between a 16-ROP card (4890, 5770) and a 32-ROP card (5850) in performance. I think it should be clear to all the reviewers that, with AA and AF maxed out, the card will be much closer to the 5770/4890 than to the 5850.
The only real problem here is that AMD's price doesn't reflect this at all :rolleyes:.
 
Well, the release of the HD 5830 inspired me to stop waiting and instead go order the MSI HD 5770 Hawk. :)

Mission accomplished. lol
 
Now if you crank AA and AF to the max, you can't expect a 16-ROP card to land between a 16-ROP card (4890, 5770) and a 32-ROP card (5850) in performance. I think it should be clear to all the reviewers that, with AA and AF maxed out, the card will be much closer to the 5770/4890 than to the 5850.
Actually, in the quite ROP-intensive 3DMark03, the HD 5830 is behind a 4890 whether at default settings, in Full HD, or in Full HD with 4x AA and max AF enabled. The latter settings grow the 4890's advantage to almost 12%, which is more than even the pure clock difference between the two (even the 4870/1G is a tad faster, despite lower clocks), notwithstanding the 5830's higher shader throughput and higher texture rate and efficiency.

The only real problem here is that AMD's price doesn't reflect this at all :rolleyes:.
I'm confident the market dynamics will take care of that.
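As a quick sanity check on that clock-difference point, using the 850 vs. 800 MHz core clocks mentioned earlier in the thread (a sketch, not a benchmark):

```python
# Pure core-clock advantage of the HD 4890 over the HD 5830,
# assuming the 850 vs. 800 MHz figures quoted earlier in the thread.

hd4890_mhz, hd5830_mhz = 850, 800
clock_advantage = hd4890_mhz / hd5830_mhz - 1
print(f"{clock_advantage:.1%}")  # 6.2%, so a ~12% lead can't be clocks alone
```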
 
Actually, in the quite ROP-intensive 3DMark03, the HD 5830 is behind a 4890 whether at default settings, in Full HD, or in Full HD with 4x AA and max AF enabled. The latter settings grow the 4890's advantage to almost 12%, which is more than even the pure clock difference between the two (even the 4870/1G is a tad faster, despite lower clocks), notwithstanding the 5830's higher shader throughput and higher texture rate and efficiency.

On average in games, the 4890 is usually even with the 5830; in some games the 4890 has the edge and in others the 5830. With better drivers the 5830 can grow while the 4890 will stay the same. Maybe the 16-ROP/256-bit-bus combination needs some specific driver hints :LOL: compared to the full chips.
 
Guess that comes down to yields. If there aren't that many chips salvaged for the 5830, they could hold the higher price vs. the 5770; if there are lots of 5830s, I can't see them flying off the shelves.
 
AMD's REAL answer to the GeForce GTX 480: enhanced ATI Radeon HD 5970 cards with custom designs from AIBs.

Sapphire Readies Custom HD 5970 with 4 GB memory

Currently holding the crown for the fastest graphics card available is the HD 5970. SAPPHIRE will introduce at the show a new 4GB version of the SAPPHIRE HD 5970 with a special cooling solution and higher clock speeds that will blow previous versions out of the water!

http://www.pcgameshardware.com/aid,...es-4-GB-HD-5970-and-Eyefinity-6-Edition/News/

There was a reason why I posted the news about the Asus ROG ARES saying "AMD's REAL answer" (although it was also meant to point to the recent "Trillian the Fermi killer" canard). Asus was the one who led the way for AMD's custom 5970s with the ROG ARES; now others are following.
 
Sapphire Readies Custom HD 5970 with 4 GB memory



http://www.pcgameshardware.com/aid,...es-4-GB-HD-5970-and-Eyefinity-6-Edition/News/

There was a reason why I posted the news about the Asus ROG ARES saying "AMD's REAL answer" (although it was also meant to point to the recent "Trillian the Fermi killer" canard). Asus was the one who led the way for AMD's custom 5970s with the ROG ARES; now others are following.

Insanity!

So are they using 2 Gbit GDDR5? (Is it out?) Or is the 32-million-dollar question "How are they fitting 32 RAM chips onto a single PCB along with two dice?"

Also, it's lovely to see them breaking the PCIe spec. What's the point of a spec if it's not meant to be broken, eh?
 
What was the point in PCI-SIG taking a power recommendation, which belongs in the ATX spec, and trying to turn it into a binding restriction? It should never have been in there, and it never had binding power, because their bylaws say specifically which parts of a device have to comply to allow the manufacturer to benefit from patent cross-licensing (only the parts which interact with the bus).

I for one am glad to see this go, it was none of their fucking business.
 
So are they using 2 Gbit GDDR5? (Is it out?) Or is the 32-million-dollar question "How are they fitting 32 RAM chips onto a single PCB along with two dice?"
They would need 16 chips (2 Gbit each) for 4 GB, methinks.
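The arithmetic behind that, as a tiny sketch (the per-GPU channel layout is my assumption based on Cypress's 256-bit bus, not a confirmed board design):

```python
# Chips needed for 4 GB using 2 Gbit parts on a dual-GPU card.
# Assumes each GPU keeps its 256-bit bus, i.e. eight 32-bit channels.

chip_gbit = 2
total_gbit = 4 * 8                  # 4 GB expressed in Gbit
chips = total_gbit // chip_gbit     # 16 chips overall
chips_per_gpu = chips // 2          # 8 per GPU, one per 32-bit channel
print(chips, chips_per_gpu)         # -> 16 8
```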
 
They would need 16 chips (2 Gbit each) for 4 GB, methinks.

Are said chips available? I thought they were H2 2010?

Hynix introduces the industry's first 40 nm 2 Gbit GDDR5 chips


The well-known memory chip maker Hynix Semiconductor has recently announced new GDDR5 memory chips which, according to the manufacturer, offer the highest performance and density on the market today. Produced on a 40 nm process, each chip stores 2 Gbit (256 MB) of data and sustains a data rate of up to 7.0 Gbit/s per pin, which lets the memory operate at bandwidths of up to 28 GB/s over its 32-bit I/O bus.

While the bandwidth of these chips is comparable to the best parts on the market today, Hynix argues that the new chips are 20% more energy efficient, as their supply voltage is 1.35 V instead of the 1.5 V of the earlier 50 nm generation of GDDR5 memory. Additionally, the higher density lets manufacturers equip a video card with twice as much memory, or with the same amount using half the number of chips. The latter saves space on the printed circuit board as well as component costs, which in turn might lead to more affordable products.

Hynix stated that it will start mass production of its newest 2 Gbit GDDR5 chips in the second half of next year to meet the increasing demand for high-quality graphics memory chips.
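For what it's worth, the quoted per-chip figures check out; this is a simple unit conversion, nothing assumed beyond the numbers in the announcement:

```python
# Per-pin data rate times bus width, converted from Gbit/s to GB/s.

data_rate_gbps = 7.0     # Gbit/s per pin, per the announcement
bus_width_bits = 32      # 32-bit I/O bus per chip
print(data_rate_gbps * bus_width_bits / 8)  # -> 28.0 GB/s per chip
```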
 
Whoops :(

This is a custom card, they'll blow the PCIe power spec and ATX form factor spec.

Welcome to the new age of ultra GPUs, where standards are meant to be broken.
 