AMD: R8xx Speculation

How soon will Nvidia respond with GT300 to the upcoming ATI RV870 lineup of GPUs?

  • Within 1 or 2 weeks

    Votes: 1 0.6%
  • Within a month

    Votes: 5 3.2%
  • Within a couple of months

    Votes: 28 18.1%
  • Very late this year

    Votes: 52 33.5%
  • Not until next year

    Votes: 69 44.5%

  • Total voters
    155
  • Poll closed.
Do any of the reviews have a die shot?
I think this round we are left dry. :cry:
Do you mean a die shot as in just an image of the die + package, or an image which shows the "details" of the core (is it an x-ray or what)?
The exact term for this should be "micrograph", but "die shot" is just the common name for a picture of the IC.
 
Furmark will push any core to 100C and fry it if you're not careful. Unless you enjoy running Furmark 24/7, I don't think temperatures will be an issue.

It shouldn't reach that, ever. It didn't here, not even when the fan wasn't working properly; it throttled down in the high 90s.
 
TechPowerUp has faux 5850 benchmarks along with the 5870, which is interesting.

We simulated the performance of the HD 5850 by taking our HD 5870, reducing the clock speeds and disabling two SIMDs, which results in exactly the same performance as HD 5850.

How do they disable two SIMDs?

The 5850 seems like the better buy to me. It beat the GTX 285 in most games tested, and $259 is a reasonable price. It's something I could actually see myself purchasing were I in the market today.

I tell you what though, the 4890 is still a brutally fast card for the (low) price.
 
How do they disable two SIMDs?

I think on the RV700 series there was a register you could write to that disabled or enabled SIMDs; it might be the same on RV870.
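For illustration, the kind of register poke being described could be modelled as a bitmask of per-SIMD enable bits. This is a purely hypothetical sketch: the register layout and the idea of a software-visible enable mask are assumptions, not documented RV870 behaviour; only the SIMD counts (20 on the 5870, 18 on the 5850) are real.

```python
# Hypothetical model of a per-SIMD enable mask; the bit layout is an
# assumption for illustration, not a documented RV870 register.
NUM_SIMDS = 20  # Cypress (HD 5870) ships with 20 SIMD engines

def disable_simds(reg_value, simd_indices):
    """Clear the enable bit of each listed SIMD in the mask."""
    for i in simd_indices:
        if not 0 <= i < NUM_SIMDS:
            raise ValueError(f"no such SIMD: {i}")
        reg_value &= ~(1 << i)
    return reg_value

def enabled_count(reg_value):
    """Count how many SIMDs the mask leaves enabled."""
    return bin(reg_value & ((1 << NUM_SIMDS) - 1)).count("1")

all_on = (1 << NUM_SIMDS) - 1
faux_5850 = disable_simds(all_on, [18, 19])  # 20 SIMDs -> 18, like a 5850
```

Together with a clock reduction, knocking two SIMDs out of the mask is exactly the 20-to-18 cut TechPowerUp describes.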
 
Someone should bench Crysis/Warhead with custom (Nebula's :D) configs, 'cause the default spec, IMO, is not very good quality/performance-wise.
 
There's always a bottleneck somewhere ;)
Others have insisted that since Larrabee is a CPU and uses x86, it won't have bottlenecks... ;)

I wouldn't be surprised if interconnect : RAM ratio is better in Larrabee (i.e. less of a performance constraint relative to a single chip) than in traditional GPUs. If any of these ever do this, of course.
It shouldn't be too hard to do, given how bad the bandwidth is between GPUs.

As long as performance scales adequately with multiple chips, who cares? It's a question of whether it sells, not absolute performance. People have been buying AFR X2 junk for years now, putting up with frankly terrible driver support.
It would depend on how much that 10% expands in a multi-chip case, and what adequate scaling is.
An increase in the percentage for a dual-chip case would be a penalty taken out of the doubled peak versus a lower-overhead single-chip solution.
Without actual simulations, it would be little more than hand-waving on my part to guess how much that 10% could expand.
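The hand-waving can at least be bounded with back-of-the-envelope arithmetic. A minimal sketch, with the single- and dual-chip overhead fractions as free parameters (the 30% figure below is an invented example, not a measurement):

```python
def dual_chip_speedup(single_overhead, dual_overhead):
    """Speedup of a dual-chip part over a single chip, given the
    fraction of time lost to overhead in each configuration."""
    single_useful = 1.0 - single_overhead
    dual_useful = 2.0 * (1.0 - dual_overhead)
    return dual_useful / single_useful

# If the 10% overhead stays at 10%, scaling is a clean 2x;
# if it expands to 30% across chips, only ~1.56x of the doubled peak remains.
print(dual_chip_speedup(0.10, 0.10))  # 2.0
print(dual_chip_speedup(0.10, 0.30))
```

So the question of "adequate scaling" reduces to how far the dual-chip overhead fraction drifts above the single-chip baseline.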

I can't work out what you're quantifying here.
I was listing out areas of the current scheme that can potentially jump chip, and that have some unquantified cost that is currently assumed to be acceptable.
These are areas where either the cycle count can jump by an order of magnitude, or the bandwidths can drop by an order of magnitude.

Ultimately the key thing about Larrabee is it has lots of FLOPs per watt and per mm² and is heavily dependent on GPU-like implicit (4-way per core) and explicit (count of fibres) threading to hide systematic latencies.

So whether the latency is due to a texture fetch, a gather or data that is non-local, the key question is can the architecture degrade gracefully? Maybe only once it's reached version X?
At a hardware level, Larrabee is not as latency tolerant as a GPU.
At a software level, a long-enough strand appears able to cover texture latency in a single-chip case.
A texture read or remote fetch would be even longer than that, though what that costs I am not sure.
The fiber would have to be compiled to be longer, which may have some similar penalties to increasing GPU batch size.
I'm not sure what practical limits there are to fiber length.
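A toy round-robin model gives a feel for the fibre-length question: if each fibre runs a strand of C cycles before yielding, a fetch it issues as it yields is covered by the other fibres' strands, so the fibre count needed grows with the latency being hidden. The latency and strand lengths below are illustrative assumptions, not Larrabee figures.

```python
import math

def min_fibers(latency_cycles, strand_cycles):
    """Minimum fibres per hardware thread so that, under round-robin
    strand scheduling, a fetch issued as one strand ends is complete
    by the time the same fibre is scheduled again."""
    return 1 + math.ceil(latency_cycles / strand_cycles)

# Illustrative numbers only: a ~200-cycle texture fetch with 50-cycle
# strands needs 5 fibres; a ~1000-cycle remote fetch needs 21, unless
# the compiler emits much longer strands instead.
print(min_fibers(200, 50))   # 5
print(min_fibers(1000, 50))  # 21
```

Either way, longer-latency remote traffic forces more in-flight state per core, which looks like the batch-size-style penalty of compiling longer fibres.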


I think I completely misinterpreted what you said before. I'm not sure why you say bin spread is going to get worse with flimsy binning of triangles.
I was thinking about the case of deferring something like tessellation to the back-end. Bins would already be farmed out to the other chip when newly generated triangles might cross tile boundaries and need to update other bins.

In the final result, that spread would happen anyway, so in retrospect I was abusing the term.
The bin spread percentage is the measure of the number of triangles that span bins and may not actually change, but the timing and cost of it might.
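As a rough illustration of why the timing and cost can shift even if the percentage doesn't: the fraction of triangles whose bounding box spans a tile boundary depends mainly on triangle size relative to tile size, and tessellation swaps a few large triangles for many small ones generated after the bins were farmed out. The sizes below are made-up examples, and the uniform-placement model is a deliberate simplification.

```python
def bin_spread_fraction(tri_w, tri_h, tile):
    """Fraction of triangles whose axis-aligned bounding box spans
    more than one screen tile, assuming uniform placement and a
    bounding box no larger than a tile."""
    fit_x = max(0.0, tile - tri_w) / tile  # chance the bbox fits in x
    fit_y = max(0.0, tile - tri_h) / tile  # chance the bbox fits in y
    return 1.0 - fit_x * fit_y

# 16-pixel triangles on 64-pixel tiles spread far more often per
# triangle than 2-pixel tessellated micro-triangles do...
print(bin_spread_fraction(16, 16, 64))  # ~0.44
print(bin_spread_fraction(2, 2, 64))    # ~0.06
# ...but tessellation multiplies the triangle count, and those updates
# land after binning decisions were already made.
```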

I'd hope there'd be performance counters and the programmers make the pipeline algorithms adaptive. The fact that there are choices about the balance of front-end and back-end processing indicates some adaptivity. Though that could be as naive as a "per-game" setting in the driver.
I know modern chips have a ton of monitors, though I'd be curious how many a core based on the P55 will sport.
Larrabee should run shader compilation on-chip, so maybe it can make adjustments.

Yes, I agree in general, since computation is cheap and, relatively speaking, cheaper in multi-chip. All I'm saying is that Intel has a lot of potential flexibility (if it's serious about multi-chip) and it's a matter of not leaving gotchas in the architecture. Considering the appalling maturation path of multi-GPU, so far, Intel could hardly do worse. The only risk seems to be that consumers get sick of multi-chip (price-performance, driver-woes).
The next question is when Larrabee is small enough and cool enough for such a setup.
The chip in the die shot certainly doesn't look like a promising candidate, but perhaps at a later node.
The two-large-ICs-on-a-card scheme GPU makers have been using may be a fluke.
Even if Larrabee does sport better scaling, the market that would benefit from this is already very niche.

Of course now that we learn that R800 doesn't have dual-setup engines and is merely rasterising at 32 pixels per clock, it does put the prospects of any kind of move to multiple-setup (and multi-chip setup) way off in the future.
Yeah, that was pretty underwhelming.
Does this help much for tessellation?
The peak triangle throughput numbers don't change.
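The arithmetic behind that: with a single setup engine, peak geometry rate is fixed by clock alone, whatever pixel rate the rasteriser manages. A sketch assuming one triangle per clock at the 5870's 850 MHz core clock:

```python
def peak_tris_per_second(tris_per_clock, clock_mhz):
    """Peak triangle setup rate: triangles per clock times core clock."""
    return tris_per_clock * clock_mhz * 1e6

# Assuming 1 triangle/clock, an 850 MHz part tops out at 850M tris/s,
# no matter how many pixels per clock the rasteriser emits.
print(peak_tris_per_second(1, 850) / 1e6)  # 850.0 Mtris/s
```

So tessellation, which multiplies the triangle count, presses on exactly the number that didn't change.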
 
I find Anand's review lacking in a few places. The author's conclusions are sometimes weird, like the poor scaling of 5870 CF in L4D, where the cards are clearly hitting the same wall at lower resolutions, which points to a CPU bottleneck :rolleyes:.

Time to read another review then ...

PS. Anyone know who's selling the new cards in the UK? (except overclockers)

EDIT: I had a thought. Maybe the reason we aren't seeing RV870 x-ray shots is that there is more in it than AMD is showing today? If Cypress is really 14 SIMD and 181mm² then that would make some sense. Any opinions??
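The CPU-bottleneck diagnosis above amounts to a simple check: if frame rate barely moves as resolution climbs, the wall is upstream of the GPU. A toy version of that heuristic, with invented numbers rather than figures from any review:

```python
def looks_cpu_bound(fps_by_resolution, tolerance=0.05):
    """Heuristic: if frame rate barely changes as resolution rises,
    something other than the GPU (usually the CPU) is the wall."""
    rates = list(fps_by_resolution.values())
    return (max(rates) - min(rates)) / max(rates) <= tolerance

# Illustrative numbers, not taken from the review:
print(looks_cpu_bound({"1680x1050": 141, "1920x1200": 140, "2560x1600": 138}))  # True
print(looks_cpu_bound({"1680x1050": 120, "1920x1200": 98, "2560x1600": 66}))    # False
```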
 
PS. Anyone knows who's selling new cards in UK? (except overclockers)

scan.co.uk had some I think, but against all odds OCUK was cheaper
 
Good luck finding one before next week, as Dell got virtually all of the initial production of 5870. If you haven't ordered one already, chances are you won't get another chance until next week.

Regards,
SB
 
Let's hope Dell's swallowing of the first run means more recently fabbed chips with better thermals will fill the channel with the full rollout.

Hemlock's future as a dual-chip board sort of depends on this, though hopefully not all cool chips go to that SKU.
 