AMD: R7xx Speculation

It's been a long time since a Radeon was the graphics card of choice at the all-important $199 price point, but the HD 4850 looks like it might have the title locked up. The current GeForce 9800 GTX is simply no match for AMD's latest mid-range offering, and Nvidia's surprise ...
http://www.techreport.com/articles.x/14967/11

This makes me kinda mad that Nvidia was surprised by RV770; it makes me feel that Nvidia thought the R300 days were over and that ATI just sucks now.
 
The only surprises to NVIDIA were in getting working silicon, AFAICS. G200 only shipping now, and at 65nm, with their old generation only hitting 55nm now: just as planned? I don't think so. If they had hit all their targets, things would have looked very different.
 
This makes me kinda mad that Nvidia was surprised by RV770; it makes me feel that Nvidia thought the R300 days were over and that ATI just sucks now.

I think you are misunderstanding their statement. They meant "Nvidia's surprise entry, the 9800GTX+", not that Nvidia was surprised.
 
I think you are misunderstanding their statement. They meant "Nvidia's surprise entry, the 9800GTX+", not that Nvidia was surprised.
I spoke to an Nvidia representative about this matter and was told that a faster G92 product had been on the roadmap for months. I think they just planned to release it a few weeks later, since it won't be available until mid-July, but then decided to spoil the Radeon launch.
 
Do app detection and disable PhysX?
Wouldn't that violate FM's rules as well?

I'm perfectly able to run 3D stuff and CUDA on my little 8600GT: there are examples that do this as part of the SDK. My understanding is that PhysX is based on CUDA; at least, that's what Nvidia has been telling the world, and I have no reason to doubt it.

If the inability to run PhysX and 3D at the same time is the core of the argument, then "from what I've heard" is not good enough.
I haven't tried graphics and physics together yet, but for one thing, even the graphics in Vantage's CPU test have to be rendered somehow, so it's at least possible without dedicating a whole GPU to physics. And second, I run Folding@Home on my GPU and once forgot to turn it off while playing a game. It was slower than normal, but still perfectly playable.

AFAIK context switches are not free between graphics and CUDA mode, but apart from that, CUDA apps are just another bunch of threads to be executed.
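If anyone wants to put a rough number on the "not free" part, this is roughly how I'd start: hammer the device with back-to-back launches of an empty kernel and average the time. It's only a sketch; it measures the launch/scheduling overhead between kernels, not a real graphics-to-CUDA switch, and the kernel and launch config are made up.

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel: the work is negligible, so the measured time is
// dominated by launch and scheduling overhead.
__global__ void noop(float* out)
{
    if (threadIdx.x == 0 && blockIdx.x == 0)
        out[0] = 1.0f;
}

int main()
{
    float* d_out = 0;
    cudaMalloc(&d_out, sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    const int iterations = 1000;
    cudaEventRecord(start, 0);
    for (int i = 0; i < iterations; ++i)
        noop<<<1, 32>>>(d_out);        // back-to-back launches, default stream
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("avg time per empty kernel launch: %.3f us\n",
           1000.0f * ms / iterations);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_out);
    return 0;
}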
 

Looking at that, the 3DMark06 score is disappointing. Only 13037. I was expecting at least 14000+

Tell me, does anyone else think it looks about 9.5" long? My box can't take an 11" card but can fit a 9.5" one, hence I'm always on the lookout for 9.5" cards.

That said, I'm hitting Europe tonight, so I'll be on the lookout for bargains when I get there.

 
Looks like 06 actually liked Texture Assigners more than pure ALU units (yes, it could be bottlenecked, but it looks like the TAs could have stayed at the same ratio).


Vantage, however, without a doubt shows off the ALU power pretty much straightforwardly.


Is it me, or have post-R600 cards been coming in pretty close to the 3DMark standard-score design targets of their day? D: :D
 
I don't think it's the driver. Previously it seemed to be, but once the TA removal got into place, bye bye conspiracy.

It could increase a little (10% enough? :p) but I wouldn't be expecting big shifts.
 
Well, there's also the fact that 3DMark06 scores seem to be very CPU-influenced; it's almost impossible to compare 3DM06 scores across different setups.
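The weighting makes that pretty clear. As far as I remember Futuremark's whitepaper, the overall score is a weighted harmonic mean of the graphics score and the CPU score, something like the sketch below. The constants are from memory (so take them as approximate) and the sub-scores are made up, just to show how much the CPU term moves the total for the same GPU.

Code:
#include <cstdio>

// 3DMark06 overall score, as I remember Futuremark describing it:
// a weighted harmonic mean of the graphics score (GS) and the CPU score,
// where GS is the mean of the SM2.0 and SM3.0/HDR sub-scores.
double overall_3dmark06(double sm2, double sm3, double cpu)
{
    double gs = 0.5 * (sm2 + sm3);
    return 2.5 * 1.0 / ((1.7 / gs + 0.3 / cpu) / 2.0);
}

int main()
{
    // Made-up sub-scores: same GPU, two different CPUs.
    printf("slow CPU: %.0f\n", overall_3dmark06(5500.0, 6500.0, 2500.0)); // ~12400
    printf("fast CPU: %.0f\n", overall_3dmark06(5500.0, 6500.0, 4500.0)); // ~14300
    return 0;
}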
 
So you envision a scenario, say next gen, where running 20 separate programs simultaneously isn't enough, and you need 60?

At the chip level, at least for CUDA, I was under the impression that the number of kernels at a given instant was 1.
The current methodology is context switching between kernels.
The overhead of this looks like a future design's low-hanging fruit.

I was only thinking of juggling 2-4 such contexts.

The minimal granularity is per-SIMD, just by virtue of the fact that SIMDs run the exact same instruction over all their units, so nothing smaller can be done.
I'm not advocating that things be split down to one kernel per SIMD, just that the current setups are very coarse.

As far as I can tell CAL and CUDA both support more than one kernel running simultaneously.
I was under the impression that, the last time we checked, the threading hardware would only work on one CUDA kernel at a time.
Running multiple kernels at the chip level was not simultaneous; rather, there was a context switch and a startup of the separate kernel.
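A quick way to check that on current hardware: issue two independent kernels into two different streams and time the pair. If the device really ran the grids concurrently, the total would be close to the time of one kernel; if it runs one grid at a time, it's roughly the sum. Rough sketch only, with a made-up busy-loop kernel and sizes:

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Independent busy-work kernel so that each grid takes a measurable time.
__global__ void burn(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float x = data[i];
        for (int k = 0; k < 10000; ++k)
            x = x * 1.0000001f + 0.5f;
        data[i] = x;
    }
}

int main()
{
    const int n = 1 << 20;
    float *d_a = 0, *d_b = 0;
    cudaMalloc(&d_a, n * sizeof(float));
    cudaMalloc(&d_b, n * sizeof(float));

    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    dim3 block(256);
    dim3 grid((n + block.x - 1) / block.x);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);
    burn<<<grid, block, 0, s0>>>(d_a, n);   // stream 0
    burn<<<grid, block, 0, s1>>>(d_b, n);   // stream 1
    cudaEventRecord(stop, 0);               // default stream waits on both
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("two kernels in two streams: %.2f ms total\n", ms);
    // ~1x one kernel's time if the grids overlap, ~2x if they serialize.

    cudaStreamDestroy(s0);
    cudaStreamDestroy(s1);
    cudaFree(d_a);
    cudaFree(d_b);
    return 0;
}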

We're looking at the following types of kernel in D3D11 I reckon:
  • Control Point
  • Vertex
  • Geometry
  • Pixel
  • General Computation
Jawed
Kernel types or thread types? The usage prior to this indicated that multiple thread types could be applied to a kernel.
 
I thought the idea was that these cards would consume less power, yet they still need two power plugs in addition to what they get through the PCIe slot?
 
I thought the idea was that these cards would consume less power, yet they still need two power plugs in addition to what they get through the PCIe slot?

I don't think the idea is that these cards consume less power. They just have increased performance without a similar increase in power requirements. Hence all the emphasis on performance/watt.
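To put the distinction in numbers (purely made-up figures for illustration, not measurements of any actual card):

Code:
#include <cstdio>

int main()
{
    // Made-up numbers: an older card vs. a newer one in the same game.
    double old_fps = 40.0, old_watts = 110.0;
    double new_fps = 60.0, new_watts = 130.0;

    printf("old: %.2f fps/W\n", old_fps / old_watts);   // ~0.36
    printf("new: %.2f fps/W\n", new_fps / new_watts);   // ~0.46
    // Absolute power draw went up, yet performance per watt still
    // improved by roughly a quarter.
    return 0;
}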
 
The 4870 is a high-performance binned SKU with the same RV770 as the 4850, and as such I don't see how it could consume less power.
 
You mean the projection pattern? I didn't expect anything different in this iteration, to be honest, and I don't think it's a miss. ;)
 
From "just playing around" with an HD4850 I'd say, the Aniso-ouput hasn't changed compared to RV670. It's still better than Nvidias wrt track markings in racing games, but more prone to shimmering in high-freq content.
 