AMD: R8xx Speculation

How soon will Nvidia respond with GT300 to the upcoming ATI RV870 lineup of GPUs?

  • Within 1 or 2 weeks

    Votes: 1 0.6%
  • Within a month

    Votes: 5 3.2%
  • Within couple months

    Votes: 28 18.1%
  • Very late this year

    Votes: 52 33.5%
  • Not until next year

    Votes: 69 44.5%

  • Total voters
    155
  • Poll closed.
Never going to happen.
You're judging from the perspective that being the only one to support PhysX GPU acceleration is important for NVIDIA.
What's important for them is to have as high as possible PhysX market penetration while still having an edge in PhysX GPU acceleration.
You may port PhysX to CS and/or OCL and still have a big performance lead while at the same time providing something that other physics middleware doesn't -- Compute Device acceleration.
Thus you beat your CPU-only competitors (that's today's Havok) and maintain some degree of leadership for your h/w by optimising your middleware for it first, everything else later.
In that case you earn even more than if you keep PhysX NV GPU-accelerated only -- it becomes possible to benchmark PhysX acceleration on competing GPUs. So what you need to provide is better performance in PhysX titles instead of dubious videos of PhysX GPU effects. You don't even need to code these effects anymore for some 3rd-party ISVs.
It may never happen, but at least there are reasons why it could. I think that Havok will eventually go the same route (a basic OpenCL version for everybody and a highly optimized version for Intel CPUs + LRB). And since NV doesn't have CPUs...

You do realise there's a version of DX11's compute shader that targets DX10 hardware, right? Targeting that will give you access to a huge proportion of the market, much more than CUDA could ever hope for.
Actually, if I'm not mistaken, only 7x0-class AMD h/w is compatible with CS 4.0, so as of now it's DXCS = NV (80+) + AMD (700+) vs CUDA = NV (80+). And CUDA has more features even on G80 than DXCS4 on RV770. So it's not that simple.

Sure, it may be more limited than CUDA, but then the version of compute shader that targets DX11 hardware almost certainly offers things that CUDA on current Nvidia hardware can't either, so the point seems moot.
But DX11 CS5-class h/w certainly has a much smaller market footprint than CS4/CUDA h/w, so why does that matter? Once NV's DX11 h/w shows up, CUDA will have everything CS5 has while still being more powerful on G8x+ DX10 h/w than CS4. Heck, even OCL 1.0 is more powerful than DXCS. So maybe instead of celebrating the DX11 release we should all celebrate the OCL 1.0 release (which happened sometime in spring, I think? how's that OpenCL adoption in s/w coming?).
 
Actually, if I'm not mistaken, only 7x0-class AMD h/w is compatible with CS 4.0, so as of now it's DXCS = NV (80+) + AMD (700+) vs CUDA = NV (80+). And CUDA has more features even on G80 than DXCS4 on RV770. So it's not that simple.

You are mistaken. There's a specific version of compute shader that targets DX10, DX10.1 and DX11 hardware. Sure, you won't have as many features as CUDA, but you absolutely should be able to create some GPU-based physics. The likes of Futuremark demonstrated GPU physics in DX10 without any sort of compute shader functionality at all.
 
In that case you earn even more than if you keep PhysX NV GPU-accelerated only -- it becomes possible to benchmark PhysX acceleration on competing GPUs.
Which I very much doubt is a battle they want to engage in as long as they are matching up G200s to R800s.
 
2cents

Once NV's DX11 h/w shows up, CUDA will have everything CS5 has while still being more powerful on G8x+ DX10 h/w than CS4. Heck, even OCL 1.0 is more powerful than DXCS. So maybe instead of celebrating the DX11 release we should all celebrate the OCL 1.0 release (which happened sometime in spring, I think? how's that OpenCL adoption in s/w coming?).

The bottom line is, OCL is included in DX11. It is a standard feature that can be used by anyone, be it Larrabee, DX11 GPUs, or even Nvidia GPUs. Even smaller companies that will make DX11 GPUs (S3) could potentially use it. So game developers will want to incorporate it into their engines for the simple fact that it is more UNIVERSAL in nature. Adoption should be faster in that sense. AMD and Intel are even teaming up to implement it efficiently. It may not have all the features that PhysX has at the moment, but it can always be updated and revised to run efficiently and even add the much-touted Compute Device acceleration, or a variation of it.

And about early adoption size: of course there's not going to be a lot, since it's new. Everything started from a small number of supporters, even PhysX.
 
You are mistaken. There's a specific version of compute shader that targets DX10, DX10.1 and DX11 hardware.
Yes. CS 4.0.
It is required to have at least 16KB of shared memory to be CS4-compatible, however, and the R600 architecture doesn't have any. So CS 4.0 is compatible with DX10 hardware excluding the Radeon 2000 and 3000 series. That means all 4000-series Radeons and all 80-200 series GeForces. And while the 4000 series is truly wonderful, I doubt it has more market penetration than all GF8s/9s/200s combined.
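The compatibility argument above boils down to a single threshold check. A hedged Python sketch, where the per-architecture shared-memory figures are taken from the post's claims rather than verified specifications:

```python
# Thread-group shared memory per architecture, in KB, as claimed
# in the post above (not verified specifications).
LOCAL_MEM_KB = {
    "R600 (Radeon HD 2000/3000)": 0,   # no local data share, per the post
    "RV770 (Radeon HD 4000)": 16,
    "G80+ (GeForce 8/9/200)": 16,
}

def cs40_capable(arch: str) -> bool:
    # CS 4.0 requires at least 16KB of thread-group shared memory
    return LOCAL_MEM_KB[arch] >= 16

for arch, kb in LOCAL_MEM_KB.items():
    status = "CS 4.0-capable" if cs40_capable(arch) else "not CS 4.0-capable"
    print(f"{arch}: {kb}KB -> {status}")
```

Under these figures, only the pre-RV770 Radeons fall out of the CS 4.0 pool, which is exactly the split the post describes.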

Which I very much doubt is a battle they want to engage in as long as they are matching up G200s to R800s.
I don't think of it as something they want, I think of it as something they will inevitably need to do. Otherwise they'll lose any PhysX footprint at some point and this middleware will become irrelevant. That's hardly what they want either, don't you think?

The bottom line is, OCL is included in DX11. It is a standard feature that can be used by anyone, be it Larrabee, DX11 GPUs, or even Nvidia GPUs. Even smaller companies that will make DX11 GPUs (S3) could potentially use it. So game developers will want to incorporate it into their engines for the simple fact that it is more UNIVERSAL in nature.
As I've already said, game developers don't care whether something is a standard and universal or not; all they're interested in is how much money they'll get after implementing it.
Most game developers are working on consoles, which are closed to the highest possible degree. I haven't heard anyone ever complain about that.
Plus you don't understand what I'm saying. I'm saying that _right now_ PhysX has a bigger market than DX11. That's all. It will change in the future, sure, but not this year and probably not in 2010.
 
There are several things that will push DX11:

1) All the hype surrounding Windows 7. I truly think that many, many people will upgrade to it
2) Several DX11 games coming out before the end of the year
3) DX11 hardware

Also, I think that many people looking for new Windows 7-powered machines will choose DX11 hardware to go along with it.
 
Yes. CS 4.0.
It is required to have at least 16KB of shared memory to be CS4-compatible, however, and the R600 architecture doesn't have any. So CS 4.0 is compatible with DX10 hardware excluding the Radeon 2000 and 3000 series. That means all 4000-series Radeons and all 80-200 series GeForces. And while the 4000 series is truly wonderful, I doubt it has more market penetration than all GF8s/9s/200s combined.
There are two grades of DirectCompute for DX10 -- DirectCompute 10 and DirectCompute 10.1. NVIDIA's DX10 products are DirectCompute 10 (compiling to Compute Shader 4.0), while the HD 4000s and the DX10.1 GT21x products are/should be DirectCompute 10.1 (compiling to Compute Shader 4.1).
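The two grades described above amount to a small lookup from product family to compute-shader profile. A Python sketch, with the product groupings taken from the post (the family names here are illustrative labels, not official tiers):

```python
# DirectCompute grade per product family, as described in the post.
DIRECTCOMPUTE_GRADE = {
    "NVIDIA G80-GT200 (DX10)": "DirectCompute 10",
    "AMD HD 4000": "DirectCompute 10.1",
    "NVIDIA GT21x (DX10.1)": "DirectCompute 10.1",
}

# Compute-shader profile each grade compiles to.
CS_PROFILE = {
    "DirectCompute 10": "cs_4_0",
    "DirectCompute 10.1": "cs_4_1",
}

def cs_profile(product: str) -> str:
    # Map a product family to the compute-shader profile it targets
    return CS_PROFILE[DIRECTCOMPUTE_GRADE[product]]
```

So under this reading, the HD 4000s get the slightly richer cs_4_1 profile, while NVIDIA's plain-DX10 parts stay on cs_4_0.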
 
Easy on Degustator, he hasn't heard anything back from his marketing contact and ran out of spins.

So, how many days before the first leaked benchmark numbers?

It's also very possible to create multiple surfaces; for Supreme Commander, which requires 2 surfaces, you could create "odd" screen numbers.

Everything is possible under Linux. Windows is symmetrical.
"Windows is symmetrical"?
Care to elaborate a bit more on that?
At least the promo-shots and/or slides have had non-symmetrical combos used on Windows platform IIRC?
Or do you mean just gaming-wise?
And what about games which have native support for multiple monitors, with for example 1 monitor used for statistics and such, couldn't such use for example 3x1 + 1x2 configuration on Windows too?
 
:cool: Dual core?



http://www.chiphell.com/2009/0914/103.html

 
Looks like there is no CrossFire sideport, but there is some CrossFire thingy. Can anybody make out what it is?

If it is just a CrossFire connector, then it is likely to be a pretty dumb kind of rendering scheme for Hemlock. :(
 
"Windows is symmetrical"?
Care to elaborate a bit more on that?
At least the promo-shots and/or slides have had non-symmetrical combos used on Windows platform IIRC?
Or do you mean just gaming-wise?
And what about games which have native support for multiple monitors, with for example 1 monitor used for statistics and such, couldn't such use for example 3x1 + 1x2 configuration on Windows too?

For Windows, a single surface needs to be symmetrical, i.e. all screens must be positioned horizontally or vertically with the same resolution; that creates one rendering surface. Placing a vertical screen next to your 3 horizontal screens (3x hor + 1 vert) creates two different surfaces, i.e. two different "monitors" for Windows.
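The rule described above (one surface only when identical screens line up in a single row or column) can be sketched as a toy check. This is a Python illustration of the poster's rule only, not actual Windows API behaviour; the function name and tuple layout are my own:

```python
def one_surface(monitors):
    """monitors: list of (x, y, width, height) tuples.
    Returns True if they could form a single symmetric surface:
    identical resolutions, arranged in one horizontal row or one
    vertical column with no gaps."""
    w, h = monitors[0][2], monitors[0][3]
    if any(m[2] != w or m[3] != h for m in monitors):
        return False  # mixed resolutions -> separate surfaces
    xs = sorted(m[0] for m in monitors)
    ys = sorted(m[1] for m in monitors)
    # one row: same y for all, x positions step by exactly one width
    row = all(m[1] == monitors[0][1] for m in monitors) and \
          all(b - a == w for a, b in zip(xs, xs[1:]))
    # one column: same x for all, y positions step by exactly one height
    col = all(m[0] == monitors[0][0] for m in monitors) and \
          all(b - a == h for a, b in zip(ys, ys[1:]))
    return row or col

# 3x1 row of identical monitors -> one surface
print(one_surface([(0, 0, 1920, 1080), (1920, 0, 1920, 1080),
                   (3840, 0, 1920, 1080)]))  # True
# a rotated portrait screen next to a landscape one -> two surfaces
print(one_surface([(0, 0, 1920, 1080), (1920, 0, 1080, 1920)]))  # False
```

By this rule, the 3x hor + 1 vert layout from the post fails the check and splits into two surfaces, while a plain 3x1 row passes.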
 
To me DX11 is just a bonus; I'm sure the increase in performance will be the main driving force for a lot of people who have midrange cards at the moment (a 4850 in my case) and still have a few games they have to turn down a bit. DX11 is just nice to have, especially if you are going from DX9 land to Windows 7.

I could wait for nvidia, but the timescale is just too long. A lot of people said wait for the ATi part when G80 came along, but it turned out that 1 in the hand was definitely worth 2 in the bush there, and perhaps the same again now.

So I'm almost certain to go top-end single GPU, which should give me twice the performance of my 4850, Windows 7, and an increase in RAM from 2GB to 4GB in the next few months. Strangely, I have no desire to upgrade the CPU/motherboard to support the new Intels or DDR3!

Funnily enough, even though I know it is not, I just keep thinking of the new ATI card as a smaller process allowing 2x the number of processors, which, although wrong, fits in with what I want it for as mentioned above. I guess this is from the perspective of a midrange buyer who will be moving up to the top of the range. $399 is not chicken feed, but for big jumps such as this it does not seem very costly. I wonder if I had a 1GHz 4890 whether I would be tempted, or wait a few months until they come down in price? Anyone with a 4890 who is contemplating moving over as soon as possible?
 
Yes. CS 4.0.
As I've already said, game developers don't care whether something is a standard and universal or not; all they're interested in is how much money they'll get after implementing it.
Most game developers are working on consoles, which are closed to the highest possible degree. I haven't heard anyone ever complain about that.
Plus you don't understand what I'm saying. I'm saying that _right now_ PhysX has a bigger market than DX11. That's all. It will change in the future, sure, but not this year and probably not in 2010.


But... the future is now! (I've always wanted to use that line :) )


I have to ask you: how does PhysX have more marketability when there are a lot of limiting factors consumers have to face in order to use it?
 
It is definitely not dual core, and the packaging suggests that it is not an MCM either.

It's not "completely" dual core, but it has some redundant parts, and the shader core seems to be divided into two main "blocks". There are also two rasterizers and two hierarchical Z blocks before the dispatch processor: the rumors about SFR could have very solid ground with this setup, IMO.
(The chip dynamically allocates work between the two shader blocks in SFR mode, which is "transparent" to the software.)
 