AMD: R8xx Speculation

How soon will Nvidia respond with GT300 to the upcoming ATI RV870 lineup of GPUs?

  • Within 1 or 2 weeks: 1 vote (0.6%)
  • Within a month: 5 votes (3.2%)
  • Within a couple of months: 28 votes (18.1%)
  • Very late this year: 52 votes (33.5%)
  • Not until next year: 69 votes (44.5%)

  • Total voters: 155
  • Poll closed.
Tessellation can be used in a way that you won't even notice it, or you'll need to know where to look to notice it. Radeons have had tessellation for 2.5 years. What makes you think that everybody will start using it now?
Compute shaders are officially available on DX10-class h/w and are essentially nothing more than CUDA, but this time from Microsoft, not NVIDIA.

Isn't it clear? This time tessellation is part of the DirectX standard; for the last 2.5 years it has been a proprietary feature of Radeons.
Same goes for Compute Shaders (and OpenCL) vs CUDA: this is a "free for all" standard, not a proprietary feature of one brand.
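
(As an aside on the quoted claim that compute shaders are "essentially CUDA": the programming models really are close. Below is a minimal CUDA kernel as a rough sketch; the kernel name, buffer size and launch configuration are made up purely for illustration. DirectCompute expresses the same data-parallel dispatch in HLSL with [numthreads(...)].)

Code:
#include <cuda_runtime.h>

// Minimal data-parallel kernel: each thread scales one array element.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1024;
    float *d = nullptr;
    cudaMalloc(&d, n * sizeof(float));           // device-side buffer
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n); // 256 threads per block
    cudaDeviceSynchronize();                     // wait for the kernel
    cudaFree(d);
    return 0;
}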
 
Isn't it clear? This time tessellation is part of the DirectX standard; for the last 2.5 years it has been a proprietary feature of Radeons.
Same goes for Compute Shaders (and OpenCL) vs CUDA: this is a "free for all" standard, not a proprietary feature of one brand.
So why would it suddenly be used before every vendor has DX11 hardware? It's essentially the same as a "proprietary feature of Radeons" while nobody else supports it.
And calling CUDA proprietary is like calling Java proprietary.
 
So I'm looking at a 24" 19x12 monitor right now. What are my best options for extending it? 3x1 in portrait mode - I can't help feeling this is a bit like gaming through a stained glass window :) Has anyone seen any clips of games in this format?

There's no practical way of using smaller monitors on either side? Actually, as the 24" is widescreen, a smaller 'normal' aspect ratio monitor might be the same height. I wonder if two 'normal' 20" or 22" screens would work? Or do normal monitors not display widescreen resolutions? It's been so long :)
 
So why would it suddenly be used before every vendor has DX11 hardware? It's essentially the same as a "proprietary feature of Radeons" while nobody else supports it.
And calling CUDA proprietary is like calling Java proprietary.

Because eventually every vendor will have DX11 hardware; that's pretty much guaranteed. CUDA, on the other hand, is pretty much guaranteed to remain exclusive to Nvidia.
 
So why would it suddenly be used before every vendor has DX11 hardware? It's essentially the same as a "proprietary feature of Radeons" while nobody else supports it.
And calling CUDA proprietary is like calling Java proprietary.

That doesn't make sense at all. If you're writing a game for DX11 anyway, why not use all available features if you feel they would make a difference? You wouldn't make a game a DX11 title just to put something on the sticker. While I'm sure there may be such cases, serious developers might use compute shaders to add more realism to the game. And since all (both) vendors will support it sooner or later (I guess sooner)... Of course I'm not talking about exclusive TWIM... devs, because they'll surely delay the DX11 transition as much as required.

And I don't think CUDA vs. Java is a valid comparison, given Java's vast support and compatibility across different platforms.
 
So I'm looking at a 24" 19x12 monitor right now. What are my best options for extending it? 3x1 in portrait mode - I can't help feeling this is a bit like gaming through a stained glass window :) Has anyone seen any clips of games in this format?

I remember a short section while they were playing a flight sim, I believe! I think running three 1920x1080 monitors in widescreen mode at 5760x1080 would be a little much for some games. Portrait mode for 3240x1920 would be sweet if you are going for "huge screen" mode rather than surround gaming. I can see advantages to both.

There's no practical way of using smaller monitors on either side? Actually, as the 24" is widescreen, a smaller 'normal' aspect ratio monitor might be the same height. I wonder if two 'normal' 20" or 22" screens would work? Or do normal monitors not display widescreen resolutions? It's been so long :)

I don't have any insider info, but from the way it has been explained by Dave and others, all screens must run at the same resolution for Eyefinity to work. How an odd resolution is displayed will depend on how your monitor interprets it. I can set my screen to 1:1 mapping, so smaller images display fine and are just letterboxed instead.

Examples:
A 24" screen's resolution is probably 1920x1200, and the smaller screens would be something like 1280x1024 or 1600x1200. You have two options: either force your 24" into a square resolution like 1280x1024, which will fit within the confines of that screen, or force your square screens into a widescreen resolution such as 1440x900 or 1280x800.

Either way it is less than ideal, and you would probably not be happy with it.
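
(To put rough numbers on the group sizes discussed above: the combined surface is just the per-panel resolution multiplied across the grid. A back-of-the-envelope sketch, nothing Eyefinity-specific; the helper name is made up:)

Code:
#include <cstdio>

// Combined surface for a cols x rows group of identical panels.
// Portrait mode swaps each panel's width and height.
static void surface(int cols, int rows, int w, int h, bool portrait)
{
    if (portrait) { int t = w; w = h; h = t; }
    printf("%dx%d group of %dx%d panels -> %dx%d\n",
           cols, rows, w, h, cols * w, rows * h);
}

int main()
{
    surface(3, 1, 1920, 1080, false); // 3x1 landscape -> 5760x1080
    surface(3, 1, 1920, 1080, true);  // 3x1 portrait  -> 3240x1920
    return 0;
}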

(First post here; I have been lurking in this awesome thread since the top of the year. Great read, all of you!)
 
If it's twice the performance for less than twice the price, how can it be less?

Anyhow, since when are price/performance differences set in stone? Don't forget they have to fit the new products within the current lineup, as they wouldn't want to cannibalise their 4xxx series just yet...

Why are you asking me this question?

I didn't say that the 5770 will have a "worse performance/price ratio" than the 5870.

I just said that the 5770's performance/price ratio will not be 25%-30% better than the 5870's.

(+25%/+30% is the performance/price ratio advantage that HDX7XX/HDX6XX series parts have had over HDX8XX series parts for the last 1.5 years.)

Probably it will have the same performance/price ratio as the 5870.


Is a 0%/+5% performance/price ratio advantage less than a 25%/30% performance/price ratio advantage?

Yes.


The question you are asking has nothing to do with what I said.

I said that traditionally (over the last 1.5 years) the HDX7XX/HDX6XX series parts have had a better performance/price ratio than HDX8XX series parts.

And you said that if they have the same performance/price ratio, it is just fine (same as the 5870...).

Let's suppose that it is just fine.
(For this hypothesis to be correct, we must also take other factors into consideration, for example what performance/price ratio the competition (NV) will have at this price range. In no way am I forecasting that NV will have a better performance/price ratio; I am just saying that there are other factors to consider. Also, I am not the one to decide whether customers will like that; customers will decide that...)

This ("just fine"...) has nothing to do with what I said...

Also, I didn't say that "price/performance differences are set in stone".

On the contrary, did you miss my ATI X1600XT ($169) and NV 8600GTS ($199) comment?

I just said that I prefer the performance/price ratio difference we enjoyed over the last 1.5 years, because I happen to buy $150 parts or less.

So anyone who buys $150 parts or below will miss some performance/price ratio advantage,
and anyone who buys $250 parts and above will enjoy more performance/price ratio advantage.
(All this, like I said in my original post, relative to what we have had over the last 1.5 years...)

It is as simple as that...
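
(Since most of this back-and-forth is just ratio arithmetic, here's a trivial sketch of the comparison being made. The prices and performance scores are entirely made up, purely to illustrate the math:)

Code:
#include <cstdio>

// Relative performance/price advantage of part A over part B, in percent.
// "Performance" is any consistent benchmark score.
static double advantage(double perfA, double priceA,
                        double perfB, double priceB)
{
    return ((perfA / priceA) / (perfB / priceB) - 1.0) * 100.0;
}

int main()
{
    // Hypothetical: a $150 part with 70% of a $300 part's performance
    // has a +40% performance/price advantage over it.
    printf("advantage = %+.0f%%\n",
           advantage(0.70, 150.0, 1.00, 300.0));
    return 0;
}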
 
Why are you asking me this question?

I didn't say that the 5770 will have a "worse performance/price ratio" than the 5870.

I just said that the 5770's performance/price ratio will not be 25%-30% better than the 5870's.

...

The question you are asking has nothing to do with what I said.

That's exactly why I asked the question. Since when do x6xx parts need to have a better price/perf ratio than x8xx parts?

Anyway, I'm sure that AMD will put a price premium on performance and high-end cards. If you don't think the new pricing is fair, you can stay with whatever card you have at the moment.
 
That's exactly why I asked the question. Since when do x6xx parts need to have a better price/perf ratio than x8xx parts?

I didn't say "need".

I just said that the fact is that over the last 1.5 years the HDX670/HDX770 parts have had a better performance/price ratio than the HDX8X0 parts.

So something is changing...

If you like the upcoming situation better, then good for you; like I said, I prefer the situation we had over the last 1.5 years. (There may be more than one "point of view" in the world, correct?)

Anyway, I'm sure that AMD will put a price premium on performance and high-end cards. If you don't think the new pricing is fair, you can stay with whatever card you have at the moment.

You are arguing about things I never implied...

I never said what is fair or not...

Also, I don't remember mentioning my upgrade plan...
 
AFAIK, Sun ported the JVM to x86, ARM etc. as well. NV wrote CUDA for PTX and.......:?:

Easy on Degustator, he hasn't heard anything back from his marketing contact and ran out of spins.

So, how many days before the first leaked benchmark numbers?

About screen set-ups:

[Image: 3x1 portrait]

[Image: 2x2 landscape plus 2]

[Image: 3x1 landscape plus 3]


It's also very possible to create multiple surfaces; for Supreme Commander, which requires 2 surfaces, you could create "odd" screen numbers:

[Image: 3x1 landscape plus 1]


Everything is possible under Linux. Windows is symmetrical.
 
AFAIK, Sun ported the JVM to x86, ARM etc. as well. NV wrote CUDA for PTX and.......:?:
Well, if AMD's GPU architecture were open, I'm sure that NV would write CUDA for it, no? In fact, we'll see if NV ports CUDA to LRB once it becomes available. Although now, with OpenCL, it probably won't happen anyway.
 
Well, if AMD's GPU architecture were open, I'm sure that NV would write CUDA for it, no? In fact, we'll see if NV ports CUDA to LRB once it becomes available. Although now, with OpenCL, it probably won't happen anyway.

Seriously, you are trying so hard... You should stop these ridiculous spins; it makes you look like a basic (proprietary) fanboy.
 
Just going to throw in my $0.02 on the pricing debate... I am plenty happy with the current pricing scheme.

The 38xx replaced the R600 at less cost, with less heat and roughly the same performance. The 2xxx series is not the greatest comparison due to the R600 fiasco, but it was a much needed move by ATi.

4670 replaced the 36xx, but came in at almost the performance level of a 3850 and slightly undercut the price while also cutting power consumption and board size. 4670 also came with 512MB, whereas most 3850's at that point (and price point) were 256MB. Otherwise the 46xx added nothing as DX 10.1 was already in place.

The 57xx is looking to come in at around 4850 to 4870 performance levels while cutting power and ADDING DX11, Eyefinity, and who knows what else. The price will be about the SAME as current 48xx parts while adding features and cutting power. Note that the part is a 57xx. This is not the usual "replace an x8xx with an x6xx" part; there is still going to be a smaller, cheaper part for the 56xx market.

So yes, the pricing is starting to creep up, but this is also probably due to no pressure from NVidia. Nvidia never even came out with a desktop DX10.1 part (which might be understandable, since it was pretty much ATi that spearheaded 10.1 in the first place). I will say this: there is no reason for ATi to release anything under the 58xx at this point in time, because they would just be competing with themselves.

Until DX11 becomes the norm, or until there is pressure from the green team, ATi will just continue cutting prices on the 4xxx series to clear their inventory and THEN switch to the 57xx and lower series. There will be no huge pull for DX11 any time soon, since for now DX11 will just be eye-candy on a few high-end games (no flames please, will post more on this later ;))

The other thing is, why are we complaining about a "midstream" part that performs well above a typical midstream system? How many games need a card faster than current 4870 hardware while pushing cheap 1680x1050 monitors alongside <$200 CPUs? Why price cards so cheap that they outrun everything else in the system? If you want to push 3x1080p monitors, you are going to spring for a high-end card anyway, as well as a high-end CPU and high-end everything else.

Besides, why are we comparing a BRAND NEW card to a bargain bin 4850 from 1.5 years ago?

-Plack
 
Easy on Degustator, he hasn't heard anything back from his marketing contact and ran out of spins.

Well, after gems like this (emphasis mine):

After all the renaming of late and the general lack of serious progress in their GPUs since G80, I'm finding myself ready to believe that they've "resurrected" GT212 because they essentially have nothing against Cypress till 2Q 2010, and G300 is just so much faster (and more expensive) that it makes no sense for them to even try to put it against Cypress.
That's what I'm ready to believe, yes.

and this

Well, if AMD's GPU architecture were open, I'm sure that NV would write CUDA for it, no? In fact, we'll see if NV ports CUDA to LRB once it becomes available. Although now, with OpenCL, it probably won't happen anyway.

I believe you, neliz. :p
 
Well, if AMD's GPU architecture were open, I'm sure that NV would write CUDA for it, no? In fact, we'll see if NV ports CUDA to LRB once it becomes available. Although now, with OpenCL, it probably won't happen anyway.

If I were Intel, I'd be very cautious about allowing NV to do that... Besides, wouldn't you think they'd rather go with Havok?
 
So why would it suddenly be used before every vendor has DX11 hardware? It's essentially the same as a "proprietary feature of Radeons" while nobody else supports it.
I think there is a bit of a difference between a vendor supporting the new DX standard and one choosing to support a proprietary feature like PhysX or CUDA. :LOL:

Just because nVidia is going to be late to the game with their part, does that mean the rest of the world should just stop and wait for them?

And calling CUDA proprietary is like calling Java proprietary.
I don't get that at all, DegustatoR; what do you mean? :|
 