AMD RV770 refresh -> RV790

btw.
It seems that NV is going to steal some HD4890 sales in advance - with the $169-179 GTX 260-216. ;)

A 260-216 is on par with the same-priced 4870-1GB; why would it steal sales from a better product?



The B2s were also used on the Quadro cards, which launched in September/October already. Not seeing a uniform spread of all revisions on current boards (plus the 295 issues) seems to suggest low yields.
 
A 260-216 is on par with the same-priced 4870-1GB; why would it steal sales from a better product?
If you are just looking at raw performance, they may be on par. But do not forget that NV offers more than that: CUDA, PhysX, better options to control IQ (AF/AA), and lower default idle consumption.

And also there are some nice OCed SKUs around ~$200, which is supposed to be the launch price of HD4890.
 
If you are just looking at raw performance, they may be on par. But do not forget that NV offers more than that: CUDA, PhysX, better options to control IQ (AF/AA), and lower default idle consumption.

You're wrong on the power consumption. And CUDA and PhysX, well... RV7X0 has DX10.1. Graphics+, schmaphics+ - that sh*t doesn't get me anything in games. I guess my AA is just as adjustable, and the only thing that might be better is the AF, which hardly translates to a better on-screen experience.

And also there are some nice OCed SKUs around ~$200, which is supposed to be the launch price of HD4890.

Do OC'd 260s perform better than a GTX 280, and thus on par with a GTX 285, for half the price?
 
You're wrong on the power consumption
We are talking about GTX 260 vs 4870(1GB)?
http://www.xbitlabs.com/articles/video/display/evga-geforce-gtx260-216-55nm_5.html#sect0
http://ht4u.net/reviews/2009/power_consumption_graphics/index13.php
~20W lower for the 65nm GTX 260.

Of course the savings from underclocking the 4870 in 2D are big, but that is not the default.

CUDA and PhysX, well... RV7X0 has DX10.1. Graphics+, schmaphics+ - that sh*t doesn't get me anything in games.
What's wrong with good GPGPU support? Later, the other vendors might also profit from these early efforts by NV.

I guess my AA is just as adjustable
Ever tried some supersampling AA or MSAA-hybrids with this?

Do OC'd 260s perform better than a GTX 280, and thus on par with a GTX 285, for half the price?
Does a 13% OCed 4870 perform better than a GTX 280, or on par with a GTX 285?

This card could be very close to GTX 280 performance - $219. (benchmark)
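For reference, a quick sketch of the clock math behind that "13%". It assumes the HD 4870's stock 750 MHz core against a rumored 850 MHz core for the HD 4890; both figures and the 60 fps example are illustrative, and real scaling is sublinear since memory bandwidth doesn't change:

```python
# Clock math for the "13% OCed 4870" question.
# Assumed clocks (illustrative): HD 4870 stock core = 750 MHz, OC/HD 4890 core = 850 MHz.
STOCK_MHZ = 750
OC_MHZ = 850

oc_percent = (OC_MHZ / STOCK_MHZ - 1) * 100
print(f"Core overclock: {oc_percent:.1f}%")  # ~13.3%

# Best case: performance scales linearly with core clock.
# In practice it scales sublinearly because memory bandwidth is unchanged,
# so a 13% core bump typically yields well under a 13% frame-rate gain.
def best_case_fps(stock_fps, stock_mhz=STOCK_MHZ, oc_mhz=OC_MHZ):
    return stock_fps * oc_mhz / stock_mhz

print(f"60 fps at stock -> at most {best_case_fps(60):.1f} fps overclocked")
```

So even in the unrealistic best case, 13% on the core buys you about 8 extra frames from a 60 fps baseline - nowhere near the gap to a GTX 285.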
 
We are talking about GTX 260 vs 4870(1GB)?
http://www.xbitlabs.com/articles/video/display/evga-geforce-gtx260-216-55nm_5.html#sect0
http://ht4u.net/reviews/2009/power_consumption_graphics/index13.php
~20W lower for the 65nm GTX 260.
Of course the savings from underclocking the 4870 in 2D are big, but that is not the default.

http://enthusiast.hardocp.com/article.html?art=MTYyNiw5LCxoZW50aHVzaWFzdA==

6W lower on custom boards, at idle. 15W difference between reference boards, but lower temps for the Radeons across the board.

What's wrong with good GPGPU support? Later, the other vendors might also profit from these early efforts by NV.
For 99% of end users it really means nothing.
 
For 99% of end users it really means nothing.

An obvious exaggeration to get the point across; I'd agree from the perspective that CUDA is still in its infancy and has yet to show benefits for mainstream applications.

In my mind, the real reason NV is pushing so hard for market penetration is Intel Larrabee more than anything else. Whether they'll gain anything from it remains to be seen. However, I still consider labelling anything GPGPU and/or physics-related as meaningless to be extremely shortsighted for the foreseeable future.

Personally, I'd rather see something open like OpenCL in CUDA's shoes today, but hopefully the former, along with D3D11, will push things in a completely different direction. It's nonsensical, to say the least, to have developers coding each application with more than one path.
 
An obvious exaggeration to get the point across; I'd agree from the perspective that CUDA is still in its infancy and has yet to show benefits for mainstream applications.

In my mind, the real reason NV is pushing so hard for market penetration is Intel Larrabee more than anything else. Whether they'll gain anything from it remains to be seen. However, I still consider labelling anything GPGPU and/or physics-related as meaningless to be extremely shortsighted for the foreseeable future.

Personally, I'd rather see something open like OpenCL in CUDA's shoes today, but hopefully the former, along with D3D11, will push things in a completely different direction. It's nonsensical, to say the least, to have developers coding each application with more than one path.

I don't have any data on the percentage of GT200 owners who bought it primarily for CUDA programming. I can only see where it is used with our customers, and it seems they spend more resources on the network than on the computational system it is supposed to support.

I do agree that GPGPU/physics are things we will see in the future, but PhysX shot itself in the foot with compatibility issues between the hardware and software solutions, and CUDA can't and never will be a standard. It's a nice selling argument to get people started, but it's dead next year.
 
Is there really a place for a 512MB HD4890?
Honestly, dunno.

I was thinking that RV770 would disappear reasonably soon. Perhaps, once that's happened, something like this instead?:

$100 RV740XT-1GB
$150 HD4890-512MB (RV790Pro)
$200 HD4890-1GB (RV790Pro)
$250 HD4890-1GB OC (RV790XT)

I'm assuming the RV790Pro clocks at 80%-ish of the XT version, for what it's worth.
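A quick sketch of what that 80% assumption would imply, taking a rumored 850 MHz RV790 XT core clock as the baseline (both figures are speculative):

```python
# Implied clocks under the "Pro clocks at ~80% of XT" assumption above.
XT_CORE_MHZ = 850   # rumored RV790 XT core clock (speculative)
PRO_RATIO = 0.80    # assumption from the post above

pro_core_mhz = XT_CORE_MHZ * PRO_RATIO
print(f"Implied RV790Pro core: {pro_core_mhz:.0f} MHz")  # 680 MHz
```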

This article seems to imply that HD4870 prices won't be following GTX260:

http://www.dailytech.com/article.aspx?newsid=14527

Jawed
 
Sorry, that did not translate well; I'm trying not to convey the wrong meaning - 3) the RV740 without a power connector and 4) the failure of the "clients" (i.e. AMD and NV) to get a 40nm performance chip working both have something to do with excessive leakage. There is a second sentence mentioning AMD and the number 580, the rest of which I cannot make out.
Just following up: the 40nm problems may concern the RV740 itself; if so, I still do not understand what he means by the "580:00" figure. Alternatively, it may be a reference to the GT212 and its postponement/cancellation.

Finally, there has recently been a (vague) rumor about a 40nm G92 that NVIDIA taped out. They ran into problems, with the yield coming in below expectations. The 40nm shrink's yield was low enough compared to the 55nm G92 that there was presently no real financial advantage to going ahead with the process. So they plan to stay on the 55nm G92 for a while, until hopefully the 40nm process improves a bit and the 40nm part gains a definite cost advantage.
 
I see what you're saying, but unless AMD is going to keep their other SKUs untouched a month from now, there really is no need for a new chip. Windows 7 is slated for a June launch, as is DX11, and next-gen chips are coming very soon; unless AMD is going to backlog their inventory for another few months, I don't see a new chip in the short term.

Say what now? The only "official word" on the release is "before or after holiday season 2009", and AFAIK there haven't even been any other noteworthy rumors, so Win7 should be expected around early Q4.
 
Say what now? The only "official word" on the release is "before or after holiday season 2009", and AFAIK there haven't even been any other noteworthy rumors, so Win7 should be expected around early Q4.


Nope, RTM is definitely looking like June; also, MS wants it out around the same time Snow Leopard comes out.
 
From Microsoft employees, and there was an MS blog post pointing to Q3; also, OEMs are pushing for an earlier release.

Windows 7 has to reach RTM about 3 months before the holiday season, because otherwise OEMs wouldn't be able to sell Windows 7-based computers in time. So I would guess that RTM happens in August but the retail version isn't available until November.
 
I don't have any data on the percentage of GT200 owners who bought it primarily for CUDA programming. I can only see where it is used with our customers, and it seems they spend more resources on the network than on the computational system it is supposed to support.

CUDA has a much wider range of applicability than just the GT200. I replied in a more generic sense, not about the GT200 vs. RV770 comparison you two were going through.

I do agree that GPGPU/physics are things we will see in the future, but PhysX shot itself in the foot with compatibility issues between the hardware and software solutions, and CUDA can't and never will be a standard. It's a nice selling argument to get people started, but it's dead next year.

Open standards always win over proprietary solutions. In my mind, NV aims more for early software penetration than anything else. As I said, I can't know whether it'll have any benefit for them in the long run or not.
 