R480/430 roadmappery

skazz said:
Given SLI implementations and dual PCI-express slots from NVidia, I think there are going to be problems getting a triple slot cooling accepted :)

Heh. Well, I don't mean three slots for each card in the SLI config, of course. What I mean is: give the video card a thermal zone with the volume equivalent of three slots. I am sure that would be enough for two SLI cards as we think of them today. I see the real problem as one of too little volume to work with, always dodging capacitors and other cards, and a thermally unbalanced environment in which to operate. By zoning it off, being realistic, and giving the video cards what they need to grow instead of asking them to make do, you can get a real solution going instead of fighting it. It may prove one fight too many soon.

Take a look at Apple's G5 dual CPU towers to get an idea of what I mean. I am not saying this is perfect or even enough, but this type of approach makes more sense. (Disclaimer: ironically enough, the G5 seems to be suffering from some thermal issues regarding CPU temperatures. I am not sure why this is, whether it is a CPU on a process being asked to work too hard or whether the thermal solution itself is flawed. Think of it more as a visual guide to what I mean.)
 
There are ways to transfer the heat from the GPU core and memory chips without taking a lot of space. Since most computers will not be using the rather large video cards with their even larger HSF combinations, I am not sure leaving a huge space for one or two components is the best solution. Air cooling alone with the space provided seems to be the real problem. Then again, with more integrated options on motherboards (SATA controllers, audio, USB 2.0, FireWire, etc.), the need for a lot of extra slots becomes less as well.

Transferring heat using a volatile fluid or a fluid system is pretty efficient at saving space where you need it, and putting the heat in a better location (as in outside the computer) is probably a better answer. Still, future computers will by themselves be dissipating 600 W of heat, which will heat a small room; having 10-20 such computers will really be a heater for any office. So the long-term solution is probably going to be more involved than just a bigger slot space, which for most computers won't be used anyway.
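To put rough numbers on that, here is a back-of-the-envelope sketch in Python (the 600 W figure and the machine counts are this post's assumptions, not measurements):

```python
# Back-of-the-envelope heat load for an office full of 600 W machines.
# The per-machine wattage and machine counts are illustrative assumptions.

WATTS_PER_MACHINE = 600
BTU_PER_WATT_HOUR = 3.412  # 1 W dissipated continuously ~ 3.412 BTU/h

for machines in (1, 10, 20):
    watts = machines * WATTS_PER_MACHINE
    btu_per_hour = watts * BTU_PER_WATT_HOUR
    # For scale: a typical window A/C unit is rated around 5,000-10,000 BTU/h.
    print(f"{machines:2d} machine(s): {watts / 1000:.1f} kW "
          f"= {btu_per_hour:,.0f} BTU/h of cooling load")
```

Twenty such machines come out to roughly 41,000 BTU/h, which is well beyond what ordinary room ventilation handles.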

Self-contained cooling units will probably become the norm in the future, with the heat ejected outside the building/home. I've been using an A/C system for two years now; it makes the system much quieter and I don't have a heat problem. I also don't need huge HSF combinations on the CPU, GPU, northbridge, RAM, etc., and everything is kept clean, as in a no-dust environment. Fans run at slow speed while temperatures stay rather cool: GPU less than 50°C fully loaded (6800 GT overvolted and overclocked to Ultra levels), CPU less than 35°C fully loaded even while radically overclocked/overvolted with a somewhat small HSF combination. Noise levels are low thanks to the smaller fans and slower speeds, while the real noise (the A/C unit) is outside. Besides, the case is well insulated, which also cuts down the noise. Compared to what I had in the past, my computer is much quieter.

I am still using air (cold air down to freezing temperatures when needed), which could be replaced with a type of fluid cooling system, making it more compact: the heat-producing components could be cooled directly with very little space required, and the internal case cooled by one heat exchanger, keeping the system sealed and clean. Keeping a system clean also helps keep it cool by preventing a build-up of insulating dust. This type of cooling system is used on large electrical systems and computers, so this is nothing new. What will be new is this approach on home PCs. More space will only temporarily fix the problem; as power draw keeps increasing, the heat has to be moved outside.

As a side note, my A/C-cooled system cost me less than $100 to build but has saved me hundreds of dollars in modifications and big HSFs for the CPU, GPU, northbridge, memory, hard drive cooling, etc. My electric bill has gone up somewhat, but if I were dissipating the heat inside my home it would go up anyway to keep the place cool (at least in the summer).
 
Tim said:
ChrisRay said:
As JVD replied, using Unreal 3.0 as a point that it will push hardware like 3DMark05, and it won't even be anywhere near the vertex load of 3DMark05.

When 3DMark performance is not limited by vertex shaders, it is no problem that the polygon count is higher than in UE3; it can very well still be indicative of performance.

On this point, I'd like to point out some vertex scaling tests in 3DMark05 and how vertex performance affects it. In hindsight I didn't record individual game tests, but the second test seemed most affected by the disabling of vertex units. I did these tests with the 66.81 drivers.

[attached image: vertex.png, vertex unit scaling results]
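For anyone wanting to interpret scaling tests like this, here is a minimal sketch of one way to estimate how vertex-limited a score is from two runs with different vertex unit counts. It assumes score is inversely proportional to frame time, and the scores in it are made-up placeholders, not the actual results:

```python
# Fit a simple Amdahl-style model, time = vertex_work/units + everything_else,
# to two measured scores taken with different numbers of active vertex units.

def vertex_limited_fraction(score_full, score_reduced, units_full, units_reduced):
    """Estimate what fraction of frame time is vertex-bound at full units."""
    t_full, t_reduced = 1.0 / score_full, 1.0 / score_reduced
    # t = v/units + o  ->  solve the 2x2 system for v (total vertex work).
    v = (t_reduced - t_full) / (1.0 / units_reduced - 1.0 / units_full)
    return (v / units_full) / t_full

# Hypothetical numbers: 5000 marks with 6 vertex units, 3800 with 2.
print(f"{vertex_limited_fraction(5000, 3800, 6, 2):.0%} vertex-limited")
```

A big score drop when units are disabled means a large vertex-bound fraction; a small drop means the test is bottlenecked elsewhere.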
 
wireframe said:
I was a bit long-winded and I may have come across as overly defensive about the "dual slot" comment. It is not intended as anything but a comment on this "cheering for single-slot". I thought this was a good time to bring in some reality, because I find it ludicrous that ATI fans (this pun is definitely not intended) seem to find fault with the GF 6800 Ultra because it uses a cooler that blocks the adjacent PCI slot. Now high-end ATI users look likely to face the same and will have to shut up about it. It is a reality.

I agree with you. As new gpus get more powerful, heat removal becomes more critical. I read some reviews that implied that the 6800U was at a disadvantage relative to the X800 XT PE because of the dual slot requirement, but I never paid any attention to that crap. I have an X800 Pro VIVO @ XT PE (540/600), and the stock HSF just wouldn't cut it so I installed a VGA Silencer 4, which is a dual slot cooler similar to the 6800U's. I for one will gladly sacrifice a slot for the astounding performance these new cards give.
 
Chalnoth said:
jvd said:
unreal engine 3

I'm sure the end of 2005/2006 will bring many games that will push hardware like 3DMark05 does
No game by the end of 2005/2006 will push hardware like 3DMark05 does. It's vastly, vastly too vertex-limited. Futuremark has once again gotten things off-base. And they're bound to always do so, since the driving forces behind designing a benchmark are entirely different from those behind games. This is why 3DMark always has been and always will be pretty much useless.

I disagree with you. I believe we will see games that are vertex limited in the next year or two.

I have yet to see where 3DMark has been wrong in the past.

The only way we will not see vertex-limited games is if nvidia holds them back due to lackluster vertex performance.


I'm also talking about a particular game using the Unreal 3.0 engine that will certainly be vertex limited (and no, I can't say the name as it's unannounced)
 
500 points is a lot in marketing terms. If they manage to be 0.5% faster than nVidia, they'll be damn happy about it and you'll see it in every ad from the US to Japan, be sure of that.
 
noko said:
Still, future computers will by themselves be dissipating 600 W of heat, which will heat a small room; having 10-20 such computers will really be a heater for any office.
I think you explain perfectly in the second part of the sentence why the first part will likely not be true.
What does the average person do with a computer that makes 600 W of power draw a necessity in the future? Nothing. There is no driving force in the marketplace that concerns anything but a small percentage of purchased computers (high-end gaming, video encoding). I'd contend that the way performance and power draw have been driven in the desktop space for the last few years has little to do with consumer need, and more to do with industry inertia in how computers are marketed and upgrades are motivated.

Of course there are ways to handle further increases in both power draw and power density. The question is - is that what consumers want? Or, given a choice, would they prefer cool and quiet power misers, with cheap and simple cooling?

The lunatic fringe of PC gaming, while not without influence, is not likely to determine the way the PC platform evolves in the future. Intel and AMD have an obvious interest in keeping ASPs up, but Intel has been quite clear about shifting their priority to adding value in areas other than pure performance. Whether they will be successful in keeping prices up with such a strategy is anybody's guess, but it is likely that they will be more successful with that strategy than without. Trying to sell "600 W power draw" as a feature worth paying extra for just doesn't seem like a realistic scenario. It is an interesting question to what extent this applies to gfx IHVs as well.
 
jvd said:
Chalnoth said:
jvd said:
unreal engine 3

I'm sure the end of 2005/2006 will bring many games that will push hardware like 3DMark05 does
No game by the end of 2005/2006 will push hardware like 3DMark05 does. It's vastly, vastly too vertex-limited. Futuremark has once again gotten things off-base. And they're bound to always do so, since the driving forces behind designing a benchmark are entirely different from those behind games. This is why 3DMark always has been and always will be pretty much useless.

I disagree with you. I believe we will see games that are vertex limited in the next year or two.

I have yet to see where 3DMark has been wrong in the past.

The only way we will not see vertex-limited games is if nvidia holds them back due to lackluster vertex performance.


I'm also talking about a particular game using the Unreal 3.0 engine that will certainly be vertex limited (and no, I can't say the name as it's unannounced)

lol, you're a funny kid
 
I am curious, JVD: what games will be pushing vertex shaders with over 1 million polygons on screen in the coming 2 years? The only game we have to go by is Unreal 3.0, which is said to push only about 400,000 polygons.

With the lowest common denominator factor, I find it hard to believe we'll be seeing 1 million+ polygons being rendered in-game anytime soon. *shrug* Not with the GeForce FX 5200/ATI Radeon 9200 being the lowest common target for games in 2 years, with perhaps some emphasis on the X300 and GeForce 6200. I mean, if you have any information to substantiate the possibility of games pushing polygon counts like this, it'd be informative for all of us.
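Just to make the throughput math behind that question explicit, here is a rough sketch (the frame rate and vertex-reuse factor are illustrative assumptions, not data from any game):

```python
# Rough throughput math behind the "1 million polygons" question.

POLYS_PER_FRAME = 1_000_000
FPS = 60
VERTS_PER_POLY = 0.7   # indexed meshes reuse vertices; ~0.6-1.0 is typical

tris_per_sec = POLYS_PER_FRAME * FPS
verts_per_sec = tris_per_sec * VERTS_PER_POLY
print(f"{tris_per_sec / 1e6:.0f}M triangles/s, "
      f"~{verts_per_sec / 1e6:.0f}M vertex shader runs/s")
# Every extra vertex shader instruction multiplies that load, which is
# why the low-end parts set the practical ceiling.
```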
 
As I said, Chris, it's an unannounced game (it is by EA and it's an MMORPG)

I also don't see the need to target a 5200 or a 9200.


When the game I'm talking about is targeted to come out (first quarter 2006), the lowest cards will be at the X600 or 6200 level.

But hey, we will see. Perhaps the GeForce 6 series' lackluster vertex performance will push back higher-polygon-count games, much like the lackluster DX9 performance of the FX line pushed back DX9 games.
 
hovz said:
It isn't lackluster, it's just not as fast as ATI's. This is much different from the FX pixel issue.
True, the numbers in Beyond3D's reviews show that the difference in performance between the R420 and NV40 is almost in line with the difference in clock speeds. The exception is the NV40's static flow-control performance, which is definitely lackluster, running at half the speed of ATI's (unless recent drivers have improved this).
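For readers unfamiliar with the distinction, here is a plain-Python illustration of static versus dynamic flow control (real vertex shaders use a shader language; the scene data and names here are made up):

```python
# Static vs. dynamic flow control, sketched in ordinary Python.

NUM_LIGHTS = 2  # a per-draw-call CONSTANT: branching on this is "static"

def shade_vertex_static(pos):
    # Static flow control: the loop bound is the same for every vertex in
    # the batch, so a driver could in principle specialize or unroll it up
    # front. This is the case where NV40 reportedly pays a surprising cost.
    for _ in range(NUM_LIGHTS):
        pos = (pos[0] + 0.1, pos[1], pos[2])  # stand-in for a lighting term
    return pos

def shade_vertex_dynamic(pos):
    # Dynamic flow control: the branch depends on per-vertex data, so
    # different vertices genuinely take different paths - a harder case.
    if pos[2] > 0.0:
        pos = (pos[0] + 0.1, pos[1], pos[2])
    return pos

verts = [(0.0, 0.0, z) for z in (-1.0, 1.0)]
print([shade_vertex_static(v) for v in verts])
print([shade_vertex_dynamic(v) for v in verts])
```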
 
jvd said:
As I said, Chris, it's an unannounced game (it is by EA and it's an MMORPG)
Don't tell me they're having a third go at another Ultima MMO? What are the odds of this one getting cancelled too?
 
Entropy said:
Of course there are ways to handle further increases in both power draw and power density. The question is - is that what consumers want? Or, given a choice, would they prefer cool and quiet power misers, with cheap and simple cooling?

Given the way that Prescott appears to have completely stalled, I'm thinking the market has already spoken on that one.

ChrisRay said:
With the lowest common denominator factor, I find it hard to believe we'll be seeing 1 million+ polygons being rendered in-game anytime soon. *shrug* Not with the GeForce FX 5200/ATI Radeon 9200 being the lowest common target for games in 2 years, with perhaps some emphasis on the X300 and GeForce 6200. I mean, if you have any information to substantiate the possibility of games pushing polygon counts like this, it'd be informative for all of us.

Bear in mind that vertex performance is not just about pushing numbers of polygons (in fact, that's more about your setup engine!), but also what you do in terms of vertex shading.
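A toy cost model makes the point concrete (every number below is an illustrative assumption, not a measured figure for any real chip):

```python
# Vertex load is polygons/frame times per-vertex shader work, not polygon
# count alone. Unit counts, clocks, and instruction counts are made up.

def vertex_time_ms(polys, verts_per_poly, instr_per_vert,
                   units=6, instr_per_unit_per_clock=1, mhz=400):
    verts = polys * verts_per_poly
    clocks = verts * instr_per_vert / (units * instr_per_unit_per_clock)
    return clocks / (mhz * 1e3)  # MHz -> clocks per millisecond

# Same 500k polygons; a long skinning/lighting shader vs. a trivial one:
print(f"{vertex_time_ms(500_000, 0.7, 60):.2f} ms with a 60-instruction shader")
print(f"{vertex_time_ms(500_000, 0.7, 8):.2f} ms with an 8-instruction shader")
```

Same geometry, roughly a 7x difference in vertex time, purely from what the shader does per vertex.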
 
People on the UE3 site say they have a scene with 500,000 polygons without characters.
Second-generation games should use a lot more (UC2 uses up to 4X more).
 
Bear in mind that vertex performance is not just about pushing numbers of polygons (in fact, that's more about your setup engine!), but also what you do in terms of vertex shading.

Possibly, but I'm curious if you think sheer polygon count is going to account for the difference between what we see in the current implementation of 3DMark and what future titles will do. I really don't dispute the emphasis on vertex shading.

That being said, would massive polygon counts make 3DMark a good indicator of vertex shading? I still don't see anything coming close to the vertex emphasis found in the current 3DMark05. Unless we start seeing games which actually expose more control over geometry from within game settings (rather than just specular lighting, shadows, and textures), it seems unlikely that the lowest common target card is going to be able to push vertex data like that seen in 3DMark.

Unless, of course, you are willing to provide some information to the contrary?
 