R480/430 roadmappery

IMO, the further we get into the next generation of consoles, the more the emphasis will (initially) be on Vertex Shading, given the composition of the next-generation consoles (although I'm not quite as sure, judging by what I'm gathering about PS3).

Take a look at this post from DeanoC where he mentions that he does manage to get parts of their next-gen XBox title vertex limited at times on a 9800 PRO/X800 (it's also worth looking at the video in that thread).

Further to that, I think you'll find that quite a few titles are influenced by the poly-counts presented by the consoles of the past. Many cross-platform titles/engines are not vertex bound now because their primary poly-counts are influenced by the poly capabilities of the last-generation XBox and PS2; so at the moment we aren't at all vertex bound, and people look to supplement that by eating up some of the fill-rate (FSAA / AF / Pixel Shader) that current PC boards have. With the next-gen consoles there will be a step change in capabilities (equivalent to 2 or 3 PC architectural generations), and the vertex (and pixel) detail will follow that. However, as VS load is more static, you'll probably find that cross-platform titles/engines will initially be pushing high-end VS rates in high detail much more than they have over the past few years (and more so this generation, because I expect the new consoles to be heavier geometry pushers than just the VS rates on their graphics processors would suggest).
 
DaveBaumann said:
Entropy said:
Of course there are ways to handle further increases in both power draw and power density. The question is - is that what consumers want? Or, given a choice, would they prefer cool and quiet power misers, with cheap and simple cooling?

Given the way that Prescott appears to have completely stalled, I'm thinking the market has already spoken on that one.

That still leaves the question of whether the same goes for gfx cards.
In a sense, that would seem to be a given, but from a purely technical standpoint graphics chips are much more scalable upwards in performance due to the parallel nature of the problem they are meant to solve. Intel isn't in a position today where it can offer, for instance, single-core CPUs for laptops, dual cores for desktops, and quad cores for servers.* The gfx IHVs, however, already do the equivalent of this. They are not limited to speed-binning for product differentiation, but can relatively easily use the same IP to produce a range of products with differing numbers of parallel processing elements. Thus, it is eminently possible for the gfx IHVs to produce very large chips targeted at high-end desktop computers. And they have done exactly this as well; that's what the X800 and the GF6800 are.
Now, they could already have taken this a bit further, and from a purely technical standpoint they can easily extend this going forward. Will they?

Well, in favour there is both marketing/technical bragging rights, and the ability to present a well established reason to upgrade.

Against, there is of course the desire to keep cost per chip down. Plus there is the problem of cooling and noise, and the associated size and expense. These properties alone can already cause otherwise interested consumers to reject their products. Furthermore, increasing power draw is bound to make all OEMs unhappy, as they have to either mark their systems as unsuitable for the greatest gfx cards, or design their systems with sufficient margins in terms of space/ventilation/PSU to accommodate those cards, whether they are ever used or not. This is a major problem.

I really can't see the gfx IHVs doing anything other than following the rest of the industry here: letting their power-draw requirements level off or even back down a bit, and starting to emphasize aspects other than pure rendering speed in their marketing.

Rather than the currently rumoured 480/430 info, what would have been really interesting would have been if ATI had made a 24-pipe 0.11 um chip, roughly conserving die area vs. 0.13, but dropping clocks significantly, down to say 350 MHz. What would that have resulted in, in terms of performance/power draw/cost per chip? What will the IHVs do as they move to 0.09um? Stepping back from pushing clocks (and voltages/power draw) in favour of parallelism in order to achieve performance would be elegant, but would it cost too much in terms of dies per wafer? Are the benefits of low power draw sufficiently saleable yet?
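As a back-of-the-envelope sketch of that tradeoff (the 16-pipe/~520 MHz figures for the rumoured part are my assumption for illustration, not confirmed specs):

```python
# Rough peak-throughput comparison: a narrow-and-fast part vs. the
# hypothetical 24-pipe 0.11 um part at 350 MHz suggested above.
# The 16 pipes @ 520 MHz baseline is an assumed figure, not a confirmed spec.

def peak_fillrate_mpix(pipes, clock_mhz):
    """Peak pixel fillrate in Mpixels/s: pipes * clock."""
    return pipes * clock_mhz

narrow_fast = peak_fillrate_mpix(16, 520)  # assumed high-clock part
wide_slow = peak_fillrate_mpix(24, 350)    # the hypothetical part above

print(narrow_fast, wide_slow)  # 8320 vs 8400 -> roughly equal peak throughput
```

Roughly the same peak throughput at two-thirds the clock is exactly where the voltage and power headroom would come from.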

There are still a couple of balancing acts and trends left that make consumer graphics interesting to follow. :)


* It is difficult to see Intel and AMD extend this as far as the gfx-IHVs can. Typical Windows computing just doesn't give much return on investment as you go from two to four cores, four to eight, eight to sixteen. Looking for added value outside pure performance is probably a wise move.
 
Entropy said:
The lunatic fringe of PC gaming, while not without influence, is not likely to determine the way the PC platform evolves in the future. Intel and AMD have an obvious interest in keeping ASPs up, but Intel has been quite clear about shifting their priority to adding value in areas other than pure performance. Whether they will be successful in keeping prices up with such a strategy is anybody's guess, but it is likely that they will be more successful with that strategy than without it. Trying to sell "600W power draw" as a feature worth paying extra for just doesn't seem like a realistic scenario. It is an interesting question to what extent this applies to the gfx IHVs as well.

This is a catch-22: the IHVs need to promote something new and convince their customers to upgrade or to buy in order to stay in business. Today's computers usually last much longer than the normal upgrade cycle. If the IHVs don't have something better, as in faster, more features, etc., the customer is going to keep what they have; why bother otherwise? Look at the modern-day motherboard: 6.1 sound, SATA controllers, USB2, Ethernet, etc. Features are pretty much at the max as it stands; what else do you need? Virtually the only thing left is increased performance, which usually means more power.

When Dell sells a top-end SLI gaming machine with a dual-core CPU next year, what kind of power supply will be in it? I can only guess and say it will be around 600W. I don't think it's a question of whether a few need it or want it; it's survival for Dell to at least have a top-of-the-line system to show and sell. Those who buy this system will have to deal with the increased heat it generates, and I don't see this getting any better in the near future. The gist is that if tomorrow's PCs are the same as today's, the manufacturers probably won't be around long.

Now where does it go from there? What will the next systems be like after these? The bottom line is that the industry is hitting a thermal wall.
 
We need to adopt a new standard. The current motherboard layout is not helpful at all for cooling. Hopefully Intel's BTX, or maybe something even better, will have a large user base in the future.
 
hovz said:
We need to adopt a new standard. The current motherboard layout is not helpful at all for cooling. Hopefully Intel's BTX, or maybe something even better, will have a large user base in the future.
IIRC, with BTX the fresh air goes to the CPU and then to the graphics card, so the graphics card gets the hot air from the CPU. Silent freaks don't like BTX too much right now. At least that's my current information.
 
madshi said:
hovz said:
We need to adopt a new standard. The current motherboard layout is not helpful at all for cooling. Hopefully Intel's BTX, or maybe something even better, will have a large user base in the future.
IIRC, with BTX the fresh air goes to the CPU and then to the graphics card, so the graphics card gets the hot air from the CPU. Silent freaks don't like BTX too much right now. At least that's my current information.

With dual-slot cooling I don't see this as an issue.
 
hovz said:
With dual-slot cooling I don't see this as an issue.
Um, you just said:
hovz said:
We need to adopt a new standard. The current motherboard layout is not helpful at all for cooling. Hopefully Intel's BTX, or maybe something even better, will have a large user base in the future.
What are you smoking today? :?
 
Chalnoth said:
hovz said:
With dual-slot cooling I don't see this as an issue.
Um, you just said:
hovz said:
We need to adopt a new standard. The current motherboard layout is not helpful at all for cooling. Hopefully Intel's BTX, or maybe something even better, will have a large user base in the future.
What are you smoking today? :?

?????????????????? With dual-slot cooling, the little bit of warm air that travels from the CPU to the video card area won't matter.
 
Trying to sell "600W power draw" as a feature worth paying extra for just doesn't seem like a realistic scenario.

At least AMD seems intent on raising TDP even further, as a recently granted patent reveals (though it was filed three years ago).

http://patft.uspto.gov/netacgi/nph-....WKU.&OS=PN/6,800,933&RS=PN/6,800,933

Abstract

Various embodiments of a semiconductor-on-insulator substrate incorporating a Peltier effect heat transfer device and methods of fabricating the same are provided. In one aspect, a circuit device is provided that includes an insulating substrate, a semiconductor structure positioned on the insulating substrate and a Peltier effect heat transfer device coupled to the insulating substrate to transfer heat between the semiconductor structure and the insulating substrate.
 
I'd like to elaborate on and clarify a point I made above, and also try to push the thread back towards the 480/430 track again.
Entropy said:
Rather than the currently rumoured 480/430 info, what would have been really interesting would have been if ATI had made a 24-pipe 0.11 um chip, roughly conserving die area vs. 0.13, but dropping clocks significantly, down to say 350 MHz. What would that have resulted in, in terms of performance/power draw/cost per chip? What will the IHVs do as they move to 0.09um? Stepping back from pushing clocks (and voltages/power draw) in favour of parallelism in order to achieve performance would be elegant, but would it cost too much in terms of dies per wafer? Are the benefits of low power draw sufficiently saleable yet?

What I didn't state clearly was that power draw scales pretty much linearly with the number of transistors, but increases at a rate of very roughly a power of two with clock frequency at a given process. This means that assuming you can double a certain processing power by either doubling the number of processing elements or by doubling the clock, choosing the first path will lead to twice the power draw whereas the second will lead to four times the power draw. The drawback with going the parallel route is of course that die size grows, leading to lower yields of good chips per wafer => higher cost per chip.
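Taking that rough rule at face value (power ~ n * f^2; the more careful split into dynamic and static power comes up later in the thread), the two doubling paths work out like this:

```python
# Two ways to double throughput under the rough rule assumed above:
# power ~ n * f^2, throughput ~ n * f, where n = number of processing
# elements and f = clock. Illustrative units only, not real measurements.

def power(n, f):
    return n * f**2

def throughput(n, f):
    return n * f

n0, f0 = 1.0, 1.0
print(throughput(n0, f0), power(n0, f0))          # 1.0 1.0 (baseline)

# Path 1: double the number of processing elements -> 2x power.
print(throughput(2 * n0, f0), power(2 * n0, f0))  # 2.0 2.0

# Path 2: double the clock instead -> 4x power.
print(throughput(n0, 2 * f0), power(n0, 2 * f0))  # 2.0 4.0
```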

Now, for the top-of-the-line consumer gfx chips, the game is to maximize performance at a certain maximum allowable power draw - at this point in time, roughly 70W. (It used to be roughly 25W, the maximum allowed by the AGP spec.) For these chips the gfx IHVs must emphasize parallelism to push performance, hence the large dies of the NV40 and R420. The ability to prioritize doesn't really come into play unless you don't have to hit the absolute maximum in performance.

As long as price/performance for the chip is the only thing that matters, performance will be achieved by pushing clocks. However, as power draw has increased, other factors have started to enter the picture.
* Size, cost and complexity of the cooling device.
* Noise.
* Additional demands on the PSU, and associated weight, cost and noise.
* Additional demands on the overall thermal design of the computer cabinet.
Thus, whether there is actually any overall cost benefit to increasing clocks is not as clear-cut, and there are definite environmental drawbacks to following that route. How would the market react to products that cost 20% more, but had half the power draw and lower noise? Or to parts that cost the same, had 60% of the performance, but had only one third the power draw and silent cooling without any moving parts? Would that be worth one step down in resolution?
I know that this is the direction I would personally like to see the market move.
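For concreteness, here is the performance-per-watt implied by those hypothetical products, normalised to a baseline card at 1.0 performance and 1.0 power (all numbers are the hypotheticals from the paragraph above):

```python
# Perf-per-watt of the hypothetical products floated above, relative to a
# baseline card at performance 1.0 and power 1.0 (numbers from the post).
options = {
    "baseline":              (1.0, 1.0),
    "+20% cost, half power": (1.0, 0.5),
    "60% perf, 1/3 power":   (0.6, 1 / 3),
}

for name, (perf, power) in options.items():
    print(f"{name}: {perf / power:.2f}x perf/watt")
# baseline: 1.00x, half-power part: 2.00x, low-power part: 1.80x
```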

The mystery R430 could conceivably be this kind of device, where ATI is experimenting with the tradeoffs.
The same price as the R480Pro, with lower performance: there doesn't seem to be any justification for it really, unless it brings some other benefit to the customer.
 
That would be interesting, but then what will ATI do after that? If PCs don't advance to the extent that people want to upgrade, then I think we're at the point where we'll see the growth of PCs collapse.

Now, ATI could get a higher performance/power ratio, but that doesn't offset the higher power requirements of dual-core CPUs, more and faster memory, bigger hard drives, etc. Plus, having your competition slaughter you in every benchmark out there while you draw 70W less probably won't help either.

If process technology can't keep up with the increase in transistor count and higher frequencies to keep thermal problems under control, then eventually a thermal wall will be hit. 600W is probably very doable for a lot of systems; it would be like running two 300W systems, if you look at it that way. 600W is about 2000 BTU/hour, since virtually all of the power a computer draws is converted into heat. Having two or more systems drawing 600W each can, as you can see, cause some problems in a small to medium-sized room. Even one 600W computer in a bedroom could do a nice job of heating it ;). At least LCD monitors take less energy than CRTs, so technology advancement doesn't always amount to increased energy costs.
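For reference, the conversion behind that ~2000 BTU/hour figure (1 BTU is roughly 1055 J):

```python
# Check: convert a sustained electrical draw in watts to BTU/hour.
# 1 W = 1 J/s, 1 BTU ~= 1055 J, 3600 s per hour.

def watts_to_btu_per_hour(watts):
    return watts * 3600 / 1055

print(round(watts_to_btu_per_hour(600)))  # ~2047 BTU/hour, i.e. ~2000 as stated
```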
 
One potential way to break through the thermal barrier would be to transition away from semiconductor-based technologies.

One way this may go is with half-metals, which, in effect, become conductive or non-conductive just by changing the magnetic field (like you do when writing to a hard drive, or when writing to a bit in MRAM). Since this change is much lower in energy than the changes involved in switching semiconductor transistors, and doesn't depend upon the same statistical processes, it may be possible for half-metal based technologies to have much lower power requirements than semiconductor based technologies. What I don't know currently are switching speeds and minimal sizes of the switches (though there may not necessarily be a minimal size for half-metal based technologies: unlike with hard drives, you shouldn't really need to keep the magnetic moment of a local domain stable when there is no magnetic field....).

But since half-metals are a very recent discovery, it will be some time before anybody finds an efficient, economical way to fabricate processors with them. I doubt it will be a trivial matter. But they definitely have the potential to be faster, smaller, and cooler than semiconductor transistors.

Edit: Just keep in mind that the slowing of semiconductor technologies is a problem that the entire tech industry is facing. R&D costs will go through the roof as it becomes harder and harder to squeeze more processing power out of the silicon, but at the same time it will become less and less compelling for people to upgrade. This may, in the end, mean that the limit we reach in silicon-based computing power won't be a physical one, but an economic one.
 
This means that assuming you can double a certain processing power by either doubling the number of processing elements or by doubling the clock, choosing the first path will lead to twice the power draw whereas the second will lead to four times the power draw.
I think you're confusing "frequency" with "voltage".

Dynamic Power ~= h(V^2, f, n)

where h is some linear function, V is the transistor supply voltage, f is the frequency and n is the number of transistors. (Actually, dynamic power scales with circuit capacitance, not number of transistors, but those are tightly correlated).

However,

Static Power ~= g(V, e^Vt, n)

where g is some linear function, Vt is the transistor threshold voltage, V is the transistor supply voltage and n is the number of transistors.

Thus, doubling the number of transistors doubles both static and dynamic power, but doubling the frequency merely doubles the dynamic power.

I'm ignoring the fact that increasing frequencies usually comes at an additional transistor cost (extra pipelining, etc).
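A quick sketch of this model in code; the constants, the supply voltage, and the exact leakage term are placeholders for illustration (only the relative scaling matters):

```python
import math

# Toy version of the model above. K_DYN, K_STAT, the supply voltage and
# the exact leakage term are placeholders; only the relative scaling matters.
K_DYN, K_STAT, VT = 1.0, 0.1, 0.3

def dynamic_power(v, f, n):
    # Dynamic power ~ V^2 * f * n (n standing in for switched capacitance).
    return K_DYN * v**2 * f * n

def static_power(v, vt, n):
    # Static power ~ V * (exponential term in Vt) * n, per the post.
    return K_STAT * v * math.exp(vt) * n

v, f, n = 1.2, 1.0, 1.0
base = dynamic_power(v, f, n) + static_power(v, VT, n)

# Doubling transistors doubles both components:
print((dynamic_power(v, f, 2 * n) + static_power(v, VT, 2 * n)) / base)  # 2.0

# Doubling frequency only doubles the dynamic part:
print((dynamic_power(v, 2 * f, n) + static_power(v, VT, n)) / base)      # ~1.9
```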

Details can be found here

Edit: fixed spelling.
 
There's the additional factor these days of EM radiation related to high frequencies, though. This radiation isn't dangerous at all, but it does add to the overall power draw. There are likely also other limiting factors that become important when the frequency starts approaching the maximum allowable frequency for a particular device, such as transistor switching speeds and the speed at which a wave can propagate through the conductors/transistors.
 
Bob said:
Dynamic Power ~= h(V^2, f, n)
Static Power ~= g(V, e^Vt, n)

Thus, doubling the number of transistors doubles both static and dynamic power, but doubling the frequency merely doubles the dynamic power.

I'm ignoring the fact that increasing frequencies usually comes at an additional transistor cost (extra pipelining, etc).

Increasing the frequency can sometimes require a voltage increase to run stably, can't it?

You are presuming V & f are independent...
 
Increasing the frequency can sometimes require a voltage increase to run stably, can't it?

You are presuming V & f are independent...
And you are presuming that they are not. Designing an ASIC to run at a particular frequency is not the same thing as overclocking an ASIC to run at a particular frequency.
 
Bob said:
This means that assuming you can double a certain processing power by either doubling the number of processing elements or by doubling the clock, choosing the first path will lead to twice the power draw whereas the second will lead to four times the power draw.
I think you're confusing "frequency" with "voltage".
No, I'm not.

I simply thought the issues at hand were quite clear. I'm fully aware that the statement regarding power draw and clock frequency was not exact, which is why I stated that power increases "very roughly" with the square of frequency.
In real life, it often increases at an even higher rate. You don't have to take my word for it: look at IBM's PPC970FX frequency vs. voltage vs. power draw graph.
While the frequency drop by itself would cause only a (theoretical) linear drop in power, lowering the frequency also makes it possible to lower the drive voltage, causing the power to drop from 100W to roughly 15W as the frequency is halved. That indicates a real-world increase in power as a function of frequency of somewhere between the square and the cube over the relevant interval. This is an actual example of the realities we are dealing with.
(I'd really like an ATI engineer to chime in on just how much they save in power on, for instance, the RV360 when used under mobile conditions (voltage/frequency).)
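Plugging the two data points from that graph into a power law gives the implied exponent (figures as quoted above: ~100W at full clock, ~15W at half clock with voltage scaled down accordingly):

```python
import math

# Implied power-vs-frequency exponent from the PPC970FX figures quoted
# above: ~100W at full clock, ~15W at half clock (voltage lowered as well).
p_ratio = 100 / 15   # ~6.7x difference in power
f_ratio = 2.0        # 2x difference in frequency

exponent = math.log(p_ratio) / math.log(f_ratio)
print(round(exponent, 2))  # ~2.74 -> between the square (2) and the cube (3)
```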
 
By the way, guys, there is the additional issue that when you increase the temperature of a resistor, its resistance typically increases. I believe this is (approximately) where the power-goes-as-frequency-squared behaviour comes from.

There's also the additional issue that transistors aren't actually linear in their response (i.e. V=IR does not apply exactly, only approximately), so that may also be a contributing factor. I'm not actually sure which factor dominates the faster-than-linear increase of power consumption with frequency.
 
Chalnoth said:
By the way, guys, there is the additional issue that when you increase the temperature of a resistor, its resistance typically increases. I believe this is (approximately) where the power-goes-as-frequency-squared behaviour comes from.

No.
P = U^2/R -> a linear voltage increase leads to a quadratic power increase.

By the way, the resistance of a transistor decreases with increased temperature.


Chalnoth said:
There's also the additional issue that transistors aren't actually linear in their response (i.e. V=IR does not apply exactly, only approximately), so that may also be a contributing factor. I'm not actually sure which factor dominates the faster-than-linear increase of power consumption with frequency.

Yes and no.

Yes, transistors are not linear but no, it doesn't matter here.
We're talking about digital devices where transistors only have two different states: ON and OFF.
 