Adrenaline Vault's take on the future of 3D graphics

Adrenaline Vault and hardware reviews, yuck. I've never held much interest in them when it comes to anything related to hardware, unless they're talking about gaming controllers or something like that. When it comes to their game reviews, on the other hand, they're the first ones I go to. :)

Anyway, I think what he proposed was more wishful thinking than any kind of analytical report. From those who don't care much for bleeding-edge technology, but more about the games themselves, I've always heard complaints about hardware advancing too fast, about their new systems being rendered obsolete within months, etc. - finally ending with a wish that the cycles were something like 2 years.

Considering that Adrenaline Vault, since its inception, has been a gaming-oriented website that scarcely does anything related to hardware/technology, I think it's not too much of a surprise that they wrote an article like that (which is also why I don't really depend on them for hardware info). Let's just hope they keep their focus on the games and don't drift towards a hardware focus. They're just the site I need when I have to get away from all the tech-talk and remember why I'm buying all this hardware. :)

-dksuiko
 
It did pick up on the importance of the P10 (over other more recent tech announcements) and highlight that programmable architectures such as this are going to be the future. Whether the P10 architecture as it stands is 'it' is another question entirely, but it's highly likely that all vendors will be following a path like this in the future, and I give the writer credit for recognising that.
 
huh

I dunno about 2 years, but a year sounds good... one new video card for every DirectX release. Those cards would be top of the line for one year, and when the next DX comes out the new cards would come in, the old ones would drop to the value market, and the ones now 3 generations old could be moved to integrated/OEM machines.

Hmmm, so instead of having the GeForce3 Ti 500, the GeForce4 Ti 4600, now the GeForce4 4800 (or whatever this new speed bin will be called), and most likely the GeForce5, all in the same year and all costing $300+, you would have it like this: GeForce2 GTS (DX7), a year later the GeForce3 (DX8), and then the GeForce5 (DX9). Start each one off at $300 and move them down in price as they see fit...

Also, maybe make a standard DX level that all cards being sold have to support (feature-wise), so that it doesn't take years to see the new features used... for God's sake, we still have TNTs being sold...


Sorry about the rant, it's 8 am and I haven't slept :eek:
 
I do think it was a bit pointless to go out on the limb he did now, but I don't think the limb is as shaky as you make it out to be...I just think we have a bit of climbing on the tree left to do to reach it.

My pet theory, which I've mentioned here somewhere before, is that we were heading to the point where the GPU paradigm would shift to something more CPU-like... except, how much further do we have to go? Yes, there are many theoretical achievements left to accomplish, but we seem to be getting greatly diminishing returns in terms of noticeable visual improvement.

All of the cards intended to be released this year, taken collectively, seem to me to have reached a level of display quality such that it will be hard to find an effect that can be significantly improved (at least from the user's viewpoint) by future hardware. Until we move beyond a 21-inch 1600x1200 display (which I suppose we could, but why would we in the next decade? That would just be cumbersome... I think we'll just be focusing on LCD-like development), what evolutionary breakthroughs do we have left beyond a P10-type card that is able to match the features of the other cards planned for release?

Combined with the very noticeable delay in the ability of software developers to take advantage of new features in cards, I think graphics card manufacturers will be forced (nVidia, and possibly ATi if their chip licensing is as profitable as it is for nVidia) and perhaps eager (everyone else) to embrace this shift.

I mean, how long have we had the Athlon, with the only real changes being higher clock speeds and manufacturing improvements to lower costs? GPUs shifting to that type of development model seems feasible "soon" (an ambiguous term depending on whether companies are able to execute), and that would fit the author's vision, at least as I interpreted it.

Then again, Longhorn might open up an entirely new avenue of improvements... though I can't envision it being more demanding than 3D acceleration, since parallelism is already a design goal. I could be wrong.

At least, this is my admittedly uninformed opinion. ;)
 
One quick thing about product cycles.

Currently we're on a 12-18 month new-architecture cycle. The "six-month" cycle doesn't mean that every six months we have a new video card that must be programmed for differently.

Also, it appears that the product cycles of ATI and nVidia are lining up. I'm pretty sure that each new architecture will line up nicely with die shrinks.

It's kind of interesting, though...I hear everybody complaining about how fast the 3D hardware has been advancing except for game developers...
 
I hear everybody complaining about how fast the 3D hardware has been advancing except for game developers

But everybody complains that new possibilities go unused, and both ATI and NVidia complain that developers generally are not eager to use (or aware of) the new possibilities. ;) And what's the point of learning Pixel Shaders 1.1, 1.3 and 1.4 if 2.0 is "just around the corner"?...
 
SvP said:
But everybody complains that new possibilities go unused, and both ATI and NVidia complain that developers generally are not eager to use (or aware of) the new possibilities. ;) And what's the point of learning Pixel Shaders 1.1, 1.3 and 1.4 if 2.0 is "just around the corner"?...

It takes time to develop games... and I know from experience that it can be very challenging to change existing code (sometimes it's easy... but sometimes, if the change is more fundamental, it can be very time-consuming).

And you also have to consider that many games will want to look identical on as wide a range of video cards as possible. This is true for multiplayer games in particular.
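To make that concrete, here's a rough sketch (hypothetical, not from any particular engine) of the kind of caps check a Direct3D 8 game ends up doing to pick a render path; the enum and function names here are just illustrative:

    // Hypothetical sketch: choosing a render path from the reported pixel
    // shader version under Direct3D 8. RenderPath/ChoosePath are made-up
    // names for illustration, not from any real engine.
    #include <d3d8.h>

    enum RenderPath { PATH_FIXED_FUNCTION, PATH_PS_1_1, PATH_PS_1_4 };

    RenderPath ChoosePath(IDirect3DDevice8* device)
    {
        D3DCAPS8 caps;
        device->GetDeviceCaps(&caps);

        if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 4))
            return PATH_PS_1_4;          // e.g. Radeon 8500 class hardware
        if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 1))
            return PATH_PS_1_1;          // e.g. GeForce3/4 Ti class hardware
        return PATH_FIXED_FUNCTION;      // older cards (TNT2, GF2, Radeon)
    }

Every new PS revision means another branch like that, plus writing and testing the shader variants behind it, on top of the fixed-function fallback.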
 
BTW, Dave, when is that P10 tech preview due?

I've got everything I recorded written down, and now I'm just going through and typing it up into a coherent article. I'm hoping to have something next week; however, I'm still waiting on a few questions to be answered. :)
 
The author makes it sound like everyone has to upgrade every six months, which is of course bollocks. The cards from 2 years ago - the GF2 GTS and the Radeon - are still very viable today; he states that right now gamers waste money by buying cards on a semi-annual basis, which of course almost no one does.

He also states that full programmability will result in longer cycles - which is funny considering that AMD and Intel release faster (fully programmable) CPUs every 2-3 months. There is much less of a performance delta between the Athlon 2100+ and 2200+ than between the Ti 500 and the Ti 4600, yet AMD will still release that CPU, mainly to stay ahead of, or at least on par with, the competition.
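Just to put rough numbers on that delta (clock figures from memory, so treat this as a back-of-the-envelope sketch, not a benchmark):

    // Hypothetical comparison, figures approximate and from memory:
    // Athlon XP 2100+ (~1733 MHz) vs 2200+ (~1800 MHz) is a few percent,
    // while GeForce3 Ti 500 (~240 MHz core) -> GeForce4 Ti 4600 (~300 MHz)
    // is roughly 25% on clock alone, before the architectural changes.
    #include <cstdio>

    int main()
    {
        const double athlon_2100 = 1733.0, athlon_2200 = 1800.0;  // MHz
        const double ti500_core  = 240.0,  ti4600_core = 300.0;   // MHz

        std::printf("Athlon delta:  %.1f%%\n",
                    (athlon_2200 / athlon_2100 - 1.0) * 100.0);
        std::printf("GeForce delta: %.1f%%\n",
                    (ti4600_core / ti500_core - 1.0) * 100.0);
        return 0;
    }

Yet the CPU with the smaller gain still ships on schedule, which was the point.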
 
I just don't think that's going to happen...

Nvidia is going to stick with their "new chipset" / "ultra version" per-year cycle.

While it has its bad points... it does keep interest in the graphics market piqued all year. It gives the review sites something to do. Usually many people feel like a "new something" for their computer in the spring, or just before Christmas. It also helps with all the second-generation engine games that get released mid-year, for people who want that competitive edge...

My thought would be this....

I would much rather see a primary card released once a year, with upgrade modules made available for it, like more RAM or a second processor. That seems like a much better route than the cost and headaches for third parties that a whole new design brings to the table.
 
I for one would be quite content with a return to the days of 1-year product cycles. I can understand, however, that the relentless battle for mindshare and marketshare will never allow this to happen again.
 
Johnny Rotten said:
I for one would be quite content with a return to the days of 1-year product cycles. I can understand, however, that the relentless battle for mindshare and marketshare will never allow this to happen again.

We're currently working on an 18-month architecture cycle (for the most part). More interim products are generally better for the consumer, as they allow these companies to hit every section of the market.
 
I don't mind the cycle. But I think it's insane for anyone to even consider upgrading their personal video card more than once a year. After all, a GeForce2 GTS/Radeon is a perfectly viable card in all but a very few games like Morrowind, which last year's card at this time, the GeForce3, can play just fine. A GeForce4 Ti, 8500, or Parhelia purchase today will probably last well into next year at the minimum (Doom 3 will probably cause a paradigm shift in serious players' card-buying decisions, forcing people to upgrade sooner than otherwise expected).
 
Geeforcer said:
He also states that full programmability will result in longer cycles - which is funny considering that AMD and Intel release faster (fully programmable) CPUs every 2-3 months. There is much less of a performance delta between the Athlon 2100+ and 2200+ than between the Ti 500 and the Ti 4600, yet AMD will still release that CPU, mainly to stay ahead of, or at least on par with, the competition.

<rant>

You can't compare GPUs with CPUs, even if one is invited to do so. There is only so much parallelism you can get out of CPUs, especially with instruction sets like IA32 (see Intel's efforts with EPIC), which is in great contrast to GPUs. You can throw millions of transistors at logic in a GPU and make it more and more parallel, something you can't do with CPUs; they are already hitting a (parallelism) wall. Compare a TNT2 at 150MHz with a GeForce4 (or Parhelia!) underclocked to 150MHz and the difference will be huge, even if they have the same bandwidth. Now compare a Pentium at 200MHz against a (hypothetical) Pentium 3 at 200MHz; I bet the difference between the two graphics cards will be much bigger.
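A quick back-of-the-envelope illustration of that comparison (pipeline counts from memory, so treat the figures as approximate):

    // Hypothetical illustration of per-clock width: at an identical 150 MHz
    // clock, the wider chip still does several times the work per cycle.
    // Pipeline/TMU counts are from memory and approximate.
    #include <cstdio>

    int main()
    {
        const double clock_mhz = 150.0;

        // TNT2: roughly 2 pixel pipelines, 1 texture unit each.
        // GeForce4 Ti: roughly 4 pixel pipelines, 2 texture units each.
        const int tnt2_pipes = 2, tnt2_tmus_per_pipe = 1;
        const int gf4_pipes  = 4, gf4_tmus_per_pipe  = 2;

        std::printf("TNT2:     %.0f Mpixel/s, %.0f Mtexel/s\n",
                    clock_mhz * tnt2_pipes,
                    clock_mhz * tnt2_pipes * tnt2_tmus_per_pipe);
        std::printf("GeForce4: %.0f Mpixel/s, %.0f Mtexel/s\n",
                    clock_mhz * gf4_pipes,
                    clock_mhz * gf4_pipes * gf4_tmus_per_pipe);
        return 0;
    }

Even at identical clocks and bandwidth, the wider chip simply does more per cycle, and GPUs still have room to keep getting wider.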

CPU manufacturers are using die shrinks to clock their cores higher, make them cheaper, and put more cache on the die. GPU manufacturers are using die shrinks most importantly to put more logic into their cores; second comes clock rate and maybe cache, and lowering cost is only relevant on the low end. It's just more logical for GPU manufacturers to do so: they can gain more performance this way than by just upping the core clock as high as it goes. Because of that I don't believe the graphics industry and its cycles are going to become more "CPU-like", not in the foreseeable future, because there are still many, many transistors to be used up for logic / there is still much parallelism to be gained.

And I think GPUs are *much* more modular (and therefore easier to design) today than CPUs. CPUs have so many workarounds (OOE, branch prediction, prefetching, instruction buffers) and their pipelines (especially IA32) must be tuned so carefully to achieve high performance; look at the exotic P4 tech and how long Intel needed to design it. It is their first new core since 1996! Imagine that in the graphics world... we would have 1.5GHz Riva128s and Voodoo1s, or something like that.

Oh well... that's it for now.

</rant>
 