What's next?

geciopi

How long will the development continue?

We've been seeing some amazing achievements in 3D graphics. But what about the future?
Is this technology going to approach its limits? If so, what's next?
3DMark 2005 was the very first 3DMark test with unquestionable CPU limits when using cutting-edge GPUs. The obsolescence time is getting shorter and shorter with each generation of GPUs.
It's safe to say that 3DMark Next will be obsolete and CPU-limited by the time the next-gen GPUs with DX10 compliance and reasonable performance see the light of day.
I don't see content being able to keep pace with the rush of the hardware. Developing an acceptable-looking game requires more and more developer and designer hours and is getting more and more expensive.
Who will pay the bill? The key to the future of 3D doesn't seem to be the hardware. Fast and cost-effective development tools are going to gain major importance in the near future. What's your point?
 
The biggest limitation in the near future, both for GPUs and CPUs, will be the wall in process shrinks. Intel has been able to get down to 45nm, but anything less has been met with problems. I believe that in order to continue to improve performance and features we'll need to investigate new methods of computing; quantum computing is one notable example that comes to mind.
 
geciopi said:
The key to the future of 3D doesn't seem to be the hardware.
Software vs hardware isn't a chicken-and-egg scenario -- without hardware, there will never be any software.

The "key" is most definitely hardware. Faster hardware to be specific (nee ANova's post above). Software and hardware experts know what's needed for realism; we just don't have the speed.

Fast and cost-effective development tools are going to gain major importance in the near future.
This is a statement about cost and time-to-market for software, not about the future of 3D.

What's your point?
Quite a puzzling question. What's your point? Are you confusing the rate at which 3D is advancing with the speed at which we see it in released games? Game developers need to sell games; they can't make and release games using every single feature offered by an API or the latest-and-greatest hardware, because not everyone owns the exact same machine.

And to address your first question last:
How long will the development continue?
Until we are able to play games (=realtime) at the same framerate/fluidity on the PC/consoles with the same graphics as we see in Peter Jackson's King Kong.
 
Reverend said:
Until we are able to play games (=realtime) at the same framerate/fluidity on the PC/consoles with the same graphics as we see in Peter Jackson's King Kong.
Nah, it won't stop then... think "holodeck". ;)
 
I kind of took his question/message as hardware vs software vs cost? Kind of confusing how he wrote it. (It's probably me, it's late :)) I think he is referring to how long we can keep upping the game quality while keeping games and development within a manageable cost, without it becoming so overblown that only a handful can develop high-level games?

Although we can never have enough power for graphics and CPU-intensive tasks, I think we need a stronger and more universal attempt at software development. XNA, UE3, Havok all have their places, but it would be nice to have a software platform with the flexibility to utilize software technologies (coding, graphics, A.I. etc.) that all developers contribute to and expand upon. (Yeah right, not anytime soon :)) But with competitors from consoles to PCs (and PCs' many configurations), I don't think we will see it in our lifetimes.

I must admit UE3 looks to be shaping up :)
 
ANova said:
The biggest limitation in the near future, both for GPUs and CPUs, will be the wall in process shrinks. Intel has been able to get down to 45nm, but anything less has been met with problems. I believe that in order to continue to improve performance and features we'll need to investigate new methods of computing; quantum computing is one notable example that comes to mind.

There are new techniques coming, though. I recently read a science magazine which covered new techniques: optical chips, nanotube chips, but also new techniques for making conventional chips. Stephen Chou at Princeton University has apparently invented a new way of making "traditional" chips which makes it possible to get down to 2 nm. The technique involves making a mould from which you make the rest of the chips. Making that mould is rather expensive and probably takes quite some time, though. So I'm guessing that if you're a GPU manufacturer and want to use this technique, you'd want to get your chip right the first time :)
 
ANova said:
I believe that in order to continue to improve performance and features we'll need to investigate new methods of computing; quantum computing is one notable example that comes to mind.
Quantum computing is not particularly suitable to 3d rendering, as far as I can see. The only known problems for which quantum computing is faster than classical computing are problems where you need to find a single 'correct' answer among a large collection of candidate answers, subject to the following criteria (scissored from here):
  • The only way to solve it is to guess answers repeatedly and check them.
  • There are n possible answers to check.
  • Every possible answer takes the same amount of time to check.
  • There are no clues about which answers might be better: checking possibilities in random order is just as good as checking them in some special order.
3d rendering tends to be more about generating extremely many "answers" (one per color component per pixel per frame) while spending a rather modest amount of work per "answer"; the "answers" tend not to be very amenable to guessing either.
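
To make the contrast concrete, here is a minimal sketch in Python (names and toy workloads are hypothetical, not from any real renderer or quantum library): the first function has the unstructured guess-and-check shape that Grover's algorithm accelerates to roughly sqrt(n) checks, while the second has the shape of rendering, where every pixel's value is simply computed directly.

```python
# Illustrative sketch only; the function names and toy workloads are
# hypothetical, not taken from any real renderer or quantum library.

def unstructured_search(candidates, is_correct):
    """Classical brute force over an unstructured search problem.

    This is the shape of problem Grover's algorithm helps with: there is
    nothing to do but guess and verify, so a quantum computer can get by
    with roughly sqrt(n) checks instead of up to n classical ones.
    """
    for candidate in candidates:
        if is_correct(candidate):
            return candidate
    return None


def render(width, height, shade):
    """Rendering-shaped work: one modest, direct computation per pixel.

    Every pixel needs its own answer and each answer is computed outright
    rather than guessed and checked, so a Grover-style speedup does not
    obviously apply.
    """
    return [[shade(x, y) for x in range(width)] for y in range(height)]
```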
 
Bjorn said:
Making that mould is rather expensive and probably takes quite some time, though. So I'm guessing that if you're a GPU manufacturer and want to use this technique, you'd want to get your chip right the first time :)
Any numbers on approximately how expensive? It's not like the mask sets used today are cheap, at around $1M per tapeout at 90nm.
 
Bjorn said:
There are new techniques coming, though.

There are always new techniques and technologies being developed. I do believe our current model is here to stay a while longer; people will no doubt find ways to extend it as much as possible, especially those who stand to benefit most from it.

arjan de lumens said:
Quantum computing is not particularly suitable to 3d rendering, as far as I can see.

Maybe not currently, but there's no reason it can't be developed further. I only mentioned it because it's one possibility.
 
Reverend said:
Until we are able to play games (=realtime) at the same framerate/fluidity on the PC/consoles with the same graphics as we see in Peter Jackson's King Kong.

In some ways, and maybe I'm stretching it regarding the technical or mathematical difficulty of achieving some visuals in games, I think there are already two examples from this last console era where one could be fooled, IMO. And that's water and heat haze, and fire/flames to a degree as well.

I personally have quite a problem (not tech-wise) with the constant grasping for realtime photorealism, as I think the concepts get mixed up and it's very subjective for each and every one of us. I think this is especially true for people who are interested in 3D tech and also are gamers: we tend to look at what's going on here more from a technical POV etc.

One grand example was when my mother was visiting and I was playing COD2; she sat and watched and frequently asked "is this real, is that arm under the weapon real?". It was only when she saw some of the tanks and close-ups of the faces that she "got it"...
Granted, she is from a WHOLE different generation, but I think it was quite funny, as I didn't know what to say. :)
 
ANova said:
Maybe not currently, but there's no reason it can't be developed further.
The entire speed benefit of quantum computing comes from setting up a superposition of a gargantuan number of states, then having it rapidly collapse to a single desired state. Once you read out this state, the entire superposition is irreversibly destroyed, and you will have to set up the entire calculation all over again if you wish to produce a second result. As such, computing two results always takes twice the execution resources of computing one result. So if you just want to push more pixels, it's not going to help you. Even if you want to do a lot of work per pixel, for quantum computing to be useful you still need a sub-algorithm that needs fewer resources to check random proposed values for the pixel for "correctness" than to directly compute the pixel's "correct" value in the classical way.

As such, the only real problem in 3d graphics that I can see quantum computing being useful for is visibility determination - "given a pixel and N opaque polygons, which polygon, if any, ends up being drawn for the pixel?" - which a quantum computer should be able to solve in O(sqrt(N)) time using Grover's algorithm. Of course, this assumes seriously evil data sets, or else someone could just smack up a kd-tree and solve the problem in O(log(N)) time on a classical computer.
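
For reference, a minimal classical sketch of that visibility problem, in Python (the ray/polygon representation and the intersect() helper are hypothetical stand-ins): brute force tests every one of the N polygons, which is the O(N) baseline that Grover's O(sqrt(N)) would improve on, whereas a kd-tree over the same polygons handles typical queries in about O(log(N)).

```python
# Illustrative sketch only; the ray/polygon representation and the
# intersect() callable are hypothetical stand-ins.

def nearest_hit_bruteforce(ray, polygons, intersect):
    """Per-pixel visibility by brute force: test all N polygons, O(N).

    Grover's algorithm could in principle reduce this to roughly
    O(sqrt(N)) quantum queries, but a kd-tree (or BVH) built over the
    same polygons already answers typical queries in about O(log(N))
    on a classical machine, which is why the quantum win only matters
    for pathologically unstructured data sets.
    """
    best_t, best_poly = float("inf"), None
    for poly in polygons:
        t = intersect(ray, poly)  # distance along the ray, or None on a miss
        if t is not None and t < best_t:
            best_t, best_poly = t, poly
    return best_poly  # the polygon drawn for this pixel, or None
```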

While quantum computers have certain fields where they are immensely powerful, 3d graphics is hardly one of them.
 
arjan de lumens said:
Any numbers on approximately how expensive? It's not like the mask sets used today are cheap, at around $1M per tapeout at 90nm.

I'll have a look at it when I get home. But I don't think they put any numbers in the article. Just that it wasn't cheap, I'm guessing in relation to what's done today.
 
arjan de lumens said:
Any numbers on approximately how expensive? It's not like the mask sets used today are cheap, at around $1M per tapeout at 90nm.

I was under the impression that the costs for a new tapeout are more like ~$20 million (total)?
 
_xxx_ said:
I was under the impression that the costs for a new tapeout are more like ~$20 million (total)?
The $1 million number is for the actual production of one complete mask set from diagrams produced by synthesis/place&route tools; running those tools, as well as general R&D of the product to tape out, come in addition to that and may well add up to $20M, strongly dependent on design complexity (curiously, the price of the actual mask set doesn't seem to depend very much on the design complexity).
 
arjan de lumens said:
The $1 million number is for the actual production of one complete mask set from diagrams produced by synthesis/place&route tools; running those tools, as well as general R&D of the product to tape out, come in addition to that and may well add up to $20M, strongly dependent on design complexity (curiously, the price of the actual mask set doesn't seem to depend very much on the design complexity).

I'm not talking about R&D and tools, just the whole logistics/testing/packaging/documentation/qualification etc., plus the masks and all the minor things. Are there any numbers somewhere?
 
Bjorn said:
There are new techniques coming, though. I recently read a science magazine which covered new techniques: optical chips, nanotube chips, but also new techniques for making conventional chips. Stephen Chou at Princeton University has apparently invented a new way of making "traditional" chips which makes it possible to get down to 2 nm. The technique involves making a mould from which you make the rest of the chips. Making that mould is rather expensive and probably takes quite some time, though. So I'm guessing that if you're a GPU manufacturer and want to use this technique, you'd want to get your chip right the first time :)
According to his website, the minimum was 6nm:
http://www.ee.princeton.edu/people/Chou.php

It is conceivable, of course, that this webpage hasn't been updated recently. Anyway, one thing you really have to realize when it comes to these very small structures is that they do not behave like the transistors we have today. They have the same basic structure as a 90nm or 65nm transistor, but won't have the same properties, because quantum effects come to dominate.
 
Kombatant said:
Damn Digi, sometimes I swear you answer with the same words I'd use :LOL:
Well, no disrespect meant to the original poster, but when I see questions/statements like this it sort of cracks me up.

The whole, "we've almost taken the technology as far as it will go!", argument just doesn't fly with me ever. It just seems so "no one will ever need anymore than 640k"-ish, y'know?

I guess I've just seen too much development since my old VIC-20, and seen it argued too many times that we've taken technology as far as we can, but I just have this gut feeling that we're going to continue to push the envelope for quite some time to come before we have to start worrying about not being able to anymore.

Give it another 20-30 years and mebbe I'll feel differently, but I kind of doubt it. By then I'll be whining for a silent teleporter... ;)
 