Tim Sweeney says photorealism may be achieved at 40 teraflops

Is a 40TF single-GPU card achievable, or even economically feasible, before Moore's Law comes to an end? Maybe multiple GPUs will be needed to hit such a target.
 
Transistor scaling as we know it looks set to hit a brick wall long before we reach 40TF, but other technologies like vertical stacking, new materials, and whatnot are looking to keep Moore busy for a while longer yet. There are a lot of obstacles in the way of making that a reality, but necessity is the mother of invention, as the saying goes.

Electric cars weren't feasible either until a certain company came along and made it so, and now suddenly every major manufacturer is designing their own. Imagine that, huh! ;)
 
I wonder though: if vertical stacking is so easy, then why is Moore's Law supposed to end around 2020? Even Intel are saying this.
 
I wonder though: if vertical stacking is so easy, then why is Moore's Law supposed to end around 2020? Even Intel are saying this.
Well, you can't push the feature size down much further, so once you put in as many stacked layers as can stay cool without burning out, you're basically done with scaling?

But I thought graphene was next.
 
What would 40 Tflops be?

10,000 shaders at 2GHz? I don't think that's far-fetched for a Titan X-tier GPU on 5nm (which is what TSMC is planning as of now). It's about 4X the current first-generation 16FF Titan, with 10, 7, and 5nm nodes coming up.
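For what it's worth, here's the back-of-the-envelope behind that figure, assuming the usual convention of counting a fused multiply-add as 2 FLOPs per shader per clock (a rough sketch, not a claim about any particular architecture):

```python
# Peak FP32 throughput = shaders * clock * FLOPs per clock.
# Assumes 2 FLOPs per shader per clock (one fused multiply-add),
# the convention behind most marketing "teraflops" numbers.
shaders = 10_000
clock_hz = 2.0e9          # 2 GHz
flops_per_clock = 2       # FMA counts as two floating-point operations

peak_tflops = shaders * clock_hz * flops_per_clock / 1e12
print(f"{peak_tflops:.0f} TFLOPS")  # -> 40 TFLOPS
```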

http://semiengineering.com/10nm-versus-7nm/

What is 7nm?
Like 10nm, 7nm has some pluses and minuses. Compared to 16nm/14nm, 7nm provides a 35% speed improvement, 65% less power, and a 3.3X density improvement, according to Gartner.

5nm would be a little beyond that.

Now, that may not happen and the node shrinks may come up short. It certainly won't happen for mainstream chips/consoles, but for a high-end GPU it's possible.
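As a rough sanity check of the "about 4X a 16FF Titan" claim: a minimal sketch that just compounds the Gartner figures quoted above (3.3X density, 35% speed for 16nm to 7nm), assuming a ~11 TFLOPS FP32 baseline for the 16FF Titan X and a similar die size at the new node. The baseline and the same-die/power-budget assumption are mine, and power limits would eat into it in practice:

```python
# Naive projection: scale the assumed 16FF Titan X baseline by the quoted
# 16nm -> 7nm density and speed gains, keeping die area constant.
# Ignores power, yield, and memory bandwidth, so treat it as an upper bound.
baseline_tflops = 11.0   # assumed FP32 peak of a 16FF Titan X
density_gain = 3.3       # 3.3x more transistors (shaders) in the same area
speed_gain = 1.35        # 35% higher clocks

projected = baseline_tflops * density_gain * speed_gain
print(f"~{projected:.0f} TFLOPS at 7nm")  # ~49 TFLOPS before power limits
```

Even with generous haircuts for power and bandwidth, 40 TF in a 7nm/5nm flagship doesn't look outlandish on paper.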
 
At some point the industry will, for the most part, max out how much performance it can get out of 3D silicon transistors and architectural improvements. If sales really decline due to this stagnation, R&D shrinks, and there are no longer the resources to transition to germanium or photonic transistors, we could be in for some bad news.

We could see a long-term halt to humanity's advancement in computing performance, unless government steps in.
 
If that happens then hardware producers and devs will be forced to look into alternative rendering schemes and algorithms for improvements.
 
Research always looks for the biggest immediate payoff and doubles down on that, and rightly so. We've seen decades of lightning-fast technological improvement in both hardware and software done that way. If silicon R&D eventually hits a generalized brick wall, I'm sure there is a lot of untapped potential for improvement in all sorts of micro-optimizations that have been ignored along the way. Even though they won't sustain a Moore-like rate of improvement, they'd still buy a few decades of steady performance gains, even if at a comparatively slower pace.
 
IMO software performance is nowhere near what it could be, even in games; blame programming languages, programmers' expertise... But we are not there yet.
But we are digressing ^^
Yeah, I should have ended my post with "and not to mention software..." because I agree 100% with you on that.
 
IMO software performance is nowhere near what it could be, even in games; blame programming languages, programmers' expertise... But we are not there yet.
But we are digressing ^^
Yeah, software performance somehow seems to decrease over the years; at least you get that feeling. The problem is that most software is much more complex than it was years ago, and there is a lot more running in the background.
Just think of a simple messenger from the late 90s or early 2000s. It had almost the same feature set as today. Well, today's messengers have more features, but those aren't what slows down a simple message you send. Nowadays there is encryption at every point of the communication, and that really costs performance.
Also, some bottlenecks haven't changed that much. While bandwidth has increased drastically over the last years, the time your packet needs to reach its destination hasn't decreased much at all (see the rough sketch at the end of this post).
Also, SSDs are booming now; for years, HDDs didn't get much faster.
"New" programming languages (like C#/Java... well, JavaScript) aren't there to be performance beasts; they are there to make things easier and quicker to develop, and, at least in the case of C#/Java, to improve stability/security.

Long story short, today's software just does a lot more than yesterday's. It gets more complicated and sometimes even slower, but it does many more things.
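To put a number on the latency point above, here's a minimal sketch of transfer time = latency + size / bandwidth for a small chat message. The message size, link speeds, and 50 ms delay are made-up illustrative values, not measurements:

```python
# For small payloads, latency dominates: a 100x bandwidth increase
# barely changes how fast a chat message "feels".
message_bytes = 1_000               # ~1 KB chat message (assumed)
latency_s = 0.050                   # 50 ms network delay (assumed)

for bandwidth_bps in (1e6, 100e6):  # 1 Mbit/s vs 100 Mbit/s
    transfer_s = latency_s + (message_bytes * 8) / bandwidth_bps
    print(f"{bandwidth_bps / 1e6:>5.0f} Mbit/s -> {transfer_s * 1000:.1f} ms")
# ->   1 Mbit/s -> 58.0 ms
# -> 100 Mbit/s -> 50.1 ms
```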
 
I prefer photo imperialism.

 
Yeah, I think it's a lowball figure. If every game nowadays with static scenes and lighting were photorealistic, he might have a point. But dynamic GI and the like isn't a solved problem in the sense that we know we just need to amp up the flops to hit a target. 40 TF will get you a photorealistic driving game or a novelty-visuals game (Pikmin), but Final Fantasy or Uncharted looking like a live-action movie is going to be well beyond that, I think.

Yes, but research is not static. In some sectors, algorithmic advances have provided speed-ups greater than Moore's Law itself over the years. Assuming we do not yet have the optimal algorithms, advances in algorithmic design might provide 10x+ speed-ups in the coming years.
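As a toy illustration of that point (the workload and problem size are hypothetical, chosen only to show the scale of the effect): a decade of Moore's-Law doublings buys roughly 32x, while swapping an O(n^2) method for an O(n log n) one at a million elements buys far more.

```python
import math

# Hardware: doubling every ~2 years for a decade.
years = 10
hardware_speedup = 2 ** (years / 2)              # ~32x

# Algorithm: O(n^2) replaced by O(n log n) at a hypothetical n = 1,000,000.
n = 1_000_000
algorithmic_speedup = n**2 / (n * math.log2(n))  # ~50,000x

print(f"hardware over {years} years: ~{hardware_speedup:.0f}x")
print(f"algorithmic swap at n={n:,}: ~{algorithmic_speedup:,.0f}x")
```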

Already things look spectacular in real time.
This is with about 2 TFLOPS on a GTX 760.

Aside from the slight stylistic art choices, that looks like a CG movie from a while back. Very realistic. Will 20x the teraflops, in addition to software advances, not be enough? Maybe.

Well, you can't push the feature size down much further, so once you put in as many stacked layers as can stay cool without burning out, you're basically done with scaling?

But I thought graphene was next.

It depends; there are probably substances, designs, and materials that would allow you to scale arbitrarily. The real limits are energy consumption, which might hinge on the practicality of reversible computing, and manufacturing costs (as well as, eventually, space: once something is, say, the size of a refrigerator, not many would want anything much bigger).

At some point the industry will, for the most part, max out how much performance it can get out of 3D silicon transistors and architectural improvements. If sales really decline due to this stagnation, R&D shrinks, and there are no longer the resources to transition to germanium or photonic transistors, we could be in for some bad news.

We could see a long-term halt to humanity's advancement in computing performance, unless government steps in.

I think biological computing holds great promise in the coming decades. It grows, it self-repairs, it replicates, and if designed appropriately it is ageless (lasting for tens of thousands of years). I've been thinking that perhaps some kind of biosynthesized molecular electronics might be viable long term, but in the short term it may be possible to genetically modify neurons to be less finicky about oxygen and able to suspend metabolic activity. You might think you'd need to feed such a device, but it could hypothetically use electricity to recycle wastes and recapture lost carbon from an enclosed gas chamber; being fully enclosed, water and gas losses should be minimal.

For certain applications like artificial intelligence, that should provide excellent performance at low maintenance and energy costs.
 
There was an old PDF from Nvidia about 10-12 years ago which stated how many flops they thought they needed to achieve photorealism. I have it on an old HDD (one day I will transfer all my old HDDs onto a new SSD, I must have about 10 lying around), but perhaps someone has a link to it. It might be worth a laugh to see how wrong/right they were.
That would be a nice read for sure. Maybe the Titan X would have done it by then, but now that it is out, we're nowhere close to photorealism. It would be great to have that, but games are art and art is not always photorealistic; hopefully, photorealistic or not, games will always be games.
 
That would be a nice read for sure. Maybe the Titan X would have done it by then, but now that it is out, we're nowhere close to photorealism. It would be great to have that, but games are art and art is not always photorealistic; hopefully, photorealistic or not, games will always be games.
You can probably find the PDF on the Nvidia developer website; IIRC it was something like 50,000x more performance that was needed. If not, I'll be ordering a new hard disk next week and copying over my old HDDs.
 
You can probably find the PDF on the Nvidia developer website; IIRC it was something like 50,000x more performance that was needed. If not, I'll be ordering a new hard disk next week and copying over my old HDDs.
What were the numbers they based their theory on? If the basis for those numbers is consistent with the 40 teraflops figure, they could be onto something, but 50,000 times the performance of a GPU from 2005-2006 would be off the charts even compared to the 40 teraflops Sweeney talks about, especially since GPUs back then were in the gigaflops range, which was considered pretty good in those days.
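A quick back-of-the-envelope on that, assuming a high-end 2005-2006 GPU delivered very roughly 200 GFLOPS of programmable shader throughput (that baseline is my guess; whatever figure Nvidia actually used isn't known here):

```python
# Compare the rumored "50,000x more performance" target against Sweeney's 40 TF,
# assuming a ~200 GFLOPS high-end GPU circa 2005-2006 as the starting point.
baseline_tflops = 0.2                            # assumed 2005-2006 high-end GPU
nvidia_style_target = baseline_tflops * 50_000   # -> 10,000 TFLOPS (10 PFLOPS)
sweeney_target = 40

print(f"50,000x the baseline: ~{nvidia_style_target:,.0f} TFLOPS")
print(f"40 TF is only ~{sweeney_target / baseline_tflops:.0f}x that baseline")
```

So if the 50,000x recollection is right, the two estimates differ by more than two orders of magnitude.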
 
I don't think photorealism will be achieved even at 100 TFLOPS, because developers would rather push higher resolutions in "stylized" games than try to make a game that actually passes for a live-action television show. To be blunt, the idea of most future console games targeting ultra-high resolutions (4K) and high frame rates to make them compatible with VR makes me sick. In my opinion, we should stay at 1080p (maximum) and 30FPS until we reach the point that games don't look substantially different from real life. Of course this won't happen, because it is much cheaper and easier to make highly stylized games.
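For a sense of scale on the resolution/frame-rate cost, here's simple pixel-throughput arithmetic (shading cost doesn't scale perfectly linearly with pixels, so treat the ratios as rough):

```python
# Pixels per second at different resolution / frame-rate targets.
targets = {
    "1080p @ 30 fps": 1920 * 1080 * 30,
    "4K @ 60 fps": 3840 * 2160 * 60,
    "4K @ 90 fps (VR-ish)": 3840 * 2160 * 90,
}
base = targets["1080p @ 30 fps"]
for name, pps in targets.items():
    print(f"{name}: {pps / 1e6:,.0f} Mpixels/s ({pps / base:.0f}x the 1080p/30 budget)")
```

Roughly 8x to 12x of any hardware gain goes straight into pixels before per-pixel image quality improves at all.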
 