What's the next big thing in realtime graphics?

Well, what I'm claiming is this: if you pick up an old game that was considered good when it was released, and likewise pick up a similar new game that was also considered good on release, you'll find that you prefer the new game. This is provided, of course, that you've played neither game before and that you like the style of game both belong to.

I believe this for the simple reason that when I go back and play old games, I notice many flaws that I didn't notice at that time. I wouldn't notice these flaws if they still existed in new games.
 
Chalnoth said:
I believe this for the simple reason that when I go back and play old games, I notice many flaws that I didn't notice at that time. I wouldn't notice these flaws if they still existed in new games.

Funny, I do the opposite: I play new games and think to myself, "XYZ did that SOOO much better 2-7 years ago! Why didn't they learn?"
 
Fred said:
When this question was asked a few years ago, SA gave a post that more or less hit the nail on the head.

He correctly pinpointed the rage over shadow implementations, the bandwidth saving mechanisms that would become standard fare, as well as the eventual unification of pixel and vertex shaders with standardized instructions.

Well same question.

Are we going to see merely incremental upgrades to a static featureset (like some developers claim), or is there still room for radically new and different algorithms, lighting solutions, and so on? What about *real* displacement maps... feasible or not?

Etc

Major increases in triangle rate (pushing upward toward 1 billion triangles per second) as well as in general floating point processing rate. To reduce the external memory bandwidth requirements of such high triangle rates, on-chip geometry instance caches, on-chip tessellation, and on-chip displacement mapping will be relied upon.

http://www.beyond3d.com/forum/viewtopic.php?p=13331&highlight=#13331

The timeframe was off in that post back then, yet many (not unjustifiably) believed that on-chip adaptive tessellation would have made a breakthrough with the advent of DX9.0. Prior to SM3.0 DM capabilities showing up in hardware, only Matrox had a real HW solution, and they didn't even fully support it in their drivers last time I checked.

You may excuse my attempt to bring the thread back on topic :oops:
 
Sage said:
Funny, I do the opposite: I play new games and think to myself, "XYZ did that SOOO much better 2-7 years ago! Why didn't they learn?"
Did what so much better? If you're going to pick apart a game and focus in on one flaw, you're always going to find one. People are fallible. But if you haven't been having fun with new games, it's not because there aren't many good new games.
 
Ailuros said:
The timeframe was off in that post back then, yet many (not unjustifiably) believed that on-chip adaptive tessellation would have made a breakthrough with the advent of DX9.0. Prior to SM3.0 DM capabilities showing up in hardware, only Matrox had a real HW solution, and they didn't even fully support it in their drivers last time I checked.
SM3 doesn't really offer any tessellation capabilities, though, so its usefulness may be considered fairly limited as far as displacement mapping is concerned.
 
Fred said:
What's the next big thing in realtime graphics?
Believable Human Motion.

Well, that is... I don't know how much game devs are working specifically to address it (and therefore whether or not it's the next big thing to come), but it's definitely, IMHO, the next big thing we need.
 
Ostsol said:
Saem said:
Fast dynamic world geometry. Remember the HL2 E3 video? Right at the beginning, when the terrain changed, that was the most exciting part of the entire video.
Well, games have been doing that for quite a while. The only issue is ensuring the lighting and shadowing are properly affected.

Well, doing it and doing it properly/well are two different things. And no, games haven't really been doing it. Take a game like Dungeon Siege: I'd like to be able to cast a huge fireball and have a crater created, trees blown away, and so on. This is getting into physics as well.
 
Chalnoth said:
Sage said:
Funny, I do the opposite: I play new games and think to myself, "XYZ did that SOOO much better 2-7 years ago! Why didn't they learn?"
Did what so much better? If you're going to pick apart a game and focus in on one flaw, you're always going to find one. People are fallible. But if you haven't been having fun with new games, it's not because there aren't many good new games.

Well, what specific flaws are you talking about noticing in older games? If you're going to focus on one flaw, you're almost always going to find one.

I still have fun with new games occasionally, just not nearly as often. It just seems like games have mostly gone to crap lately. Of course, I blame the consoles: PC games are becoming more and more like console games, and I HATE almost all console games. It's just a different type of person that plays console games.
 
Flaws in old games? Poor graphics are a given. Most of the older ones also tend to have highly repetitive or slow gameplay. You really have to take each game franchise separately, though.

I'll go for another example: the Elder Scrolls series. Daggerfall was a monstrous game, but all of the dungeons were random, which meant that you could get a mission where an object would be hidden in some obscure location in the dungeon, and it could potentially take many hours to locate said object (which could get extremely boring, particularly if you have to backtrack).

Morrowind fixed this primary issue by simply not using random dungeons anymore. This shrunk the world significantly, but overall it made for a much better game. The game still had its shortcomings, obviously, but it still managed to be lots of fun, and quite a bit less boring than Daggerfall.

Anyway, if you want to blame anything for an apparent lack of good games today, you should blame publishers who tend to rush games and tend to not support original game ideas.
 
Chalnoth said:
Flaws in old games? Poor graphics are a given. Most of the older ones also tend to have highly repetitive or slow gameplay. You really have to take each game franchise separately, though.

Since when are poor graphics a flaw? They're only a flaw if the graphics aren't good enough for what you need to play the game. Castles2 has 8-bit graphics, but I've never really been bothered by them because the gameplay is so good that it doesn't NEED better. Also, I find that newer titles seem to be more repetitive. It seems like every game I see is just rubber-stamped from a dozen or so different types of games. And I dare you to ever have a game (especially multiplayer) of Civ2 go the same way. And slow? I don't see the problem with that. I want a game to last a long time, not be over in a few hours. I want to be able to take my time and enjoy a game, not be rushed through basically the same thing over and over with each "new" game.
 
Sage said:
Also, I find that newer titles seem to be more repetitive. It seems like every game I see is just rubber-stamped from a dozen or so different types of games.
These are completely different issues. Repetitive means that you're doing the same thing in the game all the time. You're talking more about unoriginality. Unoriginality is only a problem if titles that you don't understand right away don't interest you (that is, you're dubious because you haven't played a game like that before). There are still original games today, but it takes some degree of courage to actually pay for them.

Anyway, I'm mostly talking about really old games here. Many of the original NES games, as I stated previously, had the problem that you found yourself continually performing the same actions. As I said, the limited amount of memory available for games at the time meant that they had to be very challenging (i.e. you had to try the same task over and over again until you got it) in order for them to require any reasonable amount of playtime. If you look at the sequels or modern reinterpretations of these games, you'll frequently find that one of the biggest changes is that the gameplay has been made less repetitive.

In the PC space, for instance, Doom is a great example of this. The original Doom was nothing but a dumb shooter. It was fun, but each level consisted of essentially the exact same thing, just with a different map and different monsters: kill monsters, find the keycards, find the exit (and find secrets, if you're so inclined). Fast forward to Doom 3 and they actually managed to get a fair amount of atmosphere and story into the game. It's still repetitive, but not nearly as much as the originals were.

And I dare you to ever have a game (especially multiplayer) of Civ2 go the same way.
Well, I've never played Civ2, but have played Civ3. I actually decided that Civ3 was way too much of a time sink for me to ever play it again. Lots of fun, but I don't have the time for that sort of thing any more.

And slow? I don't see the problem with that. I want a game to last a long time, not be over in a few hours.
Again, I think you missed the point. By "slow" I'm not talking about how long it takes to finish the game. I'm talking about the stretches of time where there's nothing to do. From what I remember of Civ3 (which is probably also true of Civ2), it tended to be a very fluid game that was always changing, and it therefore didn't feel slow at all to me. Now, the turn-based mode of play may have meant that there were long stretches of time waiting for other players, but this is often a problem with multiplayer games, and is another problem that's being solved (and, by the way, is one of the reasons why I left Everquest).

I want to be able to take my time and enjoy a game, not be rushed through basically the same thing over and over with each "new" game.
So it obviously sounds to me like you're into strategy games where you can set your own pace. Again, have you tried the Total War games? New games, and you'd probably like them....
 
Over the next several years I expect the following in real-time 3d graphics:

Movement from static to dynamic logic. This will be accompanied by a corresponding jump in frequency, into the multi-GHz range.

A certain amount of general purpose on-chip cache memory.

More CPU-like programmability and memory addressability.

I see these changes being driven by the problems that still remain to be solved in realtime 3d. In particular, real-time 3d graphics is about much more than just the graphics (i.e. physical simulation of the optical properties). It's about 3d physical simulation in general. This includes collision detection, rigid body physics, fluid dynamics, soft body dynamics, statics, character behavior, etc.

All of this simulation is significantly computation bound, and current real-time 3d has barely scratched the surface.

While I believe there is a significant market for meeting this need with a special class of CPUs with an advanced implementation of SSE, so far CPU vendors have not addressed this market specifically. In particular, I think that defining an SSE instruction set that allowed for any vector length (rather than a fixed length of 4 floats/2 doubles) with binary object code compatibility, and then using the extra area that's becoming available at smaller geometries to increase the floating point vector length, would provide a very significant increase in floating point performance.
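
To make the binary compatibility idea a bit more concrete, here is a minimal sketch (mine, not SA's; the vec_width() query is hypothetical, nothing like it exists in real SSE) of a kernel written against a vector width that is only discovered at run time. The same binary would strip-mine by 4 floats on today's parts and by 64 floats on a future wide-vector part, with the compiler mapping each strip onto the native FP vector unit.

Code:
// Minimal C++ sketch: a SAXPY-style loop strip-mined by a runtime-queried
// vector width. vec_width() is a hypothetical CPUID-style query, stubbed here.
#include <cstddef>

std::size_t vec_width() { return 4; }         // stand-in: 4 floats on current SSE parts

void saxpy(float a, const float* x, float* y, std::size_t n)
{
    const std::size_t w = vec_width();        // read the hardware vector length once
    std::size_t i = 0;
    for (; i + w <= n; i += w)                // process full strips of width w
        for (std::size_t j = 0; j < w; ++j)   // a vectorizing compiler maps this strip
            y[i + j] += a * x[i + j];         // onto the native vector unit
    for (; i < n; ++i)                        // scalar tail for the leftover elements
        y[i] += a * x[i];
}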

SSE capabilities currently occupy only a small percentage of the CPU area (probably less than 10%). Most of the area is taken by cache. Rather than putting two complete cores on a chip with their attendant caches, if the extra area were instead dedicated entirely to floating point, you would increase the floating point capabilities by a factor of more than 10, assuming no change in clock frequency.
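
As a quick back-of-the-envelope check on that factor (my arithmetic, not part of the original post), take the figures above: vector FP at roughly 10% of today's die, and all of a doubled die's extra area spent on vector FP:

\[
\frac{0.10 + 1.00}{0.10} = 11\times \text{ the vector FP units, at the same clock.}
\]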

While it is true that standard business apps won't see much benefit from improved floating point performance, standard business apps are not CPU bound anyway, so they won't benefit much even from multiple cores.

When you look at what would really benefit from improved performance in CPUs, it is primarily physical simulation: whether for scientific engineering computation or real-time 3d entertainment. Therefore, that's where I think the bulk of the transistors should be spent.

Since the improved SSE instruction set would allow CPUs with different vector lengths to run the same code, the CPU vendors could sell two classes of CPUs: a business/server CPU with a small-vector SSE and perhaps multiple cores, and a home/workstation CPU with a very large-vector SSE (say 64 floats/32 doubles) and a single core. This would provide 10 to 15 times the floating point capability for twice the area, rather than the 1.5 times or so you get by adding another entire core.

Seeing as this market has not been tapped by the CPU vendors, I believe the GPU vendors will fill the gap instead. To do so, the GPU will need to become generally programmable and scalable using standard language (say C++) compilers, and will need to offer both 32-bit and 64-bit IEEE floats, just as SSE2 does. It also means that GPUs will need to become more CPU-like in other ways, including the use of dynamic logic and general purpose on-chip cache (while using highly parallel floating point, GPUs are still a factor of 5 slower than CPUs in frequency).
 
These things, to me, seem more applicable in the 5-10 year span, SA.

This is because I don't think general processing will be useful to IHVs' primary market, gaming, for some time to come. That is to say, I don't feel that game developers will opt to use GPUs for processing other than just graphics processing unless/until that processing is small compared to the graphics processing that still needs to be done. Since we're still a little ways away from good, long pixel shaders, I think it'll be some time before graphics cards (across a broad range of markets: high-end to budget) really start having extra processing power available to them.

In the meantime, I'm sure that there will be many more people becoming interested in using graphics cards to accelerate computation. Matrix diagonalization, for example, is one algorithm that is used very frequently in various areas. Graphics cards may be capable of performing this algorithm faster and for less money than CPUs, and thus may become very attractive. But I somehow doubt that many IHVs will pay serious attention to this market for some time to come.
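
For a sense of what such a kernel looks like, here is a minimal CPU-side sketch (my illustration, not something from this thread) of one cyclic Jacobi sweep for diagonalizing a symmetric matrix. It is nothing but dense floating point arithmetic, and each plane rotation touches only two rows and two columns, which is what makes the method a plausible candidate for highly parallel hardware.

Code:
// Illustrative C++ sketch of one cyclic Jacobi sweep: each off-diagonal
// element (p,q) is zeroed by a plane rotation; repeated sweeps drive a
// symmetric matrix toward diagonal form, leaving the eigenvalues on the
// diagonal. No pivot ordering, convergence test, or GPU mapping here.
#include <cmath>
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

void jacobi_sweep(Matrix& a)
{
    const std::size_t n = a.size();
    for (std::size_t p = 0; p < n; ++p)
        for (std::size_t q = p + 1; q < n; ++q)
        {
            if (std::abs(a[p][q]) < 1e-12) continue;
            // Rotation angle that annihilates a[p][q]: tan(2t) = 2*a_pq / (a_qq - a_pp).
            const double t = 0.5 * std::atan2(2.0 * a[p][q], a[q][q] - a[p][p]);
            const double c = std::cos(t), s = std::sin(t);
            for (std::size_t k = 0; k < n; ++k)       // rotate columns p and q
            {
                const double akp = a[k][p], akq = a[k][q];
                a[k][p] = c * akp - s * akq;
                a[k][q] = s * akp + c * akq;
            }
            for (std::size_t k = 0; k < n; ++k)       // rotate rows p and q
            {
                const double apk = a[p][k], aqk = a[q][k];
                a[p][k] = c * apk - s * aqk;
                a[q][k] = s * apk + c * aqk;
            }
        }
}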
 
I think that, after the unification of the shaders, vertex processing will disappear. You input objects made out of bones and polygons with splines, which are morphed according to physics rules, tessellated directly into uniformly sized triangles smaller than one pixel each, and processed by quite general pixel shaders from a pool, not a quad.

Not really my idea; I read a paper about most of that. Add some on-board EDRAM, and that's about what I expect 5+ years from now.

I also expect the GPU to do the scene management and lighting by then, so it is really more like a console on a card, with the PC mostly doing the I/O, user input, and synchronization/coordination, while just uploading all graphics-related components and rulesets for the current game location to the GPU.
 
I don't think you could really call micropolygons an implementation where "vertex processing disappears." Rather, it sort of removes some of the distinction between vertex and pixel processing.
 
Actually, Sony and IBM have been moving in this direction for some time already with the Cell processor. It will be interesting to see how this impacts the PC/workstation space over the next few years.

If it has a big impact, the move to much higher general purpose floating point performance in the PC space may happen sooner.

Regardless of how long it takes though and with which companies and platforms, I would mark it as the next major step in realtime 3d, for without significantly higher general purpose floating point performance, you are just not going to be able to perform the kinds of complex computations necessary for a much more complete physical simulation.

I think the recent movement in the PC space to floating point shaders is a transitional one, necessitated by the gradual change from specific hardwired graphics logic. Shaders, being stream oriented, are too specific and difficult to program for general purpose computations in the long run. Imagine trying to write NASTRAN using shaders. What you need for highly complex physical computations is a general purpose load-store model with arbitrary random access, just like in the CPU with SSE. I do see many of the capabilities and lessons learned from shaders being carried forward to any future architectures.

I think the best approach in the long run to high performance floating point is what I mentioned above: either a significant improvement to SSE with large vector sizes that can arbitrarily grow without a change in the binaries, or a Cell-style floating point vector coprocessor along the lines of the GPU (which, of course, should also allow arbitrary vector sizes).

In either case, the compiler should detect the floating point capabilities and hide the hardware specifics. The compiler should also automatically optimize the floating point code for vector hardware without the developer being involved (much like Intel's current compiler). The same code would then work on current CPUs/GPUs but would take advantage of any new CPU/GPU floating point vector capabilities. Admittedly, compiler optimization for vector processing is in its early stages, but it is one of those hurdles that must be overcome to move to the next phase.

Increasing frequency has had a good run for the last few decades. This approach to increasing performance, however, is quickly running out. New hardware architectures will need to focus on parallelism and transistor efficiency, while keeping the job of programming for them roughly the same.
 
SA said:
In either case, the compiler should detect the floating point capabilities and hide the hardware specifics. The compiler should also automatically optimize the floating point code for vector hardware without the developer being involved (much like Intel's current compiler). The same code would then work on current CPUs/GPUs but would take advantage of any new CPU/GPU floating point vector capabilities. Admittedly, compiler optimization for vector processing is in its early stages, but it is one of those hurdles that must be overcome to move to the next phase.

Increasing frequency has had a good run for the last few decades. This approach to increasing performance, however, is quickly running out. New hardware architectures will need to focus on parallelism and transistor efficiency, while keeping the job of programming for them roughly the same.

How are vector processing compilers in their early stages? They've been used for a long time in supercomputing.

New hardware architectures will focus on parallelism, but expecting to keep programming the same is a false dream. A departure from C and C++ will have to happen. A general purpose processor designed for parallelism will require a different programming model. The memory system will be organized differently.
 
SA said:
Actually, Sony and IBM have been moving in this direction for some time already with the Cell processor. It will be interesting to see how this impacts the PC/workstation space over the next few years.
Right, but that's more a move on the CPU side of things. I claim it's even harder to get it done on the GPU side, for the reasons I outlined above.

Shaders, being stream oriented, are too specific and difficult to program for general purpose computations in the long run.
Well, sure, but they're not designed for general purpose computation. But, for example, for physics simulations, your computing time is frequently dominated by a single algorithm. If you can get that one algorithm to work more quickly on a graphics card, then you only need to program that one piece on the graphics card (and, typically, scientists use libraries for these parts of their programs anyway, so it's not a big deal if it's hard to program a particular commonly-used algorithm on the graphics card: it only has to be done once).
 
Chalnoth said:
I don't think you could really call micropolygons an implementation where "vertex processing disappears." Rather, it sort of removes some of the distinction between vertex and pixel processing.

Well, there wouldn't be a distinct stage anymore that handles vertices. You only handle curved objects and replace the rasterizer with a tessellator. The pixel processing will stay about the same, although (if we assume dynamic branching is going to be used often) not with quads, and with more general-purpose floating point/vector processing.

And a lot of the current hot things (hacks), like normal maps, bump mapping, and displacement mapping, can go as well, although you might want to use something like that at the tessellation stage. And anti-aliasing and such would go as well at the same time: just add the values (according to their depth and luminosity) to the value in the display buffer.

So it would actually make things much simpler, which is IMHO the best way to tell whether it would be better. Improving things is not adding features; it is removing as much as possible.
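
To make the "tessellate until triangles are smaller than a pixel" idea concrete, here is a minimal sketch (my own, under obvious simplifications: the control points are already projected to screen space and no curved-surface evaluation is done) of the kind of recursive split a tessellator standing in for the rasterizer might perform, handing each sub-pixel triangle to a general shading routine.

Code:
// C++ sketch: split a screen-space triangle at its edge midpoints until every
// edge is shorter than one pixel, then pass the micro-triangle to 'shade',
// which stands in for the general-purpose pixel/sample processor.
#include <algorithm>
#include <cmath>
#include <functional>

struct Vec2 { float x, y; };                  // screen-space position, in pixels

float edge_len(const Vec2& a, const Vec2& b) { return std::hypot(b.x - a.x, b.y - a.y); }
Vec2  midpoint(const Vec2& a, const Vec2& b) { return { 0.5f * (a.x + b.x), 0.5f * (a.y + b.y) }; }

void tessellate(const Vec2& a, const Vec2& b, const Vec2& c,
                const std::function<void(Vec2, Vec2, Vec2)>& shade)
{
    const float longest = std::max({ edge_len(a, b), edge_len(b, c), edge_len(c, a) });
    if (longest < 1.0f) { shade(a, b, c); return; }   // sub-pixel: shade and emit

    const Vec2 ab = midpoint(a, b), bc = midpoint(b, c), ca = midpoint(c, a);
    tessellate(a, ab, ca, shade);                     // 1-to-4 midpoint split
    tessellate(ab, b, bc, shade);
    tessellate(ca, bc, c, shade);
    tessellate(ab, bc, ca, shade);
}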
 