Gigahertz GPU?

Hyp-X said:
Looks like software renderers are the new trend. ;)

http://www.radgametools.com/pixomain.htm

All I have to say is:
Sad. Why does anybody care about this sort of thing? I mean, I think it might be fun to create something like this from a programming perspective, but why would anybody want to use it? Software renderers are not the way of the future.

Just looking at the texture filtering capabilities of the TNT, a scalar processor would need to execute about 5.6 billion operations per second to keep up (assuming each color channel must be dealt with separately...). While MMX, SSE, and 3DNow! may help to offset this, it still takes up a whole heck of a lot of CPU cycles, cycles that would be better used making more realistic physics, sound, AI, etc.
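The 5.6 billion figure isn't broken down in the post. One back-of-envelope accounting that lands in the same ballpark (the fill rate and per-operation counts below are assumptions for illustration, not numbers from the post):

```python
# Rough accounting for a "billions of ops/s" software-filtering estimate.
# All numbers here are illustrative assumptions; the post gives no breakdown.
fill_rate = 250e6       # pixels/s, roughly TNT-class
channels = 3            # R, G, B handled separately on a scalar CPU
lerps = 3               # bilinear = 2 horizontal lerps + 1 vertical lerp
ops_per_lerp = 3        # a + t * (b - a): subtract, multiply, add
ops_per_pixel = channels * lerps * ops_per_lerp
total = fill_rate * ops_per_pixel
print(ops_per_pixel, total)   # 27 ops/pixel, 6.75e9 ops/s
```

Texture addressing, perspective correction, and blending would add more on top of the filtering arithmetic, so the order of magnitude holds either way.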
 
Chalnoth said:
All I have to say is:
Sad. Why does anybody care about this sort of thing? I mean, I think it might be fun to create something like this from a programming perspective, but why would anybody want to use it? Software renderers are not the way of the future.

Just looking at the texture filtering capabilities of the TNT, a scalar processor would need to execute about 5.6 billion operations per second to keep up (assuming each color channel must be dealt with separately...). While MMX, SSE, and 3DNow! may help to offset this, it still takes up a whole heck of a lot of CPU cycles, cycles that would be better used making more realistic physics, sound, AI, etc.
You're looking at it from the wrong perspective! First of all, not everybody has the same super graphics card as you have. Secondly, not everyone is interested in 120 FPS and 16x AA. Do you think my little sister cares about that when she plays Tomb Raider II in software mode? She plays the game in motion; she's not taking stills and looking for mipmapping artifacts. Third, not everybody is focused on real-time graphics. Everything that does more than just pushing polygons and using hacks to increase realism uses a software renderer. And last but not least, I can do research into completely new, non-hardware-related graphics techniques. I am not limited by the hardware, and certainly not by my wallet. Do you think shadow mapping was first done with hardware rendering? With a software renderer you're always ahead of hardware features.

With a software renderer you also don't have the problem of having to check whether a feature is available. Everything can be emulated. Found a bug? No Problem, it can be fixed without waiting for the next driver. So from a game programmer's point of view it might be more interesting to develop on a software renderer because what you see is what you get, everywhere.

Anyone got hardware with ps 3.0 support already? With a renderer like mine, it would be no problem to emulate them. And because of the optimisations/tricks/hacks I use it will run at a much higher framerate than the reference rasterizer. And that's what I'm doing this for.
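Emulating a shader on the CPU comes down to executing a per-pixel instruction stream. A toy sketch of the idea (the instruction set, register names, and program are invented here for illustration; a real renderer would compile and optimise rather than interpret each instruction):

```python
def run_shader(program, r0):
    """Interpret a list of (op, dst, src_a, src_b) per-pixel instructions."""
    regs = {"r0": r0, "r1": 0.0}
    dst = "r0"
    for op, dst, a, b in program:
        if op == "mul":
            regs[dst] = regs[a] * regs[b]
        elif op == "add":
            regs[dst] = regs[a] + regs[b]
        elif op == "sat":           # clamp to [0, 1], like a _sat modifier
            regs[dst] = min(1.0, max(0.0, regs[a]))
    return regs[dst]

# Doubling a channel value and saturating, as a two-instruction program:
brighten = [("add", "r0", "r0", "r0"), ("sat", "r0", "r0", None)]
print(run_shader(brighten, 0.7))   # 1.0
```

Because every op is just CPU code, any missing hardware feature can be added the same way, which is the flexibility being argued for here.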

No offence, but please stop your "fillrate is everything" way of thinking. There's more to graphics programming than getting the highest 3D mark so you can show off to your friends.
 
Joe DeFuria said:
http://www.radgametools.com/pixowhy.htm

Sounds to me like a pretty legitimate reason for its existence. If your game / app doesn't need the features and performance of DX7-and-greater hardware, you can cut down development and testing time significantly. Support costs would come down as well.

If you don't want the development time, just license an engine. That's why these engines exist in the first place.

If you're not going to be making money off of the engine, then it's not going to cost anything to use it.
 
Nick said:
You're looking at it from the wrong perspective! First of all, not everybody has the same super graphics card as you have.

I think we've managed to show pretty well that any software rendering engine is going to have trouble matching even an ancient, low-end graphics card. Most computers today ship with graphics cards in the range of the original TNT.

And with a software renderer, you are not always ahead of hardware features, not anymore. This is where the programmability of modern graphics cards comes in. Soon DX9-level graphics chips will be moving to the low-end, ready to be integrated in mainstream PC's. While the programmability of these graphics processors isn't quite up to the level of CPU's, it's getting close, and is close enough for most rendering situations.

No offence, but please stop your "fillrate is everything" way of thinking. There's more to graphics programming than getting the highest 3D mark so you can show off to your friends.

It's not about "fillrate is everything." It's about utilizing multiple processors for different types of processing. Attempting to use the CPU for all processing is bad for games. For other graphics applications, it might not be so bad (such as high-quality non-realtime rendering), but more programmable graphics processors will soon make even these obsolete.

GPU's are just so much faster than CPU's, and will soon be so flexible that there won't be much, if any, reason to do any graphics processing on the CPU (The only reason I can think of why you'd want to do graphics processing on a CPU, once DX9-level chips become mainstream, is for very high-precision geometry calculations).
 
If you don't want the development time, just license an engine. That's why these engines exist in the first place.

Huh?

How does licensing an engine have anything to do with having to code, test, and support multiple hardware platforms? They are both DIFFERENT ways of trying to reduce cost.

If you're not going to be making money off of the engine, then it's not going to cost anything to use it.

Terribly unclear. Make money off the engine, or game? Licensee or Licensor making money? I have no idea what you're trying to say here.
 
Joe DeFuria said:
Huh?

How does licensing an engine have anything to do with having to code, test, and support multiple hardware platforms? They are both DIFFERENT ways of trying to reduce cost.

Um, because in licensing an engine you don't need to touch (unless you want to) any hardware-specific details, but still get the advantage of offloading graphics processing to the video card, increasing available computation power for a given target machine.

Terribly unclear. Make money off the engine, or game? Licensee or Licensor making money? I have no idea what you're trying to say here.

Making money off of selling a product made from an engine. I'm just saying that you can use pretty much any of the "licensable" engines for free if you don't charge for your product. Another way of stating this is that modifying an existing game (Unreal-series, Quake-series, etc.) would be better than going all software to make things easier, whether you want to produce a product to make money off of or not.

Anyway, the way I see it, a software renderer had better be a "basement bargain bin" type of programming interface. There are much better ways to reduce programming and support overhead than killing processor performance through software rendering.
 
Chalnoth said:
I think we've managed to show pretty well that any software rendering engine is going to have trouble matching even an ancient, low-end graphics card. Most computers today ship with graphics cards in the range of the original TNT.
I won't dispute that the fastest software renderer will always be slower than the fastest 3D graphics card at rendering polygons. Period. But what if you don't have a 3D graphics card or don't want to render triangles? Like you said, most computers today ship with a TNT-like card. But those are new computers. Most people only buy a new computer every 3-5 years. My uncle just upgraded his Pentium 200 to the cheapest Pentium 4 without a 3D card. That might sound terrible to guys like you, but he's perfectly happy with it. That's what a big part of the market looks like. Also think of all the cheaper laptops. So what I'm trying to do is give people like him some 3D graphics once in a while, with a rich set of features, at an interactive framerate.
Chalnoth said:
And with a software renderer, you are not always ahead of hardware features, not anymore. This is where the programmability of modern graphics cards comes in. Soon DX9-level graphics chips will be moving to the low-end, ready to be integrated in mainstream PC's. While the programmability of these graphics processors isn't quite up to the level of CPU's, it's getting close, and is close enough for most rendering situations.
By the time DX9 moves to the low-end market, the Prescott will be a low-end processor. So there will still be people who choose not to buy a 3D card, but they still expect things like online 3D plugins to work. Not everyone is interested in gaming, and many are happy with what their CPU can produce.
Chalnoth said:
It's not about "fillrate is everything." It's about utilizing multiple processors for different types of processing. Attempting to use the CPU for all processing is bad for games. For other graphics applications, it might not be so bad (such as high-quality non-realtime rendering), but more programmable graphics processors will soon make even these obsolete.
Were the Quake 2, Half-Life and Unreal software engines so slow or so ugly that they made those games no fun at all? No, they ran perfectly in software in those days, and they run even smoother nowadays. Gameplay hasn't changed since then. I bet that part of the reason these engines are still so popular is that they can run on any system. So if you're not looking for the best performance, with insane framerates, resolutions the eye can't distinguish, and things like anti-aliasing, is software rendering really such a bad choice when you have almost no other choice?
Chalnoth said:
GPU's are just so much faster than CPU's, and will soon be so flexible that there won't be much, if any, reason to do any graphics processing on the CPU (The only reason I can think of why you'd want to do graphics processing on a CPU, once DX9-level chips become mainstream, is for very high-precision geometry calculations).
Even when DX9 becomes mainstream amongst regular gamers, there will always be systems with hardly any 3D graphics hardware. But these systems will have a powerful CPU which satisfies the 3D graphics needs of the people with such a system. Besides, a 3D card can only render polygons. While pretty powerful, shaders don't allow you to break free from all the restrictions. There are some really interesting demos that do real-time ray tracing, radiosity or voxel rendering that don't benefit much from the current graphics hardware architecture.
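The ray-tracing point is concrete: each pixel needs data-dependent scene queries that a polygon rasteriser can't express, but which are a few lines on a CPU. A minimal sketch of the core primary-ray test (the function and scene representation here are illustrative):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance, or None if the ray misses.

    Assumes `direction` is normalised, so the quadratic's leading
    coefficient is 1.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2        # nearer of the two roots
    return t if t > 0 else None
```

On a CPU this test can recurse for reflections or branch per pixel however the algorithm demands, which is exactly what fixed-function rasterisation hardware of this era cannot do.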

Just tell me, how much does a Radeon 9700 make the game experience more exciting? And I mean gameplay, not the "my texture filtering looks better" kind of comparison when looking at screenshots. Why does it constantly have to be faster, bigger, higher? When fighting the bad guys in Unreal, I absolutely don't notice that it uses a dither filter and no anti-aliasing when running in software mode at 512x384. Only when I walk up to a wall and stand still do I notice it's not as smooth, and then I get shot in the back...
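The dither filter mentioned here is cheap to sketch: classic ordered (Bayer) dithering adds a small position-dependent threshold before quantising, trading colour banding for a fine noise pattern that is hard to notice in motion. (The bit depths below are illustrative, not Unreal's actual pipeline.)

```python
# Ordered (Bayer) dithering: a tiny threshold matrix tiled over the screen.
BAYER_2X2 = [[0, 2],
             [3, 1]]

def dither_to_4bit(value, x, y):
    """Quantise an 8-bit channel to 4 bits with a 2x2 positional offset."""
    threshold = BAYER_2X2[y % 2][x % 2]      # 0..3, depends on pixel position
    v = min(255, value + threshold * 4)      # pre-bias by a fraction of a step
    return v >> 4                            # keep the top 4 bits

# A flat input of 100 (6.25 in 4-bit units) dithers to a 6/7 pattern whose
# 2x2 average is exactly 6.25:
print([dither_to_4bit(100, x, y) for y in (0, 1) for x in (0, 1)])  # [6, 6, 7, 6]
```

Per pixel this is a table lookup, an add, and a shift, which is why software renderers of the time used it instead of more expensive filtering.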

Please, I really don't expect everyone to start using software rendering now. I just hope you can agree it is useful to have a software renderer with a rich feature set which is many times faster than the reference rasterizer. Thanks.
 
Chalnoth said:
Um, because in licensing an engine you don't need to touch (unless you want to) any hardware-specific details, but still get the advantage of offloading graphics processing to the video card, increasing available computation power for a given target machine.
You don't need to touch any hardware-specific details? OK, then tell me how an artist can use a DX9 shader without hardware support on Linux. With software rendering, you don't have to worry about this. So the only truly hardware-independent engine will be a software engine. Otherwise you'll always need some minimum system specs, which restrict you to a relatively small market.
Chalnoth said:
Making money off of selling a product made from an engine. I'm just saying that you can use pretty much any of the "licensable" engines for free if you don't charge for your product. Another way of stating this is that modifying an existing game (Unreal-series, Quake-series, etc.) would be better than going all software to make things easier, whether you want to produce a product to make money off of or not.
But then you don't have full freedom to create whatever you want. As I already said, I'm also highly interested in researching new methods to make things more efficient instead of using brute force. Wasn't it Carmack who said "Try new things"? Every new effect was first implemented in software.
Chalnoth said:
Anyway, the way I see it, a software renderer had better be a "basement bargain bin" type of programming interface. There are much better ways to reduce programming and support overhead than killing processor performance through software rendering.
Yeah, by forcing everyone to buy a workstation graphics card? :rolleyes:
 
Um, because in licensing an engine you don't need to touch (unless you want to) any hardware-specific details, but still get the advantage of offloading graphics processing to the video card, increasing available computation power for a given target machine.

Huh?

You're saying that the engines you license are already hardware-agnostic, and don't have different issues on different hardware? When was the last time you saw any game based on "some licensed engine" that had zero inconsistencies across hardware platforms?

When I make my game with a licensed engine, and someone calls me because their "set-up" doesn't work, do I get to just forward the call to the licensing company?

I don't see why on earth you are opposed to this idea.

Engine licensing has its place. There are valid reasons for EACH of the following approaches. (And each approach having its own set of cons as well.)

1) Build engine from scratch, using hardware based API
2) License Engine, and use unmodified
3) License Engine, and modify it
4) Build engine from scratch, using software based API

Again, I just fail to see why number 4 is "all con, and no pro" to you.

Another way of stating this is that modifying an existing game (Unreal-series, Quake-series, etc.) would be better than going all software to make things easier, whether you want to produce a product to make money off of or not.

Chalnoth, why can't you see that it all depends on what type of game you are trying to make? If I want to make money with a game like the Sims, why should I license Unreal or Quake?

Why do you fail to realize that at some point, the trade-off for "performance" vs. "platform stability" can tip toward "platform stability"? It all depends on the graphical requirements of your game.

Anyway, the way I see it, a software renderer had better be a "basement bargain bin" type of programming interface. There are much better ways to reduce programming and support overhead than killing processor performance through software rendering.

Hey, tell that to Derek Smart....I'm sure he's all ears to whatever obvious ways you are talking about.

Your basic problem is, you are running with the assumption that every game requires the performance of these hardware video cards.

BTW

Did you ever notice how many games, when shipping with full motion video clips, have their OWN software codec to play-back the video? (Bink, for example). Why is there even a market for such a thing? I mean, every computer has some ability to decode video through a standard interface...Why not just encode the video to some "industry standard, hardware decodable codec?"
 
Nick: here's an honest question -- how much longer do you think it will be before it's impossible to buy a PC without some sort of hardware acceleration built in?

I'm thinking integrated graphics here. Sure, up to now integrated graphics has been looked down upon by the hard-core crowd. But it's been fine for the low-end, and with NVIDIA, ATi and others putting competent DX8-level graphics cores into chipsets, accelerated 3D is starting to be entry-level.

Strikes me that by the time DX9 becomes low-end, it might well be impossible to buy a non-accelerated PC. There's always mobile phones I suppose! ;)
 
Joe DeFuria said:
Did you ever notice how many games, when shipping with full motion video clips, have their OWN software codec to play-back the video? (Bink, for example). Why is there even a market for such a thing? I mean, every computer has some ability to decode video through a standard interface...Why not just encode the video to some "industry standard, hardware decodable codec?"

Hmm.
Because MP/DirectShow sucks bigtime? :(
 
nutball said:
Nick: here's an honest question -- how much longer do you think it will be before it's impossible to buy a PC without some sort of hardware acceleration built in?
Well, I can't be absolutely sure of course, but I think that by the time every new PC has at least a TNT2-like graphics card, within a year or two, they will all have >3 GHz Hyper-Threading CPU's, which still makes software rendering an interesting alternative, especially for emulating things not available in hardware.
nutball said:
I'm thinking integrated graphics here. Sure, up to now integrated graphics has been looked down upon by the hard-core crowd. But it's been fine for the low-end, and with NVIDIA, ATi and others putting competent DX8-level graphics cores into chipsets, accelerated 3D is starting to be entry-level.
Even then, it will take another few years before everyone has bought such a new PC. That is, if people buy such a chipset, because chipsets without NVIDIA or ATI chips will still be cheaper and many people don't look at the specs when they buy a computer in the supermarket :oops:
nutball said:
Strikes me that by the time DX9 becomes low-end, it might well be impossible to buy a non-accelerated PC. There's always mobile phones I suppose! ;)
By that time, let's have this discussion again, shall we? ;) In the meantime, I hope you see that my effort (and also Abrash's) has more use than just being a nostalgic return to the times when producing 3D graphics was fun and not a highly commercialised race for performance...
 
Hmm.
Because MP/DirectShow sucks bigtime?

No. All you're doing is just playing back a video for cryin' out loud. When it works, how bad can the playback be?

It's because by doing it in software, you remove all those problems with "which codecs are installed...which players have problems with which codecs...which video card drivers have which problems with which players, using which codecs....."

You raise the possibility of the video not playing at all or not playing properly whenever you rely on elements not in your direct control...like external codecs, drivers, and media players....
 
Joe DeFuria said:
Hmm.
Because MP/DirectShow sucks bigtime?

No. All you're doing is just playing back a video for cryin' out loud. When it works, how bad can the playback be?

When it works...

Actually the DS feature of turning on hardware acceleration only after the first 5 seconds of the movie is very bad...

It's because by doing it in software, you remove all those problems with "which codecs are installed...which players have problems with which codecs...which video card drivers have which problems with which players, using which codecs....."

You raise the possibility of the video not playing at all or not playing properly whenever you rely on elements not in your direct control...like external codecs, drivers, and media players....

Yep.

People like to install codec packs.

The "good" thing about codec packs is that they collect incompatible codecs and let you install them all at once.

I was really surprised Blizzard went for DS with WC3, but from the feedback I saw on forums they had far more compatibility problems with the movie playback than with anything else.
 