Most prevalent data structure?

I'm curious about something...

Over the last 2 weeks or so, I've been digging back into my books and revisiting common data structures. All of them were coded in C++, but I wanted to revisit them to possibly implement in C#.

At any rate, I'm curious as to what (if any) are some of the most common data structures used in 3D rendering? (be it games or otherwise)
 
The most widely used physical data structure is an array! :)
In games, most other data structures physically end up in an array - mainly for cache efficiency and unfragmented memory.
Other commonly used data structures are: trees of all sorts (binary trees, heaps, BSP trees :), octrees :) etc.); hash tables; linked lists. One might add the "pool" to that list - though you could argue whether that's a data structure or just a way to allocate fixed-size chunks of memory.
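A "pool" on top of a plain array can be sketched like this (hypothetical code, assuming a simple free-list design - not from any particular engine):

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// A fixed-capacity pool on top of a plain array: the free list is
// threaded through the unused slots, so alloc/release are O(1) and
// all objects sit in one contiguous block of memory.
template <typename T, std::size_t N>
class Pool {
    union Slot {
        Slot *next;   // used while the slot is free
        T obj;        // used while the slot is allocated
        Slot() : next(nullptr) {}
        ~Slot() {}
    };
    Slot storage[N];
    Slot *free_head;

public:
    Pool() : free_head(&storage[0]) {
        for (std::size_t i = 0; i + 1 < N; ++i)
            storage[i].next = &storage[i + 1];
        storage[N - 1].next = nullptr;
    }
    T *alloc() {
        if (!free_head) return nullptr;   // pool exhausted
        Slot *s = free_head;
        free_head = s->next;
        return new (&s->obj) T();         // construct in place
    }
    void release(T *p) {
        p->~T();
        Slot *s = reinterpret_cast<Slot *>(p);
        s->next = free_head;
        free_head = s;
    }
};
```

Everything stays in one contiguous array, and allocation is just a couple of pointer updates.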
 
Did you say array? Is that something new? :)

Yeah, trees definitely make sense.

How common is it to see a game employing full-blown OOP these days? Is it more common than not, or is Quake-style C more prevalent?

That was another area I've been brushing up on: OOP in C++. It's been a while since I really had to think about why virtual destructors are used in C++ classes :)
 
OOP and performance are diametrically opposed. However, so are bugs and performance and cost of production. So I figure it's a balance, which is leaning more and more towards OOP.
 
Saem said:
OOP and performance are diametrically opposed.

sounds to me pretty much like 'strawberries and komodo dragons are diametrically opposed'. would you elaborate on your point?
 
Object creation is, IIRC, about the most expensive operation. Object creation isn't exclusive to OOP, but it is used more often.

BTW, I think the most used data structure is a stack (function calls and parameters passed) or an int -- it's a structure which holds data and has operations carried out upon it. I'll stop being silly now. :oops:
 
NeARAZ said:
The most widely used physical data structure is an array! :)
In games, most other data structures physically end up in an array - mainly for cache efficiency and unfragmented memory.
Other commonly used data structures are: trees of all sorts (binary trees, heaps, BSP trees :), octrees :) etc.); hash tables; linked lists. One might add the "pool" to that list - though you could argue whether that's a data structure or just a way to allocate fixed-size chunks of memory.

Yes, actually the most common structure we use is a dynamic array.
We have double-ended queues (an array variant).
We have an O(1) add/remove non-sorted array (we call it a heap, although that's not correct naming).
We have a linked list structure stored in this "heap".
We have a stack structure (stored in dynamic array).
We have a priority queue structure (stored in dynamic array).

So basically everything is an array or a dynamic array, even the lists and trees.
It keeps the allocation costs low.
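The O(1) add/remove non-sorted array can be sketched like a swap-and-pop (a simplified illustration, not the actual code):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// An unsorted dynamic array with O(1) insert and O(1) remove-by-index:
// removal swaps the last element into the hole instead of shifting
// every element down, so order is not preserved.
template <typename T>
struct UnsortedArray {
    std::vector<T> items;

    void add(const T &v) { items.push_back(v); }

    void remove(std::size_t i) {
        items[i] = items.back();   // overwrite the hole with the last element
        items.pop_back();
    }
};
```

Removal just swaps the last element into the hole, which is exactly why the array cannot stay sorted.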
 
Saem said:
Object creation is, IIRC, about the most expensive operation. Object creation isn't exclusive to OOP, but it is used more often.

It depends.
If it's a local object with no virtual functions that is created on the stack that's pretty fast.
A float to int conversion takes much more time in C/C++. :)
 
One could make a strong argument that if you're not using virtual functions, you're likely not doing OOP. Of course, using C++ class functionality for data abstraction is still fine and likely better than trying to do OO things in the language.

PS. I love Idol. *drool*
 
Hyp-X said:
Saem said:
Object creation is, IIRC, about the most expensive operation. Object creation isn't exclusive to OOP, but it is used more often.

It depends.
If it's a local object with no virtual functions that is created on the stack that's pretty fast.

now i'm at a complete loss here (appears these days i'm completely unable to follow the posts on this forum)!
how does the presence of a vtable burden an object creation on the stack?

A float to int conversion takes much more time in C/C++. :)

ah, at least something i can wholeheartedly agree with - ftol is the graphics programmer's enemy no.1 under c/c++
 
darkblu said:
now i'm at a complete loss here (appears these days i'm completely unable to follow the posts on this forum)!
how does the presence of a vtable burden an object creation on the stack?

Actually it's not much of a cost. It's just setting another variable.

Object creation has the following two costs:

1. Memory allocation.
- On the stack that is quite cheap
- On the heap it's much slower especially if you use thread safe libc
2. Running the constructor
- It depends on how much work is done

If you have a Vector3 class it's typically allocated on the stack and it's small; having a vtable would present a noticeable slowdown (as well as being pointless).

I think one should always view the object creation cost relative to the average object lifetime. Vector3's come and go, but Units can persist through an entire map, so talking about the creation cost of a Unit is silly.
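To illustrate (hypothetical types):

```cpp
#include <cassert>

// A Vector3 with no virtual functions carries no vtable pointer:
// it is exactly three floats, cheap to create and destroy on the stack.
struct Vector3 {
    float x, y, z;
    Vector3(float x_ = 0, float y_ = 0, float z_ = 0) : x(x_), y(y_), z(z_) {}
};

// The same data with one virtual function gains a hidden vptr (plus
// padding) - dead weight for something created as often as a vector.
struct VirtualVector3 {
    float x, y, z;
    virtual ~VirtualVector3() {}
};
```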
 
Hyp-X said:
darkblu said:
now i'm at a complete loss here (appears these days i'm completely unable to follow the posts on this forum)!
how does the presence of a vtable burden an object creation on the stack?

Actually it's not much of a cost. It's just setting another variable.

hence my frustration from your original comment.

Object creation has the following two costs:

1. Memory allocation.
- On the stack that is quite cheap
- On the heap it's much slower especially if you use thread safe libc

in the thread-safe libc scenario maybe, but in a non-thread-safe scenario libc would just allocate the requested memory on the heap and call the appropriate ctor per-object. why would it be 'much slower'? what would be the 'great OOP implication' in the following code snippet:

Code:
class vect3
{
    float m[3];

public:
    vect3() {}
};

vect3 *array = new vect3[50];
compared to
Code:
typedef float vect3[3];

vect3 *array = (vect3 *) malloc (sizeof(vect3) * 50);

2. Running the constructor
- It depends on how much work is done

sure. but by this very course of logic i can say the same for the non-OO paradigm: a function call may be slow depending on how much work the function does.

If you have a Vector3 class it's typically allocated on the stack and it's small; having a vtable would present a noticeable slowdown (as well as being pointless).

no doubt, pointless it would be.

I think one should always view the object creation cost relative to the average object lifetime. Vector3's come and go, but Units can persist through an entire map, so talking about the creation cost of a Unit is silly.

exactly. and that's the source of my frustration when people start talking about the 'great performance burden that OOP imposes'. fact is, C++ can be just as quick as C, given one knows what he's doing.
 
darkblu said:
A float to int conversion takes much more time in C/C++. :)

ah, at least something i can wholeheartedly agree with - ftol is the graphics programmer's enemy no.1 under c/c++
Isn't it partly a case of the "bizarre" x86 architecture/instruction set and partly the truncate rules of C vs the default rounding of x86? On other architectures, it's probably less of a problem.

BTW, anyone remember the hack to convert float to int quickly by adding a large power-of-2 number that effectively stored the int in the LSBs? I think I used that in the PCX1/2 drivers.
 
Simon F said:
darkblu said:
A float to int conversion takes much more time in C/C++. :)

ah, at least something i can wholeheartedly agree with - ftol is the graphics programmer's enemy no.1 under c/c++

Isn't it partly a case of the "bizarre" x86 architecture/instruction set and partly the truncate rules of C vs default rounding of x86.

i'd say more of the second. it's not really the 'default rounding of x86' (as you know, the rounding mode in the x87 works on a 'latch' principle), it's the 'default rounding when doing fp math' -- one would naturally expect to do his fp math in 'nearest' rounding mode (which is the actual case with the CRT), and then when one does an ftol - *bam!* - the mode must be switched to 'chop' and then back again to continue regular arithmetic.

On other architectures, it's probably less of problem.

hardly. unless the fpu has a 'store float as int using chop mode regardless of present rounding mode' instruction. freakin munchkins, for christ sake, it still escapes me what made the people responsible for the c-standard come up with that particular ftol rule!
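the rule in question, in code (std::lrintf is a newer C99/C++11 addition, shown here only to contrast the two behaviours; assumes the default rounding mode):

```cpp
#include <cassert>
#include <cmath>

// A C/C++ cast must truncate toward zero regardless of the FPU's
// current rounding mode, while std::lrintf honours that mode
// (round-to-nearest by default).
int cast_to_int(float f)  { return (int)f; }          // always chops
long nearest_int(float f) { return std::lrintf(f); }  // nearest by default
```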

BTW, anyone remember the hack to convert float to int quickly by adding a large power-of-2 number that effectively stored the int in the LSBs? I think I used that in the PCX1/2 drivers.

hmm, never occurred to me. probably because
a) 99% of the floats i need converting to int are already on the fp stack
b) where a fistp eats them for breakfast (on pentium+)

but such a trick would be interesting to know. might come in handy on some architectures.

btw, just yesterday i got fed up with watching the framerate of THURP going down as i kept adding more and more passes, so i decided to drop the first bag of ballast in THURP - as one could guess, that bag of ballast was the float-to-int casts. so i went through each and every occurrence of "(int)" in the code and replaced it with a function doing a single "fistp". the result was framerate x2 - x4 (depending on other factors)
 
Simon F said:
BTW, anyone remember the hack to convert float to int quickly by adding a large power-of-2 number that effectively stored the int in the LSBs? I think I used that in the PCX1/2 drivers.
I have seen that trick elsewhere. If you just used a power of 2 directly, you would get a 1-bit shift error for negative numbers (power of 2 minus a small number => a number of smaller magnitude in the floating-point format => result left-shifted 1 place erroneously). For correct results, it's best to use a magic number of the form 2^n+2^(n-1), with n=52 or so if you write back double-precision numbers and use the low 32 bits (or up to about 50, if you like) as an integer.
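A sketch of that trick (assumes little-endian storage and the default round-to-nearest mode; the magic constant is 2^52 + 2^51):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Adding 2^52 + 2^51 forces the rounded integer into the low 32 bits
// of the double's mantissa; the 2^51 term handles negative inputs.
// Assumes little-endian layout and the default rounding mode.
inline std::int32_t fast_round(double x)
{
    const double magic = 6755399441055744.0;  // 2^52 + 2^51
    double biased = x + magic;                // rounds x to an integer
    std::int32_t result;
    std::memcpy(&result, &biased, 4);         // low word on little-endian
    return result;
}
```

Note it rounds to nearest rather than truncating, so it's not a drop-in replacement for a C cast.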

Reminds me of another trick to use the FPU to quickly get the approximate logarithm of an integer (or the bit position of the most significant bit that is set):
Code:
union { int i; float f; } P; 
P.f = (float) input; 
result = (P.i >> 23) - 0x7f;
 
arjan de lumens said:
Reminds me of another trick to use the FPU to quickly get the approximate logarithm of an integer (or the bit position of the most significant bit that is set):
Code:
union { int i; float f; } P; 
P.f = (float) input; 
result = (P.i >> 23) - 0x7f;

some eons ago that was one of the monthly quiz questions in a computer magazine where i lived / gets carried away by memories... /
 
sure. but by this very course of logic i can say the same for the non-OO paradigm: a function call may be slow depending on how much work the function does.

I think there are a few things to note. Handling of the activation record/stack frame is done by the CPU very quickly -- likely the only allocation of memory required. The thing with constructors is that you're likely going to be using non-default constructors. And if you're setting values, then you shouldn't use initialisation lists -- that breaks OO; unless initialisation lists allow you to make function calls -- I don't have a lot of experience with them. Inlining allows you to get around some of this, unless the mutators are complex.

Why initialisation lists break OO: the idea is that accessors and mutators are supposed to be there, even if they're not needed, so they can later be scaled rather than reworking the constructors and writing new accessors and mutators.
 
Saem said:
The thing with constructors is that you're likely going to be using non-default constructors. And if you're setting values, then you shouldn't use initialisation lists -- that breaks OO; unless initialisation lists allow you to make function calls -- I don't have a lot of experience with them. Inlining allows you to get around some of this, unless the mutators are complex.

by 'initialisation lists allow you to make function calls' you mean allowing you to go through mutators? no, i don't think you can do that. nor would you want to at this stage of an object's life. that is, except for the purpose of validation, in which case you can provide the necessary validators as statics. something along the lines of

Code:
class A
{
    classX      x;
    classY      y;

public:

    static classX validateX(const classX &value);
    static classY validateY(const classY &value);

    A(const classX &init_x, const classY &init_y) :
        x(validateX(init_x)),
        y(validateY(init_y)) {}
};

Why initialisation lists breaks OO: The idea is that accessors and mutators are supposed to be there, even if they're not needed, they can be later scaled rather than reworking the constructors and making the new accessors and mutators.

problem with accessors and mutators, and why they cannot be used in initialization lists, is that the former are only conventionally *called* accessors and mutators - whereas AAMOF they are full-blooded methods, and as such are granted to see the whole internal scope of the object. if they were somehow semantically restrained to the embedded vars they refer to, then they could be called out-of-scope too. but then, that would turn them into mere validators, no?
 
by 'initialisation lists allow you to make function calls' you mean allowing you to go through mutators? no, i don't think you can do that. nor would you want to at this stage of an object's life. that is, except for the purpose of validation, in which case you can provide the necessary validators as statics. something along the lines of

Providing static methods to validate may seem like a good idea, but the object should be the one maintaining invariance, not the programmer using the class instance. Validation, thus, falls in the class implementor's lap.

Accessors and mutators are only descriptors for a type of method, and in C++ there are no formal facilities for them, so they're treated as ordinary methods. In no way does this imply that accessors and mutators should be discarded in favour of initialisation lists and validators because they have full access to the entire class. Besides, the latter is a VERY good thing. What if the validity of the state of one data member is dependent on the state of another? The mutator would require information about the other data member(s) of the class. So long as mutators are implemented properly there is no need to worry; they will only change what they're supposed to change, and only allow the change if it maintains the class invariance.
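A hypothetical example (invented names) of a mutator that needs to see another data member to enforce the invariant:

```cpp
#include <cassert>

// The mutator setLo can only decide whether a change keeps the class
// invariant (lo <= hi) because it has access to the other member, hi.
class Range {
    int lo, hi;
public:
    Range(int l, int h) : lo(l), hi(h < l ? l : h) {}
    bool setLo(int v) {
        if (v > hi) return false;   // would break the invariant
        lo = v;
        return true;
    }
    int getLo() const { return lo; }
    int getHi() const { return hi; }
};
```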

Of course, one problem is the lack of semi-predicates; the closest one can come to them are exceptions, and those can get just as ugly as a couple of gotos.
 
Saem said:
Providing static methods to validate may seem like a good idea, but the object should be the one maintaining invariance, not the programmer using the class instance. Validation, thus, falls in the class implementor's lap.

apparently i misled you with not providing the bodies of the validators; i just took them for negligible for presenting the case. my bad. the idea is that the validators have been provided _and_ positioned by the class implementor.

Accessors and mutators are only descriptors for a type of method, and in C++ there are no formal facilities for them, so they're treated as ordinary methods. In no way does this imply that accessors and mutators should be discarded in favour of initialisation lists and validators because they have full access to the entire class. Besides, the latter is a VERY good thing. What if the validity of the state of one data member is dependent on the state of another? The mutator would require information about the other data member(s) of the class.
bolding on my part

class initialisation is sequential, i.e. those members which list-wise are past the data member currently being initialised are still uninitialised. this automatically imposes a uni-directional dependence on member vars. which means that cases where the by-accessor state of member A depends on the state of member B _and_ and the by-accessor state of member B depends on the state of A are beyond the semantics of this scheme.

in contrast, a scheme with static validators and regular accessors/mutators does not have the above shortcoming.
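the sequential-initialisation point, in code (a made-up example):

```cpp
#include <cassert>

// Members are initialised in declaration order, not in the order they
// appear in the initialisation list: here 'a' is always set before 'b',
// so b may safely depend on a, but not the other way around.
struct S {
    int a;   // declared first, initialised first
    int b;   // declared second: may read a in its initialiser
    S(int v) : a(v), b(a * 2) {}
};
```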

Of course, one problem is the lack of semi-predicates; the closest one can come to them are exceptions, and those can get just as ugly as a couple of gotos.

ok, i see now how much you are into icon :)
 