Sorry, but after the last discussion on CPU architecture I decided to see how 3D graphics algorithms perform on a CPU (not a GPU).
Funny thing I found was that DDA was faster than Bresenham.
I'm thinking I may have stuffed up somewhere:
"modded" = DDA
"normal" = Bresenham
I've written and rewritten the test several times, each time differently, and came up with this final version.
In every case I saw near-identical results, or DDA being a bit faster.
The DDA was initially slower when I used Math.round in the "dda" routine, but when I replaced it with an "(int)" cast it became faster.
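One thing worth flagging: swapping Math.round for an (int) cast isn't just a speed change, it changes the result. The cast truncates toward zero, while Math.round rounds to the nearest integer (half up), so the two can plot different pixels. A quick demonstration:

```java
public class CastVsRound {
    public static void main(String[] args) {
        float y = 2.7f;
        // (int) truncates toward zero; Math.round rounds to nearest (half up).
        System.out.println((int) y);            // 2
        System.out.println(Math.round(y));      // 3
        // For negative coordinates they disagree in the other direction too:
        System.out.println((int) -2.7f);        // -2
        System.out.println(Math.round(-2.7f));  // -3
    }
}
```

So part of the speedup may come from doing slightly different (and for DDA, slightly less accurate) work, not purely from the cast being cheaper.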
Is there a more efficient Bresenham algorithm? (I took this one from Wikipedia rather than working it out myself.)
I believe the reason it's faster comes down to two factors, which also came up in my other thread:
A cast being cheaper than a conditional check.
Floating-point calculations being as fast as, or faster than, fixed-point/integer ones on modern CPUs.
Are my results normal or have I done something dumb?
I have attached my test so people can point at me and laugh.
Code:
Avg time normal: 53764.058450000004
Avg time modded: 52061.156820000004
"normal" = Bresenham
I've written and rewritten the test several times, each differently and came up with this final version.
Under all cases I saw almost alike results or the DDA being a bit faster.
The DDA was initially slower when I used Math.round under the "dda" routine, however when I replaced it with "(int)" it became faster.
Is there a more efficient Bresenham algorithm (I stole this one off wikipedia than work it out)?
I believe the reason it's faster is due to 2 factors, as were also discussed in my other thread:
Casting being cheaper than conditional check.
Floating point calcs being faster than fixed point on modern CPUs.
Are my results normal or have I done something dumb?
I have attached my test so people can point at me and laugh.