"Digital Media Professionals"

Sorry, I can't find anything that says more than that... the LW 8.2 readme basically says the same thing about it.
 
Chalnoth- use IE, for some reason Mozilla is borked on those pages...

As for how the AA algorithm works, I'm having a hard time with it. It appears to store a geometric description of edges in one frame buffer and colors in another. Then, synced with scan-out, colors are modulated according to the edge data and the colors of surrounding pixels.

As far as I can tell, only one color is stored per pixel, a guess supported by the fact that it looks pretty good so long as geometric complexity is low, but blurry in finely detailed areas. Here is the format of an entry in the PEB (polygon edge buffer):
Code:
Polygon Identifier         ID  16–24 bits
Edge Flag                  Fl  1 bit
Start/End Flag             Se  1 bit
Slope of Outline           Sl  2–3 bits
Fractional Value           Fr  2–3 bits of fraction + 1 bit of overlap flag
Major axis and x direction Vt  2 bits
Inside Direction / Normal  In  1 bit + 4 bits
It appears that one of these is stored per pixel as well. Not exactly a revolution if you ask me.
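
If that guess is right, one entry packs into roughly 40 bits per pixel. Just to make my reading concrete, here's a rough C++ sketch of one possible packing plus the sort of scan-out blend described above. The field layout and the blend are my guesses, not anything stated in the document:
Code:
// Hypothetical packing of one PEB entry, using the upper bound of each
// field width listed above (~40 bits per pixel). The layout is a guess.
#include <cstdint>

struct PebEntry {
    uint32_t polygon_id : 24; // Polygon Identifier (16-24 bits)
    uint32_t edge_flag  : 1;  // Edge Flag
    uint32_t start_end  : 1;  // Start/End Flag
    uint32_t slope      : 3;  // Slope of Outline (2-3 bits)
    uint32_t fraction   : 3;  // Fractional Value (2-3 bits)
    uint32_t overlap    : 1;  // Overlap flag
    uint32_t major_axis : 2;  // Major axis and x direction
    uint32_t inside_dir : 1;  // Inside Direction
    uint32_t normal     : 4;  // 4-bit normal/inside code
};

struct Rgb { uint8_t r, g, b; };

// Guess at the scan-out modulation: blend the stored color with the
// neighbor on the polygon's inside, weighted by the fractional coverage.
Rgb filterAtScanOut(const PebEntry& e, Rgb center, Rgb insideNeighbor) {
    if (!e.edge_flag) return center;      // interior pixel: pass through
    float w = e.fraction / 7.0f;          // 3-bit fraction -> [0,1]
    auto mix = [w](uint8_t a, uint8_t b) {
        return static_cast<uint8_t>(a * (1.0f - w) + b * w);
    };
    return { mix(center.r, insideNeighbor.r),
             mix(center.g, insideNeighbor.g),
             mix(center.b, insideNeighbor.b) };
}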
 
psurge said:
Chalnoth- use IE, for some reason Mozilla is borked on those pages...
Right, I think those pages use an MS Office plugin to operate. The problem is that I'm on a Linux machine right now (at my office). But I'm heading home, so I should be able to take a look myself soon.

Sounds interesting so far, though. But yeah, I don't know how much I really like it yet.
 
I think that PowerPoint presentation is a little dated.

It mentions the Pentium Pro as a current CPU, and another slide says Glare and Lens effects are under development for Y2001.

NEXT!
 
akira888 said:
About some of these claims, just remember that VM Labs (maker of the world-shatteringly successful NUON system) claimed to have real-time ray tracing, and the propaganda they released showed an obviously ray-traced scene of translucent bubbles among other obviously offline-rendered scenes.

Needless to say, that came to nothing.

Not so different from the Cell hype.

They started with claims of unlimited scalability (4 processors in the first chip, with 16 or 64 claimed for the next, etc.). Rumors of non-traditional rendering (voxels, ray tracing, etc.). :rolleyes:
 
http://www.saarcor.de/

That's what I'm waiting for. They actually have real hardware that really works! One day...

But I think much of this hardware implementation stuff is not a good idea. Perhaps having a library of hardware-implemented functions you can use within the context of a programmable pipeline could be a good idea. However, I don't think that's what these guys have in mind.
 
DudeMiester said:
But I think much of this hardware implementation stuff is not a good idea.
There is one company that is apparently trying to go this way:
http://www.starbridgesystems.com/home2.html

They basically are attempting to make field programmable gate arrays performance-worthy. The essence is that you build a computer with an array of programmable processors, and configure it to the specific task on the fly.
 
Chalnoth said:
DudeMiester said:
But I think much of this hardware implementation stuff is not a good idea.
There is one company that is apparently trying to go this way:
http://www.starbridgesystems.com/home2.html

They basically are attempting to make field programmable gate arrays performance-worthy. The essence is that you build a computer with an array of programmable processors, and configure it to the specific task on the fly.

IIRC there's another company that has combined FPGAs and fixed circuitry in one chip to create a CPU which can handle "custom" instructions, thus dramatically increasing the speed of programs compiled for it.

In my highly unqualified opinion, FPGAs are a better way forward than multi-core architectures. Although I presume there's some sort of catch when it comes to using this technology for less specialised applications, e.g. consumer-level CPUs?
 
I just wonder whether the increased performance from optimized processing can really offset the significant increase in the required number of transistors. I also hear it is very challenging to make FPGAs that clock to reasonably high frequencies (the ones used for chip simulation, I believe, typically run in the kHz range).
 
All I can think is this: it will have extremely high transistor counts, and will likely have poor utilization of resources unless every feature is turned on.
 
Colourless said:
All I can think is this: it will have extremely high transistor counts, and will likely have poor utilization of resources unless every feature is turned on.
Are you talking about the FPGA system? Well, high transistor counts are a given, but with a good compiler there should be no problem achieving nearly full utilization all the time (the chip can reconfigure itself very quickly, too). Note that Starbridge Systems also uses a custom programming language to further optimize this process.
 
Okay... I just returned from here...

http://www.parims.org/overview/k-1x.files/frame.htm

Question 1... Which horrible, sadistic alien race made it?

Question 2... Did they really intend to drive somebody In-frikkin-sane with that? I'm pretty sure it just made me go temporarily Bat-$#!+ crazy!



Now to the actual question. Is there any coherent information on these guys and their technology, or is it all locked up in garbled "napkin English" websites? If so, what would that information detail?

Later

Iridius Dio
 
Looking back at that presentation, it really looks like a conglomeration of bad ideas. Basically it seems like they planned on implementing a number of different techniques via hardware acceleration, similar to how PC 3D graphics was progressing up to the GeForce 256 and Radeon. I think we all know now that you can do so much more with shaders, enough that it makes more sense to make your shaders more flexible than to implement more and more specific functions in hardware.
 
But once you start implementing a large set of different algorithms in hardware, eventually it becomes more efficient to just have fewer programmable processors do the same tasks. Granted, if you know exactly what your hardware is going to be doing when you design it, it makes most sense to design it specifically for that task. But since graphics cards can be given many different tasks, it makes much more sense to make them more generalized.

After all, if we imagine that nVidia had done nothing more than taken the TNT architecture and expanded its performance, we would currently have parts with around 32 pipelines. They would be beasts as far as performance was concerned, at least when doing basic 3D operations. But they wouldn't hold a candle to modern processors in terms of being able to render a believable scene, even if they had implemented a number of hardware-accelerated routines for shadows, lighting, bump mapping, etc.

Edit:
Put another way, even if you could make a clever fully hard-wired solution for accelerating 3D graphics, there would always be the game developer who says, "I want to do this!" and the hard-wired solution just won't have a way to do it, or at least not remotely efficiently.

So, this is clearly where programmability comes in: developers have a much wider range of algorithms which they can realistically apply. What's more, software development in 3D is accelerating at a rapid pace, and it makes much more sense to build hardware that can react to changing software than to build hardware that needs to be re-built as software changes.
 
It is still a trade-off, and I don't really trust my intuition on which trade-offs will end up giving the best bang for the buck ... programmability is a means to an end.

Hell, to argue it the other way ... I personally don't feel derivative computation should be hard-wired, for instance; I think you should be able to write fragment shaders above the single-sample level and DIY it.
 
MfA said:
Hell, to argue it the other way ... I personally don't feel derivative computation should be hard-wired, for instance; I think you should be able to write fragment shaders above the single-sample level and DIY it.
Well, that's a very hard thing to ask for. For one, it means that pixel computations would no longer be independent, which reduces the opportunities for parallelization.
 
Oh so now the cost isn't worth the gain in flexibility :)

Anyway, the shader need be no less parallelizable than the present quad-based approach ... it would be up to the developer.
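
For reference, my understanding of the present quad-based approach is that the hardware shades 2x2 blocks together and approximates ddx/ddy by differencing neighbors within the quad. A little CPU-side sketch of that idea (purely illustrative, not how any particular GPU implements it):
Code:
// Sketch of quad-based derivative approximation: shade a 2x2 block together
// and take finite differences inside it. Illustrative CPU code only.
#include <cstdio>

struct Quad { float v[2][2]; };   // one shaded value per sample in a 2x2 block

float ddx(const Quad& q, int y) { return q.v[y][1] - q.v[y][0]; }
float ddy(const Quad& q, int x) { return q.v[1][x] - q.v[0][x]; }

int main() {
    Quad q = {{{0.0f, 0.5f}, {0.25f, 0.8f}}};
    // Every sample in the quad sees the same coarse derivative estimate,
    // which is why the samples cannot be shaded fully independently.
    std::printf("ddx=%.2f ddy=%.2f\n", ddx(q, 0), ddy(q, 0));
}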
 
MfA - how would you accomplish that? Via some form of shared memory, or explicit inter-thread communication similar to that VTA/SCALE paper I linked a while back (which BTW seems just about perfect for shader model 4)?
 
Well, personally I'm interested in some simple additions to the shader language such that you can do incremental computations in shaders.

The simplest I can come up with would be to add additional registers TOP/BOTTOM/LEFT/RIGHT that would allow synchronous communication with the fragment shaders for the relevant neighboring samples (and some way for the shader to know the sample is on an edge, for initialization). You could choose not to use the registers at all, or, if you do use them, to keep either columns or rows independent for parallelization.

Obviously this wouldn't work at all well on a quad-based architecture. The architecture should be based on single-sample shading, supporting quad-based grouping only as a special mode for efficient computation of approximate derivatives.
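
To make it concrete, here is a toy CPU model of the kind of thing I mean: each fragment can read a value forwarded from its LEFT neighbor, so a row is computed incrementally while separate rows stay independent. The names and semantics are just for illustration, not a proposed spec:
Code:
// Toy model of row-incremental fragment shading: each fragment may read a
// value forwarded by the fragment to its LEFT, so work along a row is
// incremental while separate rows remain independent (and parallelizable).
#include <vector>
#include <cstdio>

struct FragmentIn {
    bool  on_left_edge;  // true for the first sample of a row (initialization)
    float left;          // value forwarded from the LEFT neighbor, if any
    float input;         // per-sample input (e.g. an interpolated attribute)
};

// A shader that accumulates incrementally along the row: a running sum here,
// but it could be any computation that reuses the neighbor's result.
float shade(const FragmentIn& in) {
    float base = in.on_left_edge ? 0.0f : in.left;
    return base + in.input;
}

int main() {
    // Two rows; each row is processed left to right, rows independently.
    std::vector<std::vector<float>> rows = {{1.0f, 2.0f, 3.0f, 4.0f},
                                            {5.0f, 6.0f, 7.0f, 8.0f}};
    for (const auto& row : rows) {
        float left = 0.0f;
        for (size_t x = 0; x < row.size(); ++x) {
            FragmentIn in{x == 0, left, row[x]};
            left = shade(in);          // forwarded to the next fragment's LEFT
            std::printf("%.1f ", left);
        }
        std::printf("\n");
    }
}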
 