How complete are HLSLs?

I think you meant to ask "are they Turing complete", in other words, "are they powerful enough to execute any possible program", rather than "are they NP complete", which is a question about how hard it is to solve a math problem.

Sure, they're Turing complete, but that's no big deal. It doesn't take much to be Turing complete -- all you need is RAM and a way of doing conditional calculations. Every video card that had hardware support for bitblt operations (which are required for Windows GDI acceleration) is also technically Turing complete.

If you don't allow multiple passes, well then, of course you can easily come up with problems that are too big to solve on any piece of hardware.

Rather than ask about Turing completeness, you should ask about more practical things: :D

+ Can they be used to implement Conway's Game of Life?
+ Can they be used to implement Tetris?
+ Can they be used to implement Doom?
+ Can they be used to implement MAME?
+ Can they be used to implement Linux?
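The first question on that list is the easiest to pin down. As a sketch (the function name is mine, and this is CPU Python rather than shader code), here is the per-cell rule that any implementation would have to evaluate:

```python
def life_step(grid):
    """One Game of Life generation; grid is a list of 0/1 rows,
    with edges wrapping around (a torus)."""
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Count the eight neighbours, wrapping at the edges.
            n = sum(grid[(y + dy) % h][(x + dx) % w]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
            # A live cell with 2-3 neighbours survives;
            # a dead cell with exactly 3 neighbours is born.
            nxt[y][x] = 1 if n == 3 or (grid[y][x] and n == 2) else 0
    return nxt
```

A shader version computes exactly this, one pixel per fragment; the interesting question (taken up below) is where the neighbour reads and the per-generation buffers come from.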
 
Also conjecturing Turing vs NP doesn't add any information - why state obscure definitions irrelevant to 3D graphics?

Go away and show P <> NP then tell us how it's done!!!
 
simplicity. Your post does not match your name.

Well, it seemed simple to me... :oops:

Also conjecturing Turing vs NP doesn't add any information - why state obscure definitions irrelevant to 3D graphics?

Huh? I don't understand what you mean by that. I was answering the original question. Are you pulling my leg?

Go away and show P <> NP then tell us how it's done!!!

Now I know you're pulling my leg. :D

Personally, I think P != NP, because guessing the right answer is such a powerful shortcut. But I certainly don't have any proof of that. :LOL:
 
Xmas said:
Anyone care to explain what 'NP' means? :?: :D

"Nondeterministic Polynomial". Used to describe a class of problems that are such that you can guess at an answer and check in polynomial time whether the answer is valid. 'Polynomial time' means that the time taken (or an upper bound for the time) for the task can be expressed as a polynomial of the problem size. These problems can be divided into two subclasses:
  • P: for this class of problems, you can write a program/algorithm that will always provide a valid answer or say that no answer exists - in polynomial time. Examples of problems in P are: sorting a deck of cards, finding the median of a list of numbers, finding a string within a text, etc.
  • NP-complete: for this class of problems, there is no known algorithm that can guarantee that a correct answer is produced (or shown not to exist) in polynomial time (P != NP is basically the statement that no such algorithm exists, but a final mathematical proof of this hasn't been found yet). Some NP-complete problems are: Travelling Salesman (given a bunch of cities and roads between them, find the shortest route that visits all the cities), Circuit Satisfiability (for a circuit of logic gates, is there an input for which the circuit produces '1'), and many others.
In short, P problems can be solved in a reasonable amount of time, whereas, with any known algorithm, NP-complete problems of large enough size cannot be solved until the universe suffers its final heat death.
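The "guess and check" part of the definition is the key asymmetry. A small illustration using subset-sum, a classic NP-complete problem (function names are mine, and the multiset handling is simplified): verifying a guessed answer takes linear time, while the obvious search tries all 2^n subsets.

```python
from itertools import combinations

def verify(nums, target, subset):
    """Polynomial-time check: is this guessed subset a valid answer?
    (Simplified: ignores duplicate elements.)"""
    return sum(subset) == target and all(x in nums for x in subset)

def search(nums, target):
    """Brute force: try every one of the 2^n subsets --
    exponential in len(nums)."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None
```

If a guess can be checked in polynomial time, the problem is in NP; the open question is whether the search can always be made as cheap as the check.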

As for HLSLs and NP-completeness: I don't see how HLSLs have anything to do with NP-completeness other than that if they contain conditional jumps, you can set them to solve NP-complete problems for the rest of the universe's lifetime.

As for HLSLs and Turing-completeness: HLSLs without data-dependent conditional jumps are not Turing-complete by themselves. If you allow such a HLSL to run only 1 pass (or a fixed number of passes), there are always problems that it cannot solve that can be solved by some Turing machine. If you allow the HLSL to run infinite passes, you essentially have a Turing machine which cannot halt - and as such solves no problem at all. Data-dependent jumps, on the other hand, will allow the HLSL to simulate any Turing machine, and thereby make the HLSL Turing-complete (ignoring limitations placed on program length, running time, etc)
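To make the multi-pass picture concrete, here is a toy simulation (entirely my own sketch, not shader code): each "pass" applies one transition of a tiny Turing machine to the tape, and an outside loop keeps issuing passes until a halting state is reached - which is exactly the part a fixed number of passes cannot do.

```python
def tm_pass(tape, head, state, rules):
    """One 'pass': apply the transition table once, the way a single
    shader pass updates the whole tape texture by one step."""
    write, move, new_state = rules[(state, tape[head])]
    tape = tape[:head] + [write] + tape[head + 1:]
    return tape, head + move, new_state

# A toy machine that flips bits until it reads a blank (None), then halts.
rules = {('run', 0): (1, +1, 'run'),
         ('run', 1): (0, +1, 'run'),
         ('run', None): (None, 0, 'halt')}

tape, head, state = [1, 0, 1, None], 0, 'run'
while state != 'halt':          # the controller deciding when to stop
    tape, head, state = tm_pass(tape, head, state, rules)
```

The `while` loop is the data-dependent part: without it (or an equivalent jump inside the shader), you can only commit to some fixed number of passes in advance.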
 
If you allow the HLSL to run infinite passes, you essentially have a Turing machine which cannot halt - and as such solves no problem at all.

Oh, come now, that's an unnecessarily strict interpretation of the rules. You can simply say "when the halt symbol is written, we will consider the program done".

And you don't need data-dependent branches to implement a Turing machine, because you can easily fake them by using boolean algebra.
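The usual trick is a branch-free select: encode the condition as 0 or 1 and use multiply-add instead of a jump, which any shader with arithmetic can do. A sketch (the function name is mine):

```python
def select(cond, a, b):
    """Branch-free select: cond must be 0 or 1.
    Computes 'a if cond else b' with no data-dependent jump --
    just the multiply-add a shader already has."""
    return cond * a + (1 - cond) * b
```

Both arms get evaluated and one result is masked away, which is wasteful but needs no flow control at all.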

Like I wrote earlier, any card that implements the Windows GDI is Turing equivalent.

Here's a proof: All graphics cards implement the Windows GDI in hardware. GDI operations include bitblt (or ROP, or whatever Windows calls that operation). You can easily implement Conway's Game of Life using bitblt operations. And you can implement a Turing machine in Conway's Game of Life. So any Windows graphics card is Turing complete, barring issues of screen size.

Since HLSL supports alpha blending, and since alpha blending can be used to implement boolean operations, HLSL is Turing equivalent, too.
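That blending claim can be checked directly: treat each pixel value as 0 or 1, and then a multiplicative blend is AND, a saturating additive blend is OR, and a one-minus-source blend is NOT. A sketch of those identities (function names are mine):

```python
def blend_and(dst, src):
    # Multiplicative blend (dst * src): acts as AND on 0/1 pixels.
    return dst * src

def blend_or(dst, src):
    # Saturating additive blend, min(dst + src, 1): acts as OR.
    return min(dst + src, 1)

def blend_not(src):
    # One-minus-source blend: acts as NOT.
    return 1 - src

def nand(a, b):
    # NAND is universal, so these blend modes suffice to build
    # any boolean circuit at all.
    return blend_not(blend_and(a, b))
```

Since NAND alone is universal, getting these blend modes is enough to build arbitrary logic, one pass per gate layer.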

There may be more efficient ways of implementing a Turing machine in HLSL, or PS 1.0, or even GDI, but that's the easiest way I know of to prove that it's possible.

By the way, check out this URL for a Turing machine implemented in Conway's Game of Life:

www.cs.ualberta.ca/~bulitko/F02/papers/tm_words.pdf
 
To implement Conway's Game Of Life in a framebuffer, you need to access pixels adjacent to the pixel currently being processed. That kind of functionality is not present in any HLSL that I am aware of. Sure, you can do render-to-texture for each time step of the Game of Life and that way get access to adjacent pixels, but that requires an agent to control which memory areas are used as textures and framebuffer by the HLSL shaders for each time step - the operation of this agent is outside the scope of the HLSLs themselves. A system that consists of HLSL shaders AND such an agent may be Turing complete - HLSLs by themselves are generally not.
 
Yes, all shaders have a maximum length of N. You can enumerate these shaders, and there will exist a Turing-computable program not in that list.

Of course, by this definition, even the largest computer in the world with the biggest hard drive you could build is not Turing complete. It's a finite state machine. :)
 
arjan de lumens said:
To implement Conway's Game Of Life in a framebuffer, you need to access pixels adjacent to the pixel currently being processed.

If I remember the algorithm correctly, you need access to adjacent pixels of the previous frame, which means that render-to-texture must be used. Otherwise as you iterate, even if you can access adjacent pixels in the frame buffer, you will change the outcome as not all pixels are updated at the same time.
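Right - the standard fix is double buffering: every neighbour is read from the previous generation's buffer while the new generation is written into a second one, and then the two are swapped. A CPU sketch of that ping-pong scheme (names are mine):

```python
def step_into(src, dst):
    """Write the next Game of Life generation into dst, reading ONLY
    from src -- the render-to-texture 'ping-pong' discipline."""
    h, w = len(src), len(src[0])
    for y in range(h):
        for x in range(w):
            n = sum(src[(y + dy) % h][(x + dx) % w]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
            dst[y][x] = 1 if n == 3 or (src[y][x] and n == 2) else 0

def run(grid, steps):
    """The 'agent': swaps which buffer is texture and which is target."""
    buf_a = [row[:] for row in grid]
    buf_b = [[0] * len(grid[0]) for _ in grid]
    for _ in range(steps):
        step_into(buf_a, buf_b)       # one rendering pass
        buf_a, buf_b = buf_b, buf_a   # swap the two 'textures'
    return buf_a
```

Note that the buffer swapping lives in `run`, outside the per-pixel rule - which is precisely the "agent" that the shader language itself doesn't express.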
 
arjan de lumens said:
A system that consists of HLSL shaders AND such an agent may be Turing complete - HLSLs by themselves are generally not.

Wow, that's a very good point. Let's call that being multi-pass Turing complete. All HLSLs are multi-pass Turing complete, but then again, so are more primitive graphics systems, like GDI.

In order to be single-pass Turing complete, a HLSL would have to allow writing to and reading from the same texture an arbitrary number of times. I can see why current GPUs don't do that (it would require a much more expensive texture cache.)

I bet that capability gets introduced in PC graphics hardware within the next three years. At which time HLSLs will be extended to support it.
 