Google wants to open-source gfx drivers

Electrical and Computer Engineers. I know my classes started out on chip basics and then moved up to C. I'm not even sure the CS program teaches actual C. I think it's just C++ and Java in addition to some other languages.
 
You are way past crazy. The dropout rate for first-year CS students is already around 50% at most US universities, and that's with Java. Switch to asm first, and you're going to hit 99%.
So you also think we should first let the students work with Mathematica/Maple and later with a pocket calculator? Oh and let's not even think about long division on paper... They'd never understand and it's pointless, right?

Seriously, there is nothing hard about assembly programming (for small projects). Repeat computer history in fast forward and the students will easily understand every new abstraction. Things like Java inheritance horrify newcomers every year, but if you approach things from the other side it becomes a solution instead of a problem! References and memory management still puzzle people who graduate with high honours, whereas if they had learned assembly and C first it would have come naturally.

I learned C and assembly before I went to university. It's fun, because you feel like you have control over the whole machine and you can build things up step by step. With Java you really don't have a clue what you're doing (you're copying what the textbook says) and syntax errors make it an unpleasant experience to try something new. My stomach still turns when I see "hello world" in Java:
Code:
class HelloWorldApp {
    public static void main(String[] args) {
        System.out.println("Hello World!"); // Display the string.
    }
}
Class? Public? Static? String? System? You want to explain all that to the students in the first class? Or should they just accept it and ask no questions you can't answer in that class? And in the next class you're going to introduce even more new concepts like that? No wonder they lose interest quickly and 50% drop out.

Obviously, to start with assembly in the first class you need a 'friendly' environment. A simulator where you can see the contents of the registers and execute instructions step by step would be ideal. Only the basic instructions should be taught, but with a hands-on approach they'll only want more. In the second class you can already introduce them to C and show the disassembly. Nobody will have a big problem understanding how fairly complex statements are translated to assembly. They learned in the first class how to use their fancy calculator...
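To make that concrete, here's a minimal sketch (in C++) of what such a teaching simulator could look like. The instruction set, register count and sample program are entirely made up for illustration; the only point is that the student watches the registers change after every single step:
Code:
#include <cstdio>

// A toy "visible machine": three registers and a handful of made-up
// instructions, purely to illustrate the step-by-step idea.
enum Op { LOAD, ADD, JNZ, HALT };

struct Instr { Op op; int dst; int src; };

int main() {
    int reg[3] = {0, 0, 0};

    // Made-up program: sum 5+4+3+2+1 into reg1.
    Instr program[] = {
        {LOAD, 0, 5},   // reg0 = 5
        {LOAD, 1, 0},   // reg1 = 0
        {LOAD, 2, -1},  // reg2 = -1 (used as the decrement)
        {ADD,  1, 0},   // reg1 += reg0
        {ADD,  0, 2},   // reg0 += reg2, i.e. reg0 -= 1
        {JNZ,  0, 3},   // if reg0 != 0, jump back to instruction 3
        {HALT, 0, 0},
    };

    int pc = 0;
    while (program[pc].op != HALT) {
        const Instr in = program[pc];
        switch (in.op) {
            case LOAD: reg[in.dst] = in.src;       ++pc; break;
            case ADD:  reg[in.dst] += reg[in.src]; ++pc; break;
            case JNZ:  pc = (reg[in.dst] != 0) ? in.src : pc + 1; break;
            default:   ++pc; break;
        }
        // Show the machine state after every step, as a teaching simulator would.
        std::printf("pc=%d  reg0=%d  reg1=%d  reg2=%d\n", pc, reg[0], reg[1], reg[2]);
    }
    return 0;
}
Step through something like this for half an hour and a loop stops being magic; it's just "jump back while a register isn't zero yet".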

If you see things totally differently please share your vision, but personally I really don't regret learning the low-level stuff before the high-level stuff. I don't want to sound cocky but I think it enabled me to truly tackle any software project, ranging from driver development to creating GUIs.
 
I'd suggest covering some basic C functions before moving to asm. Nothing complex, just the basic loops and conditionals. If you can't grasp how an if statement or a for/while loop works, seeing the assembly isn't going to help a whole lot. Also, you don't really need to understand all the assembly so much as have an understanding of what's going on when you're coding in a higher-level language.
 
I program in Smalltalk for a living, and consequently have no love for Java or (shudder) C++.

The "hello world" programme in Smalltalk, incidentally, looks like this:

Code:
Transcript show: 'hello world!'

Bit easier than the Java version! :)

But anyway, coming from an OO background, I think that grasping OO concepts is absolutely critical when it comes to writing commercial application software; and I've found that the more time people spend writing in procedural languages like C, the more likely it is that they will never truly "get" OO techniques. OO is a very different way of thinking from procedural programming, and I think it's vital to start people off thinking in objects.

Falling back from working in an OO fashion to working procedurally (along with all the pointless things like type declarations and memory management) is actually not difficult once you've learned to work in objects. It's incredibly tedious and frustrating, but not difficult. But getting a handle on objects if you've only ever programmed procedurally is much tougher.
 
I'd suggest covering some basic C functions before moving to asm.
I guess you could do either... It depends whether you want them to be really comfortable writing assembly (to make them good embedded-system, operating-system, driver, or compiler programmers). If you teach C first they'll likely feel horrified at going to a lower level and needing multiple lines for things they could do in a single line in C.
Nothing complex, just the basic loops and conditionals. If you can't grasp how an if statement or a for/while loop works, seeing the assembly isn't going to help a whole lot.
Actually, a C for loop already has a pretty confusing syntax the first time you see it. In assembly it's instantly clear where the initialisation is done, where the loop test is done, where the increment is done, and where the jump back happens. Once they get that, it's trivial to understand that a for loop is a handy abstraction instead of a complicated construct.
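To illustrate: here's a for loop next to the same loop spelled out with an explicit initialisation, test, body, increment and jump back. A real compiler's output obviously looks different, but structurally this is what the disassembly shows:
Code:
#include <cstdio>

int main() {
    // The familiar for loop: initialisation, test and increment all on one line.
    int sum = 0;
    for (int i = 0; i < 5; ++i) {
        sum += i;
    }

    // The same loop spelled out the way the disassembly is structured.
    int sum2 = 0;
    int j = 0;              // initialisation
loop_test:
    if (!(j < 5))           // loop test
        goto loop_done;
    sum2 += j;              // body
    ++j;                    // increment
    goto loop_test;         // jump back
loop_done:

    std::printf("%d %d\n", sum, sum2);  // both print 10
    return 0;
}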
Also, you don't really need to understand all the assembly so much as have an understanding of what's going on when you're coding in a higher-level language.
99% of the time you don't have to know what happens at the lowest level, but it's the other 1% where it can be a lifesaver. Especially when you have a bug, some assembly skills can make it really easy to track down the cause, while at a higher level you might remain clueless and waste hours or days. Debuggers don't show intermediate results and for a release build they don't show the source code at all. I believe that somebody who learned assembly only at a later stage will be less inclined to look at the disassembly.

Anyway, teaching assembly at some point is still much better than not teaching it at all! :) My twin brother never learned it and he never got further than mediocre application programming. There's nothing really wrong with that, except that he's still scared of C programming and he'll likely never work on high-tech projects...
 
Jaysus, Walt, we gave zidane1strife a month off for that degree of goofballitis in his rants. :smile:

Heh...;) I know, it does sound goofy, indeed. But I don't think it's any goofier than Google suddenly talking about "open source graphics drivers"...;) Where Google is concerned, there is little the company does, or tries, that is based on goofiness, or on much else aside from self-promotion.
 
Bit easier than the Java version! :)
In a batch file it's just:
Code:
@echo Hello world!
Seriously now, the important thing is to understand how software is executed at every level.
...and I've found that the more time people spend writing in procedural languages like C, the more likely it is that they will never truly "get" OO techniques.
I think that's a generalization. Tons of people have transitioned from C to C++ and created the most impressive software. By the way, good C programmers already structure things into logical objects in C.
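As a (made-up, minimal) example of what I mean by logical objects in C: the data goes in a struct, and the "methods" are plain functions taking that struct as their first argument. It happens to compile as C++ too, but no object-oriented language features are used:
Code:
#include <cstdio>
#include <cstdlib>

// C-style "object": the data lives in a struct, the "methods" are plain
// functions that take the struct as their first argument.
struct Stack {
    int* data;
    int  size;
    int  capacity;
};

void stack_init(Stack* s, int capacity) {
    s->data = (int*)std::malloc(capacity * sizeof(int));
    s->size = 0;
    s->capacity = capacity;
}

void stack_push(Stack* s, int value) {
    if (s->size < s->capacity)
        s->data[s->size++] = value;
}

int stack_pop(Stack* s) {
    return s->data[--s->size];
}

void stack_destroy(Stack* s) {
    std::free(s->data);
    s->data = nullptr;
}

int main() {
    Stack s;
    stack_init(&s, 8);
    stack_push(&s, 1);
    stack_push(&s, 2);
    int top  = stack_pop(&s);   // 2
    int next = stack_pop(&s);   // 1
    std::printf("%d %d\n", top, next);
    stack_destroy(&s);
    return 0;
}
Rename stack_push into a member function and you're most of the way to the C++ version; the structuring discipline was already there.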
OO is a very different way of thinking from procedural programming, and I think it's vital to start people off thinking in objects.
The problem with that is that people tend to think -everything- needs to be object-oriented. It easily leads to over-engineering. And over-engineering leads to inefficiency and makes things hard to maintain. If C++ ever makes you stop shuddering, I suggest reading Refactoring to Patterns. It's full of advanced object-oriented techniques, but the author also makes it very clear that you should look for the simplest possible solution. Even purely procedural programming has a lot of merit. Don't use a Visitor if a switch statement works too. Don't use a polymorphic class if a union is fine.
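To make that last point concrete, a deliberately plain sketch (names purely illustrative): a tagged union and a switch handling a small, closed set of shapes. A class hierarchy with a virtual area(), or a full Visitor, would work too, but for something this size it only adds indirection:
Code:
#include <cstdio>

// A small, closed set of variants: a tagged union and a switch are enough.
struct Circle    { double radius; };
struct Rectangle { double width, height; };

struct Shape {
    enum Kind { CIRCLE, RECTANGLE } kind;
    union {
        Circle    circle;
        Rectangle rectangle;
    };
};

double area(const Shape& s) {
    switch (s.kind) {
        case Shape::CIRCLE:    return 3.14159265 * s.circle.radius * s.circle.radius;
        case Shape::RECTANGLE: return s.rectangle.width * s.rectangle.height;
    }
    return 0.0;
}

int main() {
    Shape c;
    c.kind = Shape::CIRCLE;
    c.circle = Circle{2.0};

    Shape r;
    r.kind = Shape::RECTANGLE;
    r.rectangle = Rectangle{3.0, 4.0};

    std::printf("%.2f %.2f\n", area(c), area(r));  // 12.57 12.00
    return 0;
}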

Correct me if I'm wrong, but I suspect that you consider somebody who writes procedural code in an object-oriented language to be somebody who never truly "got" object-oriented techniques? Then I suggest you take a step back and ask whether there might be a good reason why it's not done the object-oriented way.
Falling back from working in an OO fashion to working procedurally (along with all the pointless things like type declarations and memory management) is actually not difficult once you've learned to work in objects. It's incredibly tedious and frustrating, but not difficult.
Wouldn't it be pleasant if it were neither tedious nor frustrating, and you actually understood why some software (fragments) are still written procedurally? Every language and paradigm has its purpose. Another must-read is Concepts, Techniques, and Models of Computer Programming. Object-oriented programming is just one approach, and far from a silver bullet. It has to be used judiciously to reach all the goals of a complex project.
But getting a handle on objects if you've only ever programmed procedurally is much tougher.
How do you know? You've never taken that trajectory. Anyway, for new students I would suggest transitioning to higher-level programming as quickly as possible. Once they're comfortable with assembly, go to C. Once they're comfortable with C, go to C++. Once they're comfortable with C++, go to C#. This way they won't get stuck with low-level reasoning. But they also won't be afraid to go to a lower level to get a better understanding of problems, they'll have a feeling for the reality of performance and memory, and they'll keep things simple.

Now where did we leave this Google story... :D
 
Well, as you will have gathered, I'm quite evangelical about objects, but this is not because I have no experience of programming procedurally. The first language I learned was BASIC (circa 1982 at the age of 11) and I also worked in C and its derivatives for some time before discovering Smalltalk.

The way I now feel about those languages is a bit like the way someone who is actually gay might feel about early attempts to have heterosexual sex simply because he didn't realise there was any alternative. Procedural code may work for some people, but for me it's just wrong, and once you go OO, there's no going back. :devilish:

We're obviously not going to agree about this. :) As far as I'm concerned, for example, there are no situations in which a switch statement is preferable to anything. ;) There are very good reasons why Smalltalk doesn't even have a switch statement.

As a Smalltalker I find it fascinating to watch the progression of C-like OO languages. If you look at the sequence from C, to C++, to Java, to C# version 1, to C# version 2, those languages are moving steadily closer and closer to where Smalltalk was around 1980. Sadly, they still haven't quite got there. But they're heading in the right direction, so I live in hope. :D

Anyway, yes, off-topic, sorry! :???:
 
We're obviously not going to agree about this. :) As far as I'm concerned, for example, there are no situations in which a switch statement is preferable to anything. ;) There are very good reasons why Smalltalk doesn't even have a switch statement.
You obviously have not had to try to get your [ deleted ]-processing code to run at faster than X microseconds per [deleted].

I won't argue that object-oriented code is easier to write/read but driver code is never fast enough. :(
 
I might, of course, have been exaggerating ever so slightly in my previous post. :yes:

While the performance of managed code is actually far closer to that of non-managed code than many anti-object people realise (thanks to JIT compilation), it's clear that a high-level language like Smalltalk is not suitable for any kind of application where it is necessary to directly interact with hardware and directly manipulate values and memory locations. Much of the point of a high-level language is precisely to avoid having to do that type of thing. When the sum total of the task being carried out is something the language was specifically designed to avoid dealing with, that's clearly not going to be a terribly good match.

So no, Smalltalk is not a good candidate for writing driver-software. (Obviously).
 
Actually, the code I was alluding to is a fairly complex algorithm and so would be much nicer to have written in a higher level language. The only problem is that it also had to be very fast.
 
There is also the issue of over-obfuscation in OO languages: a very good example would be trying to use chars in .NET. It might be desirable to use Unicode for everything, but in reality there are still very many things that require simple chars. In general it takes me about as much time to get the casting and handling of the chars right as it takes to write everything else.

Or enums: instead of simply using a constant of the numeric representation that is in your API documentation, you have to hunt down and use an enum that has a name in excess of 50 chars.

And, without simple functions, you have to make both static and instanced methods of everything, put them inside a class, add bookkeeping methods and properties, constructors and destructors (if needed), and use that instead. For the most trivial things. That's code bloat on the order of 100:3 lines.
 
All I can say is that, speaking as an OO programmer of 13 years' experience, I don't recognise any of those criticisms to any degree. If they were issues that generally applied to OO, I think I would have come across them by now.

Some of them may perhaps be issues with .NET in particular rather than with OO in general.
 
Maybe Intel will do the right thing (tm) and release register-level specifications for their Larrabee project?

Anyway, I now believe that as CPU and GPU development converges to some degree (with the CPU getting specialized SIMD units and GPUs getting closer to general purpose processors), the problem with the unwillingness to open up GPU (or "stream computing unit"?) specifications will go away within the next five years :p .
 
All I can say is that, speaking as an OO programmer of 13 years' experience, I don't recognise any of those criticisms to any degree. If they were issues that generally applied to OO, I think I would have come across them by now.

Some of them may perhaps be issues with .NET in particular rather than with OO in general.

There are definitely language-specific problems. The whole concept of automatic garbage collection can wreak havoc in real-time systems. There is definitely a cost associated with object creation and destruction, though. It costs more or less depending on the language. Java, for instance, has very fast object creation. Perl, on the other hand, is much slower. No matter the language though, the cost can be pretty severe if you have a significant number of objects being created and destroyed.
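The creation/destruction cost is easy to demonstrate even without a garbage collector. Here's a rough C++ sketch (numbers are machine-dependent and purely illustrative): allocating a fresh container on every iteration of a hot loop versus reusing one allocation:
Code:
#include <chrono>
#include <cstdio>
#include <vector>

// Creating a fresh container on every iteration of a hot loop versus reusing
// one allocation. This only shows that creation/destruction isn't free.
int main() {
    const int iterations = 100000;
    const int elements   = 1000;

    auto t0 = std::chrono::steady_clock::now();
    long long check1 = 0;
    for (int i = 0; i < iterations; ++i) {
        std::vector<int> v;                      // fresh allocation...
        for (int j = 0; j < elements; ++j) v.push_back(j);
        check1 += v.back();
    }                                            // ...and destruction, every time
    auto t1 = std::chrono::steady_clock::now();

    long long check2 = 0;
    std::vector<int> v;                          // one object, reused
    for (int i = 0; i < iterations; ++i) {
        v.clear();                               // keeps the allocated storage
        for (int j = 0; j < elements; ++j) v.push_back(j);
        check2 += v.back();
    }
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::milliseconds;
    std::printf("churn: %lld ms, reuse: %lld ms (checks: %lld %lld)\n",
                (long long)std::chrono::duration_cast<ms>(t1 - t0).count(),
                (long long)std::chrono::duration_cast<ms>(t2 - t1).count(),
                check1, check2);
    return 0;
}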

Having said this, 95% of the time the cost is worth it. Computer hardware is fast enough these days that the benefits far outweigh the costs. There are only very specific circumstances in which getting the absolute best performance is worth the cost in maintainability, readability, extra debugging, and so on.

Nite_Hawk
 
I never said anything like that. Besides, I work for TransGaming, so I really care about the success of Linux. I just don't think open graphics drivers are the answer. And obviously it's not realistic in the first place.

I wouldn't mind if Ubuntu positioned itself as the Linux desktop distribution, and sat around the table with NVIDIA and ATI to discuss how they can help each other...

I wanted to clarify the remains of the argument, and of course, you and Tim knew where I was. I agree with the idea of Ubuntu - at least then we have Debian's foundation for development and excruciating exactness, which can solidify the soundness of the BSD sources' closed solidity, without becoming the devil :devilish: that is ... well, those with sense know with which and where we are going.

I may have used too much alliteration, so I'll say this much - you make sense, I just wanted to interject some bland, "I'm a small fry" sense into the argument without being windy about it, flavored with "high-level packet tracers need a voice" without marginalizing people. :D

I guess I'm not good enough at summing up both viewpoints yet without pissing somebody off.
 
Too many people think they could do better than the IHVs' driver teams. But those actually competent enough likely already work in those teams. The few who are competent enough but don't yet work in these teams would do us a bigger favor by contacting the IHVs for a position instead of working on open drivers.

Anyway, the drivers themselves are probably not the biggest problem. Here's a must-read: The State of Linux Graphics. Instead of pointing fingers at the closed drivers they'd better first take care of the rest. Then they have to make it easier for the IHVs to deliver good drivers (good specifications, conformance tests, etc.), and actually give them a reason to care.

We already agree - that's why I asked for an explanation - I thought I had missed something subtle. Guess not.
 
One further point, and you can accept it or not as you choose - reverse engineering is no longer the domain of "hackers" but more the domain of those seeking to subvert a patent. This means that protection of "IP" becomes less and less a matter of strength in court, and more a matter of design strength.

IMHO, that *IS* progress. Otherwise, you get people that cannot actually deliver declaring "IP" and hosing it up for people with the very same idea who can deliver. Only a fool would accept that "IP" protects them if they cannot develop it further - which seems to be a misguided assumption according to some people here *ahem* - but rather that IP protects development backed by sound science/development rather than pontification.

Pontification is worth about 3 bucks, so it makes sense that it comes cheap.
 
One further point, and you can accept it or not as you choose - reverse engineering is no longer the domain of "hackers" but more the domain of those seeking to subvert a patent. This means that protection of "IP" becomes less and less a matter of strength in court, and more a matter of design strength.

The entire hundreds of billions of dollars PC industry is the result of nifty reverse engineering by Compaq engineers over 25 years ago. I'd call that kind of consequential. :LOL:
 
IMHO, that *IS* progress. Otherwise, you get people that cannot actually deliver declaring "IP" and hosing it up for people with the very same idea who can deliver. Only a fool would accept that "IP" protects them if they cannot develop it further - which seems to be a misguided assumption according to some people here *ahem* - but rather that IP protects development backed by sound science/development rather than pontification.
Could you rephrase that for me, and add Google/NVIDIA/AMD/Microsoft/Intel/Linux where applicable? :D
 