Learning programming languages *split*

tuna

Currently I am learning Java (C# is next) as part of my programming studies, and they wholeheartedly recommend the object-oriented approach where you have a main method but can create as many other methods as you need to produce cleaner, easier-to-understand, reusable code.

That is not an object-oriented approach; that is just code separation for readability and/or removing code duplication. But this is OT.
 
That is not an object-oriented approach; that is just code separation for readability and/or removing code duplication. But this is OT.
As of now I am learning Java; that's what they teach me. Most of the typical programs I am writing are things like Floyd's triangle, Pascal's triangle, other shapes, arrays, maybe a calendar, etc. But for the most part everything is in the main method, so for now it looks like a structured language, like C.
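For illustration, here is a minimal sketch (in C#, since that is next on the list; the PrintFloydsTriangle helper name is invented) of what the "extract it into a method" advice looks like for one of those exercises. The same idea applies almost verbatim in Java:

// A minimal C# sketch of pulling the triangle code out of Main into a
// reusable method, instead of keeping everything inside Main.
using System;

class FloydDemo
{
    static void Main()
    {
        PrintFloydsTriangle(4);
        PrintFloydsTriangle(6);   // reused without duplicating the loops

    }

    // Prints Floyd's triangle with the given number of rows.
    static void PrintFloydsTriangle(int rows)
    {
        int value = 1;
        for (int row = 1; row <= rows; row++)
        {
            for (int col = 0; col < row; col++)
            {
                Console.Write($"{value++} ");
            }
            Console.WriteLine();
        }
    }
}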

Java is a great programming language to learn.


My real passion is Xamarin Studio combined with C#. I find it so elegant.


I am learning both at the same time; they are quite similar and C-based. Another language that is super interesting to me is Haskell, but it might take time to learn all three.
 
C# and any other garbage-collected languages are bad for games, especially on consoles.
Of course you're right that if you want maximum performance on an AAA game or maybe an emulator of a modern machine, C++ is going to be the language.

But Java is perfectly usable for most non-performance-intensive games, and so is C#. For instance, Minecraft is written in Java, and it was in turn inspired by Infiniminer, which was written in C#.

https://www.rockpapershotgun.com/2011/01/20/proto-minecraft-abandoned-due-to-epic-error/

AFAIK, you can use the unsafe keyword to go low-level in C# and use pointers, but I don't know exactly how it works, TBH.
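For what it's worth, here is a minimal sketch of how that looks: unsafe is a real C# keyword, the project must be compiled with unsafe code enabled (/unsafe), and fixed pins the managed array so the GC can't move it while a raw pointer into it exists:

// Minimal sketch of C#'s unsafe pointers (compile with unsafe code enabled).
class UnsafeDemo
{
    static unsafe void Main()
    {
        int[] data = { 1, 2, 3, 4 };

        // "fixed" pins the managed array so the GC can't relocate it while
        // we hold a raw pointer into it.
        fixed (int* p = data)
        {
            for (int i = 0; i < data.Length; i++)
            {
                p[i] *= 2;            // pointer indexing, no bounds check
            }
        }

        System.Console.WriteLine(string.Join(", ", data)); // 2, 4, 6, 8
    }
}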

This game was made using C# and is four years of work; you can see what the graphics look like, but you know...

 
Functional/structured programming is way better; object-oriented (C++/C#/Java style) is pretty much everything you shouldn't do if you want high performance: working on one item at a time instead of a batch of them, having a lot of dead data in your cache when you want none...
Programming is difficult because you need to know the hardware to make the best use of it, and the ins and outs of your programming language so it doesn't get in the way; also, big O notation is less and less relevant because memory accesses are the bottleneck...
There's so much to say about programming and languages and algorithms and how primitive all of that still is, without even talking about keyboards, which are nothing more than glorified typewriters meant to slow you down!

Computer science is mostly wrong and backward, with occasional bright spots.
[Time too limited to expand atm, sorry.]
 
And we still program computers using 7-bit ASCII glyphs that already existed on ancient typewriters. Since we now have Unicode, we need keyboards with e-ink/OLED keys that can be remapped to show any Unicode glyph, and then we should standardize some keyboard layouts for programming and create some kind of "programming notation" (like what was done for math notation).
It's sad to see languages like APL almost completely forgotten. It has its share of problems, but it is extremely powerful and its creator had a point when he talked about notation as a tool of thought.
 
Functional/structured programming is way better; object-oriented (C++/C#/Java style) is pretty much everything you shouldn't do if you want high performance: working on one item at a time instead of a batch of them, having a lot of dead data in your cache when you want none...
Programming is difficult because you need to know the hardware to make the best use of it, and the ins and outs of your programming language so it doesn't get in the way; also, big O notation is less and less relevant because memory accesses are the bottleneck...
There's so much to say about programming and languages and algorithms and how primitive all of that still is, without even talking about keyboards, which are nothing more than glorified typewriters meant to slow you down!

Computer science is mostly wrong and backward, with occasional bright spots.
[Time too limited to expand atm, sorry.]
Introduction to comp sci is pretty... accessible, but the good stuff happens when you go low-level. Compilers, OSes, and gate-level programming separate the 'I just need a job' crowd from the 'I like this stuff' crowd.
 
Programming is difficult because you need to know the hardware to make the best use of it, and the ins and outs of your programming language so it doesn't get in the way; also, big O notation is less and less relevant because memory accesses are the bottleneck...
Big O is relevant. However, the N in O(N) isn't the number of comparisons or arithmetic instructions; it is the number of cache misses. A cache miss is ~200 cycles, while an ALU instruction (multiply, add, etc.) is just one.
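A small C# illustration of that point (assuming a 4096x4096 array, big enough to spill the caches on a typical machine): both loops below do exactly the same number of additions, but the second one strides through memory and pays a cache miss on almost every element, so it runs several times slower:

using System;

class CacheDemo
{
    const int N = 4096;

    static void Main()
    {
        int[,] grid = new int[N, N];   // C# 2D arrays are stored row by row

        long a = SumRowMajor(grid);
        long b = SumColumnMajor(grid); // same O(N^2) additions, far more cache misses

        Console.WriteLine($"{a} {b}");
    }

    static long SumRowMajor(int[,] grid)
    {
        long sum = 0;
        for (int y = 0; y < N; y++)        // walks memory sequentially
            for (int x = 0; x < N; x++)
                sum += grid[y, x];
        return sum;
    }

    static long SumColumnMajor(int[,] grid)
    {
        long sum = 0;
        for (int x = 0; x < N; x++)        // jumps 16 KB between consecutive reads
            for (int y = 0; y < N; y++)
                sum += grid[y, x];
        return sum;
    }
}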

Branches are worth noting. Modern CPUs are pretty wide and have deep OoO (out-of-order) pipelines. The CPU must guess which way each branch goes; if the guess is wrong, all the speculative work must be discarded and executed again. Avoid branches that are random. A branch that changes infrequently (like null checks and validation) is, however, fast. Sorting/separating data by branch outcome is a good way to avoid branch mispredictions and also a good way to separate data/code. It leads to better cache line utilization and better code reuse. This is the basis for data-oriented entity/component data models.
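A hedged C# sketch of the "sort/separate data by branch" idea (the entity types are invented for illustration): instead of one mixed list with an unpredictable branch per element, keep one tightly packed list per kind, so each loop is branch-free and touches only the data it needs:

using System;
using System.Collections.Generic;

// Invented component types, illustration only.
struct MovingBody { public float X, Velocity; }
struct StaticProp { public float X; }

class DataOrientedDemo
{
    static void Main()
    {
        // Instead of one mixed collection with a "does it move?" branch per
        // element, the data is separated up front into homogeneous lists.
        var bodies = new List<MovingBody> { new MovingBody { X = 0, Velocity = 2 } };
        var props  = new List<StaticProp> { new StaticProp { X = 5 } };

        // Branch-free, cache-friendly loop: every element needs the same work.
        for (int i = 0; i < bodies.Count; i++)
        {
            var b = bodies[i];
            b.X += b.Velocity * 0.016f;   // integrate one frame
            bodies[i] = b;                // structs are copies, so write back
        }

        // Static props need no per-frame work, so no loop runs for them at all.
        Console.WriteLine(bodies[0].X);
    }
}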
 
Big O is relevant. However, the N in O(N) isn't the number of comparisons or arithmetic instructions; it is the number of cache misses. A cache miss is ~200 cycles, while an ALU instruction (multiply, add, etc.) is just one.
Indeed. As I said, I typed that in a hurry; it's a topic I like too much to skip replying to altogether, but under severe time constraints I'm not nearly precise or deep enough :(

Branches are worth noting. Modern CPUs are pretty wide and have deep OoO (out-of-order) pipelines. The CPU must guess which way each branch goes; if the guess is wrong, all the speculative work must be discarded and executed again. Avoid branches that are random. A branch that changes infrequently (like null checks and validation) is, however, fast. Sorting/separating data by branch outcome is a good way to avoid branch mispredictions and also a good way to separate data/code. It leads to better cache line utilization and better code reuse. This is the basis for data-oriented entity/component data models.
When it comes to branching, I prefer no branching as much as possible [you can do that for maths code with masking and such]. There are branch prediction tables, and if you use them sparingly without unnecessary clutter, it should run faster [fewer mispredictions]. (nullptr checking, for example, should only be in the debug build IMHO. Depending on your programming language you may not have the problem at all [non-nil pointers in Swift] or have alternatives [references in C++].)
Analysing your data and how you use it (which algorithms you run on it) is the only correct way to decide on your data structures (what you pack together because it's used together).
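A small C# sketch of the masking trick mentioned above (integer-only, and it assumes a - b does not overflow; illustration rather than production code): the sign bit is smeared into a full mask with an arithmetic shift and used to select a value, so there is no branch at all:

using System;

class BranchlessDemo
{
    // max(a, b) without a branch: (a - b) >> 31 is 0 when a >= b and
    // all ones (-1) when a < b, so the subtraction either cancels or applies.
    // Assumes a - b does not overflow; illustration only.
    static int BranchlessMax(int a, int b)
    {
        int diff = a - b;
        return a - (diff & (diff >> 31));
    }

    // Branchless select: returns x when condition is true, y otherwise.
    static int Select(bool condition, int x, int y)
    {
        int mask = -(condition ? 1 : 0);   // -1 (all bits set) or 0
        return (x & mask) | (y & ~mask);
    }

    static void Main()
    {
        Console.WriteLine(BranchlessMax(3, 7));       // 7
        Console.WriteLine(BranchlessMax(-2, -9));     // -2
        Console.WriteLine(Select(true, 10, 20));      // 10
    }
}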
 
I'll still hammer on the point that our input devices, especially the keyboard, in their very layout, are just completely wrong; the keyboard is a copy of a typewriter, which by design was meant to reduce typing speed, and therefore efficiency :(

When it comes to programming languages, we are still mostly telling the machine what it must do [imperative programming] rather than what we are trying to achieve, although some languages are more about the latter. (Haskell? Swift?)
I'm also unsure that using ASCII or even Unicode is a good solution either. First, you cannot notice connections/spaghetti code easily, whereas some kind of visual programming tool could show that. Also, I think pretty much all cultures have the same saying: "Un petit dessin vaut mieux qu'un long discours" in French (a little drawing is worth more than a long speech), and "a picture is worth a thousand words" in English, which should clearly be a hint that writing in a solely textual programming language might not be the best way to convey the information we are trying to pass on.

I think there is still progress to be made, at least in the primitive textual form, to make things more readable for programmers (because we must all acknowledge that our work will mostly be read by fellow humans, so we must optimise for them), without losing performance for the machine.

(Also note that a compiler's optimisations are not magic and often rely on things they shouldn't, such as undefined behaviour; you shouldn't count on your compiler to make your code fast, but rather carefully analyse your data set & algorithms.)
 
When it comes to branching, I prefer no branching as much as possible [you can do that for maths code with masking and such]. There are branch prediction tables, and if you use them sparingly without unnecessary clutter, it should run faster [fewer mispredictions]. (nullptr checking, for example, should only be in the debug build IMHO. Depending on your programming language you may not have the problem at all [non-nil pointers in Swift] or have alternatives [references in C++].)
Agreed. Most code should take parameters as references. Most low-level code can't properly deal with the null case, and it's not even that code's responsibility, so passing a pointer to low-level code is not the correct thing to do. I use pointers only when null is a valid value (this is a very small percentage of the code base). Error checking (including unresolved/missing links to assets -> null pointers) should be done at a higher level; you then pass the data as a reference: if (data == nullptr) handleNullCase(); else processData(*data);. Data-processing code filled with null checks is less readable. Also, as you said, even predictable branches are not free: they are instruction cache bloat, and too many branches close to each other result in worse branch prediction behavior (as there's a fixed amount of predictor storage per cache line).

My preference is to assert on code bugs and handle data issues separately (null pointer checks, range checks, etc.). Data (asset) issues should never terminate execution, as most engines share a code base between the game and the tools. A tool build crashing (asserting) because a user filled a field with wrong data is unacceptable. Data issues should give good error messages and recover. If corrupt data nonetheless gets loaded (because it is not correctly validated), that is a code bug -> asserts will catch that code bug -> someone will fix the actual bug, which is the missing data validation (+ potentially add a new error message to the tools).
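A hedged C# sketch of that split (the health/damage scenario is invented for illustration): data problems log and recover so a tools build never dies on bad user input, while Debug.Assert guards the invariants that only a code bug could break:

using System;
using System.Diagnostics;

static class DamageSystem
{
    // Data (asset) issue: report it and recover; never crash the tools build.
    public static float SanitizeHealthFromAsset(float value)
    {
        if (float.IsNaN(value) || value < 0f)
        {
            Console.Error.WriteLine($"Invalid health '{value}' in asset, clamping to 0.");
            return 0f;
        }
        return value;
    }

    // Code bug: by the time we get here the data has been validated, so a
    // negative damage value can only come from broken code -> assert.
    public static float ApplyDamage(float health, float damage)
    {
        Debug.Assert(damage >= 0f, "Negative damage passed in: missing validation upstream");
        return Math.Max(0f, health - damage);
    }

    static void Main()
    {
        float health = SanitizeHealthFromAsset(float.NaN);  // recovers with a message
        health = ApplyDamage(health, 10f);
        Console.WriteLine(health);
    }
}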
 
I think I'll just write a quick summary of the languages I learnt:
Basic (on Amstrad CPC6128), using line numbers to call subroutines
Pascal, C++ & Object Pascal, during my computer science degree; pointers were tough back then ;)
C, because C++ is based on it and it was just difficult to learn w/o its basis
Java, as a server back-end dev during the 1st internet bubble
D, D2, because I was (and still am) dissatisfied with C++
lisp, mostly primitive stuff, but it looked fun to learn.
Clay, which is a kind of C with generics, really nice, changed the way I programmed since
Chapel, mostly "Hello World !" stuff, I really like the approach the language took on concurrency, it's well done and could become the next big language with good runtime/tools
Swift, just started, more than "Hello World !" but not a small engine with it yet, it's still in infancy but already nice

I wrote a 2D rendering engine in Pascal, 6 or 7 engines in C++, 1 in D2 (barely finished), 1 in Clay (unfinished though).
I find it really interesting to rewrite code in a new language with new paradigms; it usually changes the way I code from then on. I use generics a lot more and dislike object-oriented a lot more (because of the way C++ does it: it encapsulates functions in a namespace, which prevents me from writing a generic version that is called like a regular member function. In D2 there's Universal Function Call Syntax, which means foo( bar* b ) can be called either as b.foo() or as foo( b ), so there I can.)
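There is no direct UFCS equivalent in C#, but extension methods (sketched below with invented names) give a loosely similar member-call flavour for a free function:

using System;

// Not UFCS, but the closest C# flavour: a static method declared with "this"
// on its first parameter can be called as if it were a member of that type.
static class BarExtensions
{
    public static int Doubled(this int value) => value * 2;
}

class UfcsFlavourDemo
{
    static void Main()
    {
        Console.WriteLine(21.Doubled());                 // member-call syntax
        Console.WriteLine(BarExtensions.Doubled(21));    // ordinary static call
    }
}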
Swift protocols are really interesting; they seem to be (I have not dived into the implementation yet) similar to C++'s (still unavailable) concepts, which are pretty much requirements on template arguments, to the point that Apple started referring to Swift as a protocol-oriented systems programming language, AFAIR. ^^
 
Maybe I should also go over programming styles.
Since I started with BASIC, I used imperative structured programming, but only with global variables, since that's how the language works. (Globals require special care in heavily concurrent engines, so you use them as little as possible, or with extra care, and thus rarely, because extra care is tedious and error-prone; basically only things like an AssetManager or LogFile...)
Pascal is conceptually similar but has local variables.
C++ & Object Pascal exposed me to a virus called object-oriented programming (which is such a bad idea it could only have originated in California, according to Mr Dijkstra).
Java nailed the idea into my brain (unfortunately), although Java is schizophrenic in having both an int primitive and an Integer class.
D & D2, being spiritual successors to C++, continued with the idea of objects, although D2 went multi-paradigm and a bit more functional (Haskell is a functional programming language; to me that means more math-like), because Alexandrescu came on board [and he's very C++-biased, even if he doesn't realise it].
Lisp was for fun, but everything being a list was definitely interesting, and it taught me how to think completely differently.
Clay was going back to a kind of C with a dead-easy syntax and generics; I rewrote all the supporting code of my existing engine in about half the number of lines, because there was a lot of code to reuse through templates that was not possible when functions were integrated into their own namespace (a class defines its own namespace) and carried a hidden pointer (the "this" pointer). I also had something like concepts already, and that was before C++11, AFAIR.
I found Chapel while looking for something better at concurrency that could run on heterogeneous architectures (i.e. CPU & GPU); that one is made by Cray Inc., so it handles supercomputers ^^ It is a very well-designed language in the making; I really like it and follow every release, because I find it to have huge potential.
Swift is the best alternative to C++ IMO. Written by one of the makers of LLVM, the language is simple, straightforward, powerful, clean, and getting fast (already as fast as or faster than C++ in some cases, sometimes a lot slower, but it's very young!). And it's protocol-oriented more than object-oriented, which seems similar to the way I ended up using templates in C++ thanks to Clay, so it might be really nice ^^

For people learning programming today, I think I'd push them toward Pascal, because it's really well thought out; for better low-level knowledge, maybe some C programming; but for more modern stuff, I'd point directly at Swift.
Of course that also depends on whether it's the start of a professional career and in which field: for the web, it seems PHP rules [horrible thing, that language]; for games, C++ rules [unfortunate; the language is massive, bloated, badly designed and growing more complex with each release, but a huge amount of existing code + plenty of rather good tools = hard to avoid :(]; and for Apple... well, you're the luckiest then, you can use Swift!
 
How do you suggest we enter text, telepathy?
What about this
[photo of the Keyboardio keyboard]

https://shop.keyboard.io/

The problem with current keyboards is that, first, the keys aren't in a row/column grid, because that was (mechanically) impossible on typewriters, whereas on electronic devices it is no issue and is way more natural; and second, the letter placement was meant to minimize typing accidents/interlocks, nothing else (not to balance writing across both hands, not based on statistical analysis or comfort or anything; it is just a consequence of the way typewriters were built).
So both the physical & logical layouts of current computer keyboards are plain wrong...

(Small illustration of what I call a typing accident: [attached image])
 
After a night's rest, I think Pascal is a good fit, but Swift might also be; it's more modern but unfortunately doesn't run on Windows yet (I think there's a port in progress), though it does run on macOS, iOS & Linux, which isn't negligible.
Also, I would advise learning the basics of the hardware, nothing difficult: what a CPU is, what memory is, how they work together, with some metrics about size, speed and bandwidth to get a clue; then the programming language, then some algorithms and more details about the hardware.

I wonder how other programmers would advise proceeding; I learnt some things before studying computer science, some during, and an incredible amount after...
(Including reading the not-so-digestible Intel architecture books/PDFs.)


When it comes to algorithms books, I recommend either "The Algorithm Design Manual" or "Introduction to Algorithms"; you can probably read a number of chapters online to see which one suits you best. They cover the same topics, so one should be enough.
There are plenty of good books when it comes to 3D graphics, some with more or less detail; reading them in the right order might make things easier.
 
I have a long career of using all sorts of programming languages, and even wrote some training material. But today, I would go about teaching programming completely differently.

I can't go into it too much now, but basically I would teach two different programming styles: agent driven and data driven. Agent driven would be set up in a very object-oriented way, and focus on seeing programming as creating agents that do tasks for you. I would bring in (unit) testing very early too, as a way of thinking about what you want to program before you start, and of having quick feedback on whether (and eventually proving that) you succeeded.

The other approach would be to see any programming assignment as a factory line that has resources (data) coming in, and resources (data) coming out, and you are going to try to create as efficient a line as possible.

Then I would try to bring in DDD and MicroService concepts for high-level organization and tie everything together.

While the actual language isn't that important, I would be inclined to go for C# for the agent-driven approach, and F# for the data-driven approach. I did try Swift on the Mac briefly, and I like that too, especially its sandboxy nature there (you create visual effects as you type code). But I don't know if I'd want to give up the wealth of unit-testing frameworks for C#, which I think can really help with learning to understand code and learning how to keep code from becoming a giant ball of mud.
 
I think I'll just write a quick summary of the languages I learnt:
Basic (on Amstrad CPC6128), using line numbers to call subroutines
Pascal, C++ & Object Pascal, during my computer science degree; pointers were tough back then ;)
C, because C++ is based on it and it was just difficult to learn w/o its basis
Java, as a server back-end dev during the 1st internet bubble
D, D2, because I was (and still am) dissatisfied with C++
lisp, mostly primitive stuff, but it looked fun to learn.
Clay, which is a kind of C with generics, really nice, changed the way I programmed since
Chapel, mostly "Hello World !" stuff, I really like the approach the language took on concurrency, it's well done and could become the next big language with good runtime/tools
Swift, just started, more than "Hello World !" but not a small engine with it yet, it's still in infancy but already nice

I wrote a 2D rendering engine in Pascal, 6 or 7 engines in C++, 1 in D2 (barely finished), 1 in Clay (unfinished though).
I find it really interesting to rewrite code in a new language with new paradigms; it usually changes the way I code from then on. I use generics a lot more and dislike object-oriented a lot more (because of the way C++ does it: it encapsulates functions in a namespace, which prevents me from writing a generic version that is called like a regular member function. In D2 there's Universal Function Call Syntax, which means foo( bar* b ) can be called either as b.foo() or as foo( b ), so there I can.)
Swift protocols are really interesting; they seem to be (I have not dived into the implementation yet) similar to C++'s (still unavailable) concepts, which are pretty much requirements on template arguments, to the point that Apple started referring to Swift as a protocol-oriented systems programming language, AFAIR. ^^
Wow, that's an impressive number of languages. I wonder how you cope with that. Here you have the code for the last-letter first-letter "game" (better called the word-chain game, which I had to create in an exam) in a ton of languages. I checked and compared the syntax of the different languages you used, and they are so different!

https://rosettacode.org/wiki/Last_letter-first_letter

It's also my intention to learn several languages. My main language is Java, I have extra passion for C# and C++, and I'd like to learn a language or two that almost nobody knows; I have to say that Haskell seems to fit that bill, plus it's simple and structured. As for Swift... I just heard of it, and you seem to like it, so it might be good. For me, until it becomes more platform-agnostic, it's not much of an option...
 
What about this
[photo of the Keyboardio keyboard]

https://shop.keyboard.io/

The problem with current keyboards is that, first, the keys aren't in a row/column grid, because that was (mechanically) impossible on typewriters, whereas on electronic devices it is no issue and is way more natural; and second, the letter placement was meant to minimize typing accidents/interlocks, nothing else (not to balance writing across both hands, not based on statistical analysis or comfort or anything; it is just a consequence of the way typewriters were built).
So both the physical & logical layouts of current computer keyboards are plain wrong...

(Small illustration of what I call a typing accident: [attached image])
A friend of mine, who is a professional programmer, has a similar keyboard. Where I have a hard time when programming is writing brackets and such; it also depends on the language layout your keyboard is set to, but still.

I learnt typewriting; I studied it for 4 years and could write a decent number of words per minute, though the women in my class were the best at it; maybe women have nimbler fingers.

But when I am programming, that speed is gone! Traditional typewriting and programming aren't the best of friends, I think.
 
Big O is relevant. However, the N in O(N) isn't the number of comparisons or arithmetic instructions; it is the number of cache misses. A cache miss is ~200 cycles, while an ALU instruction (multiply, add, etc.) is just one.

Branches are worth noting. Modern CPUs are pretty wide and have deep OoO (out-of-order) pipelines. The CPU must guess which way each branch goes; if the guess is wrong, all the speculative work must be discarded and executed again. Avoid branches that are random. A branch that changes infrequently (like null checks and validation) is, however, fast. Sorting/separating data by branch outcome is a good way to avoid branch mispredictions and also a good way to separate data/code. It leads to better cache line utilization and better code reuse. This is the basis for data-oriented entity/component data models.
What do big O and O(N) stand for?

I have a long career of using all sorts of programming languages, and even wrote some training material. But today, I would go about teaching programming completely differently.

I can't go into it too much now, but basically I would teach two different programming styles: agent driven and data driven. Agent driven would be set up in a very object-oriented way, and focus on seeing programming as creating agents that do tasks for you. I would bring in (unit) testing very early too, as a way of thinking about what you want to program before you start, and of having quick feedback on whether (and eventually proving that) you succeeded.

The other approach would be to see any programming assignment as a factory line that has resources (data) coming in, and resources (data) coming out, and you are going to try to create as efficient a line as possible.

Then I would try to bring in DDD and MicroService concepts for high-level organization and tie everything together.

While the actual language isn't that important, I would be inclined to go for C# for the agent-driven approach, and F# for the data-driven approach. I did try Swift on the Mac briefly, and I like that too, especially its sandboxy nature there (you create visual effects as you type code). But I don't know if I'd want to give up the wealth of unit-testing frameworks for C#, which I think can really help with learning to understand code and learning how to keep code from becoming a giant ball of mud.
As I said, Java is my main language (I even read Deitel's 9th edition book on it), although I enjoy C# quite a bit more. I tried several Java IDEs, and NetBeans is probably the best, along with Eclipse. I also tried BlueJ and Dr. Java, which are simpler, especially Dr. Java.

With C# and MonoDevelop (Xamarin Studio), everything works like a charm for me. Visual Studio is very nice, as you might know, but Xamarin Studio is multiplatform, plus it's so polished and easy to use... I am interested in F# because it's meant for GPUs, and I would like to create a game some day.
 
I have a long career of using all sorts of programming languages, and even wrote some training material. But today, I would go about teaching programming completely differently.
Regarding learning/teaching, I am currently reading a 2016 book I downloaded in PDF format (although I am going to buy the actual book when I can import it; you can easily see the great effort the author put into it) called Essential C# 6.0, which I downloaded here (the link has a collection of good books for learning programming and programming games):

https://onedrive.live.com/?authkey=!ACHCoDPWYJdjcRY&id=3974B306D708001E!4064&cid=3974B306D708001E

As a newbie, structured programming looks easier to me, but I think OOP might be better in the long run, especially if you can use it while still getting high performance, like in C++ (C# is not as fast, but the unsafe keyword might be useful if you want to go lower level and use pointers). This is what the book says about it:

The beginning chapters of Essential C# 6.0 take you through sequential programming structure, in which statements are written in the order in which they are executed. The problem with this model is that complexity increases exponentially as the requirements increase. To reduce this complexity, code blocks may be moved into methods, creating a structured programming model. This allows you to call the same code block from multiple locations within a program, without duplicating code. Even with this construct, however, growing programs may quickly become unwieldy and require further abstraction. Object-oriented programming, discussed in Chapter 5, was a response intended to rectify this situation. In subsequent chapters of this book, you will learn about additional methodologies, such as interface-based programming, LINQ (and the transformation it makes to the collection API), and eventually rudimentary forms of declarative programming (in Chapter 17) via attributes.
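As a concrete (made-up) illustration of the progression the excerpt describes, here is a small C# sketch: the structured step is a free-standing helper method, and the object-oriented step bundles the data and the behaviour into a class:

using System;

// Object-oriented step: the data and the operations on it live together.
class Rectangle
{
    public double Width  { get; }
    public double Height { get; }

    public Rectangle(double width, double height)
    {
        Width = width;
        Height = height;
    }

    public double Area() => Width * Height;
}

class Program
{
    // Structured step: a free-standing method operating on raw values.
    static double Area(double width, double height) => width * height;

    static void Main()
    {
        Console.WriteLine(Area(3, 4));                   // structured: 12
        Console.WriteLine(new Rectangle(3, 4).Area());   // object-oriented: 12
    }
}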

I also like the foreword of the book. I am not a very critical person (I get easily hyped, and many times the fall is far greater than the height I had imagined; it has happened to me many times and I never learn :/), but I admire people who are, and part of the book is dedicated to one of those people:


On a different note, I would like to learn to use LINQ and lambdas effectively.
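For reference, a tiny self-contained C# example of both (the scores array is made up): a lambda is just an inline function, and LINQ chains such functions to filter, transform and order a collection:

using System;
using System.Linq;

class LinqDemo
{
    static void Main()
    {
        int[] scores = { 42, 7, 93, 15, 68 };

        // Lambda: an inline function that can be passed as an argument.
        Func<int, bool> passing = s => s >= 40;

        // LINQ query built from lambdas: filter, transform, order.
        var topScores = scores
            .Where(passing)
            .Select(s => s * 2)
            .OrderByDescending(s => s)
            .ToList();

        Console.WriteLine(string.Join(", ", topScores)); // 186, 136, 84
    }
}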
 