Sweeney presentation on programming

An interesting read.

I agree that while we use languages like C++, C# or Java, we're pretty much relying on our own discipline to write working code, and that's never a good idea.

As Tim points out, these languages have the wrong defaults: the assumption is that everything is mutable by default, which is just bad.

I have a soft spot for the ML school of languages; they let you write imperative code when it's needed. Having said that, I have a love-hate relationship with the syntax.

I've been playing around with writing toy languages in my spare time as of late. I like the idea of a C-like language with ML-like properties, but every time I start expanding the syntax it gets more and more ML-like :/ .

I like type inference, and I'm not convinced it doesn't scale; I've written some pretty large OCaml apps. The issue he raises has more to do with "strong typing" than it does with type inference. ML also lets you optionally declare the type of a function.

I think we're going to have to run into a brick wall before we start moving away from the imperative languages we've been using, and I don't think we're quite there yet. Either that, or someone is going to have to demonstrate that the better way works.
 
blakjedi said:
Main diff between Xenon and Cell PPE cores AFAIK right now is Xenon has beefier VMX and dot product capability.

FWIW It really isn't anything like that simple.
 
Didn't the guys behind CryEngine 2 say that Cell has better multithreading ability...

[Xbox 360] Analyzing it from a software developer's standpoint, it's no different from hyper-threading. That means you're supposed to have 6 threads, but in reality it's only 1.5 threads times 3. With [PS3's] Cell things look different: the main CPU has 2 threads (slightly better than hyper-threading) and then you get the synergistic processors
 
ERP said:
FWIW It really isn't anything like that simple.

ERP, could you please expand? I think we are all starved for new information and/or better understanding of the architectures. Hopefully we've gotten past descriptions of what's "better" into just describing what "is" and how it can be used.
 
blakjedi said:
ERP, could you please expand? I think we are all starved for new information and/or better understanding of the architectures. Hopefully we've gotten past descriptions of what's "better" into just describing what "is" and how it can be used.

I don't believe I can get any more specific.
All I can really say is that they are superficially similar: from the publicly available docs, outside of the VMX differences they both have similar execution resources. But benchmarks tend to imply that they are not as close as the paper descriptions would suggest.
 
A little clarification by Tim:

Question: I was quite surprised to know you aren't using any assembly. Since the slides seem to be console centric, is that for all platforms or will you still be optimising for the PC?

Tim Sweeney said:
Currently, we write no assembly code on any platforms. For example, where we use SIMD vector math and other low-level platform features, we write the code in C/C++ using platform-specific intrinsic functions (for example, Visual C++'s intrinsics supporting x86 SSE instructions), rather than writing assembly.
 
pc999 said:
Just out of curiosity (as it can't tell us which one is better overall), can you say which one came out ahead?

I believe this was discussed previously, and the suggestion was that Xenon's core performed better, but as far as I recall there were unanswered questions about the conditions of the benchmark (for example, whether cache was locked such that the comparison would be fair, or whether the single Xenon core had all 1MB available to it; and about the nature of the code, e.g. if it was VMX-heavy, where one might expect Xenon's core to be a bit better with its extra registers, etc.). Maybe ERP could clarify..
 
I can't shake the feeling that there is something fundamentally wrong with a system which just redoes a computation until it finally finds that none of its inputs has been changed while doing it... I hope there is a better way, because that's just so inelegant.

BTW, for those who don't like functional languages, there's a C implementation of the ideas from that composable memory transactions paper.
 
One of the things in the presentation I don't agree with is his praise of Haskell only because it has more descriptive programming, or metaprogramming. Of course, that can already be done in C++ to a large degree with template metaprogramming, and the next C++ standard should have improved support for more descriptive programming: basically, letting types have the usual run-time data, plus compile-time data that better describes the type's context, allowing better type checking and optimisation. Like I said, this can already be done with templates in C++; the language extensions will just make it more intuitive.
 
Made me think of Digital Mars' D programming language, which addresses common C++ pitfalls and adds garbage collection, among other things...
(For those interested, it's here)
But it's not quite as much as what Mr Sweeney seems to want...
 
DudeMiester said:
Like I said, this can already be done with templates in C++, the language extensions will just make it more intuitive.
Oh sure - the potential functionality there is great - there's just the minor problem that metaprogramming syntax in C++ is neither very human-friendly (given that it's an unintended side effect, that's not exactly surprising) nor has any real debugging facilities whatsoever.

Template metaprogramming can be an awesome tool in the right hands, but the potential for additional bugs and problems is also staggering, and that's one of the major points of Tim's talk - how to improve code stability, not make it worse.
Personally I want more than language extensions to C++-style syntax - I'd like to have REAL facilities for debugging and writing metacode.
 
C++ is not the answer

On a bit of a tangent, I've recently been doing some stuff in Objective-C, and while that will never be a mainstream language, it gets quite a few things right in conjunction with its class library (e.g. the distinction between mutable and immutable objects (immutable ones can be shared directly), introspection, and dynamic message dispatch) in an orthogonal extension to C.
While C++ allows one to do awesome things, it's also very good at making you shoot yourself awesomely in the foot in the process of doing so.
 
Tim wants garbage collection in the languages games will be written in (currently C++); Java and C# are already garbage collected.

I'm no expert, but I remember reading that garbage collection requires exclusive access, and therefore all other work stops while the garbage collection thread performs its work. It seems to me that therein may lie a problem, especially for games, but Tim didn't mention this at all...
 
How does Xenos reach half a teraflop if the GPU's shader performance is around 240 GFLOPS?

Special function units (the "official" ATI/M$ specs in the slides/PDF talk about 937 GFLOPS)?

(Some "extra" performance from the eDRAM's 192 processors? Some kind of "extra threads" in the GPU's 48 shader pipes?)
 
Especially as things move to a multi-core environment, programming at the level of each individual thread is going to get tedious very quickly. The programmer has to manage and direct all the data flow manually, especially in a medium-level procedural language like C/C++. How many cores is it going to take before the programmer goes insane from writing individual code for each core? If the programmer is smart, he will develop code patterns which can be instantiated automatically at a time of need. In doing this he has begun to walk down the road of abstraction, which eventually converges towards a high-level functional paradigm for handling data flow.

For tasks which are embarrassingly parallel, there should also be abstract data patterns which are "embarrassingly simple" to describe at a high level and which have very straightforward implementations. Non-functional languages like C/C++ have to approach the problem from the bottom up, i.e. the specification and management of each individual thread. Functional languages, on the other hand, approach the problem from the top down: the programmer describes the abstract or algorithmic flow of data, and the compiler writes all the multithreaded code to implement the described algorithm.

  • In terms of mental overhead for the programmer, a good functional language wins hands down.
  • In terms of elegance of expression and length of solution, a good functional language wins hands down.
  • In terms of efficiency of code, a good compiler for a good functional language will easily compete with, if not outdo, an optimised hand-coded solution in C/C++.

Extensions to C/C++ are not going to cut it. At the heart of C/C++ is a very specific philosophy: programming at the procedural level, one thread at a time. This philosophy is no longer going to cut it in a massively multithreaded environment. And the situation gets worse in a heterogeneous environment, where the solution may need to be re-implemented for each different type of processor in the pool.

[/rant]

Sorry to go off the deep end. :) I'm a bit passionate when it comes to programming language paradigms. I pretty much agree with Sweeney. Research into programming languages has not caught up with the advances in hardware. There is a gaping hole which needs to be filled by some (yet to be designed) high-level functional programming language for the specification of parallel programs. ML and Haskell, while functional, do not address data parallelism directly. The ideal functional language will have as few control structures as possible and should be as isomorphic as possible to how we would write the algorithms mathematically.
 
liverkick said:
Shouldn't that be "6-9 hardware threads"?

Unification of GPU-CPU...like one big all purpose chip?

Maybe Sony is reserving an SPE for the OS?

Makes sense, if it's going to act as a location free media server, they're gonna have to make some of the CPU off limits to developers.
 