Luminescent said:
P.S. We trust you, Dio, andypski.

I'm perfectly trustable, but he was replaced with his evil clone some years ago.
Dio said:
It is still done. I have encountered a DirectX game that is written in 100% assembly language. I am not the only one to have commented to the team that they must be 1) asm gods and 2) stark staring bonkers, certainly afterwards, probably before as well.

And if not, when they try to port it to another platform they most definitely will be confined to the padded room.
DiGuru said:
Sure, API's are great. But why only optimize for the 'bad' videocards?

Chalnoth said:
Are you talking specifically about the R3xx vs. NV3x architecture? If so, I think you're rather off-base.

As are your comments below.

Chalnoth said:
The R3xx architecture is very...forgiving on what the assembly is. Optimizing pixel shaders for the R3xx doesn't yield a huge performance increase.

How would you know? Did it occur to you that it appears forgiving because there is a driver between the application and the hardware? I will agree that the R300's shaders are a lot more flexible than the NV3x's.

Chalnoth said:
However, the NV3x is very challenging to program for. It has a lot of quirks and not-so-obvious performance penalties. This alone doesn't necessarily mean it's bad, it just means that it is extremely hard to program for at a low level.

All of this should be resolved by the driver, with the exception of partial precision, as you mention below.

Chalnoth said:
It shouldn't be any mystery, then, why nVidia produced Cg. They knew that OpenGL's HLSL (GLSlang, I think it's been called?) wouldn't be ready at the launch of the NV30, and they didn't think, earlier on, that Microsoft's HLSL would be ready yet either (the NV30 was delayed). But nVidia needed to make it relatively easy to program for the NV30, and still get decent performance.

Yet Cg appears to produce results no better than HLSL.
Chalnoth said:
The R3xx architecture is very...forgiving on what the assembly is. Optimizing pixel shaders for the R3xx doesn't yield a huge performance increase.

I suspect that a lot of that may be due to an effective assembler/compiler-optimiser buried in the driver but there's a (practical) limit to how much re-arrangement can be done. Recoding the original source may still make a big difference.
OpenGL guy said:
How would you know? Did it occur to you that it appears forgiving because there is a driver between the application and the hardware? I will agree that the R300's shaders are a lot more flexible than the NV3x's.

Of course. It's simply easier to optimize for the R300's architecture. This is why developers shouldn't program in assembly: they would need to write completely different shaders for the two architectures, and will either take too long to write optimized assembly, or won't be able to discover where the performance problems are occurring.
OpenGL guy said:
Yet Cg appears to produce results no better than HLSL.

Which has no bearing on what my statement was. Cg was released prior to HLSL. Also, I'm not so sure that HLSL allows for the possibility of a 'write once, run anywhere' shader. Cg is designed for that sort of thing.
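To make the partial-precision point above concrete, here is a minimal sketch of the kind of tweak being argued about. The sampler names and the math are made up for illustration; this is not code from any of these posts. The only "NV3x-oriented" change is demoting intermediates from float to half.

// Toy DX9-era HLSL pixel shader; names and math are illustrative only.
sampler2D baseMap;
sampler2D lightMap;

// Full-precision version: every intermediate is float.
float4 PSMain(float2 uv : TEXCOORD0) : COLOR
{
    float4 base  = tex2D(baseMap, uv);
    float4 light = tex2D(lightMap, uv);
    return base * light;
}

// Partial-precision version: identical math, but intermediates are half.
// On NV3x-class parts this corresponds to the _pp hint and can ease
// register/precision pressure; R3xx-class parts run FP24 throughout and
// simply ignore the hint, so the same source can serve both architectures.
float4 PSMain_pp(float2 uv : TEXCOORD0) : COLOR
{
    half4 base  = tex2D(baseMap, uv);
    half4 light = tex2D(lightMap, uv);
    return base * light;
}

Even this much is arguably better expressed in the high-level source and left to the compiler/driver than hand-written twice as assembly, which is roughly where the exchange above ends up.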
Simon F said:
And if not, when they try to port it to another platform they most definitely will be confined to the padded room.
What language was RollerCoaster Tycoon programmed in?
It's 99% written in x86 assembler/machine code (yes, really!), with a small amount of C code used to interface to MS Windows and DirectX.
RollerCoaster Tycoon No.1 PC game of 1999
Despite only being released in April of 1999, RollerCoaster Tycoon has been announced as the top-selling PC game overall in 1999, selling more units than any other PC game in the US.
RollerCoaster Tycoon returns to No.1 !
Amazingly, RollerCoaster Tycoon has returned to the number 1 position in the PC games sales chart in the US ! (Source: PC Data, Nov12-Nov18 2000) The recently released Loopy Landscapes expansion pack is at number 6. RollerCoaster Tycoon has been in the top 10 for most of the 20 months since release in March 1999, spending many months at the No.1 spot in 1999 and early 2000, but to have the game back up at No.1 nearly 2 years after release is remarkable.
Chalnoth said:
This is also part of the reason why I support going for nothing but high-level language shader programming. Assembly programming is too hardware-specific. The assembly should be done away with entirely, with the compiler compiling directly to the machine code.
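As a rough sketch of what "no assembly level" could look like in practice: the developer writes only high-level source like the hypothetical shader below, and instruction selection is left to the compiler and driver. The commented "assembly" is a hand-written approximation of a ps_2_0 translation, not real compiler output.

// Hypothetical high-level pixel shader: modulate a texture by a constant tint.
sampler2D baseMap;
float4 tintColor;                    // constant set by the application

float4 TintPS(float2 uv : TEXCOORD0) : COLOR
{
    float4 c = tex2D(baseMap, uv);   // roughly: texld r0, t0, s0
    return c * tintColor;            // roughly: mul r0, r0, c0 / mov oC0, r0
}
// Whether this is compiled against a ps_2_0 profile, an NV3x-oriented
// profile, or (as argued above) straight to a GPU's native instruction set,
// the source stays the same; scheduling and register allocation become the
// compiler's and driver's problem rather than the developer's.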
Mr. Blue said:
Since when did this thread start talking about assembly language? It's getting rapidly off-topic.
-M
That usually happens here after it's been up 2-3 days. This one did a bit better, probably because it's been quite interesting.
For twenty years, the overall average frame time has consistently been half an hour.
MfA said:
I wouldn't mind seeing this extended to all programming; the only problem is that programmers dislike exposing their source code.

Why are those things related?
Pavlos said:
This quote is the answer to the original question. Since the average frame time of both offline (30 minutes) and real-time rendering (1/60 sec) is constant, there isn't going to be any convergence any time soon.

I take "convergence" to mean something entirely different.
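For scale, and this is just arithmetic rather than anything stated in the thread: a 30-minute offline frame is 1,800 seconds, and 1,800 / (1/60) = 108,000, so the offline per-frame budget is roughly five orders of magnitude larger than the real-time one. That is the gap being pointed at: if both budgets stay fixed, faster hardware raises quality on both sides rather than closing it.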
MfA said:
I wouldn't mind seeing this extended to all programming; the only problem is that programmers dislike exposing their source code ... I personally like virtual ISAs as a halfway compromise. Typed SSA virtual assembly languages can be pretty much just as useful to the compiler as the original code (it is a real shame politics is preventing LLVM from becoming the next-gen GCC, by the way).

To my somewhat untrained eyes, LLVM would be a bear to implement either in silicon or in an emulated virtual machine.