Dummies for Programming or Programming for Dummies

I've no idea why you've raised one badly running Unity game on Android

Because you've mentioned it?

but middleware saves developers from the pointless headache of reinventing the wheel

Why do you think so? For example: "scene representation" is a good match for PC and D3D architecture, but probably the worst thing you can do for PS4. How do you intend to abstract that difference away with your middleware?
 
If you want a case of bad design: the compiler on PS3 (or X360, to an even larger extent) inserting LHS randomly all over the code.
I don't think it was bad compiler design, more like bad CPU design (an in-order CPU without any store forwarding hardware (*)). The compiler basically doesn't have a choice, because the calling convention demands storing and loading data from the stack (for function calls). Obviously the compiler could have automatically inlined most of the functions, but the limited L1 instruction caches were already a big performance bottleneck, so it's arguably a better choice to leave inlining as a responsibility of the programmer.

(*) And the CPU didn't have a direct path between integer <-> vector/float registers either (another load + store scenario). And the vector pipeline was awfully long. All this combined meant that it was hard to use vector instructions. Basically you could only get any performance boost from vector instructions if you had hundreds of them in a row (LHS stalls on both sides totaled almost 80 cycles). Jaguar is a huge improvement in this regard (a very short latency vector pipeline, plus direct move instructions between general purpose registers and vector registers).
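To make the LHS pattern concrete, here is a minimal C++ sketch (hypothetical code, not from any actual title; names are invented) of the two cases described above: a non-inlined call spilling through the stack per the calling convention, and a float-to-int conversion that has to round-trip through memory because there is no direct float-to-integer register path.

// Hypothetical illustration of load-hit-store (LHS) on the in-order PowerPC
// cores (Cell PPE / Xenon); function names are made up for the example.
float accumulate(const float* values, int count)
{
    float sum = 0.0f;
    for (int i = 0; i < count; ++i)
        sum += values[i];
    return sum;
}

int frame_score(const float* values, int count)
{
    // Non-inlined call: the calling convention spills arguments and saved
    // registers to the stack, and the subsequent reload can hit the in-flight store.
    float sum = accumulate(values, count);

    // No direct float -> integer register path: the conversion is compiled as
    // "store the float to the stack, reload it as an integer", so the load hits
    // the pending store and the in-order core stalls for tens of cycles.
    return static_cast<int>(sum);
}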
 
It's always good to hear about the real ins-and-outs of the hardware at the execution level. Armchair engineers like myself see the paper specs and conclude far too much from them, failing to appreciate that under the hood, performance can be radically affected by the microarchitectures.
 
Because you've mentioned it?
It was mentioned prior to your clarification, hence it should be dropped?

Why do you think so? For example: "scene representation" is a good match for PC and D3D architecture, but probably the worst thing you can do for PS4. How do you intend to abstract that difference away with your middleware?
I don't know. How about you ask Telltale Games, who operate their own cross-platform middleware, Telltale Tool, which you seemed impressed with earlier? ;)

And ultimately, it doesn't matter as long as the end result meets the design goals of the game. Who cares if (hypothetically) UE4 isn't an ideal fit for PS4 versus XB1, if the released game performs to spec and sells enough? If you were producing a game and an engineer came to you saying that although UE4 was hitting the graphical targets at the desired 30 fps, it wasn't an ideal fit for PS4, and asked whether he could write his own custom engine from scratch, would it be smart to allow him? Would it be smart, as a console company, to design a box that only 10% of developers would be able to release games for, but where you could be sure those games would ideally fit the hardware and be nicely optimised?
 
How about you ask Telltale Games

No need to: they use it only for their own games, and those games have a very particular set of features.
Their engine would probably perform as badly as Unity (if not worse) when used for different games.

end result meets the design goals

The result ultimately depends on price. What's the price of using middleware to make a "competitively performant" (1) game?
I would argue that it will always be higher when middleware is used.

(1) Meaning a game with a state-of-the-art graphics pipeline that squeezes as much as possible out of the hardware (compared to other games of the same period).

only 10% of developers

It doesn't make much sense. Not everybody who wants to make a game is competent enough. So even now it's probably 10% of "all developers" (more like 0.1%).
So the question we arrive at is: do we need as many games as we have now? I think you can already guess my opinion. :)
And for a platform holder it doesn't really matter; see Nintendo. They were in the black with zero third-party support back then, and they are in the red with the same support right now. Yep, it doesn't really matter.
 
Having a bit of experience with UE4 now, there's a lot more to paying for middleware than getting a renderer. You get a ton of tools. When studios say they made a "new engine", they usually don't mean they threw out every single line of code they wrote last gen. They have massive code bases with tools their level designers, artists and animators are familiar with. That doesn't all get chucked out the window. Whatever is useful is still carried over.

If you had the smartest programmers around, and they started a brand new studio from scratch, there's no way you can say it would be more affordable to write absolutely everything from scratch. That's an astronomical amount of work if you're trying to make a AAA title. Even Uncharted 3 used middleware like Havok for physics. That's one of the best technology-driven studios around.
 
I don't think it was bad compiler design, more like bad CPU design

There is no such thing as "bad hardware design", as I've already said. It's just a matter of perspective. Did we have a better hardware platform back then? (PC doesn't count, for obvious reasons, i.e. it's not a platform.)

The compiler basically doesn't have a choice

I beg to differ; the compiler did have a choice, and most of the LHS it inserted were trivial to avoid. The main problem (in the X360 case) was that the platform holder did not allow the use of assembly (or changes to the compiler). People relied on bad hacks, insertion of fake code and other horrors to get something that performs well. That's another case of "let's make smart hardware" that failed miserably. If they hadn't assumed that "idiots" would be doing the programming, they obviously could have given more control.
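As a hypothetical example of the kind of "trivial to avoid" LHS being referred to, consider a counter that lives in memory versus one accumulated in a local; types and names below are invented for illustration.

struct Object { bool visible; };
struct Stats  { int visibleCount; };

// LHS-prone: every iteration loads, increments and stores the same memory
// location, so on an in-order core without store forwarding the load in one
// iteration hits the store from the previous one.
void countVisible(const Object* objs, int n, Stats* stats)
{
    for (int i = 0; i < n; ++i)
        if (objs[i].visible)
            stats->visibleCount++;
}

// The trivial manual fix: accumulate in a local (i.e. a register) and write
// the result back once at the end of the loop.
void countVisibleFast(const Object* objs, int n, Stats* stats)
{
    int visible = 0;
    for (int i = 0; i < n; ++i)
        if (objs[i].visible)
            ++visible;
    stats->visibleCount = visible;
}

Note that the compiler generally cannot make this rewrite on its own, because as far as it knows stats could alias objs; that ties into the aliasing point discussed below.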
 
You get a ton of tools.

That's correct. But usually these tools lag behind the state of the art, because the engine has been in development for some time, it has accumulated a lot of legacy code, etc.
The problems with big high-level engines come from their "advantages". I.e. when somebody buys an engine they think: "we are buying a shitload of man-hours invested in the software, for a fraction of the price". But it also means that tailoring and altering that engine even a tiny bit will cost their own developers an "order of a shitload" of hours. Writing an "engine" that is specifically good for one single game is a very easy task: you do not need all those hours, you do not need all those features, the tools can be very simple, etc.

Havok for physics

That's a case of low-level middleware. I have no problem with that.
 
Having a bit of experience with UE4 now, there's a lot more to paying for middleware than getting a renderer. You get a ton of tools. When studios say they made a "new engine", they usually don't mean they threw out every single line of code they wrote last gen.
The most common thing you do is to rewrite the renderer. Other code doesn't get old as fast as your rendering code.
I beg to differ; the compiler did have a choice, and most of the LHS it inserted were trivial to avoid. The main problem (in the X360 case) was that the platform holder did not allow the use of assembly (or changes to the compiler). People relied on bad hacks, insertion of fake code and other horrors to get something that performs well. That's another case of "let's make smart hardware" that failed miserably. If they hadn't assumed that "idiots" would be doing the programming, they obviously could have given more control.
Obviously they could have used a non-standard calling convention to keep more stuff in registers. And I fully agree with you that the compilers could have analyzed the loads and stores occurring to the same memory location (if that information was available at compile time) and tried to move them as far away from each other as possible. But that's not a trivial task. Problems that require perfect compilers are doomed to fail (whether they are language- or hardware-based problems). It's always easy to blame the compiler when the hardware (or the programming language) is problematic. C/C++ has so many potential memory aliasing cases that it is almost impossible to compile efficiently (every function call can basically change the whole of memory, and every pointer or reference passed to you might alias another, no matter how incompatible the types are). This is one reason why modern CPUs have store forwarding hardware (no compiler can solve this issue, as LHS is a C/C++ language problem).

I would have definitely preferred some language extensions to control the LHS stall cases. As you said, the hacks weren't pretty (when you needed them).
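A minimal sketch of that aliasing constraint (hypothetical functions; __restrict is a common vendor extension, "restrict" in C99):

// Without any no-alias guarantee, 'out' may overlap 'in', so the compiler must
// assume every store to out[i] can change some in[j] and re-load from memory
// instead of keeping values in registers or reordering across the store.
void scale(float* out, const float* in, float k, int n)
{
    for (int i = 0; i < n; ++i)
        out[i] = in[i] * k;
}

// With a no-alias promise, the compiler is free to keep data in registers,
// unroll, and schedule loads well away from stores.
void scaleNoAlias(float* __restrict out, const float* __restrict in, float k, int n)
{
    for (int i = 0; i < n; ++i)
        out[i] = in[i] * k;
}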
 