The Game Technology discussion thread *Read first post before posting*

I'm rather taken aback that you're suggesting the handling and physics model is "possibly better" when you've not even played one of the titles you're comparing! Do you actually have any technical data to back up this assertion? Quantifiably better how, exactly?

With regard to NFS, in terms of areas of technical comparison that can be measured, it comes up some way short of what Turn 10 has achieved with Forza. Lower frame rate, far more frame drops, screen tearing, what seem to be lower-poly vehicles, less challenging tracks to render - the list goes on.

I'd be genuinely interested to know in what way we can objectively measure physics, because at the end of the day it usually comes down to how "good" the handling "feels", and in this respect, based on my playthrough of SHIFT, it feels "OK" but nowhere near as responsive, precise or "realistic" as either Forza 3 or GT5P.

Which is all great but personal opinion is meaningless in a technical discussion thread, hence the query about actual metrics.

He said better physics engine, not better game physics.

And he is right, the physics engine in NFS Shift has a huge number of parameters it can take into account.

From the physics of the movement of the fuel in the tank to the amount of engine cooling lost from a coolant leak - the engine has just had most of this stuff turned off to make the game easier.
 
And FM2 had over 9,000 independent physics simulations per car at 360Hz ... and now FM3 has tire deformation!

Ahem, anyhow, Shift on the consoles is 30fps, by their own account Shift is updated less often, and my subjective hands-on impression was that the game wasn't so good.
 
1) We should avoid console comparisons like Shift vs Forza 3 in this thread. This will be your last warning. ;)

2) Is 9000 a lot or less than other physics engines? 9000 is a number. That is all we know.

3) We also don't know how much sophistication the engine has. We know the Xenon is not the brightest kid on the block, so if it has many parameters then maybe the algorithm is not so sophisticated.

How many CPU cycles does it take to calculate an add vs. a square root? Less sophisticated arithmetic needs many more iterations to emulate more sophisticated math.

Example:

Sophisticated (1 step)
5x10=50

vs

Simple (5 steps)
10+10+10+10+10=50

More iterations does not mean better. It can be a sign of less sophistication.
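That toy comparison, written out as runnable code (my own illustration, not anything from either engine):

```python
# Multiplication vs. repeated addition: same result, very different step counts.
# A toy illustration of "more iterations does not mean more sophistication".

def multiply_direct(a, b):
    """The 'sophisticated' way: one step."""
    return a * b, 1  # (result, steps taken)

def multiply_by_addition(a, b):
    """The 'simple' way: b steps of repeated addition."""
    total, steps = 0, 0
    for _ in range(b):
        total += a
        steps += 1
    return total, steps

print(multiply_direct(10, 5))       # (50, 1)
print(multiply_by_addition(10, 5))  # (50, 5)
```

Both calls produce 50; only the iteration count differs.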

But we do not know any of this because we do not have the code available. We can only make meaningless guesses, like you and I are doing, no? :)


PS.

I made an aircraft cost : performance calculator with not many parameters and few free variables. It is good because the algorithm is good. Pick the right variables and do statistical research to develop a good algorithm, and one has a good system that does not need one million parameters.
 
No I'm saying racing games by their nature are probably the most overdraw predictable games out there. Aside from smoke/dust, nothing ever randomly overlaps like it can in other styles of games.

Ah, I see. Aren't smoke and dust important for overdraw considerations? In some racing games (I will not say which, I promised Alstrong) you can fill the screen with large smoke clouds.

This to me is not predictable, but has heavy overdraw, because overdraw can go from a small screen percentage to a large screen percentage very fast.
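As a rough back-of-the-envelope sketch (all sprite sizes and counts here are invented for illustration, not measured from any game), the fill cost of overlapping smoke sprites can be expressed as extra screens' worth of pixels shaded:

```python
# Rough overdraw estimate: transparent pixels shaded divided by screen pixels.
# Every overlapping sprite layer shades its covered pixels again.

SCREEN = 1280 * 720  # 720p pixel count

def overdraw_factor(sprite_pixels, sprite_count):
    """Extra full screens of fill caused by the transparent sprites."""
    return sprite_pixels * sprite_count / SCREEN

# A few small dust puffs: cheap.
print(round(overdraw_factor(128 * 128, 10), 2))  # ~0.18 extra screens of fill

# A pile-up filling the view with large smoke quads: fill rate explodes.
print(round(overdraw_factor(640 * 640, 12), 2))  # ~5.33 extra screens of fill
```

The jump from the first case to the second is the "small screen percent to large screen percent very fast" problem.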


The road is usually at a very skewed angle to the camera, meaning that to get it to look nice you have to either use a high level of AF or tweak the LOD bias, both of which mean the road might have a higher texture GPU load than the cars do.

If you take away AF, and you have a larger texture for the far-away road than needed, I still don't think your road texture is as much of a problem as all the car textures, because the whole road uses the same texture while each car has different textures and more textures (like normal maps).

Plus the road has to use a similar lighting model as the cars if they are all to match. The road might not have stuff like vehicle reflections, but it does have other stuff unique to it, like tire marks, which need to be maintained far into the distance, unlike reflections that can be LOD-dropped fairly quickly.

I think lighting for the road is less complex (no clearcoat, no reflection, etc...) and also specular can be much lower res. Do you agree?

Also, I thought tire marks were done using geometry, like the guide lines? I would like clarification on this.


I don't believe the "no msaa is bad" argument given that most PS3 games ship without it yet few people seem to mind. You can design the issue away like GT does by not letting cars herd together. Or just drop msaa when there are many cars nearby. That would get them at least 12 car support if it was purely a vertex limitation.

No my friend, as I said earlier, I mean that no MSAA is bad for racing games at 720p. At higher res it is less bad, but it is still better to have it.
 
I personally don't know of anyone doing that. That software tile idea came up in another thread recently, maybe it was in this thread? Anyways I replied to it then so I'll just repeat what I had said there. I had considered doing something like that way back on PS3, where I'd break the screen up into small spu-able chunks and do a transparency pass on the spu's at full res, but in small pieces. The problem was twofold. First, it was a ton of work for something that ultimately could be designed away and/or resolved with low res buffers. Second, at the time the spu's weren't used much so it made more sense then. But the idea was axed because we figured the spu's would get very busy in the next rev of the game, and maxed on the rev after that. In other words we decided it was better to spend the spu time elsewhere and stick with traditional methods to deal with the issue. Someone else may have gone with that approach, I just personally don't know of any.

Wasn't your approach quite a bit different to what nAo suggested? I don't think he suggested that the full transparency pass would run on the SPUs.

The tiling of the geometry would of course be done on the SPUs, but the tile cache for the frame buffer and Z-buffer would be made up of the read/write caches of the RSX, as I understand it.

The tiling of the geometry wouldn't be that taxing on the SPUs, would it? Couldn't it just be added to the back-face culling pass that is done in most PS3 games anyway?
 

Sacred 2 uses 64 pixel tiles for PS3.

"The problem was the pixel shader used in the deferred pass, which does the entire lighting computation on one go ... with the need to dynamically determine which light-sources could be skipped for any given pixel. Xenos handled that quite fine, RSX is a GeForce 7 and thus not a fan of branching.

Being the sole guy responsible for the rendering on the PS3, this gave me quite some headaches. The solution was to use the SPUs to determine which light sources affect which pixel, and then to cut the deferred-pass into blocks of 64 pixels, so that all blocks touched by the same lights can be drawn at once (*). "


http://www.eurogamer.net/articles/digitalfoundry-sacred-2-1080p-interview
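The per-block light assignment the quote describes could look something like the following sketch. The function names, the block dimensions beyond "64 pixels", and the circle-overlap test are all my own illustration, not code from Sacred 2:

```python
# Sketch of per-block light binning in the spirit of the quote above: find the
# set of lights touching each screen block, so blocks sharing the same light
# set can be drawn together with a fixed, branch-free light list.

BLOCK = 8  # 8x8 = 64 pixels, matching the quote's block size (layout assumed)

def lights_touching_block(bx, by, lights):
    """Return ids of lights whose screen-space circle overlaps block (bx, by)."""
    x0, y0 = bx * BLOCK, by * BLOCK
    hits = []
    for lid, (lx, ly, radius) in enumerate(lights):
        # Clamp the light center to the block rectangle to get the nearest point.
        nx = min(max(lx, x0), x0 + BLOCK)
        ny = min(max(ly, y0), y0 + BLOCK)
        if (lx - nx) ** 2 + (ly - ny) ** 2 <= radius ** 2:
            hits.append(lid)
    return tuple(hits)

lights = [(10, 10, 12), (100, 100, 30)]  # (x, y, radius) in pixels
print(lights_touching_block(0, 0, lights))    # (0,)  only light 0 reaches it
print(lights_touching_block(12, 12, lights))  # (1,)  only light 1 reaches it
```

On PS3 this classification would run on the SPUs, leaving the RSX a deferred pass with no dynamic branching.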
 
That's only for rendering the lights using the G-buffer, a somewhat different thing...
 
EDRAM

This information about the PS2 and its EDRAM use is amazing to read.

"I have my doubts as to whether it can ever really happen per se. This is a case where technical hurdles are every bit as large as the financial hurdles.

I mean, that eDRAM framebuffer and texture cache allowed you to have not only extremely high bandwidth, but also extremely low latency, and in turn meant that one render pass was extremely fast. Many times more so than any chip made today where the focus of the architecture is towards doing more work per pixel in a single pass (but in turn making the drawing of a single pixel many times slower and more complicated than it was on the GS).

There is quite simply more framebuffer and texture bandwidth on the PS2 than the PS3 has. And even the eDRAM of something like Xenos doesn't really help here because the latency is higher and it's explicitly a transient state holding buffer, rather than a final output field. GS not only allowed you to work in eDRAM, but that was the main VRAM itself... it stored the front and back buffer as well as working textures in there. On current chips, the backbuffer and frontbuffer have to go through a resolve step and are always stored in main VRAM.

All these sorts of things are exploited rather explicitly in the vast majority of PS2 games.

Even aside from the peculiarities of the GS, there are peculiarities of the interaction between the EE and GS. For instance, it's quite common to have the render loop be in a race condition against the GPU. We'll issue a display list to the GS which tells it to start processing from a vertex list... except that vertex list hasn't been filled at this point. Instead, the VU will go through the equivalent of a vertex shader and actually fill vertex data into that list while the GS is reading the stream.

The timing is just so that the VU stays constantly a few clock cycles ahead of the GS. That kind of tight timing resolution and synchronicity is 100% impossible on all current architectures because there is just not that kind of relationship between the components, nor is there quite so much constant predictability to the operating performance of any component (i.e. nothing ever *always* takes up x # of cycles anymore).

Also, synchronization primitives for modern bus architectures are actually very strict, so some of these types of things which were perfectly valid on a PS2 would actually not work at all because the bus would detect that a memory page is dirty and therefore wait for a cache write from the CPU before it does anything. This is of course done because it makes for a machine that is infinitely more stable, but that wasn't really the case for the PS2 which was more of a pure console. This sort of restriction means codependent read-write operations must wait for the CPU to finish -- a delay of many hundreds of thousands of clock cycles, while the PS2 only had to wait about 5 or 6 clock cycles."


http://psinsider.e-mpire.com/index.php?categoryid=17&m_articles_articleid=1315
 
That's only for rendering the lights using the G-buffer, a somewhat different thing...

Yeah, those were my thoughts as well, but would Sacred 2 use the same kind of tiled geometry information as nAo suggests? In that case there may be more synergies at play.
 
Wasn't your approach quite a bit different to what nAo suggested? I don't think he suggested that the full transparency pass would run on the SPUs.

The tiling of the geometry would of course be done on the SPUs, but the tile cache for the frame buffer and Z-buffer would be made up of the read/write caches of the RSX, as I understand it.

The tiling of the geometry wouldn't be that taxing on the SPUs, would it? Couldn't it just be added to the back-face culling pass that is done in most PS3 games anyway?

Our approaches are more or less the same idea.

He's talking about future hardware. From what I gather, he'd prefer to ditch edram totally since it takes up a lot of die space, and spend that die space on more logic units to help shader performance. Then his tile cache would let it blend at full cache speed instead of main memory speed. It's a totally reasonable idea since only some games are alpha bound whereas every game will be shader bound at some point, so overall performance would benefit from both more shader units and fast alpha from the tile cache, at the cost of some cpu use to tile the geometry (unless future hardware does that for you). My only beef with it is that you'd lose a fast edram play space (assuming it had read and write ability). Post processing seems to be getting more and more elaborate as time goes on, so it would be nice to have a chunk of fast memory to do it in.

The tile cache idea doesn't really apply to rsx. Currently the shader runs and generates a final color. That color gets shuffled off to the rop and it has to figure out what to do with it. If it needs to be blended, then it needs to fetch the frame buffer color from xdr memory at xdr speed, blends it and sends it on its way. That's where the bottleneck is.

What I was suggesting, to get it working on PS3, was to use spu local store more or less as a tile cache. So assuming it's used just for particles which are likely to be small textures, then tile all the geometry for the particles and process each tile sequentially. For a given tile, dma the frame buffer color and Z data to spu local store. Then dma a texture to it, render all particles with said texture, fetch next texture, repeat until all geometry is done for that tile, rinse and repeat with all subsequent tiles.

The added complication of this method is that a spu software renderer would need to be written, on top of the code needed to tile the geometry. So overall it's a lot of work. The future tile cache idea is much simpler since you'd use the gpu as normal to do the rendering and rely on it to keep the tile cache populated. You just have to help it by tiling your geometry.
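The loop structure described above, with the SPU DMA and the software rasterizer replaced by simple counters (purely illustrative pseudocode, not engine code), would be roughly:

```python
# Structure of the SPU tile-cache particle idea: the frame buffer is moved over
# the bus once per tile, while all the blending happens at local-store speed.
# Counters stand in for the DMA engine and the software rasterizer.

class TileRenderer:
    def __init__(self):
        self.framebuffer_dmas = 0  # frame buffer / Z traffic over the bus
        self.blends = 0            # blend ops done in local store

    def render(self, tiles, particles_by_texture):
        for tile in tiles:
            self.framebuffer_dmas += 1          # color + Z into local store
            for texture, particles in particles_by_texture.items():
                # (a real version would also DMA `texture` in here)
                for p in particles:
                    if self.overlaps(p, tile):
                        self.blends += 1        # blend at local-store speed
            self.framebuffer_dmas += 1          # write the tile back out

    @staticmethod
    def overlaps(particle, tile):
        return particle == tile                 # toy stand-in for a real bounds test

r = TileRenderer()
r.render(tiles=[0, 1, 2, 3],
         particles_by_texture={"smoke": [0, 0, 1], "dust": [2]})
print(r.framebuffer_dmas, r.blends)  # 8 4
```

The point of the ordering: frame buffer traffic scales with tile count, not with the number of overlapping particles, which is exactly what the rop-blend path on RSX cannot do.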


Dr. Nick said:
In a case where you got stuck porting a 360 lead game,where whoever was in charge decided to make use of edram bandwidth, to the PS3 you had to preserve most of the original content would you give that idea a try?

I don't think it happens much anymore (Bayonetta excluded) since we're all aware that a PS3 port is all but mandatory. If it did though it's still unlikely I would go for the spu renderer approach since it's both unlikely I'd be allotted enough time to complete it, and unlikely I'd be allotted enough spu time to make it work.


ihamoitc2005 said:
Also, I thought tire marks was using geometry like guide lines? I would like clarification on this.

If they were done as separate geometry then a vehicle on top of the road would still cull its pixels anyway.
 
Ahem, anyhow, Shift on the consoles is 30fps, by their own account Shift is updated less often...

Actually it is possibly 180Hz multiplied by a factor of 2 or more internally (unmodded). The AI gets a 2x multiplier, although it is unknown if it uses the 180Hz setting (no other game file contains more Hz values than the main physics file).

And he is right, the physics engine in NFS Shift has a huge number of parameters it can take into account.

From the physics of the movement of the fuel in the tank to the amount of engine cooling lost from a coolant leak - the engine has just had most of this stuff turned off to make the game easier.

Correct, the list of cvars for each car part is extensive. From tire-flex physics/heat to complex engine behaviors, gearbox, etc. It pretty much is an improved and more advanced version of the GTR2 engine or the base rFactor engine. It has the same car cvars and more, although a few are disabled, like fuel use and some specific damage types (unmodded). The center-of-gravity multiplier for the car is also a notch below the "most realistic" one (0.825 vs 1.0), probably for a bit easier driving. However, it is easy to change back to the default value, unlocking an even more advanced sim over the sim it actually already is under the "candy" UI. :)

Easy to fix though! :mrgreen:
 
I believe one of their developers was quoted in an online media interview saying that the console versions ran their physics at 120Hz. EDIT: Ok, my memory was right:

Kikizo: With all of this activity on screen what frame rate are you expecting to achieve?
Jesse: Well this is a multi-threaded application and absolutely there are compromises that you have to make to retain a solid state system, there's a whole series of trade offs for performance, and the render thread's running at 30 frames. Whereas the physics and AI threads are running at a full 120hz, so what that you get a very smooth and silky performance attribute in rendering, and a much needed processing route. Again it's a series of compromises, 16 cars on the track fully modelled, fully damaged, fully physically affected which all present a challenge to a render thread, so you really have to compromise, as you see today we don't feel 30fps has an impact on the experience, we're running very fluidly and I think those elements are not mutually exclusive to the 30-frames-to-60-frames argument.
http://archive.videogamesdaily.com/features/need-for-speed-shift-huge-interview-p3.asp

So the console versions are 30Hz framerate and 120Hz physics, as I said. FM2 was 60Hz and 360Hz. We can discuss subjective and educated feedback, and also freely admit that "9k what?" is a reasonable question. But the argument made previously, that Shift has more advanced physics on the consoles, is going to need a lot more evidence than "name it and claim it", based on the specs we know and the general feedback of sim reviewers.

My own feedback isn't very favorable toward Shift, but then again I am comparing console products and not a modified PC version.
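For what it's worth, the rates quoted above work out to a fixed number of physics ticks per rendered frame (assuming, as these games apparently do, that the physics rate is an exact multiple of the frame rate):

```python
# Physics substeps per rendered frame, from the rates quoted in this thread.

def substeps_per_frame(physics_hz, frame_hz):
    assert physics_hz % frame_hz == 0  # assumes an exact multiple
    return physics_hz // frame_hz

print(substeps_per_frame(120, 30))  # 4: Shift on consoles, per the interview
print(substeps_per_frame(360, 60))  # 6: FM2's quoted rates
```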
 
I believe one of their developers was quoted in an online media interview saying that the console versions ran their physics at 120Hz. EDIT: Ok, my memory was right.


http://archive.videogamesdaily.com/features/need-for-speed-shift-huge-interview-p3.asp

So the console versions are 30Hz framerate and 120Hz physics, as I said. FM2 was 60Hz and 360Hz. We can discuss subjective and educated feedback, and also freely admit that "9k what?" is a reasonable question. But the argument made previously, that Shift has more advanced physics on the consoles, is going to need a lot more evidence than "name it and claim it", based on the specs we know and the general feedback of sim reviewers.

Well then it is an incorrect value given by the dev (btw, there is another interview where they say ~400Hz with no specific platform in mind). The physics file has the final word and is clearly set to 180Hz. Whether an internal engine multiplier is applied I am not sure, as I haven't extracted all the info from the exe file.

http://www.virtualr.net/need-for-speed-shift-new-screens-interview/
...We’re talking about an engine that can run unlimited threads, detailed physics parameters running at around 400 Hz on consoles...

Code:
physicstweaker.xml

        <prop name="tick rate" data="180" />

*.xml (chassis files)
AIMinPassesPerTick=2            // minimum passes per tick (can use more accurate spring/damper/torque values, but takes more CPU)
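As a sanity check, reading the tick rate out of that XML fragment and applying the suspected 2x internal multiplier lands near the "~400Hz" PR figure. Whether that multiplier really exists is this thread's speculation, not a confirmed fact:

```python
# Parse the "tick rate" cvar from the physicstweaker.xml fragment quoted above.
import xml.etree.ElementTree as ET

fragment = '<prop name="tick rate" data="180" />'
tick_rate = int(ET.fromstring(fragment).get("data"))

print(tick_rate)      # 180: the on-disk cvar
print(tick_rate * 2)  # 360: with a hypothetical 2x multiplier, near "~400Hz"
```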

------------------------
My own feedback isn't very favorable toward Shift, but then again I am comparing console products and not a modified PC version.

Yeah, I am speaking about the PC version but expect the Hz values to be the same on console. For known reasons I can't take a peek inside the console Shift files (no unpacker, and I don't own the console version!). About the "9,000" calculations, dunno, but what if it is X amount of car cvars × 360Hz (with some cvars updated more slowly)? Bahh, one never knows with PR, but it is really exciting to verify what devs promise/say vs. what is really done!

But the argument made previously, that Shift has more advanced physics on the consoles, is going to need a lot more evidence than "name it and claim it", based on the specs we know and the general feedback of sim reviewers.

The car physics settings line up quite well with the same cars in GTR2 (comparing physics files), however Shift has some new cvars and an "improved" tire slip curve. However, the base tuning settings for cars (tire pressure, oversteer/understeer etc.) differ greatly, and the gamepad/wheel settings might not be optimal. There are also some bugs that can bork custom tuning settings for cars until restarting the game.
 
Yeah, I am speaking about the PC version

Then you're out of context for my point. "The sky is blue today in Hawaii" -- "Wrong! It is black on the moon!" :LOL:

but expect the Hz values to be the same on console.

Pretty big assumption with no basis. And as I noted, their developers don't indicate this is true of the console games which are 30Hz. Or are you telling me your PC version is locked at 30Hz as well ;)

Anyhow, FM1 was sternly rebuked for being 30fps even though it added damage modeling, online play, better AI, etc. Those are all pretty big leaps from where console sims were, and yet it still didn't win over a lot of affection. Being an "also-ran" at 30Hz is pretty much a killer.

It is a curious decision to leave out 60Hz in this market segment, especially after seeing the response FM1 received.
 
If they were done as separate geometry then a vehicle on top of the road would still cull its pixels anyway.

Of course it would cull those road pixels, my friend, and IMHO (very humble) I will say that a car is more pixel/texture-processing intensive than the road area it causes to be culled.
 
Then you're out of context for my point. "The sky is blue today in Hawaii" -- "Wrong! It is black on the moon!" :LOL:



Pretty big assumption with no basis. And as I noted, their developers don't indicate this is true of the console games which are 30Hz. Or are you telling me your PC version is locked at 30Hz as well ;)

Actually for some reason you seem to have skipped this part of my post...

http://www.virtualr.net/need-for-speed-shift-new-screens-interview/
...We’re talking about an engine that can run unlimited threads, detailed physics parameters running at around 400 Hz on consoles...

So yes, I have a basis, and the devs are indicating it is true. And this seems to indicate a 2x multiplier, hence doubling the 180Hz cvar value I posted earlier from the game's files.
 