Could this Flightsim engine work on PCs?

As far as the graphics go, I'm sure a modern high-end PC would have no problems.

The problems may come in with detailed flight models, or with obtaining the level of detail that simulator appears to have (that is to say, the amount of detail on the ground; my suspicion is that it may take more texture memory than would fit on cards today).

But as far as the graphical effects, well, they're nothing special.
 
I would think that a modern high-end PC could handle that, especially since the framerate was not very high; it looked like it was around 20 frames per second. The detail on the ground was nice, the lighting was nice, and the effects were nice too, but I don't see anything that is way beyond (if at all) what can be done on PCs.

After playing the 60 fps Ace Combat 5 on the lowly PS2 with its paltry 40 MB of memory, I don't see why a 3+ GHz PC with 1-2 GB of system memory and a 256 MB video card with PS 2.0/3.0, 16 pixel pipes, and 6 vertex shaders couldn't handle that flight sim. It would be interesting to know what hardware that sim was running on; maybe it was PC hardware to begin with.

edit: oh look

http://www.cae.com/www2004/Products...sual_Solutions/Image_Generators/tropos.shtml#

CAE Tropos combines CAE's expertise with ATI's latest graphic processing chips and exceeds every performance standard currently set by CAE's Maxvue™ image generator. In addition, Tropos is the only image generator developed using commercial off-the-shelf graphic processors with the calligraphic capability required to generate realistic runway lighting with occlusion, a critical achievement since the FAA requires realistic runway lighting for Level D certification.

Basically, it's using off-the-shelf ATI VPUs and (probably also) off-the-shelf CPUs. So now it's really a question of whether 1 CPU and 1 ATI VPU in a PC environment can reproduce that sim. I'd say probably very closely, unless that sim is using multiple ATI VPUs in a tighter architecture than a PC, with more bandwidth and less latency.


BTW, check out the video of CAE's high-end Medallion system running a simulator with Eurofighter and Tornado:
http://www.cae.com/www2004/Products..._Solutions/Image_Generators/medallionS.shtml#
 
Megadrive1988 said:
especially since the framerate was not very high.
You can't judge that from a video.

Anyway, I still suspect that it's too much data to realistically display on a modern system. With a custom-built system, this company may have the luxury of designing, in effect, graphics cards with gigabytes of RAM. This may be necessary to show something like that at interactive framerates.

The resolution is probably also a fair bit higher than you could get out of desktop systems (if it's used for a simulator, a resolution of 4000x3000 or higher wouldn't be unheard of).
 
mattredd said:

It can, but you lose features. It also depends on what you mean by PCs. The standard platform is a dual Pentium 4, but with 4 raster boards (16 GPUs) and a calligraphic board. If you use a standard graphics card you lose features such as stochastic AA and calligraphic light points. In addition, a synchronized frame/refresh rate is not offered.

The military version (Medallion-S) also runs on the same platforms and has been demonstrated on a standard graphics card.
 
Chalnoth said:
As far as the graphics go, I'm sure a modern high-end PC would have no problems.

The problems may come in with detailed flight models, or with obtaining the level of detail that simulator appears to have (that is to say, the amount of detail on the ground; my suspicion is that it may take more texture memory than would fit on cards today).

But as far as the graphical effects, well, they're nothing special.

The flight model runs on another PC. The graphics image generator only renders the scene based on an eyepoint provided by the other PC.
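In other words, the interface between the flight model PC and the image generator is essentially a per-frame eyepoint update. As a rough illustration (the field names and layout here are my own guesses, not CAE's actual protocol):

```cpp
// Hypothetical per-frame eyepoint message from the host PC (flight model)
// to the image generator. Field names and layout are illustrative guesses.
#include <cstdint>
#include <cstdio>

struct EyepointUpdate {
    uint32_t frame_number;  // for keeping host and image generator in step
    double   latitude_deg;  // ownship position
    double   longitude_deg;
    double   altitude_m;
    float    heading_deg;   // ownship attitude
    float    pitch_deg;
    float    roll_deg;
};

int main() {
    // Each simulation tick the host fills one of these and sends it; the
    // image generator just renders the scene from that eyepoint.
    EyepointUpdate eye = {1, 54.4, -110.3, 1200.0, 90.0f, 2.5f, 0.0f};
    std::printf("frame %u: eye at (%f, %f, %.0f m)\n",
                (unsigned)eye.frame_number, eye.latitude_deg,
                eye.longitude_deg, eye.altitude_m);
    return 0;
}
```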

The real light lobes and the layered fog model are better than what you will see in games. What effects were you expecting? It's made to look realistic and to meet FAA Level D training standards. The military version (Medallion-S) would have better special effects.

The airport lighting should also look better with calligraphic lights since the intensity can go beyond what a standard monitor can project. Unfortunately you can't see the calligraphic light points in the video.
 
Fred da Roza said:
The real light lobes and the layered fog model are better than what you will see in games. What effects were you expecting? It's made to look realistic and to meet FAA Level D training standards. The military version (Medallion-S) would have better special effects.
I somewhat doubt that a modern PC couldn't do these effects.

The airport lighting should also look better with calligraphic lights since the intensity can go beyond what a standard monitor can project. Unfortunately you can't see the calligraphic light points in the video.
I'm not sure how that's possible with ATI hardware (overbright lighting). But it certainly would be possible with the NV4x modified to support a new output buffer format for higher dynamic range. Care to enlighten us as to what calligraphic lights are?

Edit:
Nevermind, decided to look 'em up myself. Looks like calligraphic light points essentially use separate hardware to produce very bright lights separate from the scene displayed by the graphics card (though they may use the graphics card to decide where the light points go, for occlusion testing and whatnot). Obviously you won't get these on PC hardware.

An implementation on a CRT will steer the electron beam to focus on specific points on the screen for a certain length of time, leaving it off at points not lit by calligraphic lighting. This may be done on a display with a very high refresh rate, displaying first the rendered image, then the calligraphic lights, and so on.
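In pseudocode terms, the frame sequencing would look something like this (all names, types, and timings here are my own guesses, not CAE's actual design):

```cpp
// Hypothetical sketch of the two-phase calligraphic CRT frame described
// above. Everything here is an illustrative assumption.
#include <cstdio>
#include <vector>

struct LightPoint {
    float x, y;        // screen position the beam is steered to
    float intensity;   // drives the beam current
    float dwell_us;    // how long the beam parks on the point
};

// Stand-ins for whatever the real projector interface looks like.
void draw_raster_image()                  { std::puts("raster pass"); }
void steer_beam_to(float x, float y)      { std::printf("steer to (%g, %g)\n", x, y); }
void fire_beam(float intensity, float us) { std::printf("fire %g for %g us\n", intensity, us); }

void draw_frame(const std::vector<LightPoint>& visible_points) {
    // Phase 1: scan out the ordinary rendered image.
    draw_raster_image();

    // Phase 2: in the remaining frame time, visit each calligraphic point.
    // Because the beam parks on the point instead of sweeping past it, the
    // spot can be driven far brighter than any raster pixel.
    for (const LightPoint& p : visible_points) {
        steer_beam_to(p.x, p.y);
        fire_beam(p.intensity, p.dwell_us);
    }
}

int main() {
    draw_frame({{100.0f, 200.0f, 1.0f, 4.0f}, {101.0f, 200.0f, 0.8f, 4.0f}});
    return 0;
}
```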
 
Megadrive1988 said:
I would think that a modern high-end PC could handle that, especially since the framerate was not very high; it looked like it was around 20 frames per second. The detail on the ground was nice, the lighting was nice, and the effects were nice too, but I don't see anything that is way beyond (if at all) what can be done on PCs.

After playing the 60 fps Ace Combat 5 on the lowly PS2 with its paltry 40 MB of memory, I don't see why a 3+ GHz PC with 1-2 GB of system memory and a 256 MB video card with PS 2.0/3.0, 16 pixel pipes, and 6 vertex shaders couldn't handle that flight sim. It would be interesting to know what hardware that sim was running on; maybe it was PC hardware to begin with.

edit: oh look

http://www.cae.com/www2004/Products...sual_Solutions/Image_Generators/tropos.shtml#

CAE Tropos combines CAE's expertise with ATI's latest graphic processing chips and exceeds every performance standard currently set by CAE's Maxvue™ image generator. In addition, Tropos is the only image generator developed using commercial off-the-shelf graphic processors with the calligraphic capability required to generate realistic runway lighting with occlusion, a critical achievement since the FAA requires realistic runway lighting for Level D certification.

Basically, it's using off-the-shelf ATI VPUs and (probably also) off-the-shelf CPUs. So now it's really a question of whether 1 CPU and 1 ATI VPU in a PC environment can reproduce that sim. I'd say probably very closely, unless that sim is using multiple ATI VPUs in a tighter architecture than a PC, with more bandwidth and less latency.


BTW, check out the video of CAE's high-end Medallion system running a simulator with Eurofighter and Tornado:
http://www.cae.com/www2004/Products..._Solutions/Image_Generators/medallionS.shtml#

The video was probably recorded at 20 frames per second. FAA certification requires a 60 Hz synchronized frame/refresh rate (in day mode), so it runs at 60 frames per second.

CAE made an F-18 demo last year and a new military demo this year, which were shown at ITSEC (last year and this year, respectively). The F-18 video is on the site. It's unfortunate that they downsampled the F-18 video so much; the sim actually runs at 60 Hz, 2 million pixels.
 
Chalnoth said:
Fred da Roza said:
The real light lobes and the layered fog model are better than what you will see in games. What effects were you expecting? It's made to look realistic and to meet FAA Level D training standards. The military version (Medallion-S) would have better special effects.
I somewhat doubt that a modern PC couldn't do these effects.

Remember, we render scenes that have 40+ miles of visibility. We are required (by FAA standards) to maintain a 60 Hz frame rate while rendering 1600 calligraphic light points in day. It doesn't have to be calligraphic points per se (if you want to get picky), but it must meet a certain contrast and resolution requirement that is unachievable with standard displays. In addition, the scene is rendered using 16 sub-pixel stochastic AA.
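For anyone wondering what 16 sub-pixel stochastic AA amounts to: each pixel is sampled at 16 jittered sub-pixel positions rather than on a regular grid. A minimal sketch of the generic stratified-jitter technique (an illustration only, not necessarily our exact hardware pattern):

```cpp
// Minimal sketch of 16-sample stochastic (stratified-jitter) sub-pixel
// sampling for one pixel. Generic technique shown for illustration.
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> jitter(0.0f, 1.0f);

    // 16 samples = a 4x4 grid of strata, one random sample per stratum.
    // Stratification keeps the random samples from clumping, so edges
    // resolve smoothly instead of showing a regular stair-step pattern.
    const int n = 4;
    for (int sy = 0; sy < n; ++sy) {
        for (int sx = 0; sx < n; ++sx) {
            float x = (sx + jitter(rng)) / n;  // sub-pixel x in [0,1)
            float y = (sy + jitter(rng)) / n;  // sub-pixel y in [0,1)
            std::printf("sample (%0.3f, %0.3f)\n", x, y);
        }
    }
    return 0;
}
```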

Would you believe a lot of visual system manufacturers (like SGI) don't even offer real light lobes on their military visual systems because of the performance impact? In addition, the calligraphic light points cut our performance in half.
 
Fred da Roza said:
Remember, we render scenes that have 40+ miles of visibility. We are required (by FAA standards) to maintain a 60 Hz frame rate while rendering 1600 calligraphic light points in day. It doesn't have to be calligraphic points per se (if you want to get picky), but it must meet a certain contrast and resolution requirement that is unachievable with standard displays. In addition, the scene is rendered using 16 sub-pixel stochastic AA.
Right, this is the sort of stuff I was expecting (aside from the calligraphic points) from my first post.

Would you believe a lot of visual system manufacturers (like SGI) don't even offer real light lobes on their military visual systems because of the performance impact? In addition, the calligraphic light points cut our performance in half.
That is interesting. I wouldn't think that calligraphic light points should be able to cut performance in half. That is to say, all that you should need from the graphics card for the calligraphic lights is depth information for the scene. Something would definitely seem to be not optimized properly if performance is cut in half (whether it's hardware or software). For example, if your system is entirely geometry-limited, you should be able to use MRTs to get the calligraphic point information at no additional geometry cost. And furthermore, the calligraphic point information should require much less processing than the other shaders which make up the scene.
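A minimal sketch of what I mean, assuming the only thing the calligraphic stage needs back from the renderer is its depth buffer (everything here is hypothetical):

```cpp
// Sketch of the depth-only occlusion test described above: a calligraphic
// point is sent to the display hardware only if nothing in the rendered
// scene sits in front of it. All names and values are illustrative.
#include <cstdio>
#include <vector>

struct Point2D { int x, y; float depth; float intensity; };

// Returns the points that survive a depth test against the scene's z-buffer.
std::vector<Point2D> occlusion_test(const std::vector<Point2D>& points,
                                    const std::vector<float>& zbuffer,
                                    int width, int height) {
    std::vector<Point2D> visible;
    for (const Point2D& p : points) {
        if (p.x < 0 || p.x >= width || p.y < 0 || p.y >= height)
            continue;                              // off screen
        float scene_z = zbuffer[p.y * width + p.x];
        if (p.depth <= scene_z)                    // nothing in front of it
            visible.push_back(p);
    }
    return visible;
}

int main() {
    // Tiny 4x4 depth buffer: everything in the scene at depth 0.5.
    std::vector<float> zbuffer(16, 0.5f);
    std::vector<Point2D> lights = {
        {1, 1, 0.3f, 1.0f},   // in front of the scene -> visible
        {2, 2, 0.9f, 1.0f},   // behind the scene -> occluded
    };
    for (const Point2D& p : occlusion_test(lights, zbuffer, 4, 4))
        std::printf("visible light at (%d, %d)\n", p.x, p.y);
    return 0;
}
```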
 
Chalnoth said:
That is interesting. I wouldn't think that calligraphic light points should be able to cut performance in half. That is to say, all that you should need from the graphics card for the calligraphic lights is depth information for the scene. Something would definitely seem to be not optimized properly if performance is cut in half (whether it's hardware or software). For example, if your system is entirely geometry-limited, you should be able to use MRTs to get the calligraphic point information at no additional geometry cost. And furthermore, the calligraphic point information should require much less processing than the other shaders which make up the scene.

You might find this interesting.

http://www.cs.rochester.edu/u/wyi/sgi/ch18.html

They do mention that:
Light point computations are expensive.
 
Fred da Roza said:
Chalnoth said:
That is interesting. I wouldn't think that calligraphic light points should be able to cut performance in half. That is to say, all that you should need from the graphics card for the calligraphic lights is depth information for the scene. Something would definitely seem to be not optimized properly if performance is cut in half (whether it's hardware or software). For example, if your system is entirely geometry-limited, you should be able to use MRTs to get the calligraphic point information at no additional geometry cost. And furthermore, the calligraphic point information should require much less processing than the other shaders which make up the scene.

You might find this interesting.

http://www.cs.rochester.edu/u/wyi/sgi/ch18.html

They do mention that:
Light point computations are expensive.

Yes, light calculations are expensive, but the semi-deferred method that Chalnoth suggested could improve the speed at which the lighting is calculated.
 
Megadrive1988 said:
I saw the F/A-18 Hornet video; it was OK.

The Apache video is much more impressive, since it's in QuickTime and runs at what seems to be 60 fps (the video itself): http://www.cae.com/www2004/News_Room/videoGallery_HL.shtml#

Imagery was better in the F/A-18 video. It was satellite imagery, I think at 1-meter resolution, of Cold Lake, Alberta. But yes, the video fps was crap. I actually have a copy of it at the captured resolution (I think it was 1024 x 768). It's something like 100-150 MB, so there's no way they were going to load that.

The Apache textures look crappy. I like the Eurofighter video better, and its database is actually what is being delivered on the contract.
 
Fiver said:
Yes, light calculations are expensive, but the semi-defered(sp?) method that Calnoth suggested could improve the speed in which the lighting is calculated.

Calligraphic lights still take ~40% of the time the projector has to draw one frame (at least for the number we are drawing at 60 Hz).

Maybe I should add a 50% pixel fillrate hit.
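To put that in concrete numbers, here's a back-of-the-envelope calculation from the figures above (the per-point split is just arithmetic, not a measured profile):

```cpp
// Back-of-the-envelope frame budget from the figures quoted in this thread.
#include <cstdio>

int main() {
    const double frame_ms        = 1000.0 / 60.0;   // ~16.7 ms per frame at 60 Hz
    const double calligraphic_ms = 0.40 * frame_ms; // ~40% spent on light points
    const double raster_ms       = frame_ms - calligraphic_ms;
    const int    num_points      = 1600;            // day-scene requirement

    std::printf("frame:        %.2f ms\n", frame_ms);
    std::printf("calligraphic: %.2f ms (about %.1f us per point)\n",
                calligraphic_ms, 1000.0 * calligraphic_ms / num_points);
    std::printf("raster scene: %.2f ms\n", raster_ms);
    return 0;
}
```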
 
I guess I still just don't see how, unless you're doing the calligraphic lighting calculations in full for each and every pixel on the screen, and only sending those points that return nonzero brightness results to the calligraphic hardware.

But given the specification, you're only doing the calligraphic lighting for a limited number of points on the screen. Each one may require a fair bit of calculation, but certainly no more than a complex shaded pixel. Heck, if I were doing it, what I'd probably do is use the vertex shader for these calculations, since I would tend to think they'd be identical for all pixels within the light point, and just use the pixel shader for antialiasing (which might be done analytically) and occlusion information.

Now, given that you're only going to need 1600 calligraphic lights on screen at once, and given that they should be 100% vertex-limited (assuming the above rendering algorithm), I see no reason why these need even approach normal rendering requirements. That is to say, imagine that these calligraphic lights use 10 times the vertex shader power of a normal vertex in the scene (which would seem fairly high to me, given what's required for a calligraphic light). That would make the whole scene the (rough) equivalent of drawing 16,000 triangles. As we all know, this is well within the bounds of modern display hardware. You could even bump it up by a factor of 10, and it still shouldn't be half of the triangles on display on commercial hardware.
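As a quick sanity check on that arithmetic (the 10x multiplier and the scene triangle count are my own assumptions, not measurements):

```cpp
// Sanity check on the vertex-cost estimate above. The 10x multiplier and
// the scene triangle budget are illustrative assumptions.
#include <cstdio>

int main() {
    const int points          = 1600;     // calligraphic points in the day scene
    const int vertex_cost     = 10;       // assumed cost vs. a normal scene vertex
    const int scene_triangles = 1000000;  // assumed triangles per frame

    const int equivalent = points * vertex_cost;  // 16,000 vertex-equivalents
    std::printf("calligraphic load: %d vertex-equivalents "
                "(%.1f%% of a %d-triangle scene)\n",
                equivalent, 100.0 * equivalent / scene_triangles,
                scene_triangles);
    return 0;
}
```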

Edit:
Hrm, just looking at that page, it looks like the calligraphic lighting may be done almost entirely in software. That might explain the low performance.
 