Sega Lindbergh naked

Megadrive1988 said:
you're right.

the i740 was not at all a GPU. it had no geometry processor / T&L. it was just a rasterizer, like all the other consumer PC 3D chips until the NV10 (GeForce 256). the only thing the i740 had going for it was high image quality; its performance was poor.

if Lockheed Real3D had the balls to produce a single-chip GPU version of the Real3D-100 chipset for $300 with 8 MB RAM in the 1996-1997 timeframe, with a timely follow-up in 1999, Real3D would've wiped out the competition.

You're assuming that was possible; wasn't Real3D-100 incredibly expensive? And $300 was still a rather uncommon price at the time, so they would have been aiming at the extreme high end, and the low end was really where the money was being made back then. Or the Real3D-100 could have just died a sad death due to improper API support, and everyone could mourn it the same way they mourn PowerVR.
 
Fox5 said:
You're assuming that was possible; wasn't Real3D-100 incredibly expensive? And $300 was still a rather uncommon price at the time, so they would have been aiming at the extreme high end, and the low end was really where the money was being made back then. Or the Real3D-100 could have just died a sad death due to improper API support, and everyone could mourn it the same way they mourn PowerVR.


Real3D-100 was introduced in 1995 and released in 1996, but with separate chips (AFAIK). it was the 20-24 MB version that cost a few thousand; memory was one of the main costs.

Real3D could've integrated all three main processors of the Real3D-100 chipset (geometry processor, texture processor, graphics processor) into one chip, used 8 MB of memory, and released it in early to mid 1997 instead of the i740 on the Starfighter cards.
they would've had a GPU that was not nearly as dependent on CPUs as all the other non-GPU 3D accelerators were. API support would've had to improve: OpenGL was excellent, but Real3D could've worked with Microsoft on DirectX support for the Real3D-100.

the problem was, Real3D was not used to operating at the low end of the market without huge cost markups. Real3D-100 was also very expensive because of the small numbers being produced; economies of scale would've helped. even a $200 card would've been possible by 1997. Real3D-100 was solid technology whereas the i740 was not.
 
As far as I remember, they also used an outdated manufacturing process for the chips going into Model 3 at the time. Maybe not outdated, but not current compared even to other graphics chips of the time.
 
I had an i740 card back in the day and it was pretty awesome for its time. It was the first AGP card (no sideband addressing), too, IIRC.
 
Lindbergh's off-the-shelf design approach might be just a temporary change in strategy for Sega Sammy. When the PowerVR project fell behind, a PC chip like nVidia's GeForce 6 made for a good back-up plan because its development environment and toolset were already well in place... a ready-made system.

Now that more time has been bought for Sega Sammy R&D, they're back to designing custom systems, like the new portable they're developing. Perhaps they'll go with a custom solution for their next high-end arcade board, too.
 
Sonic said:
As far as I remember, they also used an outdated manufacturing process for the chips going into Model 3 at the time. Maybe not outdated, but not current compared even to other graphics chips of the time.


that's probably true IIRC. they used an older, larger manufacturing process. but that in no way meant Model 3 was less capable than other graphics chips. nothing in the consumer PC industry could touch Model 3 until 1998, and actually more like 1999. Model 3 was basically completed in 1995.

Model 3's only competition was mid-range to high-end graphics workstations from SGI and E&S. Model 3 was actually quite a bargain in price/performance for 1996 when it reached arcades: an under-$10k machine with the performance of machines that cost $50k to $100k (e.g. the SGI RE2).
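
A quick back-of-envelope on that bargain claim, using only the prices quoted above (the poster's rough recollections, not verified figures):

```python
# Price/performance ratio implied by the figures quoted in the post.
# All numbers are the poster's rough recollections, not verified prices.
model3_price = 10_000                    # "under $10k" machine
workstation_prices = (50_000, 100_000)   # SGI/E&S class, per the post

for price in workstation_prices:
    ratio = price / model3_price
    print(f"${price:,} workstation vs Model 3: ~{ratio:.0f}x the price "
          f"for comparable claimed performance")
```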
 
Lazy8s said:
Lindbergh's off-the-shelf design approach might be just a temporary change in strategy for Sega Sammy. When the PowerVR project fell behind, a PC chip like nVidia's GeForce 6 made for a good back-up plan because its development environment and toolset were already well in place... a ready-made system.

Now that more time has been bought for Sega Sammy R&D, they're back to designing custom systems, like the new portable they're developing. Perhaps they'll go with a custom solution for their next high-end arcade board, too.


well, we'll see. but even the low-end PowerVR-based Aurora board has yet to appear. I'm not holding my breath for PowerVR anymore.

ATI / Nvidia are pretty much the only games in town for high-end graphics. during PowerVR's heyday, Nvidia was just getting to be a good 3D graphics company and ATI was awful. things are MUCH different today.

SEGA should just do what SGI and E&S do now and use ATI chips in their high-end systems.
 
I think SEGA is willing to stick with Nvidia for its arcade offerings. It makes more sense than going with a custom design, especially if they can get a motherboard from Nvidia with a Conroe chip while they are at it. It would basically be a high-end PC, but the games would be designed to take full advantage of the hardware.
 
Lazy8s said:
Lindbergh's off-the-shelf design approach might be just a temporary change in strategy for Sega Sammy. When the PowerVR project fell behind, a PC chip like nVidia's GeForce 6 made for a good back-up plan because its development environment and toolset were already well in place... a ready-made system.

Now that more time has been bought for Sega Sammy R&D, they're back to designing custom systems, like the new portable they're developing. Perhaps they'll go with a custom solution for their next high-end arcade board, too.

What new portable is that?

Megadrive1988 said:
that's probably true IIRC. they used an older, larger manufacturing process. but that in no way meant Model 3 was less capable than other graphics chips. nothing in the consumer PC industry could touch Model 3 until 1998, and actually more like 1999. Model 3 was basically completed in 1995.

A process or two of difference doesn't make that big of a difference in graphics cards; usually it takes at least two steppings before a big jump is achieved. Anyhow, wasn't Model 3 multiple chips? And in some aspects it was beaten quite significantly in 1998; it just took until 1999 before it was beaten/matched in every way by the original GeForce (since there was no T&L on graphics cards before that).
 
Fox5 said:
What new portable is that?



A process or two of difference doesn't make that big of a difference in graphics cards; usually it takes at least two steppings before a big jump is achieved. Anyhow, wasn't Model 3 multiple chips? And in some aspects it was beaten quite significantly in 1998; it just took until 1999 before it was beaten/matched in every way by the original GeForce (since there was no T&L on graphics cards before that).

on the PC side, there were no graphics chips that could beat Model 3's combined quality and performance until 1999. the best the PC had in 1998 was Voodoo2 SLI and the original TNT, and those were not really Model 3 quality. PowerVR2 was delayed on the PC until a year later.

Dreamcast, released in Japan in 1998, rivaled and surpassed Model 3 in many (but not all) areas.

in 1999, the PC got Voodoo3, TNT2/Ultra, Rage Fury MAXX and GeForce. I'd say that TNT2 or TNT2 Ultra was the first that could really challenge Model 3 in quality, but that still wasn't a GPU. so really, IMO, GeForce was the first to match and surpass Model 3 in probably every area.

remember, the high polygon rates and pixel fillrates quoted for earlier PC chips meant nothing, because they depended on CPU performance and dropped like a rock when features were turned on.
 
Megadrive1988 said:
on the PC side, there were no graphics chips that could beat Model 3's combined quality and performance until 1999. the best the PC had in 1998 was Voodoo2 SLI and the original TNT, and those were not really Model 3 quality. PowerVR2 was delayed on the PC until a year later.

Dreamcast, released in Japan in 1998, rivaled and surpassed Model 3 in many (but not all) areas.

in 1999, the PC got Voodoo3, TNT2/Ultra, Rage Fury MAXX and GeForce. I'd say that TNT2 or TNT2 Ultra was the first that could really challenge Model 3 in quality, but that still wasn't a GPU. so really, IMO, GeForce was the first to match and surpass Model 3 in probably every area.

remember, the high polygon rates and pixel fillrates quoted for earlier PC chips meant nothing, because they depended on CPU performance and dropped like a rock when features were turned on.

I dunno, I seem to recall playing games in 1998/1999 that didn't require a GPU and surpassed Model 3 arcade games in quality. Model 3 may have been a true GPU, but I believe its CPU was still rather crap, and video cards of the time got a big boost from 3DNow! and SSE support. (well, 3dfx got a big boost from 3DNow! support, and I assume SSE did the same, except GPUs really took off by the time it became prevalent.)
Model 3 was also lower resolution than VGA, and had less memory than later graphics chips. Going from memory, and now looking at pictures of Model 3, it doesn't seem to hold a candle to Naomi, or to what would eventually come out on Dreamcast or on PCs running 1998 video cards. Well, except for the lighting and the shadowing; the texturing and polygon counts seem far lower than what the PC would produce, along with being far lower res. Probably the combination of the lower fillrate and the lower VRAM?
 
http://www.system16.com/museum.php?id=1

It looks like Model 3 ran a PowerPC 603, ranging from 66 MHz in the earliest model to 166 MHz in the final revision of the system.

Now, from what I've read at Ars Technica about the 603, it's kind of a crap CPU. So I suppose the machines were limited in that way, especially compared to a P6-based Pentium II or up. The K6 probably wasn't well off either.

SSE didn't show up until Katmai in 1999, and Katmai didn't really have much SSE performance anyway. No P3 really did, for that matter; bad SSE implementation. But they probably would still have well outpaced the sad 603.
 
swaaye said:
http://www.system16.com/museum.php?id=1

It looks like Model 3 ran a PowerPC 603, ranging from 66 MHz in the earliest model to 166 MHz in the final revision of the system.

Now, from what I've read at Ars Technica about the 603, it's kind of a crap CPU. So I suppose the machines were limited in that way, especially compared to a P6-based Pentium II or up. The K6 probably wasn't well off either.

SSE didn't show up until Katmai in 1999, and Katmai didn't really have much SSE performance anyway. No P3 really did, for that matter; bad SSE implementation. But they probably would still have well outpaced the sad 603.

But did Model 3 have programmable vertex processors? Well, it doesn't really matter; the specs say 2 million polys, which is rather low (but fast for when it came out). I think the GeForce 1 could handle around 30 million, and a 1 GHz x86 processor about the same. Even if the processors of 1998 were 1/10th the performance of a 1 GHz Athlon or Pentium III, that'd still be 3 million polys, though only if the CPU were dedicated to the task. A Voodoo 2 could handle 2 million polys, a Banshee 4 million, and a Voodoo 3 6 million. I guess a high-end PC in 1998 could have beaten the Model 3: a 400 MHz Pentium II or K6-2, SLI Voodoo 2s for much higher fillrate and higher triangle throughput (4 million max), and oodles more RAM than the Model 3 could have had.
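
To make that back-of-envelope comparison concrete, here's a minimal Python sketch using only the throughput figures quoted in this post (all rough claims from memory, not measured benchmarks):

```python
# Polygon-rate comparison using only the figures quoted in the post above.
# All numbers are rough claims from memory, not measured benchmarks.

quoted_mtris_per_sec = {
    "Model 3 (spec)": 2.0,
    "Voodoo 2": 2.0,
    "Voodoo 2 SLI (max)": 4.0,
    "Banshee": 4.0,
    "Voodoo 3": 6.0,
    "GeForce 1 (per the post)": 30.0,
}

# "1 GHz x86 ~ 30M polys; a 1998 CPU at 1/10th of that still does 3M,
# though only if the CPU were dedicated to the task."
cpu_1ghz_mtris = 30.0
cpu_1998_mtris = cpu_1ghz_mtris / 10  # -> 3.0

model3 = quoted_mtris_per_sec["Model 3 (spec)"]
for name, rate in quoted_mtris_per_sec.items():
    verdict = "above" if rate > model3 else "at/below"
    print(f"{name}: {rate:.1f}M tris/s ({verdict} Model 3's {model3:.1f}M)")
print(f"Estimated dedicated 1998 CPU: {cpu_1998_mtris:.1f}M polys/s")
```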
 
Fox5: as I said before, Model 3 was completed in 1995, before PCs were even getting decent, N64-level 3D graphics.

the 1 million quad polygons or 2 million triangle polygons per second performance of MODEL 3 with all features (texture, lighting, Gouraud shading, alpha, trilinear, perspective correction, etc.) including AA enabled was basically unsurpassed by consumer technology until 1999 with the GeForce. the only other exception would be the Dreamcast in 1998, which surpassed MODEL 3 in some areas (but not all).

PCs in 1998 and through most of 1999 were not doing MODEL 3-level graphics in games.

and also as I said, the polygon performance rates quoted by 3D chip makers were GREATLY exaggerated peak theoretical specs that never came close to being achieved in the real world. Model 3's specs were real-world specs.

Model 3's 1 million quads or 2 million triangles blew away PC 3D cards that claimed 4 to 9 million polygons/sec.
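
As a rough illustration of how a claimed peak can collapse below a lower sustained figure once features are on, here's a toy fill-limited model; the peak rate, fillrate, average triangle size, and per-feature cost multipliers are all invented for illustration, not specs of any real chip:

```python
# Toy model: effective triangle rate = min(setup peak, fill-limited rate).
# The peak rate, fillrate, triangle size, and cost multipliers below are
# invented for illustration; they are not measurements of any real chip.

peak_setup_mtris = 6.0    # a claimed "peak" triangle rate, Mtris/s
fillrate_mpix = 300.0     # hypothetical raw fillrate, Mpixels/s
pixels_per_tri = 50.0     # assumed average on-screen triangle size

# Hypothetical per-pixel cost as rendering features are stacked on.
feature_cost = [
    ("flat shaded", 1.0),
    ("+ bilinear texturing", 2.0),
    ("+ lighting and blending", 3.0),
    ("+ trilinear and alpha", 4.5),
]

for features, cost in feature_cost:
    fill_limited = fillrate_mpix / (pixels_per_tri * cost)
    effective = min(peak_setup_mtris, fill_limited)
    print(f"{features}: {effective:.2f} Mtris/s effective")
```

With everything on, the "6 million" chip in this sketch sustains well under 2 Mtris/s, which is the shape of the argument being made above.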
 
I'm not even sure we pull much over 1 million polys/s today in games. And there's the whole movement away from geometry with all the fancy bump mapping techniques.
 
Looking at the Virtua Fighter 5 scenario, which goes from Lindbergh to PS3, I think there's a clear advantage to this approach: it keeps the port from the Arcade to its natural console home base a smooth one.

After all, I am now assuming (but correct me if I'm wrong) that games like Tekken and Virtua Fighter actually make more money on the consoles than they do in the Arcades. Or at least we're talking very significant amounts, and there's a clear business advantage in using systems that match closely. Another advantage that Sega would have perceived is that, being also a significant console game developer, they can share more of their R&D across the teams and possibly even consolidate.

I think that as the lifespan of the consoles moves on, they can move to more advanced hardware if they please, but in the meantime they also gain considerably on the software end. In Europe I've seen a trend of Arcades disappearing (in no small part due to the consoles), and I don't know how this trend looks in other countries, but there could be a move toward having more affordable Arcade machines out there in the first place.

Just some musings ...
 
swaaye said:
I'm not even sure we pull much over 1 million polys/s today in games. And there's the whole movement away from geometry with all the fancy bump mapping techniques.

I've heard over 30 million polys/s in current Xbox 360 and PC games.
 
Arwin said:
Looking at the Virtua Fighter 5 scenario, which goes from Lindbergh to PS3, I think there's a clear advantage to this approach: it keeps the port from the Arcade to its natural console home base a smooth one.

After all, I am now assuming (but correct me if I'm wrong) that games like Tekken and Virtua Fighter actually make more money on the consoles than they do in the Arcades. Or at least we're talking very significant amounts, and there's a clear business advantage in using systems that match closely. Another advantage that Sega would have perceived is that, being also a significant console game developer, they can share more of their R&D across the teams and possibly even consolidate.

I think that as the lifespan of the consoles moves on, they can move to more advanced hardware if they please, but in the meantime they also gain considerably on the software end. In Europe I've seen a trend of Arcades disappearing (in no small part due to the consoles), and I don't know how this trend looks in other countries, but there could be a move toward having more affordable Arcade machines out there in the first place.

Just some musings ...

It's been the same in America and Japan since the death of Model 3, really. There was a small re-emergence with Naomi (affordable and easy to port), but for the most part arcades are dead. The ones that do have more powerful hardware are generally PS2-, Xbox-, or GameCube-based, or run on Naomi 2.
 
Fox5 said:
I've heard over 30 million polys/s in current Xbox 360 and PC games.

Well, maybe we should measure in polys/frame, cuz who knows what framerate these are calculated from. I wouldn't be overly surprised, though. It's a number that isn't spouted much anywhere other than the console market.
 
swaaye said:
I'm not even sure we pull much over 1 million polys/s today in games. And there's the whole movement away from geometry with all the fancy bump mapping techniques.


I believe many Xbox 360 and PS3 games are, or will be, pushing 1 million+ polygons per frame, which means anywhere from 20 to 60 million polys/sec depending on the framerate.
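
That conversion is just polygons per frame multiplied by framerate; a quick sketch checking the quoted range:

```python
# Polys/frame to polys/sec is just multiplication by the framerate.
polys_per_frame = 1_000_000

for fps in (20, 30, 60):
    mpolys_per_sec = polys_per_frame * fps / 1e6
    print(f"{polys_per_frame:,} polys/frame @ {fps} fps = "
          f"{mpolys_per_sec:.0f}M polys/s")
# -> 20M to 60M polys/s across 20-60 fps, matching the range quoted above.
```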
 