Should the SEGA DC and Saturn have launched with these alternative designs?

I do wonder if Megadrive's system could fit in one chip in 2000 and what the cost to make the thing would be. What kind of compromises would need to be made?



Well, I don't know if you're confusing what I was proposing for the Saturn in 1996 (a faster, single-chip version of the Real3D/100, call it a "Real3D/120", capable of 900,000 fully featured polys/sec) with what I was proposing for the Dreamcast in 2000, which would be a next-gen "Real3D/500" or something, with more than an order of magnitude more power than the two combined Real3D/Pro-1000s used in the MODEL 3 arcade board. Something like 36 million full-featured polys/sec with multiple light sources, bump-mapping (thanks to TriTech/BitBoys) and everything short of true pixel shaders; something like what ArtX did with Flipper, but more powerful. The SEGA console using this next-gen Real3D chip would also be the base MODEL 4 board, although with scalability SEGA could use 2, 4 or 8 chips in higher-end versions for even more impressive arcade experiences, like NAOMI 2 only greater, even though the base MODEL 4 board (and console) would already be more powerful than NAOMI 2. The next-gen Lockheed Martin Real3D GPU would've been a joint effort between Real3D, PowerVR and TriTech, although Real3D would've been responsible for the vast majority of the design. It could've been manufactured by three companies (Lockheed Martin, Intel and NEC) to bring costs down further and to make sure supply would not be a problem at all for a worldwide launch in 2000.

The Real3D/100 was a PCI board with a lot of chips, designed by 1995. It was very expensive for several reasons: a very, very low number of boards produced, a multi-chip card, and lots of RAM (over 20 MB in some configurations) at a time when RAM was expensive. By 1996 the situation could've been VASTLY different. All those chips could have been combined into a single chip, and RAM prices dropped in mid-1996. With massive production, the price could've been reduced from thousands of dollars to a few hundred ($200~$300).


Both the PS1 and 3DO M2 had massive integration, i.e. the PS1's CPU with its on-chip GTE (Geometry Transform Engine), and the M2's BDA (Bulldog ASIC) containing something like TEN processors on one chip. If 3DO could do that with the M2, then certainly Lockheed Martin could've done it with the Real3D/100, making a single-chip, speed-bumped version for Sega in 1996.


As far as such a next-generation Real3D chip for a SEGA console launching in 2000, I think that could've been done too, especially as an effort to combat the combined power of the Emotion Engine and Graphics Synthesizer. The GS was a 16-pixel-pipeline design, even though it had to use 8 of its pixel pipes as texture units when texturing. Lockheed Martin Real3D would've been far ahead of Nvidia (and thus the original GeForce GPU) by 1999, had Real3D been competing in the consumer market up to that point.
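To put the GS's pipeline count in perspective, here's a quick peak-fillrate calculation. This is just a sketch assuming the commonly cited 147.456 MHz GS clock; treat the results as nominal peaks, not sustained rates:

```c
/* Peak GS fillrate = pipelines x clock. A rough sketch, assuming the
   commonly cited 147.456 MHz GS clock; sustained rates are lower. */
#include <stdio.h>

int main(void) {
    const double clock_hz = 147.456e6;
    printf("untextured: %.2f Gpixels/s (all 16 pipes)\n",
           16 * clock_hz / 1e9);
    printf("textured:   %.2f Gpixels/s (8 pipes, 8 feeding textures)\n",
           8 * clock_hz / 1e9);
    return 0;
}
```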

I know in reality Lockheed Martin Real3D never really tried to compete in the consumer/gamer space; all they had was the i740 chip used in their StarFighter cards and on Intel motherboards, and it could only compete with the original Voodoo Graphics at a time when Voodoo2 was coming out. If LM Real3D had made the decision in 1994/95 to compete in the consumer/gamer space, they could've beaten everyone: 3dfx, PowerVR, Nvidia, ATI, etc. Lockheed Martin Real3D's only true competition was Evans & Sutherland, whose RealIMAGE family was a direct competitor to Real3D, but RealIMAGE did not take off in the arcade market like Real3D did, and E&S never even attempted to get into the consumer/gamer market. E&S did, however, provide the texture-mapping / graphics-rendering technology inside NAMCO's System 22 (and 23) family of arcade boards, starting with the original Ridge Racer in 1993.

BTW, it was not *exactly* Lockheed Martin Real3D that did the graphics in SEGA's MODEL 2 family of boards; it was Martin Marietta (before the Lockheed merger) with their texture-mapping and database tech. MODEL 2's graphics were a precursor to the Real3D/100, and to Real3D in general, designed in 1992/93. Not only was MODEL 2 less powerful than the Real3D/100 (300,000 polys/sec vs 750,000 polys/sec), it was also not nearly as feature-rich; MODEL 2 even lacked Gouraud shading, something even the PS1 had. Likewise, the technology NAMCO got from Evans & Sutherland for System 22 (starting with Ridge Racer) was called TR³ (Texture Mapping, Real-Time, Real-Visual Rendering System), a precursor to the E&S RealIMAGE family, much like MODEL 2's tech from Martin Marietta was a precursor to Real3D.


Imagine Lockheed Martin Real3D technology mass-produced on the scale that Nvidia achieved, with single-chip designs. It could've been priced lower than 3dfx's multi-chip boards. Then add the even larger scale of production for game consoles compared to PC add-in boards.

Please don't bash me too hard, lol; this is, after all, a "what if" thread about Saturn and Dreamcast. It's just that my Saturn and Dreamcast use highly mass-produced Lockheed Martin Real3D GPUs and PowerPC CPUs. The Saturn could've sold at $299 or even $399 in 1996; the Dreamcast could've sold at $299 in 2000. Oh, and BTW, there would NEVER have been a Genesis 32X in 1994. Sega would've made the Mega-CD/Sega CD more like their 'X Board' (used in After Burner II in 1987) or their 'Y Board' (used in Galaxy Force II in 1988) as the basis for the CD-ROM upgrade in 1991/1992, holding out with the Mega Drive / Genesis plus CD-ROM upgrade until the PowerPC + Real3D based Saturn arrived in 1996.

Well, it's a good thing that today (as of the early 2000s, actually) ATI / AMD probably have the lion's share of Lockheed Martin Real3D's IP/technology and engineers, although Nvidia has some too.
 
I still think using the SH-2 CPUs and a MODEL 1 graphics chip from PowerVR would have been the best bet for the Saturn.

The 3D would have been good for the time and the cost would have been pretty low. I believe both the Saturn and PSX were 320x240.

http://www.youtube.com/watch?v=4LF9J_O5ens&feature=related
http://www.youtube.com/watch?v=E5BW4jLi-eM&feature=related

Resident Evil certainly looks better running on the card than on the Saturn or PS1.


For Dreamcast, a T&L chip like ELAN would have certainly helped. Perhaps a faster chip than the PowerVR part it had at the time. I don't know how much faster they could have pushed it.
 
I still think using the SH-2 CPUs and a MODEL 1 graphics chip from PowerVR would have been the best bet for the Saturn.

You mean PowerVR PCX1 or PCX2 (both Series 1) from 1996/1997, not the General Electric MODEL 1 arcade technology introduced in 1992 with Virtua Racing. I disagree with PowerVR because, by 1996, Sega could've done better by using the 3DO M2, which was slightly more powerful than PowerVR PCX2, and Sega almost had a deal with Matsushita for M2.

The 3D would have been good for the time and the cost would have been pretty low. I believe both the Saturn and PSX were 320x240.

Two SH-2 CPUs would not have been powerful enough to drive PowerVR PCX1/PCX2, which required a powerful CPU to get the most out of it. Sega would've been better off using the 3DO / Matsushita M2, which had twin PowerPC 602 CPUs, with one of them acting as the geometry processor for the BDA (Bulldog ASIC) graphics chip. Remember, Sega almost had a deal with Matsushita over M2.

Yet, Sega could've done better than even the M2.

Sega should have listened to the faction within their management that wanted to scrap the Saturn altogether and shoot for a far better, more powerful Lockheed Martin Real3D-based console in 1996, probably with a single PowerPC CPU, most likely the 603e. The power/speed of the CPU would not have mattered much at all (as in the MODEL 3 arcade board) because of the geometry processor the Real3D/100 had. The three main chips could've been combined into a single GPU, offering a much cheaper manufacturing solution compared to the actual Saturn's absolute MESS of at least 8 separate chips. The Sega console of 1996 could've been done with 3 chips: 1.) PowerPC CPU, 2.) Real3D GPU, 3.) sound hardware plus backwards-compatibility hardware for Genesis + CD-ROM, plus any other stuff the console needed, on that same chip.

From Totalgames.net (cannot find link even on web.archive.org)

As soon as any console is launched, work is usually underway on a replacement but the Saturn's troubles gave this process an unusual urgency for Sega. By 1995, rumours surfaced that US defence contractors Lockheed Martin Corp. were already deep into the development of a replacement, possibly even with a view to releasing it as a Saturn upgrade. There were even claims that during Saturn's pre-launch panic a group of managers argued the machine should simply be scrapped in favour of an all-new LMC design.

Yeah... ^That^ is *exactly* what should've been done, beginning in mid-to-late 1994, before the Saturn launched in Japan on November 22. Sega should have killed the Saturn outright, opting to hold out with the Genesis + CD-ROM until 1996, when a far better PowerPC + Real3D based console could've been launched. If based on the Real3D/100, it would've been far superior not only to the PlayStation, Nintendo 64 and the Martin Marietta/SEGA MODEL 2 arcade board, but also to the Rendition Verite, 3dfx Voodoo Graphics, PowerVR PCX1/PCX2, 3DO / Matsushita M2 and any other consumer (and many professional) 3D systems of 1996-1997.

Such a console could've launched at $399, or probably even $299, because of massive chip integration, massive production volumes for a consumer device, and falling RAM prices by mid-to-late 1996. By late 1997 and late 1998, the price could've been reduced to $199 and then $149 to match PS1 & N64 prices, but with vastly better hardware and more compelling games. Sega would've won the battle for 3rd-party support. They would've had Dragon Quest 7 in Japan; even with the Saturn, Sega almost won DQ7, but Sony edged them out. With a far better console that at least matched PS1's userbase, though, things would've gone SEGA's way, not Sony's. Nintendo was already out of the picture in Japan with the N64 as it was in real history; imagine how much worse off they'd have been if Sega and Sony had split the market.



http://www.youtube.com/watch?v=4LF9J_O5ens&feature=related
http://www.youtube.com/watch?v=E5BW4jLi-eM&feature=related

Resident Evil certainly looks better running on the card than on the Saturn or PS1.


For Dreamcast, a T&L chip like ELAN would have certainly helped. Perhaps a faster chip than the PowerVR part it had at the time. I don't know how much faster they could have pushed it.

The ELAN T&L chip was pretty damn good at 100 MHz, but paired with a separate rendering chip, even one faster than the PowerVR CLX2 aka PowerVR2DC, it would've cost more than a single-GPU design with a combined geometry engine, multiple pixel pipelines, etc. Sega needed something like the GeForce 256, but more powerful. Lockheed Martin Real3D, again, could've provided such a solution at consumer price levels, with massive integration on .25 micron (250nm) and massive high-volume production at Intel & NEC as well as Lockheed themselves. Toshiba & Sony did it with the Emotion Engine CPU in 1998/1999 for an early-2000 launch of the PlayStation 2; certainly Lockheed, SEGA, Intel and NEC could've done it in the same timeframe for a late-2000 launch of the next-gen SEGA console.
 
update: dreamcast @ 268.4 mhz completely stable with a new cooling solution based on small ramsinks...substantial performance gains in some games...i am currently looking for faster oscillators to see how far my little beast can go...

the only real issue thus far is vmu corruption...i need to overclock my vmu too for both sides to communicate properly...but i will deal with this later :)
 
The 3d would have been good for the time and the cost would have been pretty low. I believe both the saturn and psx were 320x240

http://www.youtube.com/watch?v=4LF9J_O5ens&feature=related
http://www.youtube.com/watch?v=E5BW4jLi-eM&feature=related

Resident Evil certianly looks better running on the card then on the saturn or ps1

Resolution varied, but that was basically the minimum. Stuff like Tekken 3 ran at 385x480 and, IIRC, Wipeout 3 was even higher.

RE1 definitely looked better on PC, but unlike the Saturn version, it was based on the PSX original asset-wise. Background resolution was the same at 320x240, though it ran at a noticeably higher frame rate.
 
Oh boy we're back to Sega using M2 in 1996.....

But in the reality of the situation, R3D didn't have what it took to compete at the time: excellent bang for the buck, i.e. 3D graphics that nothing else could rival at the price, not even hardware costing much more.

This is probably why nobody used Real3D's hardware for a console. The spec sheet was cool, but the reality was probably different.

This doesn't look like a solution that would be economical in a $200-300 game console. Also, if it was amazing competitive hardware, why didn't they make a PCI card with just one of their GPUs and compete with Voodoo Graphics in '96? I have a feeling that Voodoo Graphics would give it a whipping, or Real3D was really out of touch with how to make money.


update: dreamcast @ 268.4 mhz completely stable with a new cooling solution based on small ramsinks...substantial performance gains in some games...i am currently looking for faster oscillators to see how far my little beast can go...

the only real issue thus far is vmu corruption...i need to overclock my vmu too for both sides to communicate properly...but i will deal with this later :)
I've overclocked an N64's CPU (to around 150 MHz, I think). It's been many years, but I recall there are a few possible multipliers for the CPU.

Games ran faster, but the gameplay ran too fast too. It was like a turbo button.
 
If sold at $299~$399 the Real3D/100 would've whipped the 3Dfx Voodoo Graphics and given the Voodoo2 a rough time, IMHO.

Granted, the Real3D/100 would not have been on par with Voodoo3, PowerVR2DC, or TNT2, but I'm sure Lockheed Martin could've developed a Real3D/200 by 1998-1999.

Remember, I am not speaking of the very low-end, low-cost Real3D i740 chip (codenamed 'Auburn'), co-developed with Intel, which was released in early 1998 around the time Voodoo2 came out. The i740 lacked the performance features the Real3D/100 had, including the geometry processor.
 
And how do you know the R3D/100 performs anything like what they claimed? There were other 3D chips that boasted amazing specs, but as you added effects their performance imploded. That's what made Voodoo Graphics special: it could actually maintain full performance with exceptional image quality and lots of effects.

Yes, I know you're not referring to the 740. The 740 isn't THAT bad, though. It's similar to a Riva 128, which means it's not much different from Voodoo Graphics. It has some image-quality concessions, like approximated trilinear filtering. Its biggest issue is that it was a year too late.

Check out my little tiny project. ;)
http://vogons.zetafleet.com/viewtopic.php?t=26536
 
And how do you know the R3D/100 performs anything like what they claimed? There were other 3D chips that boasted amazing specs, but as you added effects their performance imploded. That's what made Voodoo Graphics special: it could actually maintain its 45 megapixels/s with full effects.

Because Lockheed Martin was known for publishing real-world performance numbers/specs for their products, including the Real3D/Pro-1000 used in Sega's MODEL 3 boards. The Real3D/100 got 33 megapixels/sec and 750,000 textured polys/sec with all features on: way higher than Voodoo1 in polygon performance (350,000-something polys/sec), and the lower fillrate didn't matter because it included AA and all the other features. You can't say the same about Voodoo1's figures in real-world situations.

The Real3D/100 and Voodoo Graphics/Voodoo1 are of the same generation, designed in the same timespan (1994-1996), yet developers were seeing better performance and IQ on the Real3D/100 board.
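As a sanity check on how those two numbers fit together, here's my own back-of-the-envelope arithmetic; nothing here comes from a Lockheed Martin datasheet, it's just simple division on the claimed figures:

```c
/* If the R3D/100 really sustained 33 Mpixels/s and 750,000 polys/s
   simultaneously, the implied average polygon size and per-frame
   budgets look like this. Pure arithmetic on the claimed figures. */
#include <stdio.h>

int main(void) {
    const double fill = 33e6;   /* claimed pixels/sec */
    const double poly = 750e3;  /* claimed polys/sec  */

    printf("avg polygon size: %.0f pixels\n", fill / poly);
    printf("poly budget: %.0f/frame @30fps, %.0f/frame @60fps\n",
           poly / 30, poly / 60);
    return 0;
}
```

That works out to roughly 44 pixels per polygon, which is at least a plausible average triangle size for a mid-90s scene, so the two claims aren't internally contradictory.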

An old USENET post, circa 1996:

First, let me start off by saying I am going to be buying a Voodoo card. For low-end consumer-grade flight sims and such, the Voodoo looks like about the best thing available. Second, I am not necessarily responding to just you, because there seems to be a hell of a lot of confusion about Lockheed Martin's graphics accelerators. I have been seeing posts all over the place confusing the R3D/100 with the AGP/Intel project that L.M. is working on. The R3D/100 is *NOT* the chipset that is being developed for the AGP/Intel partnership.

However, since your inference is that the Voodoo is faster than the R3D/100, I have to say that you are totally dead wrong. While the specs say that the Voodoo is *capable* of rendering a higher number of pixels per second, or the same number of polygons per second as the R3D/100, the specs fail to mention that these are not real-world performance figures, and you probably will not ever see the kind of performance that 3Dfx claims to be able to achieve. This does *not* mean that the Voodoo is not a good (it's great, actually) card, just that the game-based 3D accelerator companies (all of them) don't tell you the whole story.

The Voodoo uses a polygon raster processor. This accelerates line and polygon drawing, rendering, and texture mapping, but does not accelerate geometry processing (i.e. vertex transformation like rotate and scale). Geometry processing is done by the CPU on the Voodoo, as well as on every other consumer (read: game) grade 3D accelerator. Because the CPU must handle the geometry transforms and such, you will never see anything near what 3Dfx, Rendition, or any of the other manufacturers claim until CPUs get significantly faster (by at least an order of magnitude). The 3D accelerator actually has to wait for the CPU to finish processing before it can do its thing.

I have yet to see any of the manufacturers post what CPU was plugged into their accelerator, and what percentage of CPU bandwidth was being used, to produce the numbers that they claim. You can bet that if it was done on a Pentium 200, the only task the CPU was handling was rendering the 3D model they were benchmarking. For a game, rendering is only part of the CPU load. The CPU has to handle flight modelling, enemy AI, environmental variables, weapons modelling, damage modelling, sound, etc, etc.

The R3D includes both the raster accelerator (see above) and a 100 MFLOP geometry processing engine. Read that last line again. All geometry processing is offloaded from the system CPU and onto the R3D floating-point processor, allowing the CPU to handle more important tasks. The Voodoo does not have this, and if it were to add a geometry processor, you would have to more than double the price of the card.

The R3D also allows for up to 8M of texture memory (handled by a separate texture processor), which allows not only 24-bit texture maps (RGB), but also 32-bit maps (RGBA), the additional 8 bits being used for 256-level transparency (alpha). An additional 10M can be used for frame buffer memory, and 5M more for depth buffering.

There are pages and pages of specs on the R3D/100 that show that, in the end, it is a better card than the Voodoo and other consumer accelerator cards, but I guess the correct question is: for what? If the models in your scene are fairly low-detailed (as almost all games are, even the real CPU pigs like Back to Baghdad), then the R3D would be of little added benefit over something like the Voodoo. However, when you are doing scenes where the polys are 2x+ more than your typical 3D game, the R3D really shines. The R3D is and always was designed for mid-to-high-end professional-type applications, where the R3D/1000 (much, much faster than the 100) would be too expensive, or just plain overkill. I've seen the 1000 and I have to say that it rocks! I had to wipe the drool from my chin after seeing it at SIGGRAPH. (We're talking military-grade simulation equipment there, boys, both in performance and price!)

Now then, as I mentioned before, I'm going to be buying the Voodoo for my home system, where I would mostly be playing games. But I am looking at the R3D for use in professional 3D applications. More comparable 3D accelerators would not be the Voodoo or Rendition-based genre, but more along the lines of high-end GLINT-based boards containing Delta geometry accelerator chips (and I don't mean the low-end game-based GLINT chips, or even the Permedia for that matter), or possibly the next line from Symmetric (the Glyder series), or Intergraph's new professional accelerator series.

Ted K.
Shadowbox Graphics
Chicago - where being dead isn't a voting restriction.

http://groups.google.com/group/comp...t-sim/msg/555aacb2319f5834?dmode=source&hl=en


Now granted, the Real3D/100 was a complete mid-to-high-end 3D graphics processor for professional applications; the much more powerful Real3D/Pro-1000 was used (2x in parallel) in SEGA's MODEL 3 board. If the Real3D/100 had been given just 8 MB RAM as a consumer version, it would've beaten the 3dfx Voodoo by a mile, had developers supported it with its own API and/or OpenGL.
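To make concrete what "a 100 MFLOP geometry engine" buys you, here's a sketch of the per-vertex work that USENET post is talking about. The 28-FLOP figure is just the cost of a bare 4x4 matrix transform; Real3D's actual pipeline also did lighting and clipping, so its real per-vertex cost was higher and these are illustrative numbers only:

```c
/* The per-vertex math a geometry processor offloads from the CPU:
   a 4x4 matrix transform is 16 multiplies + 12 adds = 28 FLOPs.
   Illustrative only; a real pipeline (lighting, clipping) costs more. */
#include <stdio.h>

typedef struct { float x, y, z, w; } Vec4;

static Vec4 transform(const float m[16], Vec4 v) {
    Vec4 r;
    r.x = m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w;
    r.y = m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w;
    r.z = m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w;
    r.w = m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w;
    return r;
}

int main(void) {
    /* Upper bound on what a 100 MFLOPS engine could transform if it
       did nothing but bare matrix multiplies: */
    printf("<= %.1f M vertices/sec\n", 100e6 / 28.0 / 1e6);

    /* exercise the function so the sketch actually runs */
    const float identity[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
    Vec4 p = transform(identity, (Vec4){1, 2, 3, 1});
    printf("identity transform: %.0f %.0f %.0f\n", p.x, p.y, p.z);
    return 0;
}
```

A mid-90s host CPU had to run exactly this kind of loop in software for every vertex, on top of game logic, which is why the post argues the spec-sheet polygon rates of CPU-fed rasterizers were never reached in practice.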
 
Ok, so in other words, the R3D/100 was probably a very expensive chip if it had a geometry processor integrated. Nobody else tried to do that and sell it to the consumer market until 1999, with the NV10.

It actually sounds like Voodoo Graphics could push about a million polys/sec if you put enough CPU behind it (a P166, according to a usenet post). Obviously this would require the CPU to focus entirely on processing geometry, though.

However, from what I've read, texturing performance is vastly more important than geometry capability for games. Massive polygon counts are more the realm of computer-aided design; the author of that post essentially says as much. Even today, games keep geometry levels down and use fancy texture mapping to fake polygonal detail.

It would be interesting to know what it cost to build one of those R3D/100 chips in comparison to the modular Voodoo Graphics chipset, and the relative performance per cost. Could you buy several times the fillrate for the same price with Voodoo?
 
@swaaye: the animation in some games runs faster, but in most of them (that is, the properly coded ones) animation speed remains the same; it just becomes more fluid...
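For anyone wondering what separates the "properly coded" games from the turbo-button ones, here's a sketch. This is hypothetical game-loop code, not anything from an actual DC or N64 title:

```c
/* Why overclocking makes some games run too fast and others just
   smoother. Hypothetical loops, not from any real game. */
#include <stdio.h>

/* Frame-locked: one fixed step per rendered frame. Clock the console
   higher, get more frames, and the game world speeds up too. */
static void step_per_frame(float *pos) { *pos += 1.0f; }

/* Time-based: movement is scaled by elapsed wall-clock time, so extra
   frames add fluidity without changing game speed. */
static void step_per_second(float *pos, float dt) { *pos += 60.0f * dt; }

int main(void) {
    float locked = 0, timed = 0;
    /* simulate one real second on hardware overclocked to 90 fps
       when the game was designed around 60 fps */
    for (int i = 0; i < 90; i++) {
        step_per_frame(&locked);
        step_per_second(&timed, 1.0f / 90.0f);
    }
    printf("frame-locked: %.0f units (50%% too fast)\n", locked);
    printf("time-based:   %.0f units (correct speed)\n", timed);
    return 0;
}
```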
 
Ok, so in other words, the R3D/100 was probably a very expensive chip if it had a geometry processor integrated. Nobody else tried to do that and sell it to the consumer market until 1999, with the NV10.


That's absolutely true. I do believe, though, that one version of the canceled TriTech Pyramid3D used an on-chip geometry processor. No, I am certain of it: the TR25201 version (here and here) had the geometry processor. It would have been a full GPU out in 1997.

It actually sounds like Voodoo Graphics could push about a million polys/sec if you put enough CPU behind it (a P166, according to a usenet post). Obviously this would require the CPU to focus entirely on processing geometry, though.

1 million polys/sec is more like raw performance, not with all effects on. I'm sure the Real3D/100's raw performance was far higher than 750,000 polys/sec.

However, from what I've read, texturing performance is vastly more important than geometry capability for games. Massive polygon counts are more the realm of computer-aided design; the author of that post essentially says as much. Even today, games keep geometry levels down and use fancy texture mapping to fake polygonal detail.

Mostly true. Yet if games had been developed on the Real3D/100, if the chipset/board had been the target platform, then game performance would've been far superior to that of games targeting 3dfx Voodoo Graphics.

It would be interesting to know what it cost to build one of those R3D/100 chips in comparison to the modular Voodoo Graphics chipset, and the relative performance per cost. Could you buy several times the fillrate for the same price with Voodoo?

By adding more PixelFX chips, yes, I think. Well, maybe; I don't really know. I do know, however, that the original SST-1 (Voodoo Graphics) supported up to three TexelFX chips per board. The version used in Atari/Midway's SF Rush arcade game used two TexelFX chips.
 
I cannot let this thread go without posting this article, which I've posted in several other threads, from the November 1995 issue of Next Generation, on Saturn 2 using the Real3D/100.



Also, this article from the August 1995 Next Generation on the Real3D/100 itself:
http://i41.tinypic.com/30bz8s3.jpg
http://i39.tinypic.com/1z1vl15.jpg

One of the reasons the Real3D/100 cost so much (several grand) is that it had up to 23 MB of RAM on board: 5 MB z-buffer/depth buffer + 10 MB framebuffer + 8 MB for textures. If that total had been brought down to 8 MB, the price could've probably been brought down to $399~$499 even without making a single-chip version, because RAM prices crashed in mid-to-late 1996. Certainly, if the three main processors (geometry, graphics, texture) had been merged into a single chip and moved down to a smaller process node, the price would've plummeted further. Add to that mass production in the millions and a willingness on Lockheed Martin's part (which never materialized) to compete in the consumer/gamer markets (PC and console), and the price could've been cheaper than 3dfx's two-chip Voodoo Graphics solution.
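For a sense of what that 23 MB split buys, here's a rough reconstruction. The 1280x1024 resolution and 32-bit depths are my assumptions, chosen because they land exactly on the quoted buffer sizes; the board's actual configuration may have differed:

```c
/* One plausible way the R3D/100's buffers add up. The 1280x1024 /
   32-bit assumptions are mine; the real board may have split it
   otherwise. */
#include <stdio.h>

int main(void) {
    const double MB = 1024.0 * 1024.0;
    const int w = 1280, h = 1024;

    double zbuf = (double)w * h * 4 / MB;      /* 32-bit depth        */
    double fbuf = (double)w * h * 4 * 2 / MB;  /* 32 bpp, 2 buffers   */
    printf("z-buffer:    %.1f MB (quoted: 5 MB)\n",  zbuf);
    printf("framebuffer: %.1f MB (quoted: 10 MB)\n", fbuf);
    printf("textures:    8.0 MB (quoted)  -> total %.1f MB\n",
           zbuf + fbuf + 8.0);
    return 0;
}
```

Either way, 8 MB of dedicated texture memory alone was enormous for 1995/96, which is a big part of why the board cost thousands before any consumer cost-reduction.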
 
Pyramid3D was a really cool-sounding chip, but obviously they hit some unforeseen complications. I believe its geometry engine was a sort of proto vertex shader. Those poor BitBoys guys never got a break. There's a thread on here somewhere with all of the details.

There was another planned PC consumer board, the Hercules Thriller Conspiracy, with the Verite V2200 + Fujitsu Pinolite geometry chip. It never came out, though, probably because the V2200 became obsolete so quickly, and probably because the geometry chip wasn't particularly useful for more than the Quake engine. D3D literally did not support geometry acceleration until DX7.

On the topic of specialized geometry hardware, I had a little realization last night: almost every 3D console has had specialized hardware for it. The N64's RSP was designed to do it instead of the CPU. The PS1 has the GTE. The PS2 has its two VUs. The DC has a main CPU designed for it.

The PC was stuck with the CPU doing it. But thankfully PC CPUs quickly became quite capable of decent performance due to better FPUs (P6/K7) and floating-point SIMD (SSE/3DNow!). I remember that in 2000 the NV10's T&L hardware actually became a potential bottleneck because CPUs had surpassed it.


edit:
lol. Megadrive1988, we've had this discussion before.
http://forum.beyond3d.com/showthread.php?t=31484
 
Pyramid3D was a really cool-sounding chip, but obviously they hit some unforeseen complications. I believe its geometry engine was a sort of proto vertex shader. Those poor BitBoys guys never got a break. There's a thread on here somewhere with all of the details.

Yup. I was really excited about the TriTech/BitBoys Pyramid3D chip with its embedded geometry engine.

There was another planned PC consumer board, the Hercules Thriller Conspiracy, with the Verite V2200 + Fujitsu Pinolite geometry chip. It never came out, though, probably because the V2200 became obsolete so quickly, and probably because the geometry chip wasn't particularly useful for more than the Quake engine. D3D literally did not support geometry acceleration until DX7.

I was going to mention the Thriller Conspiracy board but chose not to. I didn't forget about it.

On the topic of specialized geometry hardware, I had a little realization last night: almost every 3D console has had specialized hardware for it. The N64's RSP was designed to do it instead of the CPU. The PS1 has the GTE. The PS2 has its two VUs. The DC has a main CPU designed for it.

Yeah, the N64's RCP (Reality Co-Processor) was a full-blown GPU, not needing the help of the MIPS CPU; the RCP had the Reality Signal Processor as its geometry engine. The PS1's graphics chip lacked that, so it relied on the GTE on the CPU for geometry.


The PC was stuck with the CPU doing it. But thankfully PC CPUs quickly became quite capable of decent performance due to better FPUs (P6/K7) and floating-point SIMD (SSE/3DNow!). I remember that in 2000 the NV10's T&L hardware actually became a potential bottleneck because CPUs had surpassed it.

Yep.

edit:
lol. Megadrive1988, we've had this discussion before.
http://forum.beyond3d.com/showthread.php?t=31484

lol, indeed.
 
-polygon performance with all features enabled-

N64: 160,000 polys/sec
NAMCO System 22: 240,000 polys/sec
SEGA MODEL 2: 300,000 polys/sec (no Gouraud shading)
3DFX: 250,000 ~ 350,000 polys/sec
PowerVR PCX1/2: 250,000 ~ 350,000 polys/sec
3DO M2: 300,000~500,000 polys/sec
Real3D/100: 750,000 polys/sec
3DO MX: 1,000,000 polys/sec
SEGA MODEL 3 (2x Real3D/Pro-1000s): 1,000,000 ~ 1,500,000 polys/sec
SEGA Dreamcast/NAOMI: 3,000,000 ~ 7,000,000 polys/sec

PS1 and Saturn are not included because they have no Z-buffer; their figures would make them look more powerful than they actually are.
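Dividing those per-second figures by frame rate makes the generational gap easier to read. A trivial sketch, using the upper end of each quoted range from the list above:

```c
/* Per-frame polygon budgets from the per-second figures above
   (upper end of each quoted range). Simple division, nothing more. */
#include <stdio.h>

int main(void) {
    const struct { const char *name; double pps; } sys[] = {
        { "N64",                   160e3 },
        { "SEGA MODEL 2",          300e3 },
        { "3DO M2",                500e3 },
        { "Real3D/100",            750e3 },
        { "MODEL 3 (2x Pro-1000)", 1.5e6 },
        { "Dreamcast/NAOMI",       7.0e6 },
    };
    for (int i = 0; i < 6; i++)
        printf("%-24s %8.0f polys @30fps  %8.0f @60fps\n",
               sys[i].name, sys[i].pps / 30, sys[i].pps / 60);
    return 0;
}
```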
 
In a way, that shows how Sega made the right decision to go with the CLX for Dreamcast, as its polygon performance is way better than even the high-end MODEL 3 hardware. And the Saturn's performance no longer seems quite so bad on paper when compared with the N64 numbers :)
 
yes, dreamcast outguns model 3 in every aspect...it's sad that sega did not port many games from the platform to dc and the ports that they did develop/outsource were of low quality...
 