SCEI & Toshiba unveil 65nm process with eDRAM

Once in a while I notice it. Most people game with the refresh at 60Hz, where it is a rather serious (and pervasive) issue. Raise the refresh rate high enough and it is significantly reduced.
Well, as I mentioned, 60Hz is bothersome to me even with a static image - the refresh itself is too low. As for tearing, I'm sure it wouldn't bother me in multiplayer FPS games, but then the visuals are secondary in those - I actually preferred playing Q2 in software mode, no matter how bad it looked, simply because the blander lighting made everything easier to see :)

How much fill would you need? Running 640x480 with 4x OD you have enough fill for six passes on a GF1 at 60FPS, twelve @30.
Actually, overdraw with volumes can get a whole lot higher than that, but that's beside the point.
Things just don't work as simply as calculating OD against available fill like that. But that has been debated before on this forum, so let's not go over it again.
Just a note though - given the ratio of average volume triangle size to its vertex calculation overhead, increasing fillrate would keep improving rendering speed well past 3 Gpix (at 640x480, that is).
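For what it's worth, the fill-budget arithmetic being traded here is easy to sanity-check. A back-of-the-envelope sketch in Python, using the GF1's nominal 480 Mpix/s spec-sheet peak (not a measured rate):

```python
# Back-of-the-envelope fill budget: how many full passes over an
# overdrawn frame fit into a GPU's peak fill rate at a given framerate.
# 480e6 is the GeForce 256 spec-sheet peak, not a measured figure.

def passes_available(fill_rate, width, height, overdraw, fps):
    """Times per frame you can touch every pixel, counting overdraw."""
    pixels_per_frame = width * height * overdraw
    return fill_rate / (pixels_per_frame * fps)

GF1_FILL = 480e6  # pixels/s, nominal peak

print(round(passes_available(GF1_FILL, 640, 480, 4, 60), 1))  # 6.5
print(round(passes_available(GF1_FILL, 640, 480, 4, 30), 1))  # 13.0
```

This matches the pass counts being argued over; it is, of course, exactly the kind of peak-rate arithmetic the reply above says real rendering doesn't reduce to.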

At any rate, GF1's better balance might mean it would be closer to stock GF2 performance, but the baseline performance we're looking at is still rather low, relatively speaking.
 
Faf-

Well, as I mentioned, 60Hz is bothersome to me even with a static image - the refresh itself is too low.

Static images (particularly on a white bg) are the worst-case scenario for refresh rates, though. TV is at 60Hz (NTSC, anyway). I don't like using refresh rates under 100Hz on a normal basis (even 85 gives me eye fatigue after a few hours), but in fast-moving games it isn't nearly as bad as with a static image.

I actually preferred playing Q2 in software mode, no matter how bad it looked, simply because the blander lighting made everything easier to see :)

You know, you could just kill the lightmaps and still run in hardware mode, game runs a hell of a lot faster that way :)

Things just don't work as simply as calculating OD against available fill like that. But that has been debated before on this forum, so let's not go over it again.

IIRC, from that conversation no one came up with a fill-limited title @ console res though.

Just a note though - given the ratio of average volume triangle size to its vertex calculation overhead, increasing fillrate would keep improving rendering speed well past 3 Gpix (at 640x480, that is).

Over 3Gpix for what type of rasterizer? One along the lines of the GS, perhaps, but how would you possibly use that much fill @640 on, say, a DX9 part? Sure, if your texel rate is half your pixel rate and you are forced to multipass even for simple effects like Dot3, then I could see where you could chew up fill pretty fast, but how would you go about it running more advanced hardware?

At any rate, GF1's better balance might mean it would be closer to stock GF2 performance, but the baseline performance we're looking at is still rather low, relatively speaking.

How many passes would it take to pull off cube maps on the GS? You've stated that you could pull off Dot3 in two passes on the GS; factoring that in, plus the fact that you must apply textures, aren't you cutting your MPixel fill down to less than a 100 MPixel edge over the GeForce1 (ignoring cube maps)?

Chap-

Sorry, but I just don't believe this.
Again, I have a GF2MX (= GF1), and even at 640x480 + low details, current games either do not run at stable framerates and/or they certainly do not look better than many PS2 games.
I have tried running games/demos of NWN, RSC, Fifa, RtCW, America's Army, Battlefield1942, AvP2, NOLF2, RF on my setup.

OK, you don't believe it-

[benchmark chart: image038.gif]

[benchmark chart: MDK2_640.gif]

[benchmark chart: q3_640.gif]

[benchmark chart: ut_min_640.gif]


I tried finding benches of games that were also available on consoles, for comparison's sake. Also, some benches running UT2K3 at high-quality settings on a GF2MX: 35.7 FPS / 71 FPS (76 FPS / 168.2 FPS at medium settings)

http://www.anandtech.com/video/showdoc.html?i=1647&p=6

http://www.anandtech.com/video/showdoc.html?i=1647&p=11

For a point of reference, an earlier build of the UT2K3 engine had the GF2MX boards running at best even with the GF1 DDR (@640x480 the MX440 was as fast; slower at all higher resolutions).

http://www.anandtech.com/video/showdoc.html?i=1580&p=5

You should be able to run a slew of games at console res and console framerates on a GF2MX.
 
Sorry Ben, benchmarks are cool, but going by my experience with the latest games, things are different.

Let's take NWN: I can run it at good framerates if I drop the details to low, or I can crank up the details and the framerates go to hell, all at 640x480.
Now compare that to BGDA, which runs with great detail at a solid 60fps and stays that way even when the screen is covered with enemies.

Then there is also RSC. 640x480 at low details gives me >30fps, but when the CPU cars are onscreen the fps flutters. 640x480 at high details gives me a stuttering <30fps. And I assure you GT3 slaughters RSC at 640x480 low details.

Which brings me to why this GF1 > PS2 comparison using certain PC/console games is nonsense: they are different hardware working in different environments. There are just too many factors to consider for a fair comparison.
 
Sounds like we have similar tastes in refresh rates Ben. :) I also prefer 100hz.

You know, you could just kill the lightmaps and still run in hardware mode, game runs a hell of a lot faster that way
Probably, but I guess it was sort of a 'used to' thing. I had most of my play time with Q2 early on, when I was still using a RagePro that couldn't handle the hw-accelerated version at all.
After I upgraded the graphics, I couldn't quite get used to the accelerated look (it actually hurt my play - online).

IIRC, from that conversation no one came up with a fill-limited title @ console res though.
Might be, but as you said yourself, PC titles aren't using graphics hardware very effectively.

Over 3Gpix for what type of rasterizer?
I was referring only to pixel volume fill, no other operation: the untextured, single-color triangles that only update the stencil count. That's the same for any current rasterizer - a higher texel rate won't do you any good here.
The GS can actually draw these at a max of 2.4 Gpix, but as I said, faster fill would still make a difference.
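To put that 2.4 Gpix figure in perspective, a quick Python sketch (taking the quoted rate at face value) of how many layers of stencil-only volume overdraw it buys per frame at console resolution:

```python
# Layers of stencil-only volume overdraw per frame that a given
# fill rate buys at a given resolution and framerate.
# 2.4e9 pix/s is the GS stencil-fill figure quoted above.

def volume_layers(fill_rate, width, height, fps):
    return fill_rate / (width * height * fps)

print(round(volume_layers(2.4e9, 640, 480, 60)))  # 130
```

Roughly 130 layers of volume fill per 640x480 frame at 60fps, which is why even "only" 2.4 Gpix goes a long way here, and why still-faster fill keeps helping.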

How many passes would it take to pull off cube maps on the GS? You've stated that you could pull off Dot3 in two passes on the GS; factoring that in, plus the fact that you must apply textures, aren't you cutting your MPixel fill down to less than a 100 MPixel edge over the GeForce1 (ignoring cube maps)?
Yes, and I also stated that I believe the GF1 would probably have a performance edge on DOT3 and stuff, but that would be greatly offset by other factors.
As for cubemaps, pre-GF3 hw didn't have dependent texture lookups yet, so the pixel shader math where cubemap lookups are used for renormalization and other nifty things doesn't apply (which is just as well, since the GS doesn't have dependent lookups either).
You're pretty much left with basic reflection uses and stuff, which I'd say are a rather small part of total rendering.
In OpenGL there are 3 passes used for applying a cubemap; I think it could be done in 2 for our case.
 
[benchmark chart: MDK2_640.gif]

:-? What the hell?? When I had a GF2MX (the original one, purchased before they made the 200 and 400) with a 633MHz Celeron, I could barely get MDK2 running at a stable 40-50FPS in that resolution with everything at maximum. Anything above that was just painful. Not to mention how simplistic MDK2 looks tech-wise compared to the best-looking games on PS2.
 
I was referring only to pixel volume fill, no other operation: the untextured, single-color triangles that only update the stencil count. That's the same for any current rasterizer - a higher texel rate won't do you any good here.
The GS can actually draw these at a max of 2.4 Gpix, but as I said, faster fill would still make a difference.

FWIW, if you're drawing Z or stencil only, NV2A can draw at 3.7 Gpixels; the trick involves swapping to a multisampled target. You have to fiddle the matrices and stuff, but it does work - we use the trick when rendering shadow buffers, but it would work on the main frame buffer to render stencil shadows as well. Obviously bandwidth is the limiting factor here, and I have no idea what the actual rate would be, but you won't be fetching textures and color writes are disabled, so it does a lot better than its single-texture rate of around 700 MPixels.
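The arithmetic behind the trick: if each pixel the hardware writes covers every sample of a multisampled target, the Z/stencil-only rate scales by the sample count. A sketch assuming the commonly quoted NV2A figures (4 pipes at 233 MHz - approximations, not official specs):

```python
# Effective Z/stencil fill from rendering into a multisampled target:
# each pixel written covers all samples, multiplying the base rate.
# Pipe count and clock are commonly quoted NV2A figures, not official.

PIPES = 4
CLOCK_HZ = 233e6
SAMPLES = 4  # 4x multisampling

base_fill = PIPES * CLOCK_HZ   # ~0.93 Gpix/s single-pixel rate
ms_fill = base_fill * SAMPLES  # Z/stencil-only samples per second

print(round(ms_fill / 1e9, 1))  # 3.7
```

Which lands on the 3.7 Gpixel figure quoted above, bandwidth permitting.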
 
:D
Yep, that should definitely be a nice step up from normal pixel fill, no matter the bandwidth limit. Kinda reminds me of the GS trick to clear the screen at 4.8 GPix/s...
Curious, do you know the shape of the pixel footprint in MS mode - is it still a square?
 
Curious, do you know the shape of the pixel footprint in MS mode - is it still a square?

Obviously you could use either multisampling mode (I can't see why you'd choose the 2x pattern though), so either you get the quincunx pattern replicated on a 2x2 grid, written to the buffer as a 4x2 grid, or you get the 2x2 square replicated on a 2x2 grid (effectively a 4x4 square).

And of course if Z compression is enabled, you can do a Z/Stencil Clear on NV2A at even higher rates :p
 
Faf-

Probably, but I guess it was sort of a 'used to' thing. I had most of my play time with Q2 early on, when I was still using a RagePro that couldn't handle the hw-accelerated version at all.
After I upgraded the graphics, I couldn't quite get used to the accelerated look (it actually hurt my play - online).

RagePro... ouch. I had one of those too, got rid of it because of its 'wonderful' OpenGL support ;)

You're pretty much left with basic reflection uses and stuff, which I'd say are a rather small part of total rendering.
In OpenGL there are 3 passes used for applying a cubemap; I think it could be done in 2 for our case.

So the GS would be faster than the GF1 using CubeMaps?

Marco-

What the hell?? When I had a GF2MX (the original one, purchased before they made the 200 and 400) with a 633MHz Celeron, I could barely get MDK2 running at a stable 40-50FPS in that resolution with everything at maximum.

Your bottleneck certainly wasn't the graphics card ;) MDK2 isn't a very stressful game at all, although it is available on the PS2 (which is why I posted the scores).

Not to mention how simplistic MDK2 looks tech-wise compared to the best-looking games on PS2.

MaxPayne also plays faster on the GF2MX than on the PS2, as does UnrealTournament. I tried finding scores for all the games that were on both platforms; hell, the GF2MX even posts higher scores than the XBox does in the games that are on both platforms. The NV1X chips have quite a bit more power than most people seem to give them credit for - the XBox's power really hasn't been pushed yet.

Chap-

Which brings me to why this GF1 > PS2 using certain PC/Console games is nonsense.
Since they are different hardware working in different environments. There are just too many factors to consider for a fair comparison.

I don't own NWN or RSC (I'll dl the demos next time I bring the kids up to see their grandparents and try them on their nForce-based system to see how RSC runs - they have bb, I'm on dial-up and it's a 220MB dl; is there a demo for NWN?), so I can't tell you anything about their performance, but there were titles you listed that run very nicely on GF1-level hardware. Look at Marco's comment about his performance experience versus what the chip is capable of: 40-50FPS versus 135FPS, well over twice as fast. All the different factors don't change what the chip is capable of. BTW - which GF2MX do you have?

V3-

Those benchmarks are good at comparing rigs, but sadly, they don't reflect the performance you get when you are playing the actual game.

You can turn the framerate counter on in some of those games, and as long as you have VSync disabled they are fairly accurate. One of the benches I posted was minimum framerate, not average.
 
RSC is absolutely brutal on lowend systems. On my GF2 & 1GHz P3-M it's unplayable - like ~15 FPS at max, with serious stutters. My buddy's Athlon XP 1700 + GF3 doesn't do it justice either. Many reviewers have said that even on highend equipment the game chugs.

NWN, on the other hand, was fine for me (retail). I tend to run everything at 640x480 anyways (free AA from LCD scale-down :p), and with everything on, the framerate was fine. Not silk, but the game doesn't demand that anyways.

It'll be interesting to see how KOTOR turns out, since I'm pretty sure it's using the Aurora engine as well. Hopefully it won't suffer the same CPU-bound fate as UC.
 
MaxPayne also plays faster on the GF2MX than on the PS2, as does UnrealTournament. I tried finding scores for all the games that were on both platforms; hell, the GF2MX even posts higher scores than the XBox does in the games that are on both platforms. The NV1X chips have quite a bit more power than most people seem to give them credit for - the XBox's power really hasn't been pushed yet.

I doubt Max Payne and all the other games you keep comparing are even optimized for PS2. If you ran those games through the Performance Analyzer II, you'd probably see that they don't even tap what the PS2 is capable of, sadly. With all respect, this argument about GF1 vs PS2 is pretty moot, as there aren't any games (to my knowledge) that really make full use of the hardware. In about 2 years from now, we'll all know better how much the PS2 is capable of. For now, I can only repeat what others have already said: both pieces of hardware seem to have distinct advantages - on a GF card, mainly textures etc. I do find it funny, though, when we look at the Xbox, which by your argument should be considerably better, seeing that it is also a fixed platform - yet the real-world differences in no way reflect what you are arguing here.

BTW: Since you haven't played Baldur's Gate, I can only recommend it, as it really is one of the best PS2 games out graphically. :)
 
NWN, on the other hand, was fine for me (retail). I tend to run everything at 640x480 anyways (free AA from LCD scale-down)

Wouldn't the downscaling by the LCD make everything somewhat blurry, not just the edges? I could see how it would help a PSX's point sampling, but a GF2?
 
Heheh, it was more of a joke than a practical point, but depending on the LCD model, scale-down can either be a blurry mess or look like it's had a slight "soften" filter applied. (For me it's the latter - Hitachi SVGA+ 1400x1050.)

It also depends on what resolution you're running and how evenly it divides into the native res. IMO it's a nice trade-off from the harsh jaggies that a PC screen is capable of displaying.
 
The NV1X chips have quite a bit more power than most people seem to give them credit for - the XBox's power really hasn't been pushed yet.

Well, I dunno about that - the PS2's EE has been tapped up to 70%+, and the Xbox GPU's resources, being easier to tap and all, should be at least on par... heck, the XGPU's resources might even be in use more effectively than the EE's, considering the difference in dev difficulty...
 
You can turn the framerate counter on in some of those games, and as long as you have VSync disabled they are fairly accurate. One of the benches I posted was minimum framerate, not average.

Q3? The performance you get largely depends on the map you play; Q3 also fluctuates quite a bit, depending on the number of bots or players on screen.

So what about the minimum? The minimum is the minimum framerate you encounter in that benchmark run, not when you are playing the game. The same goes for averages.

Those benchmarks are good for rig comparisons, nothing more.
 
Well, all this GF1 talk aroused my interest, so I searched for specs... I assume the GF256 is the GeForce 1... so here are the specs...

120MHz/150MHz (300MHz) core/memory clock :LOL:
Integrated geometry transform engine
Integrated dynamic lighting engine
Four-pixel rendering pipeline :LOL:
480 million pixels per second fill rate :LOL:
15 million triangles per second throughput :LOL:
Cube-environment mapping
Single-pass emboss and dot-product bump mapping
DX6 Texture Compression
350MHz RAMDAC
2D resolution of 2048x1536 at 75Hz
AGP4X with Fast Writes
Up to 128MB SDRAM/SGRAM w/ DDR support
OpenGL ICD for Windows98, NT4, 2000
DX7 Support
Powerful HDTV motion compensation.
Full frame rate DVD to 1080i resolution.

Well, if that's the GeForce 1, it simply can't compete... 15M polys PEAK... I hope that's not the real GeForce 1...

I mean, 500,000 polys x 60fps in-game is TWICE that, meaning the GeForce 1 can't even deliver the geometry the PS2 can do... and if we go by peak numbers it's not even worth talking about...

And since this is nvidia, we all know that peak number is like their fabled 125M+ peaks... IOW, no way that's going to happen in a real game...
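The in-game-versus-peak comparison being made here is simple to write down (Python sketch; the 500k polys/frame figure is the poster's PS2 number, the 15M/s the GF256 spec-sheet peak):

```python
# In-game triangle demand vs. a quoted peak throughput.
# 500k tris/frame @ 60fps is the PS2 figure used above;
# 15e6 tris/s is the GF256 spec-sheet peak.

def tris_per_second(tris_per_frame, fps):
    return tris_per_frame * fps

ingame = tris_per_second(500_000, 60)  # 30,000,000 tris/s
print(ingame / 15e6)  # 2.0 -- twice the quoted GF1 peak
```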

EDIT:

According to 3dfx CTO Scott Sellers, "It appears as if the raster engine in the GeForce 256 is really just 2 TNT2 raster engines put in parallel (with some feature additions added)."

PS: Don't know if this is real (didn't read the article...) but...
"Give a GeForce 256 a million triangles to draw and watch it die." :eek:
http://www.gamespot.com/features/cave_quake2/p7_01.html
 
[benchmark chart: q3_640.gif]


Id Software's John Carmack stated in his 9/2 .plan update that Quake 3 targets about 10,000 triangles per frame.

Nice... so even the GF2 Pro can't push more than 2M polys per second... just what I expected... let's not even talk about the GF1...
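Working backwards, the 2M figure follows from Carmack's 10,000 triangles per frame times a framerate in the ~200 FPS range (the fps values below are illustrative, not taken from a specific bench):

```python
# Two ways to read triangle-rate numbers quoted in this thread:
# per-second throughput from a per-frame count, and the per-frame
# ceiling a quoted peak implies. The fps values are illustrative.

def per_second(tris_per_frame, fps):
    return tris_per_frame * fps

def per_frame_ceiling(peak_per_second, fps):
    return peak_per_second / fps

print(per_second(10_000, 200))      # 2000000 -- Q3 at ~200 fps
print(per_frame_ceiling(15e6, 60))  # 250000.0 -- GF1 peak budget @ 60fps
```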

EDITED
 
hell, the GF2MX even posts higher scores than the XBox does in the games that are on both platforms

Real-life performance that I've experienced aside, that makes me even more suspicious about those benchmarks :\
 