New Madden 360 Shots

If you watch the E3 video, the bottom pic is from the last 5 seconds of that video.

So the bottom pic IS CGI and is not new; I don't know about the top pic.
 
Also, for the person complaining about the "textures" on the grass, here's what a real football field looks like:

t1_edge.jpg


And here's a pic of NFL2k on XBOX, look at this mess:
1090285959.jpg
 
Too bad that stadium looks nothing like the Seahawks' field, which has turf that looks like grass!

Seahawks Stadium, now known as Qwest Field, has FieldTurf (TM), which does not look like a flat, nasty mat/rug like the one above! ;) FieldTurf looks like real grass and is the highest-rated artificial turf out there (Qwest Field was ranked in the top 5 by NFL players for "Nicest Fields to Play On"). So, some pictures (I could not find a close-up of the Seahawks' field):

SFA_2.jpg


mexico_liceo_field.jpg


mexico_liceo_box.jpg


Falcons players were asked in a 7-way blind study to rate different turfs, and 100% of them thought it looked like grass, which the pictures above and the construction diagram below show:

http://www.fieldturf.com/product/images/prodOverview07.jpg

So yeah, the EA pic is not doing any favors to the turf. It may be a good pic for old nasty AstroTurf, but it surely does not represent what is in that stadium.

PS: Yes, I am a Seahawks fan... yes, I have suffered... and yes, I have Alexander in my fantasy league. And sadly, I have many wounds from that NASTY artificial turf junk in the pic Scoob posted...
 
Okay, so this discussion would probably require someone with a programming background to get into the details, as I'm more of an end user. I'll try to bring up a few possible issues, though.

Sampling is key. AA for geometry and shading, better motion blur (if the studio is willing to pay in rendering time), high quality texture filtering, shadow map samples - they all add up to a cleaner look.
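
To put a rough number on the sampling point, here's a small illustrative Python sketch (not from any real engine; the coverage() helper is invented for the example). It estimates how much of a pixel is covered by an ideal diagonal edge with different sample counts, which is the basic reason more samples mean cleaner edges.

Code:
# Rough sketch: why more samples per pixel give cleaner edges.
# We shade one pixel crossed by the edge y = x (everything below the
# line counts as "inside"). The exact coverage of that pixel is 0.5;
# a single centered sample can only answer 0 or 1, while a grid of
# samples converges toward the true value. Purely illustrative.

def coverage(samples_per_axis):
    hits = 0
    n = samples_per_axis
    for i in range(n):
        for j in range(n):
            # sample positions inside a unit pixel
            x = (i + 0.5) / n
            y = (j + 0.5) / n
            if y < x:          # below the edge -> covered
                hits += 1
    return hits / (n * n)

for n in (1, 2, 4, 8):
    print(f"{n*n:3d} samples/pixel -> coverage estimate {coverage(n):.3f}")
# Exact answer is 0.500; more samples = less stair-stepping per pixel.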

Shader quality and precision are probably different as well. An offline renderer usually offers a more complicated Phong shading model than the hardwired stuff in GPUs, and then there are the more precise Blinn shader, Ward's anisotropic model, layered shaders and so on.
True, there are advanced shaders for hardware as well, but AFAIK many of them still calculate the lighting at the vertices, and that result just gets interpolated across the actual pixels. Also, shading is calculated for a pixel only once, and it gets source data like normals from textures which might not have a 1:1 texel-to-pixel ratio. 24-bit normal maps, especially when compressed, won't look good either.
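
As a hedged illustration of the precision point, here's a tiny Python sketch of a Blinn-Phong specular term evaluated once with a full-precision normal and once with the same normal squeezed into 8 bits per channel, the way an uncompressed 24-bit normal map stores it. The light, view and normal values are invented purely for the example.

Code:
import math

# Minimal Blinn-Phong sketch for one surface point and one directional
# light: evaluate the specular term with a full-precision normal and
# with the same normal after 8-bit-per-channel quantization (a 24-bit
# normal map). All values are illustrative.

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def quantize8(v):
    # store each component in 8 bits, then decode and renormalize,
    # roughly what sampling an uncompressed 24-bit normal map gives you
    q = tuple(round((c * 0.5 + 0.5) * 255) for c in v)
    return normalize(tuple(c / 255.0 * 2.0 - 1.0 for c in q))

def blinn_phong_spec(n, l, v, shininess=200.0):
    h = normalize(tuple(a + b for a, b in zip(l, v)))   # half vector
    n_dot_h = max(0.0, sum(a * b for a, b in zip(n, h)))
    return n_dot_h ** shininess

light  = normalize((0.3, 0.8, 0.6))
view   = normalize((0.0, 0.2, 1.0))
n_true = normalize((0.20, 0.50, 0.84))

print("full-precision normal :", blinn_phong_spec(n_true, light, view))
print("8-bit quantized normal:", blinn_phong_spec(quantize8(n_true), light, view))
# Even for a single sample the two results differ; across a whole tight
# highlight, and with block compression on top, that error becomes
# visible banding.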

Geometry. A rendered version of a CG character is usually tessellated to a few hundred thousand polygons even in the basic renderers (like if you run it through the built-in 3ds max scanline, Maya or LW renderer). You usually get very close to a 1:1 pixel-to-polygon ratio. More geometry means better shading, which is especially evident when you take a look at PRMan images - there's always a better than 1:1 ratio between micropolygons and pixels.
If you render low-poly geometry in an offline renderer, it'll have shading interpolation artifacts as well, and the result will lose some of its quality. I guess it has to do with the difference between the normal interpolated across a planar polygon from its neighbouring vertices and a 'true' normal at an actual vertex. Denser geometry simply responds better to the lights... and games still have low-res geometry, so the lighting data fed to the pixel shader probably isn't good enough when working with normal mapping.
This all goes for both diffuse shading and specular highlights, by the way.
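
To make the geometry-vs-shading link a bit more concrete, here's a simplified Python sketch: it lights a circular arc with a tight specular highlight and compares lighting that is evaluated at a segment's two vertices and interpolated across it against lighting evaluated at every sample point, for different tessellation levels. It's only a toy model of the interpolation artifacts described above; the shininess value and setup are made up.

Code:
import math

# Toy illustration of why denser geometry shades better for tight
# highlights: compare per-vertex lighting (evaluate at the two vertices,
# interpolate across the segment) with lighting evaluated at every
# sample point, on a circular arc lit from +x. Illustrative only.

SHININESS = 64.0

def intensity(angle):
    # "specular" term for a surface normal at this angle, light along +x
    return max(0.0, math.cos(angle)) ** SHININESS

def peak_error(segments):
    half = math.pi / segments            # one segment spans [-half, +half]
    i0, i1 = intensity(-half), intensity(half)
    worst = 0.0
    for step in range(101):
        t = step / 100
        angle = -half + 2 * half * t
        per_vertex = (1 - t) * i0 + t * i1   # interpolated vertex lighting
        per_sample = intensity(angle)        # lighting at the actual point
        worst = max(worst, abs(per_sample - per_vertex))
    return worst

for segments in (8, 32, 128, 512):
    print(f"{segments:4d} segments -> worst highlight error {peak_error(segments):.3f}")
# With few segments the highlight peak falls between vertices and the
# interpolated result almost misses it; with many segments they converge.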

Texture resolution is obvious and has been mentioned already. Mip mapping isn't as aggressive most of the time either, so objects keep their sharp details even when they're further away from the camera.
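
For what it's worth, here's a back-of-the-envelope Python sketch of mip level selection: the level is roughly log2 of how many texels a screen pixel covers, and a negative LOD bias (which is more or less what "less aggressive mip mapping" amounts to) keeps sharper levels in use further from the camera, at the cost of shimmer. The function and numbers are illustrative only.

Code:
import math

# Back-of-the-envelope mip level selection. The chosen level is roughly
# log2(texels covered per pixel); a negative LOD bias keeps higher
# resolution levels in play for distant surfaces. Illustrative only.

def mip_level(texels_per_pixel, lod_bias=0.0, max_level=10):
    level = math.log2(max(1.0, texels_per_pixel)) + lod_bias
    return min(max_level, max(0.0, level))

for texels_per_pixel in (1, 2, 4, 8, 16):
    plain  = mip_level(texels_per_pixel)
    biased = mip_level(texels_per_pixel, lod_bias=-1.0)
    print(f"{texels_per_pixel:2d} texels/pixel -> mip {plain:.1f} (with -1 bias: {biased:.1f})")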

Also, there's a lot of compositing done on CG stuff: fine-tuning colors, contrast, blur/sharpness, and tweaking everything until it looks good enough. It's pretty similar to how you'd enhance an image in Photoshop. Some of this should appear in next-gen console engines, by the way...
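
As a toy example of that kind of compositing pass, here are a few lines of Python doing a contrast and saturation tweak on a single RGB value, roughly what an adjustment layer (or a post-process shader) would do; the constants are arbitrary.

Code:
# Tiny sketch of the kind of grading pass described above: a contrast
# and saturation tweak applied to one final rendered colour. All the
# constants are made up.

def grade(rgb, contrast=1.15, saturation=1.2, pivot=0.5):
    # contrast: push values away from a mid-grey pivot
    r, g, b = (pivot + (c - pivot) * contrast for c in rgb)
    # saturation: push values away from their luminance
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    graded = (luma + (c - luma) * saturation for c in (r, g, b))
    # clamp back to displayable range
    return tuple(min(1.0, max(0.0, c)) for c in graded)

print(grade((0.45, 0.30, 0.20)))   # a dull brown gets a bit more punch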


It's also worth noting that CGI assets might have more time put into them simply because you don't need that many sets and characters. Warcraft 3 had about a dozen 'hero' resolution CGI characters, but at least 5-6 times as many ingame characters/creatures. You simply cannot spend as much time with ingame stuff because of the sheer amount of work.
Also, CG quality can range from just above ingame levels to almost movie VFX quality stuff. Blizzard, Blur, Square and a few other studios are on the edge of producing animated features (Blur is in preproduction most likely, and we all know about 'Spirits Within' and 'Advent Children' I guess).
 
The people in the stands look like cardboard cutouts, how funny.

*edit* Does this mean we'll see cardboard trees in racing games?
 
deathstar121 said:
I think people are expecting too much from next gen.

Fixed hardware + console developers WILL provide a big jump compared to PC. I'm expecting to be very impressed, above KZ2 and Motorsport.
I'm betting on that. :)
After all, some devs pulled off miracles with just 3 blending modes (add, subtract and alpha blending) and paletted textures on a 4 MB video card.
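
For anyone curious, those three blend modes written out as per-channel math look something like this (a plain Python sketch, not tied to any particular API; src is the incoming fragment, dst is what's already in the framebuffer):

Code:
# The three blend modes mentioned above, per colour channel, clamped to
# [0, 1]. Just the math; illustrative, not any real API.

def clamp(x):
    return min(1.0, max(0.0, x))

def blend_add(src, dst):
    return clamp(src + dst)

def blend_subtract(src, dst):
    return clamp(dst - src)

def blend_alpha(src, dst, alpha):
    return clamp(src * alpha + dst * (1.0 - alpha))

print(blend_add(0.6, 0.7))          # 1.0 (saturates)
print(blend_subtract(0.2, 0.7))     # 0.5
print(blend_alpha(0.9, 0.1, 0.25))  # 0.3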
 
deathstar121 said:
I think people are expecting too much from next gen.

I guess you missed all the threads from the past year or two of people posting shots from the FF X CG and asking "will teh PS3 do this?!??"
 