WiiGeePeeYou (Hollywood) what IS it ?

Status
Not open for further replies.
that 16+8 bit format is not valid.
I was only correcting MDX's misunderstanding of colours and bits. He was mistaking 32 bit to mean 2^32 colours. The only thing I missed that's relevant is the possibility of 6+6+6+6 bits, but either way, it's saying 24 bit is either low-colour or without an alpha channel. If you want alpha and true colour (as it used to be known, back when everyone was saying we'd never need more colours, before everyone got to see colour banding in their perfect 24 bit colour gradients!), you need 32 bits.
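To put rough numbers on the 24-bit vs 6+6+6+6 point, here's a quick back-of-the-envelope sketch in Python (purely illustrative, nothing Wii-specific):

```python
# Illustration only: colour counts per bit depth, and the quantization
# step that makes gradients band when a channel drops from 8 to 6 bits.

def colours(bits_per_channel):
    """Distinct RGB colours with the given bits per channel."""
    return (2 ** bits_per_channel) ** 3

print(colours(8))   # 24-bit RGB: 16,777,216 colours
print(colours(6))   # 6+6+6 (+6 alpha): only 262,144 colours

def quantize_to_6bit(value8):
    """Collapse an 8-bit channel value to 6-bit precision."""
    return (value8 >> 2) << 2   # drop the 2 low bits

# Every 4 adjacent shades collapse into one:
steps = sorted({quantize_to_6bit(v) for v in range(256)})
print(len(steps))   # 64 distinct shades left per channel
```

Dropping from 8 to 6 bits per channel leaves only 64 shades per channel, so a long smooth gradient has to repeat the same shade across wide runs of pixels, which is exactly what banding looks like.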
 
I don't believe that all of the banding problems are a result of the 6-6-6-6 mode. I think some shading techniques just hit some internal precision limits, or they use texture lookups that are too coarse etc.

There's a type of dull metal door in Red Steel that, in a nutshell, looks like shit. The banding is completely absurd. However, you can have lots of finer detail on the screen at the same time as that type of metal door, and it looks alright.

I think I've learned well enough to recognize the 18 bit color mode when playing Tales Of Symphonia, and what I'm seeing on the Wii, at least in Red Steel, is IMO a different issue. Not sure about what's going on with the weather channel, but AFAICS the globe's edges are very smooth and the lighting there isn't per-vertex. It could be a static lightmap, which doesn't explain anything, or some per-pixel shader/indirect texturing mojo. It could be FSAA or just edge AA or edge blurring. Very hard to tell.
 
Well, dunno how true this is, but some dood on GAF said his friend at Activision confirms Hollywood is about 3 times as powerful as Flipper. Dunno how true that is..
 
All of my hopes for 32-bit colour went away when they released the weather channel. There is some serious banding in the water on the globe, and there isn't really any reason for a scene that simple to be throttling back colour depth for performance, unless it's a hardware limitation. If there is more embedded memory on Hollywood, I hope we can at least start seeing some AA in future titles.

Huh, I didn't notice. Are you using component cables? Even on cube, there was considerably more banding using svideo than component, and interlaced rather than progressive.

BTW, Doom 3 has severe banding when run on a GeForce 3; perhaps Hollywood just can't do more than 8 bit integer precision for its pixel shaders.

Well, dunno how true this is, but some dood on GAF said his friend at Activision confirms Hollywood is about 3 times as powerful as Flipper. Dunno how true that is..

2 TEVs + a 50% increase in clockspeed would do that, but we've yet to see games that represent such a dramatic increase in power. I'm still hedging my bets on advanced pixel and vertex shading features (at least dx9.0 level) using some custom ati api. Why? Well, there was a bit of hoo-hah about the wii having "physics acceleration" and ati had been advertising that as a feature of their graphics cards for a while, so I think there's a good chance that while nintendo probably doesn't care about shaders, they would care enough about physics (since it relates directly to gameplay) to put in advanced enough shaders to make it possible for hollywood to do the calculations.
 
I think the issue with these screenshots is the small texture size and the indexed colours. Because of that, they didn't use bilinear filtering, and the quality of the pictures is bad, very bad.
They probably did the artwork on the GC dev units, whose memory is very limited. So they didn't spend too much time optimizing it (time costs money), and they don't use the extra memory.

Far Cry uses the same 2-3 KB texture instead of a good-looking 20 KB texture in some "mission critical" places, like the user hub. It has to be either some big bandwidth/rendering issue (Far Cry wasn't on PS2 or GC) or some severe memory issue.
Personal opinion: the software that looks very bad isn't using texture-optimized rendering but some other rendering optimisation method (probably for Xbox or PC).

The guy that created the textures budgeted for a 1 MB texture buffer, not a 24 MB one.
 
2 TEVs + a 50% increase in clockspeed would do that, but we've yet to see games that represent such a dramatic increase in power. I'm still hedging my bets on advanced pixel and vertex shading features (at least dx9.0 level) using some custom ati api. Why? Well, there was a bit of hoo-hah about the wii having "physics acceleration" and ati had been advertising that as a feature of their graphics cards for a while, so I think there's a good chance that while nintendo probably doesn't care about shaders, they would care enough about physics (since it relates directly to gameplay) to put in advanced enough shaders to make it possible for hollywood to do the calculations.


Although that is true, I think they would prefer not to have such advanced features, since once they have them, tech like normal mapping becomes easy, and thus the overall cost of games could rise quite a bit. That doesn't mean they wouldn't want a new feature set (or new hardwired features), since they could still get new and better gfx while offloading the CPU, which would improve the gfx a lot without raising costs as much. For physics, I'd think they'd prefer something else entirely, much more specialized, that won't interfere too much with the rest of the game (unlike using shaders and trying to strike a good balance), and that could have a much better performance/cost/power ratio.


DX9?! I thought DX8 in itself would be stretching it?

It is possible; there are smaller chips that have full SM3, like the above-mentioned 7100, maybe even the 7300 once we take out all the transistors for PureVideo and such (IIRC that's a lot, 25M+). Still, at 243 MHz I wonder if that would be a good solution, and whether there wouldn't be much better uses for those transistors than spending so many on a feature set that will have a hard time being really effective.
 
SM3.0 is there on 7100GS/6200TC and 7300GS cards, just like SM 2.0 was on GeforceFX 5200 before them.
Meaning, they may have it for a nicer-looking checklist of features on the outside of all those shiny boxes, but it can't really be used due to a severe lack of raw horsepower to take advantage of it.
 
I think the issue with these screenshots is the small texture size and the indexed colours. Because of that, they didn't use bilinear filtering, and the quality of the pictures is bad, very bad.
They probably did the artwork on the GC dev units, whose memory is very limited. So they didn't spend too much time optimizing it (time costs money), and they don't use the extra memory.

Far Cry uses the same 2-3 KB texture instead of a good-looking 20 KB texture in some "mission critical" places, like the user hub. It has to be either some big bandwidth/rendering issue (Far Cry wasn't on PS2 or GC) or some severe memory issue.
Personal opinion: the software that looks very bad isn't using texture-optimized rendering but some other rendering optimisation method (probably for Xbox or PC).

The guy that created the textures budgeted for a 1 MB texture buffer, not a 24 MB one.

Look at Star Wars Rogue Leader, and you will see that Far Cry really doesn't do the Wii justice!!!
The Wii (and the GC) can do 8 textures per cycle with fewer problems than the Xbox (which can only do 4), but developers were not very aware of that optimisation! Hopefully with the Wii we'll get some demonstration of the real power of the Wii/GC tech.
 
I think it all boils down to the same old "PS2 engine ported with very little extra optimization to Cube" problem. Far Cry Wii looks like the Xbox game with all normal mapping removed, and Red Steel is, IIRC, a modified Unreal Engine 2.5. We may have to wait for Metroid Prime 3 to be released to see what an excellent GC engine upgraded for Wii can do, and even later to see ground-up Wii engines (Mario Galaxy perhaps?).

Considering that even the DS has FSAA, I was at least hoping for Wii to have a feature-set similar to Gamecube's, but with much better IQ (I was hoping for "free" FSAA and AF). Guess we will have to wait to see what Wii can really do. Some stuff looks pretty good, but IQ is definitely a downer so far.
 
I spoke to a developer who plans to make a game for the DS and the Wii.
We were speculating on the power of the GPU.

One thing that we discussed was the tech demos for the Game Cube.
http://www.youtube.com/watch?v=ylyXEMPaVHQ&mode=related&search=

and that got me to wonder, what was/is the tech demo for the Wii?
Crisis: DoD?
Mario Galaxy?
Was there any?

The developer also mentioned that often times console makers will have first party/second party developers test their boards to see where they can improve on design or increase power.
We know that Gears of War influenced the design of the 360 so
which game do you guys think most shaped the final specs of the Wii GPU and what do you think they would want in there?
 
2 TEVs + a 50% increase in clockspeed would do that, but we've yet to see games that represent such a dramatic increase in power.

I wouldn't say 2 TEVs + 50% would equal three times as powerful GPU, the T&L engine would also have to be doubled to even consider it three times as powerful IMO.
 
Where on GAF did you read that?
No idea but isn't it a reasonable assumption? Hollywood is roughly twice the size it should be if it were a dumb shrink of Flipper. It has the same precision issues as Flipper which makes its processing pipes look not so revamped. Same thing, twice as wide, clocked 50% higher, ergo three times as fast.

Even if that's an impostor, he/she made a reasonable guess.
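The "twice as wide, clocked 50% higher" arithmetic can be checked in a couple of lines; the width factor is pure speculation, while the clocks are the commonly reported figures (162 MHz for Flipper, 243 MHz for Hollywood):

```python
# Back-of-the-envelope check of the "three times as fast" claim.
# Assumed (not confirmed) scaling: twice the TEV width; the clock
# speeds are the widely reported figures for each chip.

flipper_clock = 162.0       # MHz, GameCube's Flipper
hollywood_clock = 243.0     # MHz, Wii's Hollywood
width_scale = 2.0           # "twice as wide" (speculated 2x TEV)

speedup = width_scale * (hollywood_clock / flipper_clock)
print(speedup)   # 3.0 -> 2x width * 1.5x clock = 3x throughput
```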
 
Huh, I didn't notice. Are you using component cables? Even on cube, there was considerably more banding using svideo than component, and interlaced rather than progressive.
I'm using the composite cables that came with the system. Just zoom all the way out and look at the Pacific ocean, and you'll see a banded specular map. Or zoom in a bit anywhere and lower the angle so you get a good view of the banded atmosphere.

The globe is something that isn't very taxing to render. There aren't any mountains or even waves in the ocean; there aren't even clouds. And since it's made by Nintendo and not available for other systems, I think it's safe to assume it's not a port from a PS2 engine. There isn't really a reason to have it run in anything less than 32-bit unless it's a hardware limitation. The specular map could just be low colour to save space, but the atmosphere looks more like a gradient fog of some sort, and it's banded as well.
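For a sense of scale on that gradient banding: assuming a roughly 640-pixel-wide framebuffer and a 6-bit channel (both plausible but unconfirmed here), the band widths work out like this:

```python
# Illustrative numbers only: why a simple gradient (like an atmosphere
# glow) bands so visibly at reduced colour depth.

width_px = 640                    # assumed framebuffer width
levels_6bit = 2 ** 6              # shades per channel at 6 bits
levels_8bit = 2 ** 8              # shades per channel at 8 bits

# A full-width ramp repeats each shade across this many pixels:
print(width_px // levels_6bit)    # 10-pixel-wide bands at 6 bits
print(width_px // levels_8bit)    # 2-pixel-wide bands at 8 bits
```

Ten-pixel-wide solid stripes are easy to spot on a TV, while two-pixel bands mostly disappear into dithering and video noise.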
 
SM3.0 is there on 7100GS/6200TC and 7300GS cards, just like SM 2.0 was on GeforceFX 5200 before them.
Meaning, they may have it for a nicer-looking checklist of features on the outside of all those shiny boxes, but it can't really be used due to a severe lack of raw horsepower to take advantage of it.

I don't know that it couldn't take any advantage of it; chips like the X300/6200 have fewer transistors and they ran things like FC or HL2 at much more than 30 FPS. I doubt the speed difference would be enough to bring it down.

Still, they would probably get much better results if they did their own thing. Personally, I think that if they a) add vertex shaders, b) gain performance from the architecture (like the EMBM, TEV...), c) add either some hardwired functions/DSP for the most common and desirable things (e.g. self-shadowing, physics...) or some GPGPU capabilities, this would probably still be possible within the range of 2-2.5x more transistors (looking at the above-mentioned chips), and I bet the end result would be a very significant step up in how games look.

I wouldn't say 2 TEVs + 50% would equal three times as powerful GPU, the T&L engine would also have to be doubled to even consider it three times as powerful IMO.

Unless the TEV unit significantly limits the use of the TnL engine overall, although I've never heard anything to that effect.

Same thing, twice as wide, clocked 50% higher, ergo three times as fast.
Quite probably more than that. Anyway, they can spend it on features that may not make the GPU itself faster (e.g. offloading the CPU with vertex shaders/physics/animation/...), so overall the "GPU numbers" might only see a 50% increase (although a big increase in the overall "console numbers").
 
So are pixel shaders being performed on Hollywood and vertex shaders on Broadway?

I've wondered how much its multi-texture capabilities tie into its overall performance when it comes to pixel shader performance.

This is of course, me assuming Hollywood has pixel shaders.
 