Hollywood development time question?

I've read that development started soon after Flipper was finished, with work beginning in 2001. What advantages and disadvantages does this bring?

Shouldn't 4 years of R&D allow them to come up with some new designs that would make Hollywood different from the Xenon GPU?
 
Eh, it's going to use a lot of ATI designs from the past: a lot of the things that worked really well for them, and a lot of new innovative things.

I wouldn't say it has many disadvantages, and I'm sure its design has changed a lot over the 4 years.
 
Ooh-videogames said:
Did the Flipper share any design similarities with ATI GPU's that were on the market at the time of launch?
No, because Flipper was finished before ATI bought ArtX. I believe the ArtX team did the R300, though.
 
jvd said:
Ooh-videogames said:
Did the Flipper share any design similarities with ATI GPU's that were on the market at the time of launch?
No, because Flipper was finished before ATI bought ArtX. I believe the ArtX team did the R300, though.

This increases my curiosity; I'm going to do a search to see what was implemented. Thanks for the response, JVD.
 
This increases my curiosity; I'm going to do a search to see what was implemented. Thanks for the response, JVD.
No problem, but I think you will find that the R300 was a clean-sheet design, and the ArtX team had about 3 years to make it before it was released.
 
IIRC, the GPU had a hardwired DSP core. An evolution of that would be to merge that functionality with general-purpose SIMD/vector cores to take care of sound/media etc. and vertex shading. Replace this hardwired core with an integrated PhysX PPU core and spice up the pixel pipes with some nice ones from an R520/620 equivalent.

And for the Broadway CPU, throw in a couple of those in-order cores from the Xenon/CELL PPE, without the VMX units but with a very large chunk of L2 cache, say 2-4MB, and nicely stream this to the GPU.

My 2 cents...
 
So now I have another question: does the R300 do texture mapping similarly to Flipper?

Texture read
Combiner op
Combiner op
Texture read

Or is it more similar to NV2AX?
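For anyone unfamiliar with what that interleaved sequence means, here's a toy sketch of a Flipper/TEV-style recirculating pixel pipeline: a loop over stages where each stage may sample a texture and then run a combiner op that blends the sample into the running pixel color, so a later texture read can depend on an earlier combiner result. All names and the blend formula here are illustrative assumptions, not Flipper's actual TEV spec:

```python
# Toy model of a recirculating "texture read -> combiner op" pipeline.
# Names and the lerp blend formula are illustrative assumptions.

def lerp(a, b, t):
    """Per-channel linear interpolation between colors a and b."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def run_stages(stages, frag_color):
    """Run combiner stages in order; each stage's output feeds the next,
    which is what lets a later texture read depend on an earlier
    combiner result (a dependent read)."""
    color = frag_color
    for sample_texture, combine in stages:
        tex = sample_texture(color)   # texture read (may use prior result)
        color = combine(color, tex)   # combiner op
    return color

# Hypothetical texture: a 1D checker pattern indexed by the red channel.
checker = lambda c: (1.0, 1.0, 1.0) if int(c[0] * 8) % 2 else (0.2, 0.2, 0.2)

# Example matching the sequence in the post:
# texture read -> combiner op -> combiner op -> texture read
stages = [
    (checker,      lambda c, t: lerp(c, t, 0.5)),   # read + modulate
    (lambda c: c,  lambda c, t: lerp(c, t, 0.25)),  # combiner-only stage
    (lambda c: c,  lambda c, t: t),                 # combiner-only stage
    (checker,      lambda c, t: lerp(c, t, 0.5)),   # dependent read
]
print(run_stages(stages, (0.9, 0.1, 0.1)))
```

The point of the loop structure is that stage order matters: texture reads and combine operations interleave freely, rather than all texture fetches happening up front.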
 
Jaws said:
IIRC, the GPU had a hardwired DSP core. An evolution of that would be to merge that functionality with general-purpose SIMD/vector cores to take care of sound/media etc. and vertex shading. Replace this hardwired core with an integrated PhysX PPU core and spice up the pixel pipes with some nice ones from an R520/620 equivalent.

And for the Broadway CPU, throw in a couple of those in-order cores from the Xenon/CELL PPE, without the VMX units but with a very large chunk of L2 cache, say 2-4MB, and nicely stream this to the GPU.

My 2 cents...

Puts the 2 cents in pocket. Interesting indeed.
 
Ooh-videogames said:

Puts the 2 cents in pocket. Interesting indeed.

And for 3 cents...don't forget the 8-16MB of ultra-fast GPU eDRAM! ;)
 
Jaws said:
And for 3 cents...don't forget the 8-16MB of ultra-fast GPU eDRAM! ;)

Too generous.
 
Jaws said:
And for 3 cents...don't forget the 8-16MB of ultra-fast GPU eDRAM! ;)

Heh, 8-16MB is exactly the amount of embedded memory Flipper was supposed to have: 8 to 16 MB of embedded 1T-SRAM, as of autumn 1999 through mid-2000. The final amount was whittled down to 3.12 MB :( by the time GameCube was revealed at Spaceworld 2000.
 
Ooh-videogames said:
I've read that development started soon after Flipper was finished, with work beginning in 2001. What advantages and disadvantages does this bring?

Shouldn't 4 years of R&D allow them to come up with some new designs that would make Hollywood different from the Xenon GPU?


Well, Flipper was built fairly quickly. Nintendo had been scrambling to find a graphics partner for the then-next-generation console in 1997. After the CagEnt (3DO Systems) deal for the MX technology fell through, Nintendo settled on ArtX. They started mapping out what Flipper would be around mid-1998, then actually built it in 1999, finishing sometime in 2000. I suppose tweaks were probably still happening to Flipper in early 2001 (the clockspeed downgrade was discovered at E3), but by then Flipper was completed.

So I guess development of Hollywood started in mid-to-late 2001, yeah.


So now, Hollywood has been in development for at least 3.5 years, maybe 4 years, with another 6-9 months to go. Obviously I am guessing about how much time Hollywood has left in development--it could be 3-12 months for all I know.
 
Megadrive1988 said:

Heh, 8-16MB is exactly the amount of embedded memory Flipper was supposed to have: 8 to 16 MB of embedded 1T-SRAM, as of autumn 1999 through mid-2000. The final amount was whittled down to 3.12 MB :( by the time GameCube was revealed at Spaceworld 2000.

Megadrive, do you have a link to those early Flipper figures? Thanks in advance...
 
16 megabytes of embedded RAM in 1999/2000 would have produced a $500 GPU with very, very poor yields. I doubt that was ever seriously considered outside GameCube fansites. Especially considering the GC only has a total of 32MB of RAM, making half of it embedded would have been a poor design choice. I believe the Bitboys Glaze3D at that time was experimenting with adding 1.125 megabytes, and even that was expensive and difficult to manufacture.
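A rough back-of-the-envelope sketch shows why embedded memory capacity dominates die area (and hence yield and cost) so quickly. The density figure below is purely an illustrative assumption for a circa-1999 process, not a datasheet value for any real chip:

```python
# Rough die-area estimate for embedded memory.
# ASSUMPTION: ~1 mm^2 per megabit of embedded 1T-SRAM/eDRAM on a
# late-1990s process -- an illustrative number, not a measured one.
MM2_PER_MBIT = 1.0

def edram_area_mm2(megabytes):
    """Die area consumed by `megabytes` of embedded memory."""
    return megabytes * 8 * MM2_PER_MBIT

# Compare Flipper's shipped 3.12 MB against the rumored 9 and 16 MB.
for mb in (3.12, 9, 16):
    print(f"{mb:5} MB -> ~{edram_area_mm2(mb):6.1f} mm^2")
```

Whatever the exact density, the scaling is linear, so going from ~3 MB to 16 MB means roughly 5x the embedded-memory area on top of the logic, which is the yield argument above in numbers.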
 
I think originally they had planned for 9MB of eDRAM, but I guess after adding the GPU, north bridge, and sound block, there wasn't enough space for that much eDRAM to fit on a small die.
 
Ooh-videogames said:
I've read that development started soon after Flipper was finished, with work beginning in 2001. What advantages and disadvantages does this bring?

Shouldn't 4 years of R&D allow them to come up with some new designs that would make Hollywood different from the Xenon GPU?

MS went ahead with a much more aggressive schedule (as they had no other choice), while Hollywood progressed and evolved over time. There will undoubtedly be overlapping similarities and customizations. Disadvantages like launching with antiquated GPU technology? Extremely doubtful. Having had the time to develop some truly unique features? Possibly.

Also remember that the 4 years may not refer solely to the GPU's allotted development time; that in all probability came later, as an official announcement wasn't made by ATI until '03 (although R&D work started before this). There's also console conceptualization, the ideas of Nintendo's lead engineer Genyo Takeda (which carry tremendous influence), various platform design aesthetics, selecting and cementing not-yet-existing technology partnerships (Yellowstone XDR, for example), in-house and 3rd-party developer queries, media format, platform networking features, etc.

I would say it's doubtful that Hollywood itself was in actual development for 4 years, but that it had longer than the Xbox 360's GPU did is a foregone conclusion.
 