Official PS3 Thread

Speaking of system designs... what would you like the PS3 to look like?

I'm thinking something silver-colored, not unlike Sony's Blu-ray player, with blue lights on the drive and all. It would look cool as hell.

Something high-tech looking.

 
Saem said:
Moore's law isn't a law. It's an observation. I can't remember the guy's first name, but he worked at Intel at the time. He noticed that every 18 months the transistor count of DRAM kept doubling, and this has held up. Realise that it's specific to DRAM; it doesn't relate to anything else.
According to Intel's page on the subject, his first name was Gordon, and he didn't just work at Intel, he was one of the co-founders of the company (and is currently the "chairman emeritus", whatever that means). And AFAIK, it's not a DRAM-specific observation; it applies to ICs in general. Given that it's held more or less steady for 30 years or so now, I'd say it was a pretty astute observation...
 
Rather than Moore's law being outpaced, as the features of chips get smaller and smaller, eventually the physical limitations of silicon chips will be reached and progress will "hit a wall." There is a limit to how small transistors can get, and the gate oxide is not likely to ever be thinner than 1nm (even that's pushing it). Of course, there is always the option of using an array of physically separate processors instead of one large processor. But the trend of packing more and more into a single piece of silicon cannot continue forever; eventually progress will not be able to keep pace with Moore's law.
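Just to put rough numbers on that doubling, here's a toy projection. The 1971 baseline is the Intel 4004's ~2,300 transistors; the 18-month period is the figure quoted above (24 months is also commonly cited), and real chips grew somewhat slower than this curve:

Code:
# Toy Moore's-law projection, not actual product transistor counts.
def transistors(year, base_year=1971, base_count=2300, doubling_months=18):
    """Project transistor count under a fixed doubling period."""
    periods = (year - base_year) * 12 / doubling_months
    return base_count * 2 ** periods

for y in (1971, 1980, 1990, 2000):
    print(y, f"{transistors(y):,.0f}")
# 1971 -> 2,300 ; 1980 -> ~147,200 ; 1990 -> ~15 million ; 2000 -> ~1.5 billion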
 
I was googling for some screens from the upcoming Hulk movie and came across this old article from SIGGRAPH 2000, I think.

http://www.gamasutra.com/features/20000804/crespo_01.htm

Interesting reminder of what to look forward to.

At the Sony booth, we enjoyed a real-time battle between characters from the movie Antz rendered in real time, as well as interactive sequences from the upcoming Final Fantasy movie shown at 1920x1080 pixels and a sustained rate of 60FPS.

In the Antz demo, I counted 140 ants, each comprising about 7,000 polygons, which were rendered using a ported version of Criterion's Renderware 3. All ants were texture mapped, and the results looked surprisingly close to the quality of the original movie. The Final Fantasy demo was just data from the now-in-development full-length CG movie based upon the game series, rendered in real time by the GScube. It showed a girl (with animated hair threads) in a zero-gravity spaceship, with a user-controllable camera viewpoint. The demo rendered about 314,000 polygons per frame, and included an impressive character with 161 joints, motion-blurring effects, and many other cinematic feats. According to Kazuyuki Hashimoto, senior vice president and CTO of Square USA, the GScube allowed them to show real-time quality, in "close to what is traditionally software rendered in about five hours." Sony believes that the GScube will deliver a tenfold improvement over a regular PS2, and future iterations of the architecture are expected to reach a 100-fold improvement.
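For a sense of scale, here's the raw throughput those demo numbers imply (my arithmetic, not from the article):

Code:
# Polygon throughput implied by the SIGGRAPH 2000 demos described above.
ants, polys_per_ant, fps = 140, 7_000, 60

antz_frame = ants * polys_per_ant              # 980,000 polys/frame
print(f"Antz: {antz_frame:,} polys/frame, {antz_frame * fps:,} polys/sec")

ff_frame = 314_000                             # Final Fantasy demo
print(f"FF:   {ff_frame:,} polys/frame, {ff_frame * fps:,} polys/sec")
# Antz: ~58.8 million polys/sec ; FF: ~18.8 million polys/sec at 60 fps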
 
Yeah, it's a shame Sony never released any direct-feed movies and/or screens of the GSCube in action.

edit: btw, what was the CPU setup for the GSCube? A massive rasterizer is nice and all, but who did all the setup work? EECube? :p
 
edit: btw, what was the CPU setup for the GSCube? A massive rasterizer is nice and all, but who did all the setup work? EECube?

Well, there is an EE attached to every GS in that setup. I assume the EEs would do that work.

And...

The device must be controlled by an external broadband server which feeds data to the GScube, and at Siggraph that device was the brand-new SGI Origin 3400.

Anyway, it supports motion blur, so maybe motion blur will be common next gen. Still not sure about micropolygons, though.
 
THE SEMICONDUCTOR wing of Toshiba has demonstrated what it claims is the world's first memory cell for embedded DRAM devices made on silicon on insulator wafers.
The breakthrough is likely to go into mass production for broadband network applications in 2006, Toshiba said.

It presented the results of its invention at the VLSI Symposium in Kyoto.

Silicon on insulator (SOI) wafers use three layers: a single crystal silicon layer, a base silicon substrate, and a thin insulator which prevents stray electrical leakage from the single crystal layer to the substrate.

This gives lower power consumption and higher transmission speeds.

The DRAM memory cell technology Toshiba showed uses the characteristics of such a wafer and doesn't need capacitors. Called floating body cell (FBC), the technology will be used for processes at the 45 nanometer level.

Transistors using this method can act as both capacitor and electric switch, so the area of a DRAM cell is half that of current devices.

Toshiba claims that the 96Kbit cell array it demonstrated has a 36 nanosecond access time, a 30 nanosecond data switching time, and a 500 millisecond data retention time at 85 degrees Celsius.
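Out of curiosity, that 500ms retention figure implies refresh would be almost free. A quick sketch, assuming a hypothetical 256-row organization for the 96Kbit array (the article doesn't give the actual layout) and using the 36ns access time as a stand-in for a per-row refresh cycle:

Code:
# Hypothetical refresh-overhead estimate for the 96Kbit FBC array.
# The 256-row organization is my assumption; 36ns and 500ms are from the article.
rows = 256                # assumed: 96Kbit as 256 rows x 384 bits
row_cycle = 36e-9         # 36ns access time as a per-row refresh proxy
retention = 0.5           # 500ms retention at 85 degrees C

refresh_pass = rows * row_cycle           # one full walk of the array
overhead = refresh_pass / retention       # fraction of time spent refreshing
print(f"{refresh_pass * 1e6:.2f} us per pass -> {overhead:.4%} overhead")
# ~9.22 us every 500 ms -> roughly 0.002% of the time spent refreshing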
 
Tsmit42 said:
Even at 1920 x 1080 that is still only 2,073,600 pixels (124,416,000/sec @ 60 fps). [...]
I don't know much about how 3D graphics work, but wouldn't anything over 124.4 million polygons/sec be overkill with today's HDTV standard?

Google up Pixar's Renderman standard and method. It's essentially about really big shader programs applied to meshes of micropolygons (smaller than a pixel, and several of them per pixel). For that, 124M polys/sec isn't at all enough to look good and smooth at that 1920x1080 HDTV rez. On top of that, you have to factor in accidental overdraw (occlusion) and intentional overdraw (transparency/translucency), and you are soon past the 1B polys/sec mark 8)
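To put numbers on that, here's a rough micropolygon budget at 1080p60. The per-pixel and depth-complexity figures are illustrative guesses on my part, not official Renderman parameters:

Code:
# Rough Reyes-style micropolygon budget at 1920x1080 @ 60 fps.
# 4 micropolys/pixel (quarter-pixel area) and depth complexity 4 are
# illustrative assumptions, not figures from the thread.
width, height, fps = 1920, 1080, 60
upolys_per_pixel = 4      # micropolygon area = 1/4 pixel
depth_complexity = 4      # occluded + translucent layers per pixel

per_frame = width * height * upolys_per_pixel * depth_complexity
print(f"{per_frame:,} micropolys/frame")        # 33,177,600
print(f"{per_frame * fps:,} micropolys/sec")    # 1,990,656,000 (~2 billion)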
 
Neat. Though it's nothing new, except possibly in application.

In any case, that should bring up density a fair bit.
 
Saem, do you mean SOI eDRAM or Renderman?

(Did you mean SOI is nothing new etc, or that Renderman is old but hasn't been implemented in dedicated hardware before? And did you mean transistor density or polygon density?)

Sometimes hard to tell ;)
 
GSCube ver 1 had 16 EEs + 16 GSs, with 32MB of eDRAM per GS (512MB eDRAM total) plus 128MB x 16 of main memory, from what I recall.

GSCube ver 2 had 64 EEs + 64 GSs. Everything else was pretty much the same... the memory (eDRAM + main mem) was obviously greater in total thanks to 4x the processors, but per EE+GS it was the same, AFAIK.
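The totals those per-unit figures imply, if I've got them right:

Code:
# Memory totals implied by 32MB eDRAM + 128MB main memory per EE+GS pair.
for version, units in (("ver 1", 16), ("ver 2", 64)):
    edram_mb = units * 32
    main_mb = units * 128
    print(f"GSCube {version}: {units} EE+GS pairs, "
          f"{edram_mb} MB eDRAM, {main_mb} MB main memory")
# ver 1: 512 MB eDRAM, 2048 MB main ; ver 2: 2048 MB eDRAM, 8192 MB main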
 
GSCube ver 2 had 64 EEs + 64 GSs. Everything else was pretty much the same... the memory (eDRAM + main mem) was obviously greater in total thanks to 4x the processors, but per EE+GS it was the same, AFAIK.

Was this ever shown? Or made? Or is it just something they planned?
 
I didn't realise that the REYES machine had such ambitious goals for 1986!!

Code:
REYES Machine Goals (1986)
Micropolygons (area = ¼ pixel)       80,000,000
Pixels                               3000 x 1667 (5 MP)
Depth complexity                     4
Samples per pixel                    16
Geometric primitives                 150,000
Micropolygons per grid               100
Shading flops per micropolygon       300
Textures per primitive               6
Total number of textures             100 (1 MB/texture)

Goal ~ 1 frame in 2 minutes, not real-time

I wonder what it is now.
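Using the table's own numbers, the gap between that 1986 goal and real time works out like this (60 fps is my choice of target, not part of the original goals):

Code:
# Gap between the 1986 REYES goal and 60 fps real time, from the table above.
upolys_per_frame = 80_000_000
goal_secs_per_frame = 120                 # "1 frame in 2 minutes"

goal_rate = upolys_per_frame / goal_secs_per_frame   # ~667K micropolys/sec
realtime_rate = upolys_per_frame * 60                # 4.8B micropolys/sec
print(f"1986 goal:  {goal_rate:,.0f} micropolys/sec")
print(f"60fps need: {realtime_rate:,.0f} micropolys/sec "
      f"({realtime_rate / goal_rate:,.0f}x the goal)")
# 666,667/sec vs 4,800,000,000/sec -> a 7,200x gap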
 