What's that?!
Looks sophisticated, but jesus it's long...
Saem said:
Moore's law isn't a law. It's an observation. I can't remember the guy's first name, but he worked at Intel at the time. He noticed that every 18 months the transistor count of DRAM kept doubling, and this has held up. Realise that it's specific to DRAM and it doesn't relate to anything else.

According to Intel's page on the subject, his first name was Gordon, and he didn't just work at Intel, he was one of the co-founders of the company (and is currently the "chairman emeritus", whatever that means). And AFAIK, it's not a DRAM-specific observation; it applies to ICs in general. Given that it's held more or less steady for 30 years or so now, I'd say it was a pretty astute observation...
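As a rough illustration, taking the 18-month doubling and the ~30-year span from the posts above at face value:

    # Doubling every 18 months, compounded over ~30 years (figures from the posts above).
    doubling_period_years = 1.5
    years = 30
    growth = 2 ** (years / doubling_period_years)
    print(growth)   # 2**20 = 1,048,576 -> roughly a million-fold increase in transistor count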
At the Sony booth, we enjoyed a battle between characters from the movie Antz rendered in real time, as well as interactive sequences from the upcoming Final Fantasy movie, shown at 1920x1080 pixels and a sustained 60 fps.
In the Antz demo, I counted 140 ants, each comprising about 7,000 polygons, rendered using a ported version of Criterion's RenderWare 3. All ants were texture mapped, and the results looked surprisingly close to the quality of the original movie.

The Final Fantasy demo was data from the now-in-development full-length CG movie based on the game series, rendered in real time by the GScube. It showed a girl (with animated hair threads) in a zero-gravity spaceship, with a user-controllable camera viewpoint. The demo rendered about 314,000 polygons per frame and included an impressive character with 161 joints, motion-blur effects, and many other cinematic feats. According to Kazuyuki Hashimoto, senior vice president and CTO of Square USA, the GScube allowed them to show, in real time, quality "close to what is traditionally software rendered in about five hours." Sony believes the GScube delivers a tenfold improvement over a regular PS2, and future iterations of the architecture are expected to reach a 100-fold improvement.
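Back-of-the-envelope from the numbers above, assuming both demos really did sustain 60 fps:

    # Implied polygon throughput of the two demos (assumes a sustained 60 fps).
    fps = 60
    antz_per_frame = 140 * 7_000          # ~980,000 polygons per frame
    antz_per_sec = antz_per_frame * fps   # ~58.8 million polygons/s
    ff_per_sec = 314_000 * fps            # ~18.8 million polygons/s
    print(antz_per_sec, ff_per_sec)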
Edit: BTW, what was the CPU setup for the GSCube? A massive rasterizer is nice and all, but who did all the setup work? EECube?
The device must be controlled by an external broadband server that feeds data to the GScube, and at SIGGRAPH that server was the brand-new SGI Origin 3400.
Tsmit42 said:
Even at 1920x1080 that is still only 2,073,600 pixels (124,416,000/sec @ 60 fps). [...]
I don't know much about how 3D graphics work, but wouldn't anything over 124.4 million polygons/sec be overkill with today's HDTV standard?
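For what it's worth, the pixel arithmetic in the quote checks out, but the visible pixel rate isn't a hard ceiling on polygon throughput. The multipliers below are illustrative values borrowed from the REYES goals listed further down the thread, not GSCube specs:

    # The quoted pixel rate, and why it isn't an upper bound on polygons/s.
    pixel_rate = 1920 * 1080 * 60                  # 124,416,000 pixels/s, as quoted
    # Illustrative only: average depth complexity 4 and quarter-pixel polygons
    # (the REYES goal figures listed later in this thread).
    depth_complexity = 4
    avg_polygon_area = 0.25                        # in pixels
    polygons_per_sec = pixel_rate * depth_complexity / avg_polygon_area
    print(pixel_rate, polygons_per_sec)            # ~124.4M pixels/s vs ~2.0 billion polygons/s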
GSCube ver. 2 had 64 EEs + 64 GSs. Everything else was pretty much the same... the memory (eDRAM + main mem) was obviously greater in total thanks to 4x the processors, but per EE+GS it was the same, AFAIK.
REYES Machine Goals (1986)
Micropolygons (area = ¼ pixel): 80,000,000
Pixels: 3000 x 1667 (5 MP)
Depth complexity: 4
Samples per pixel: 16
Geometric primitives: 150,000
Micropolygons per grid: 100
Shading FLOPs per micropolygon: 300
Textures per primitive: 6
Total number of textures: 100 (1 MB per texture)
Goal: ~1 frame in 2 minutes (not real time)
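The goal figures above are self-consistent; a quick check (my arithmetic from the listed values, not numbers taken from the original REYES paper):

    # Cross-checking the REYES goal figures listed above.
    pixels = 3000 * 1667                            # 5,001,000 ~ 5 MP
    micropolygons = pixels * 4 * 4                  # 4 quarter-pixel upolys per pixel x depth complexity 4
    print(micropolygons)                            # ~80 million, matching the stated goal
    shading_flops_per_frame = micropolygons * 300   # ~24 GFLOPs of shading per frame
    print(shading_flops_per_frame / 120)            # 1 frame per 2 minutes -> ~200 MFLOPS sustained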