DeanoC, do you know when you get the final PS3 hardware?

Alpha_Spartan said:
IIRC, the only thing that separates betas from final kits is the RSX and the motherboard (with all the memory interfaces and chipsets and shit).
Hopefully that stuff will be removed before delivery... :)
 
Alpha_Spartan said:
Also, hell, you can stack a couple of pizza boxes on that bitch AND it will keep them warm. Sony engineering FTW!

:LOL: Now that's funny. To all the devs here: will the jump from 2.4 GHz to 3.2 GHz actually show a visible difference to the end user/gamer? What will that extra 800 MHz actually enable a dev to do that he or she probably couldn't have done before the jump?
 
one said:
I think it's a 1U rack server (pizza box) rather than a blade.

BTW, the Embedded Technology 2005 (http://www.jasa.or.jp/et/english/index.html) conference was held in Yokohama, Japan, on Nov 15-18, and at the panel discussion Masakazu Suzuoki said they'd start to supply workstations with 3.2 GHz Cells to PS3 developers in December, which indicates they are on track.

http://itpro.nikkeibp.co.jp/article/NEWS/20051120/224899/

Some interesting numbers in there. Nice to see Sony as ambitious as ever ;)
 
mckmas8808 said:
What are the ambitious numbers? I can't read Japanese. :cry:

Just some "visioneering" numbers re. the amount of power needed to simulate a HAL-like computer, i.e. "a human" (10 Petaflops, I think), and the world a la the Matrix (100 Petaflops). Don't know how they came up with these numbers or how it all links back to any technology we're using today, but at least they're still dreaming big ;)
 
Titanio said:
Just some "visioneering" numbers re. the amount of power needed to simulate a HAL-like computer, i.e. "a human" (10 Petaflops, I think), and the world a la the Matrix (100 Petaflops). Don't know how it all links back to any technology we're using today, but at least they're still dreaming big ;)

You're right, those are ambitious. 10 Petaflops will probably happen in my lifetime, though. Not sure about 100 Petaflops.
 
mckmas8808 said:
You're right, those are ambitious. 10 Petaflops will probably happen in my lifetime, though. Not sure about 100 Petaflops.

It'll easily happen in the next 10-15 years.
 
Titanio said:
Just some "visioneering" numbers re. the amount of power needed to simulate a HAL-like computer, i.e. "a human" (10 Petaflops, I think), and the world a la the Matrix (100 Petaflops). Don't know how they came up with these numbers or how it all links back to any technology we're using today, but at least they're still dreaming big ;)
What a world, ten humans max. :(
 
flick556 said:
maybe the sum is greater than the parts :)
No, but modeling masses requires less power. In fact, the more people you model, the fewer flops you need per head. The more people in one place, the dumber the model. :)
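(Toy numbers, purely to illustrate the point, nothing measured: once a crowd shares one behavioral model, the per-head cost collapses.)

Code:
[1 person, full model] = 10 PFlop/sec.
[100 people, 1 shared model + cheap per-head variation] = 10 + [100] * [0.01] = 11 PFlop/sec, or ~0.11 PFlop/sec per head.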
 
Not sure where you guys are coming from, but 100 Petaflops is perfectly possible even with this Cell. Human simulation is a massively parallel problem by nature (cf. the brain's parallelism between modules, and the memory and nerve-signal systems) - although also a horribly inefficient one to simulate.
I think the key reason it's safe to say a computer hundreds of times smarter than any human would only require 10x more FLOPS (and I disagree with measuring this in FLOPS; it isn't FLOPS-limited, imo) is that the vast majority of human capabilities are special-purpose.
Yet, the chances of a chip simulating a normal human neuron-for-neuron are absolutely nil (it differs from person to person and from second to second, there are several orders of magnitude more neurons than transistors, and they're more complicated than transistors...). This implies that such a scheme would use general-purpose processing power for highly special-purpose things (example: the vision part of the brain and the related nerve signals).
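For what it's worth, a quick back-of-envelope with the usual textbook brain figures (my own rough assumptions, nothing from the article) lands right around the 10 Petaflop number quoted above:

Code:
[10^14 synapses] * [10 Hz average firing rate] * [10 ops per synaptic event] = 10^16 Flop/sec, or 10 PFlop/sec.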

So, if you create something based solely on "intelligence", taking in text and sending out text and/or commands, you can achieve significantly more impressive results for a given amount of power. The problem, though, is that unlike the special-purpose systems of the brain you'd need to simulate, the general-purpose ones are much more memory- and disk-hungry; so much so, in fact, that I doubt current technology would be sufficient, even scaled 10,000x in a cluster.
IMO, the only thing preventing proper human-like simulation TODAY is the low speed of HDs. You either need a ludicrous amount of RAM, or, well, you're screwed. That's why I'm personally especially interested in Colossal-Storage-like technologies, since those promise a unified scheme for all read/write systems in the long term, so you'd get at least near-RAM speed on an HD with 100 Terabytes. As soon as you've got that, human-brain-like preprocessing schemes can become a reality, significantly simplifying the simulation problem.
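Sanity-checking that 100 Terabyte figure, assuming (my guess, nothing more) roughly one byte of state per synapse:

Code:
[10^14 synapses] * [1 byte of state each] = 10^14 bytes, or 100 Terabytes.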
From my POV, it also seems a bit naive to look at research aiming to simulate the brain's response scheme etc. - it tends to be rather inefficient, a bad fit for current computation models, and so on - which is why I'd be more interested to see a scheme with self-recompiling code that is capable of module creation and logic-linking. IMO, that can scale much better, especially since specific modules can be coded by actual humans.
And it also gives better control over the entire architecture after it starts "running", on the (significant) condition that it manages to properly separate parts of its functions.
Another big limitation, obviously, is branching performance, but I tend to believe there has to be an amazingly efficient solution for such a thing in a system with hundreds of thousands of parallel threads, since the brain manages it; after all, if you think about it at the neuron level, the amount of parallelism the brain handles is downright insane, and beyond what any computer has ever done, even per-thread.

Anyhow, enough gibberish about things that don't interest you all - can I join the masses and ask "Where's our RSX information, goddamnit?!" :)


Uttar
 
Not sure where you guys are coming from, but 100 Petaflops is perfectly possible even with this Cell.

Sure, if you live in a dream world where there's no limit to the number of processors you can use and no limit to the budget and time required to build this 100 PFLOPS dream. It ain't gonna happen in the next 15 years. Sure, it's possible sometime in the future.
 
PC-Engine said:
Sure, if you live in a dream world where there's no limit to the number of processors you can use and no limit to the budget and time required to build this 100 PFLOPS dream. It ain't gonna happen in the next 15 years. Sure, it's possible sometime in the future.

IBM's BlueGene/L in its full incarnation is slated to use 65,536 specialty ASICs in 64 cabinets. For shits 'n giggles, a comparable Cell-based commodity solution yields:

Code:
[65,536] * [200] = 13,107,200 GFlop/sec, or 13,107 TFlop/sec, or ~13 PFlop/sec.
Fifteen years is a long time... especially for something which needs only 600,000 units of a device that will see production volumes exceeding 100,000,000 units for the PlayStation platform alone, if history repeats yet again.
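Running the same arithmetic on the 100 PFlop target (assuming the same ~200 GFlop/sec per Cell as above):

Code:
[600,000] * [200] = 120,000,000 GFlop/sec, or 120 PFlop/sec.
[600,000] / [100,000,000] = 0.6% of the projected production volume.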
 
Fifteen years is a long time... especially for something which needs only 600,000 units of a device that will see production volumes exceeding 100,000,000 units for the PlayStation platform alone, if history repeats yet again.

The millions of PS2s currently out there don't make up a complete system, nor will the millions of PS3s. Might as well call the hundreds of millions of Game Boys out there a complete system and be done with it.
 
PC-Engine said:
The millions of PS2s currently out there don't make up a complete system, nor will the millions of PS3s. Might as well call the hundreds of millions of Game Boys out there a complete system and be done with it.

Do you purposely miss the point of people's comments, or is it natural, or what? The point is that to accomplish something you stated is impossible in a timeframe of 15 years, you only need to utilize 0.6% of the potential Cell production volume by 2008, if historic trends continue (without even factoring in the increasing rate of PlayStation sales per unit time seen with PS2 vs. PS1).

Just over 1/2 of 1% of the total production volume to attain your unattainable performance is not a lot... That could feasibly be accomplished if an entity chose to do so. Say, UIUC's NCSA or Sony PictureWorks, for example.
 