Cure@PS3 project

As for the response to Mckmas (and I do know you're just joking around btw), no honestly I thought PCs did more on average in terms of their output. But average is the key word, and there are a lot of 'ancient' chips slaving away there. ;)

What did you think I was joking about? But it's good to see the average PS3 will crush the average PC. Even the better PCs probably aren't doing more than 10 gigaflops. Seriously (I hate to say it like this, but) Sony can push this in a marketing way.

Do you want CELL blades? Not sure? Well guess what? Our CELLs in our PS3 are pushing 100 gigaflops while the average blah blah blah only does 8 gigaflops.

BAM! Thousands of CELL blades sold a month. It's crazy to see the Cell processor already surpassing the EE processor in the PS2. And the so-called "crazy Ken" will see his dream of distributed computation realised at or near launch. AMAZING!:oops:
 
Guys, you are vastly underselling current processors. A 3.0GHz P4 is rated at 12GFLOPS of MADD capability in the pipeline, and F@H is also optimised for SSE extensions. AMD64s and Conroes are going to be higher than this still. Multiply your PC CPU factors by a scale of at least 10x (and you'd still get caned by a PC with an X1900 anyway! ;) )
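
For what it's worth, that 12GFLOPS figure is just clock rate times FLOPs per cycle - a quick sketch (the 4 single-precision FLOPs/cycle figure is my assumption for the P4's SSE pipes, not a measured number):

```python
# Rough peak-FLOPS arithmetic for the 12GFLOPS claim (assumed, not measured):
# a 3.0GHz P4 sustaining 4 single-precision FLOPs per cycle via SSE.
clock_hz = 3.0e9        # 3.0 GHz
flops_per_cycle = 4     # assumption: 4 SP FLOPs/cycle through the SSE pipes
peak_gflops = clock_hz * flops_per_cycle / 1e9
print(peak_gflops)      # -> 12.0
```

Remember that's a theoretical pipeline peak, which real folding workloads won't sustain.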


Dave, me and Xbd were using the current processor speeds of PCs on F@H now. Right now the average PC is only pushing 1 gigaflop. But like xbd said, that also includes the super slow PCs too.

Anyway, what's more likely: people buying $2,000/$3,000 PCs, or buying a $500/$600 console? Simply put, we (PS3 owners) will beat you ATI guys.

Wanna bet? ;) Friendly of course I mean this is for a good cause.
 
The numbers you are quoting are processed numbers, which are far from peak numbers - we're talking about PCs that are being used, PCs that are being powered up and down, etc., not PCs running folding 24/7 exclusively and actually hitting peak F@H stats, unlike the 100GFLOPS number for the PS3 you're comparing them to.

Anyway, at this point in time I'm not certain that there will be direct comparisons. I'm hearing that Stanford are reconsidering the scoring system they have at the moment, and it may not scale linearly with performance.
 
@Dave: I'm just replying to Mckmas's hard numbers via F@H itself - I mean, numbers don't lie, right? ;) Besides, I've already made the case that those numbers are being dragged down by all the Athlon XPs, Pentium IIIs, and other such chips working on folding, and that modern chips should be better - don't worry as to their being slighted!

That said, the site itself makes clear that the numbers given are actual yields, and that theoretical numbers do not apply, and are not reached.

@Mckmass:
What did you think I was joking about?

No, I didn't think you were... I was talking to Gradthrawn.

****************************************

I don't want this thing to turn into a 'PS3 kicks so much ass' type of thread, to tell you the truth, because that's not what this is about. This is about a specific strength that I think should be respected due to its ability to contribute to something important. Frankly, I couldn't care less if ATI cards do twice as well - if they do, everyone go out and buy an ATI card! ;)

But even for those that *need* the PC to stay ahead of the PS3 in this regard - for whatever PC-centric reason - c'mon let's give a *little* respect here to a strong performer: Cell. :cool:

(and I certainly build my own PC's too, so not like I don't understand the draw)

EDIT: Ok, written before your last post going over GFLOP yields, Dave. See, we're all tripping over each other now! :p
 
The numbers you are quoting are processed numbers, which are far from peak numbers - we're talking about PC's that are being used, PC's that are being powered up and down, etc., not running folding 24/7 exclusively and actually giving peak f@h stats, unlike the 100GFLOPS number for the PS3 you're comparing it to.

That's understandable. But don't you find it amazing that 10,000 PS3s linked together can push 1 petaflop? And understand, I also want the ATI card to do well too. But it's just so, so shocking to see a console do this. You know what I'm saying?
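
Just to spell out the arithmetic behind that petaflop claim (using the 100 GFLOPS-per-PS3 figure from this thread, which is a theoretical peak, not a sustained F@H yield):

```python
# 10,000 PS3s at the quoted 100 GFLOPS peak each; 1 PFLOPS = 1,000,000 GFLOPS.
ps3_gflops = 100
num_ps3 = 10_000
total_pflops = ps3_gflops * num_ps3 / 1_000_000
print(total_pflops)   # -> 1.0
```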

I mean shocking. Thinking back 12 years, I never imagined my SNES being able to do something like this. This is also a big tribute to the internet of today. Once broadband went mainstream, lots of things became possible.

Can you imagine what a PS4 and FIOS internet connection will be able to do for F@H? :oops:
 
That's understandable. But don't you find it amazing that 10,000 PS3s linked together can push 1 petaflop? And understand, I also want the ATI card to do well too. But it's just so, so shocking to see a console do this. You know what I'm saying?
I'm not really shocked by the numbers, because I've been hearing of bigger ones for a long time as we've been tracking the high performance (GPU) client - I'd suspect that if they did a similar client for 360 then they'd possibly be pushing twice that.
 
Can you imagine what a PS4 and FIOS internet connection will be able to do for F@H? :oops:

Well, the speed of your Internet connection has like zero bearing on what your computer can contribute to the F@H project - the bottleneck is computing power, and will be for quite some time to come.
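
A quick illustration of why (all figures here are made up for the sake of argument): even a smallish work unit takes days of number-crunching but only seconds to transfer, so link speed is noise:

```python
# Illustrative (assumed) numbers: a ~5MB work unit on a 1Mbit/s link
# versus two days of compute. Transfer time is a rounding error.
wu_megabytes = 5
link_mbit_per_s = 1                               # modest broadband
download_s = wu_megabytes * 8 / link_mbit_per_s   # 40 seconds to fetch
compute_s = 2 * 24 * 3600                         # two days of crunching
print(compute_s / download_s)                     # compute dominates by ~4000x
```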

I'm not really shocked by the numbers, because I've been hearing of bigger ones for a long time as we've been tracking the high performance (GPU) client - I'd suspect that if they did a similar client for 360 then they'd possibly be pushing twice that.

Hey believe me, if they actually port a client over to 360 to make use of Xenos, I don't doubt that it might. But first they've got to do it! :)
 
Similarly, would it not be possible to port this over to RSX as well and gain an increase in numbers, with the Cell and RSX both working together instead of having the RSX just render fancy graphics?
 
Early numbers had the 7800 performing slower than a P4 @ 3GHz due to numerous architectural reasons, which aren't altered on G71.
 
I think the fancy graphics are a nice touch actually! :)

I swear, people have no sense of exploration these days... we're talking about *navigating* a protein as it folds! This is high science here!
 
I think the fancy graphics are a nice touch actually! :)

I swear, people have no sense of exploration these days... we're talking about *navigating* a protein as it folds! This is high science here!


Hey I'm new to this folding stuff. But it is very interesting. Maybe in 5 or so years us PS3 owners and PC users folding can get some TV time when something is announced. You know get some good videogame headlines for once, instead of 10 year old kills man because of GTA3. :p
 
Early numbers had the 7800 performing slower than a P4 @ 3GHz due to numerous architectural reasons, which aren't altered on G71.

Where are these numbers, and what exactly are they measuring (anything less vague would be nice)? And if they ran them again now, would it be different?

In any case, seeing a general purpose CPU doing something this quickly is quite amazing when you consider that it's such a leap beyond what's on the market now, and will be for quite some time to come in the same category :)
 
Early numbers had the 7800 performing slower than a P4 @ 3GHz due to numerous architectural reasons, which aren't altered on G71.
Um, so an X1900 does 100+ GFLOPS and a 7800 less than 12, just from running a bunch of pixel shader programs on polygon data? Allow me to express some slight scepticism over this enormous performance discrepancy - what 'numerous reasons' would hold NV back to such a gigantic degree?
 
Which GPUs will be supported? We have not made any final decisions on this issue. However, our software will likely require the very latest GPUs from ATI (especially now that the newest ATI GPUs support 32 bit floating point operations). Previous work of ours used NVIDIA GPUs as well, but we have now concentrated on ATI GPU's as they allow for significant performance increases for FAH over NVIDIA's GPU's (at least at the current generation). Our GPU cluster has 25 1900XT's and 25 1900 XTX's. We find a considerable performance increase of 1900XT's even over 1800XT's, due to the architectural differences between the R580 and R520 GPU's. Our code will run on R520's, but considerably more slowly than R580. We're very much looking forward to trying out R600's.
http://folding.stanford.edu/FAQ-highperformance.html
 
Guden Oden, Nvidia's chips have dreadful dynamic branching support, while ATI's is excellent. It's probably just that simple.
 
Um, so an X1900 does 100+ GFLOPS and a 7800 less than 12, just from running a bunch of pixel shader programs on polygon data? Allow me to express some slight scepticism over this enormous performance discrepancy - what 'numerous reasons' would hold NV back to such a gigantic degree?
I agree in a sense, but more to the point: comparing a GPU against a CPU assumes it can run on a GPU in the first place. Basically, it's a given that it will run on the Cell at X GFLOPS, but a GPU at 20 gigaflops is really a best-case scenario.
ala
GPU: I CAN OUTPUT 20 gigaflops of black + white answers
FOLDING: but sometimes we want green
GPU: I CAN OUTPUT 20 gigaflops of black + white answers

== 0.00000 gflops of meaningful data :)

GPU: but but but I'm really fast
 
The primary issue cited for F@H has been FP32 register space and the handling when it is exceeded. If I recall correctly, they use textures for lookup tables of data, and they need good thread handling and latency hiding when the register space is filled - there are sizable differences between the R5xx and G7x series in how threads are managed and executed, and R5xx's approach suits F@H better. The memory capabilities, I think, also come into play, and R5xx has support for things like scatter and gather, which isn't supported by G7x.

In general, R5xx has got plenty of elements that were designed specifically for GPGPU, which is why ATI have produced an API that gives low-level access to the chip, allowing GPGPU programmers to go directly "to the metal", bypassing the limitations and overheads of traditional 3D APIs. I don't yet know whether Stanford has adopted this for the announced client or not, however I know that one of their concerns was the potential for different driver releases altering how DX is handled in a driver and potentially invalidating submitted results without them knowing - and such a concern could be bypassed by using ATI's API.

http://www.ati.com/developer/siggraph06/dpvm_e.pdf
 
I agree in a sense, but more to the point: comparing a GPU against a CPU assumes it can run on a GPU in the first place.

Stanford.

What are GPU's and how can they help FAH? GPU's are Graphics Processing Units -- chips used in today's PC's to help speed high performance graphics, such as 3D games or 3D scientific visualization. GPUs have the possibility to perform an enormous number of Floating Point OPerations (FLOPs). However, they achieve this high performance by losing generality -- there are only certain types of calculations which would be well-suited to GPUs. However, after much work, we have been able to write a highly optimized molecular dynamics code for GPU's, achieving a 20x to 40x speed increase over comparable CPU code for certain types of calculations in FAH. This means that we will be able to make an enormous advance over what we could do only just a few years ago
 
Note the "certain" in that quote: 'certain types of calculations'.
That's precisely what I mean. GPUs are blisteringly fast at doing some things but lack general programmability - i.e. if you require green you're buggered, because it only outputs black + white, though it does do black + white at an incredible pace, I admit.
 