Official: ATI in XBox Next

Grall said:
nonamer said:
This post is nonsensical. Ever heard of a GPU?

No, YOUR post is nonsensical. When discussing family sedans on a forum, do you often feel an urge to interject with words similar to, 'this post is nonsensical. Ever heard of a tractor?'

You don't think PS3 will have a GPU as well or something? We were discussing CPUs here, not GPUs.

Before accusing others of being nonsensical, try and make sense yourself first. o_O

Acting tough won't get you far, Grall. Your whole argument that "Nothing in the PC world is going to come near Cell in the next five years or so" is simply false. GPUs are where most of the improvements in graphics will happen in the future, not CPUs. Your claim that no PC CPU will match the PS3's CPU in FP calculations within five years may be true, but it doesn't hold for the whole system. Obviously the PS3 will have a GPU, but that won't save it from a PC with a graphics card far more powerful than its GPU.
 
Isn't the issue moot for closed systems in general? The fact that they get maxed out and deliver equivalent performance for everyone concerned plays a factor, no?

That is, we don't have to wait for the dispersion of the tech to hit a critical figure before we see results.
 
nonamer said:
Grall said:
nonamer said:
This post is nonsensical. Ever heard of a GPU?

No, YOUR post is nonsensical. When discussing family sedans on a forum, do you often feel an urge to interject with words similar to, 'this post is nonsensical. Ever heard of a tractor?'

You don't think PS3 will have a GPU as well or something? We were discussing CPUs here, not GPUs.

Before accusing others of being nonsensical, try and make sense yourself first. o_O

Acting tough won't get you far, Grall. Your whole argument that "Nothing in the PC world is going to come near Cell in the next five years or so" is simply false. GPUs are where most of the improvements in graphics will happen in the future, not CPUs. Your claim that no PC CPU will match the PS3's CPU in FP calculations within five years may be true, but it doesn't hold for the whole system. Obviously the PS3 will have a GPU, but that won't save it from a PC with a graphics card far more powerful than its GPU.

I'd like to make a comment here. One of PS3's goals is to achieve a gigantic leap over the PC companies, correct? One of the results of that is the CELL architecture, which will achieve a performance of 1 TFLOPS, if all goes according to plan of course.

Now, from what I understand, the benefits of this will obviously be seen in the graphics department, but wouldn't they be a lot more obvious in other areas, such as physics, A.I., and general calculations? I mean, from what I can understand, PC cards and CPUs will get better, but will they achieve the computational power to compete on the same ground as PS3 in those terms (physics, A.I.)?

I mean, for example, using nifty shaders and such, PC cards will apply plenty of tricks on pixels, lighting and such, but wouldn't they appear "dead" in comparison to PS3 titles? I mean, every object flowing with the wind, reactive hair, clothes, etc. would be possible with that much computational power. Or what about cars breaking down, or objects being blown away realistically? If I understood correctly, that's a big part of what can/could be done with PS3's capabilities.

I mean, there's only so much you can do with just a low number of polygons and pixel tricks.
 
Game players wouldn't know the difference between real physics and fake physics anyway, so why would it matter?

If you shoot a window and the glass shatters and falls according to real physics as opposed to fake physics, how would the average gamer tell the difference?
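To make that concrete, here's a toy sketch (entirely an illustration, with made-up numbers and not taken from any real engine) of a "fake", pre-authored shard path next to a "real" one that integrates gravity each frame. Both just look like glass falling in an arc, which is the point.

```python
# Hypothetical sketch: "fake" (pre-authored) vs "real" (simulated) glass shatter.
# Nothing here comes from an actual engine; it only illustrates the argument above.
import random

GRAVITY = -9.81
DT = 1.0 / 60.0  # one 60 Hz frame

def fake_shard_position(start, frame):
    """Canned animation: the shard follows a hand-tuned curve, no simulation."""
    x, y = start
    return (x + 0.02 * frame, y - 0.005 * frame * frame)

def real_shard_step(pos, vel):
    """Simulated: integrate velocity and gravity each frame (semi-implicit Euler)."""
    vx, vy = vel
    vy += GRAVITY * DT
    x, y = pos
    return (x + vx * DT, y + vy * DT), (vx, vy)

# Both produce a falling arc; the average player just sees shards dropping.
pos, vel = (0.0, 2.0), (random.uniform(0.5, 1.5), 0.0)
for frame in range(10):
    faked = fake_shard_position((0.0, 2.0), frame)
    pos, vel = real_shard_step(pos, vel)
    print(f"frame {frame}: fake y={faked[1]:.3f} m  simulated y={pos[1]:.3f} m")
```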
 
PC-Engine said:
Game players wouldn't know the difference between real physics and fake physics anyway, so why would it matter?

If you shoot a window and the glass shatters and falls according to real physics as opposed to fake physics, how would the average gamer tell the difference?

Is that directed at me?

Oh well, I'll answer anyway. I don't think it's a matter of real vs. fake physics, but rather an issue of how extensive its applications are or could be. I'm just thinking of a potential use of that computational power.

I mean, imagine FFXIII on PS3. Imagine the main character's clothes soaking and reacting convincingly to water, or hair moving convincingly. What about weapons getting deformed during fights, or the entire scene (leaves, grass, etc.) reacting and getting covered convincingly by rain or snow? I'm talking about extensive use of that computational ability, with reactions happening all the time.
 
A lot of physics will be done ON the GPU with DX10 and beyond. Some of the easier jobs, like cloth, are already being done on the GPU right now. It will be interesting to see how much of that "1 TFLOPS" can actually be utilized.
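For what it's worth, the "easier jobs like cloth" boil down to a mass-spring update per particle, and the reason this maps well to a GPU is that every particle can be updated independently, one thread per particle. A minimal CPU-side sketch of the idea (my own illustration, assuming simple Verlet integration; not actual DX10 or driver code):

```python
# Toy mass-spring cloth step (Verlet integration). Each particle update below is
# independent, which is why a GPU can run one thread per particle. This is only a
# sketch of the technique, not real GPU code.
GRAVITY = (0.0, -9.81, 0.0)
DT = 1.0 / 60.0

def verlet_step(curr, prev):
    """Advance every cloth particle one frame from its current and previous position."""
    nxt = []
    for (x, y, z), (px, py, pz) in zip(curr, prev):
        ax, ay, az = GRAVITY
        nxt.append((2 * x - px + ax * DT * DT,
                    2 * y - py + ay * DT * DT,
                    2 * z - pz + az * DT * DT))
    return nxt

def satisfy_constraint(p0, p1, rest_length):
    """Pull two linked particles back toward their rest distance (one 'spring')."""
    dx, dy, dz = (p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2])
    dist = (dx * dx + dy * dy + dz * dz) ** 0.5 or 1e-9
    corr = 0.5 * (dist - rest_length) / dist
    return ((p0[0] + dx * corr, p0[1] + dy * corr, p0[2] + dz * corr),
            (p1[0] - dx * corr, p1[1] - dy * corr, p1[2] - dz * corr))

# Two particles joined by a 1 m spring, dropped under gravity for one frame:
curr = [(0.0, 2.0, 0.0), (1.2, 2.0, 0.0)]
prev = list(curr)
curr = verlet_step(curr, prev)
curr[0], curr[1] = satisfy_constraint(curr[0], curr[1], 1.0)
print(curr)
```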
 
vrecan said:
A lot of physics will be done ON the GPU with DX10 and beyond. Some of the easier jobs, like cloth, are already being done on the GPU right now. It will be interesting to see how much of that "1 TFLOPS" can actually be utilized.

But to what extent? That's my question. Would a conventional setup be able to surpass what PS3 will be able to do in a short time frame? I mean, you'll find many, many polygons next generation (hopefully, BM on characters and such can be avoided where possible and real geometry used instead), and I imagine insane amounts of animation and "world reaction", as I call it. Not everything is lighting and bump mapping.
How long will it take for a PC card to tackle physics by itself and be as effective as PS3 is supposed to be? 1 year? 2 years?
 
But why would a GPU/CPU need to surpass the CELL processor doing physics and AI? Is Xbox surpassing PS2 in physics and AI? If it's on par, then it wouldn't be a problem.
 
PC-Engine said:
But why would a GPU/CPU need to surpass the CELL processor doing physics and AI? Is Xbox surpassing PS2 in physics and AI?

Well, I think the difference between PS3 and XB2 in terms of FLOP performance will be a lot bigger next gen, and hence there has to be an immediate advantage to such a huge amount of FLOPS. But are we talking about PS3/XB2?

I mean, I consider it a given that XB2 will try to compensate for the disadvantage it most likely will have in CPU performance and go with nifty pixel tricks. Also, I'm referring to PC cards arriving later on, after PS3's release, because IMO and from what I know, XB2 won't be able to match PS3. :)
 
PC-Engine said:
Game players wouldn't know the difference between real physics and fake physics anyway, so why would it matter?

If you shoot a window and the glass shatters and falls according to real physics as opposed to fake physics, how would the average gamer tell the difference?

To add my thoughts on the subject: while I do believe CELL will be unmatched in certain areas for years to come, in others I am sure it won't take too long to achieve similar results on the latest PC tech. Physics is probably one of the areas that will be unmatched for a pretty long time - which I think will make for quite a different style. Given the differences in strengths, I would say that while future games on PC are likely to emphasize textures or other things that require a lot of memory, I'm expecting future PS3 games to head in the direction games like MGS2 are taking (well-chosen art, heavy emphasis on weather-like effects, etc.).

At the end of the day, and this is IMO of course, I think this will make for more realistic-looking games than future PC games.
 
When talking about the physics/AI of the next gen (beyond PS2/Xbox/GC), I think of 100+ AI-driven on-screen software agents being the citizens of a next-gen GTA town, realistic deformation car damage in racing games, etc. ... for applications like that, there will never be enough calculating power, but 1 TFLOPS would be a start though :)
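Back-of-the-envelope, and assuming (unrealistically) that the full claimed 1 TFLOPS were usable for those agents alone:

```python
# Back-of-the-envelope budget, assuming (unrealistically) the whole 1 TFLOPS is usable.
PEAK_FLOPS = 1.0e12     # the claimed 1 TFLOPS
FPS = 60                # target frame rate
AGENTS = 100            # AI-driven citizens on screen

flops_per_frame = PEAK_FLOPS / FPS
flops_per_agent = flops_per_frame / AGENTS
print(f"{flops_per_frame / 1e9:.1f} GFLOP per frame, "
      f"{flops_per_agent / 1e6:.0f} MFLOP per agent per frame")
# ~16.7 GFLOP per frame and ~167 MFLOP per agent -- generous on paper, but real-world
# utilization, rendering, physics and everything else still share that budget.
```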
 
Well, I've not yet read the rest of the posts, I'll do so soon, but here goes...
ATI and NVIDIA are used more and more frequently in studios these days. In fact ATI had a hand in rendering in LotR:tTT

Hmmm, are they used for those cheapo real-time scene previews that are becoming ever more popular? Or are they actually used in the real rendering process?

In any case… it is time to give the answer to the question posed…
Are GPUs used?…

IT DOES NOT MATTER!

Studios require the use of RIDICULOUS resolutions, ridiculous levels of precision, ridiculously large AMOUNTS of high-quality textures (dozens of GBs), ridiculously complex and taxing graphics techniques, and the movement of MASSIVE amounts of data.

These things would slow down even the most powerful of today's GPUs, but as we all know, even if GPUs were used, these movies would not use today's greatest (for they precede their arrival). GPUs of yesteryear are exponentially weaker, with far less local VRAM, fewer features, and lower perf the farther back you go… and just as CPUs' cumulative real-world perf is brought down to a fraction, the distributed nature of these endeavors also affects the GPUs' real-world perf; sharing the data across the network slows things down significantly…

Furthermore, the handling of ridiculous amounts of textures (even for a single character) and data means that much of it might have to come all the way from the HDD, and in any case it still has to go through ALL those tiny bottlenecked buses on the PC, and you very well know what that means… as we all know, you're only as strong as your weakest link… and it will take QUITE A SIGNIFICANT AMOUNT of time to move it all the way to the processing elements…
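A rough back-of-the-envelope illustrates the weakest-link point; all the throughput figures below are round assumptions, not measurements of any particular render farm:

```python
# Rough illustration of the "weakest link" point. All figures are round assumptions
# for the sake of the arithmetic, not measured numbers.
TEXTURE_SET_GB = 20.0          # "dozens of GBs" -- use 20 GB as a stand-in

# Assumed sustained throughputs along the path, in GB/s:
links = {
    "HDD":           0.05,     # ~50 MB/s spinning disk
    "network share": 0.1,      # ~1 Gbit/s ethernet, shared
    "PCI/AGP bus":   1.0,
    "local VRAM":    20.0,
}

# End to end, you wait on the slowest stage, not the fastest:
bottleneck = min(links.values())
print(f"best case, data already in VRAM: {TEXTURE_SET_GB / links['local VRAM']:.0f} s")
print(f"through the whole chain: ~{TEXTURE_SET_GB / bottleneck / 60:.0f} minutes")
```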

The device that's mentioned in the patent does not have to deal with these factors, for its output won't be blown up on a LARGE THEATER screen. This means the textures, resolution, precision, and graphics techniques (hacks/tricks for lighting/reflection/refraction/etc. that yield similar results but are far less taxing can be used), as well as the physics/animation, do not have to be as large, precise, or taxing either…

Further, dealing with far lower amounts of data, rez, texture info, etc. means it can always be kept close at a moment's notice, and with orders of magnitude more bandwidth, and orders of magnitude less distance between the processing elements that will share all of this data, it means that…

IOW, freed from these bottlenecks, freed from these ridiculous requirements, freed from this ridiculous latency, what is in the patent could very well perform its tasks orders of magnitude faster than these render farms… and these are not render farms from yesteryear; these are state-of-the-art render farms designed for the next gen of upcoming Hollywood CG…

I think a small quote is in order… from someone who knows what he’s talking about…
"Miracle machine"
 
Vince said:
DaveBaumann said:
ATI and NVIDIA are used more and more frequently in studios these days. In fact ATI had a hand in rendering in LotR:tTT

http://web.umr.edu/~dsistler/articles/computerrelated/lotr-cgw.pdf

http://www.tafensw.edu.au/summerschools/showcase/newcastle/ict/groupwork/movies/movies.htm

ATI demonstrated the Massive engine rendering in realtime at SIGGRAPH 2002, and I assume that this was used to accelerate production, which is why they cite LotR:TT as one of the films that have used ATI hardware in production. NVIDIA also got in on the act at this year's SIGGRAPH, and I dare say that we'll see more of this type of thing, especially now that SGI is making visualisation systems based on ATI chips.
 
ATI rendered RoTR, nVidia did FF:TSW, Sony did FF:TSW and Antz. Again, this doesn't address where in the development pipeline these ICs are being utilized. For example:

"Programmable NVIDIA graphics solutions enable a more interactive production workflow so visual effects artists are able to review their imagery more frequently," said Cliff Plumer, chief technology officer at ILM

I have a hard time believing that Weta did a final render on ATI hardware and didn't even mention it. But I do believe that they did it on their 20,000+ CPU Linux clusters (of which they have several). Call me crazy, but I don't think they used TRUFORM in the movie render.
 
Vince, I think you accidentally added a zero to that figure... The most I've heard is 2000+, the largest in the world... they recently bought 5xx dual Xeon 2.8 GHz servers...

Here's a quote:
"Peter Jackson's special-effects company Weta Digital has just taken delivery of 588 IBM blade servers, each with two 2.8 gigahertz Intel Xeon processors. Seven racks of IBM blade servers have been added to Weta's existing 15-rack server cluster to make up the largest Intel-based high- performance computer site in the world with more than 2000 linked processors. The cluster will be used to render the frames drawn by the animators to complete the final installment of The Lord of the Rings trilogy, The Return of the King."

Remember, in the real world they also only reach a fraction of their theoretical potential...
 
"The amount of material we have to move over the network every day is pretty extraordinary," says Jon Labrie, CTO at Weta.

Very well said my friend...

A quote from another article... this one describes the original cluster:
When artists are ready to turn their work into a movie scene, they send jobs to the "rendering wall," a cluster of 400 Linux processors, or 200 dual 1-GHz Pentium servers. The rendering wall combines computer-animated and live-action elements into digital movie files.

As you can see, the new one, for RotK, is most likely over an order of magnitude more powerful than their previous one...
http://www.nzherald.co.nz/storydisp...section=business&thesubsection=technology
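For the curious, the rough arithmetic behind that "order of magnitude" claim, using only the figures quoted above (clock rate times processor count is a crude proxy, not a benchmark):

```python
# Back-of-the-envelope comparison of the two Weta clusters quoted above.
# 588 new dual-Xeon blades = 1176 CPUs added to the existing 15-rack cluster,
# for "more than 2000 linked processors" in total.
old_procs, old_ghz = 400, 1.0        # original "rendering wall": 200 dual 1 GHz servers
new_procs, new_ghz = 2000, 2.8       # new cluster: 2000+ processors at 2.8 GHz

old_total = old_procs * old_ghz
new_total = new_procs * new_ghz
print(f"old: {old_total:.0f} GHz aggregate, new: {new_total:.0f} GHz aggregate")
print(f"ratio: ~{new_total / old_total:.0f}x")   # ~14x, i.e. over an order of magnitude
```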
 