Official: ATI in XBox Next

nonamer said:
Acting tough won't get you far Grall.

Rofl! What's that supposed to mean, are you challenging me to 'step outside' or something? :LOL: Stop being so dramatic, dude, you look more like a drama-queen throwing a hissy-fit when you act that way.

Your whole argument of "Nothing in the PC world is going to come near Cell in the next five years or so" is simply false. GPUs are where most of the improvements in graphics will happen in the future, not CPUs.

Again you're comparing apples to oranges! I wasn't talking about GRAPHICS when I said that! You can't take things out of context like that. I was talking about processor performance, and I am sure that nothing even approaching one quarter teraflops (just to be on the safe side) will be released within the next five years; there simply isn't a requirement for floating-point performance that high in the PC realm.

...And if Cell hits the Tera limit... Woe to all x86 chips I say.

Your claim that no PC CPU could match the PS3 CPU in FP calculations in five years may be true, but not for the whole system.

I wasn't talking about a whole system. Anyway, we know nothing about the PS3's graphics capabilities. If those are as extensive as the CPU's alleged floating-point performance (assuming 1Tflop performance), PCs would be put in a real spot.

Obviously the PS3 will have a GPU, but it won't save it from a PC with a graphics card far more powerful than its GPU.

This is assuming the PS3's GPU will be easily surpassed by a PC chip. Maybe that is not the case; ever thought of that?

"Save it" is a strange choice of words btw, since how do you define "save it" anyway? Just extremely flashy graphics, or very very good graphics with superior physics and realism?

Assume you wanted to do a game where you wreck things a la Blast Corps. With 1Tflop of performance, you could do some seriously advanced stuff when it comes to collapsing buildings! A PC GPU would not be nearly as effective in this scenario. You can't do true physics on the GPU; the game engine needs to keep track of what is happening with all the debris flying around, or it would just be pretty bits of decorative pixels. Hence you need CPU oomph.

Cell provides that.
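
Just to make the debris point concrete, here's a toy sketch (purely illustrative, with made-up numbers and nobody's actual engine code) of the per-fragment bookkeeping the CPU ends up doing every frame so that gameplay can actually react to the wreckage:

[code]
# Illustrative only: a toy CPU-side debris update, not any real engine's code.
# The point is that gameplay (damage, triggers, collision queries) has to read
# the simulated state back every frame, so the debris can't just be
# "decorative pixels" fired off to the GPU and forgotten.

import random

GRAVITY = -9.81
DT = 1.0 / 60.0                      # one 60 Hz frame

def spawn_debris(n, origin=(0.0, 10.0, 0.0)):
    """Create n fragments with random velocities at a collapse point."""
    return [{
        "pos": list(origin),
        "vel": [random.uniform(-5, 5), random.uniform(0, 10), random.uniform(-5, 5)],
        "settled": False,
    } for _ in range(n)]

def step(fragments):
    """Integrate one frame; return the fragments that hit the ground this frame."""
    landed = []
    for f in fragments:
        if f["settled"]:
            continue
        f["vel"][1] += GRAVITY * DT
        for i in range(3):
            f["pos"][i] += f["vel"][i] * DT
        if f["pos"][1] <= 0.0:       # ground plane at y = 0
            f["pos"][1] = 0.0
            f["settled"] = True
            landed.append(f)
    return landed

debris = spawn_debris(10_000)
for frame in range(120):             # two seconds of wreckage
    hits = step(debris)
    if hits:                         # gameplay reacts to the physics state
        print(f"frame {frame}: {len(hits)} fragments came to rest")
[/code]

Simple ballistics for ten thousand fragments is cheap; it's the contacts and interactions between all those pieces that eat CPU, and that's exactly the kind of headroom being argued about here.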


*G*
 
"Too long" only matters if it's talking about nothing. We've been switching on and off randomly for a while now, but there's always something to talk about. ^_^
 
DaveBaumann said:
I said they were used in the production pipeline - not that they produce the final image. At the moment they are talking about acceleration of pre-production, which can dramatically increase the workload output (and have a direct effect on the final rendered image).

To me, this comment is like someone saying, "I hear the Unreal Engine is used by Spielberg for his movies" in the middle of a debate about programs like PRman or Entropy or a similar renderer.

Is it used in the development pipeline in a similar way in pre-production? Sure. In the future will 3D gaming engines scale upwards and have a bigger part? Most likely. Does it have anything to do with the current or near-future topic of the debate? Nope.

Which is why I responded how I did.
 
Phil said:
To add my thoughts on the subject: while I do believe CELL will be unmatched in certain areas for years to come, in others, I am sure it won't take too long to gain similar results on latest PC-tech.

Not so much a disagreement, just using the post as a jumping off point:

NOTHING on silicon is "unmatched in ... for years to come". There has been no grand paradigm shift. The PS3 will be very powerful when it comes out. Five years later it will be relatively weak.

Of course this is just my prediction after consulting the magic eight ball. If I'm wrong please do point it out to me 5 years from now ;)
 
No direct replies to my post?

I just brought up the fact that the WORLD's MOST POWERFUL supah computah's perf. hovers around 30-60ish%, and that most other supah computahs go far below this, 10-40%ish (thanks to pcengine for that number)... and that many of these machines use the fastest, widest, and best interconnects/RAM/etc., some custom designed for this very purpose...

Then I brought up the fact that the world's most powerful render farm is composed of some 2000ish CPUs, each capable of a few GFLOPS, but in reality it will only reach a really, really small portion of that potential...

Furthermore, the ridiculously large amounts of data being moved around, and into the processing elements, take ridiculously long to get through all of these bottlenecked busses, connections, etc...
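
To put rough numbers on that (back-of-envelope only, using the ballpark figures above rather than any real machine's specs):

[code]
# Back-of-envelope only: the figures are the rough ones quoted above
# (~2000 CPUs, a few GFLOPS each, low sustained efficiency), not
# measurements of any particular machine.

def sustained_tflops(num_cpus, gflops_per_cpu, efficiency):
    """Sustained throughput in TFLOPS for a cluster at a given efficiency."""
    return num_cpus * gflops_per_cpu * efficiency / 1000.0

# A hypothetical render farm at the kinds of efficiencies mentioned above.
for eff in (0.10, 0.25, 0.40):
    print(f"2000 CPUs x 3 GFLOPS at {eff:.0%} -> "
          f"{sustained_tflops(2000, 3.0, eff):.2f} TFLOPS sustained")

# A single 1 TFLOPS-peak chip, at a couple of purely assumed efficiencies.
for eff in (0.30, 0.60):
    print(f"1 TFLOPS peak at {eff:.0%} -> {1.0 * eff:.2f} TFLOPS sustained")
[/code]

Even at the low end, a single teraflop-class chip lands in the same sustained-throughput neighbourhood as that whole hypothetical farm.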
 
No direct replies to my post?

I just brought up the fact that the WORLD's MOST POWERFUL supah computah's perf. hovers around 30-60ish%, and that most other supah computahs go far below this, 10-40%ish (thanks to pcengine for that number)... and that many of these machines use the fastest, widest, and best interconnects/RAM/etc., some custom designed for this very purpose...

Then I brought up the fact that the world's most powerful render farm is composed of some 2000ish CPUs, each capable of a few GFLOPS, but in reality it will only reach a really, really small portion of that potential...

Furthermore, the ridiculously large amounts of data being moved around, and into the processing elements, take ridiculously long to get through all of these bottlenecked busses, connections, etc...

And Cell tries to address these issues. The question is whether it does anything particularly different to address them. Are they going to use any new programming paradigms -- stuff that current massively parallel systems don't use already -- which will give it some sort of advantage?

Additionally, in the context of a typical PS3 workload, what sort of performance realisations will be made? Some things will work out easily on Cell; however, how do you address the parts that don't, and how effective will those solutions be? I'm not sure Cell will break any new ground in the software architecture department; ultimately, I'm sure much of what it uses will have been fairly well explored by academia or by companies that deal with super computers, or even super computahs. ;)
 
Vince said:
Is it used in the development pipeline in a similar way in pre-production? Sure. In the future will 3D gaming engines scale upwards and have a bigger part? Most likely. Does it have anything to do with the current or near-future topic of the debate? Nope.

And visualisation systems such as SGI's won't?
 
Are they going to use any new programming paradigms -- stuff that current massively parallel systems don't use already -- which will give it some sort of advantage?

Well, as I said, the Cell ain't spread across an entire facility and connected with cheap wiring. Nor is it required to render at ridiculously high resolutions or push GBs of texture data, nor is it starved by cheapo PC-ish busses.

All processing elements and memory are RIGHT THERE, and the amounts of data and textures ARE FAR SMALLER (and the bandwidth to the processing elements is ORDERS of magnitude larger), since its graphics won't be displayed on a gargantuan theater screen. The physics, AI, lighting, etc. don't have to be as precise either; hacks/tricks that yield similar results but are far less taxing can be used instead.

What I mean is these render farms have real-world perf similar to what could be achieved with what's described in the patent, but unlike it, their processing elements are spread across entire rooms, joined by kilometers of cheapo wires, and PC-ish bandwidth bottlenecks abound between the local RAM and the local processors... IOW, data is moved around orders of magnitude slower, and it is orders of magnitude larger, than the data that will be used for PS3.
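
To illustrate the scale of that gap, a quick sketch with made-up but era-plausible link speeds (none of these figures come from the patent or from any actual farm; the 256MB working set is just a placeholder):

[code]
# Illustrative numbers only; nothing here is from the patent or from any
# specific render farm. The point is the relative scale, not the exact values.

def transfer_ms(megabytes, bandwidth_mb_per_s):
    """Milliseconds needed to move a block of data over a link."""
    return megabytes / bandwidth_mb_per_s * 1000.0

scene_mb = 256.0   # hypothetical per-frame working set

links = {
    "100 Mbit Ethernet (~12 MB/s)":              12.0,
    "Gigabit Ethernet (~120 MB/s)":              120.0,
    "PC front-side bus (~4,000 MB/s)":           4_000.0,
    "On-chip/local bus (~25,000 MB/s, assumed)": 25_000.0,
}

for name, bw in links.items():
    print(f"{name:45s} {transfer_ms(scene_mb, bw):9.2f} ms for {scene_mb:.0f} MB")
[/code]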

PS: not to mention, these render farms are severalfold (some even an order of magnitude) more powerful than those used for the likes of Toy Story and Monsters, Inc...

 
DaveBaumann said:
And visualisation systems such as SGI's won't?

I give up - the wall is calling. This is very simple. Neither ATI nor nVidia rendered any movie that I, or you, saw on the big screen. What they did render was a hacked-down version of the real movie, for PR value and some early pre-production work.

This is just like what Spielberg uses the Unreal Engine for during his preproduction. But, just like Unreal - it's not used in the actual rendering, nor is it even close to being indicative of the future performance when a "dedicated IC" is rendering an actual movie (as opposed to what you're talking about).

Thus, when people here talk about current high-end production-quality rendering that is done on Linux clusters of CPUs - ATI's and nVidia's comments are just PR and have no bearing on this.

You stating that, what was that line, "In fact ATI had a hand in rendering in LotR:tTT" - that's like saying the automobiles driven by the modelers and programmers "had a hand in rendering LotR:tTT" because they got the employees to the office faster, and thus made them more productive, and thus got the final rendering on the CPU farm done that much faster. Both yielded increased productivity, and both had nothing to do with the final rendering. This is fact.

I could connect Kevin Bacon to the rendering quicker than I can how ATI's Radeon influenced the final rendering. ;)
 
Vince said:
DaveBaumann said:
And visualisation systems such as SGI's won't?

I give up - the wall is calling. This is very simple.

Vince - you're the brick wall; we've already been there - the question is, and you didn't answer, do you feel that Visualisation systems such as SGI's won't be used for broadcast/film media content output?
 
DaveBaumann said:
Vince - you're the brick wall; we've already been there - the question is, and you didn't answer, do you feel that Visualisation systems such as SGI's won't be used for broadcast/film media content output?

Wow, this is futile beyond belief. I already answered this if you think about it:

Vince said:
To me, this comment is like someone saying, "I hear the Unreal Engine is used by Spielberg for his movies" in the middle of a debate about programs like PRman or Entropy or a similar renderer.

Is it used in the development pipeline in a similar way in pre-production? Sure. In the future will 3D gaming engines scale upwards and have a bigger part? Most likely. Does it have anything to do with the current or near-future topic of the debate? Nope.

I already paralleled this and talked about the future. Perhaps I need to explicitly state things nowadays, since this is apparently way too obscure.

Now, care to explain what "hand" ATI had in rendering of LOTR:TT? I bet it had something to do with that Truform :rolleyes:
 
Now, care to explain what "hand" ATI had in rendering of LOTR:TT? I bet it had something to do with that Truform

Vince, again, we've already been there.

I already answered this if you think about it:

I thought that was a response to the notion of FireGLs or Quadros being used, not the SGI Visualisation system I was talking about, whose target market SGI states as:

"Onyx4 UltimateVision is the emerging choice for high-end TV and film production and post-production, real-time broadcast effects, theme parks, and real-time processing of high-definition satellite images. Graphics and digital media-optimized processing, together with an architecture created for high throughput, allows users to work with high-definition uncompressed video- or film-resolution images, create multilayered 2D and 3D effects, and edit simultaneous streams of standard- or high-definition video in real time."
 
DaveBaumann said:
Vince, again, we've already been there.

Ok, since you're obviously moving the conversation away from how you said ATI had a "hand" in rendering LOTR:TT (which is blatantly false), I really don't need to continue this.


I thought that was a response to the notion of FireGLs or Quadros being used, not the SGI Visualisation system I was talking about, whose target market SGI states as:

"Onyx4 UltimateVision is the emerging choice for high-end TV and film production and post-production, real-time broadcast effects, theme parks, and real-time processing of high-definition satellite images. Graphics and digital media-optimized processing, together with an architecture created for high throughput, allows users to work with high-definition uncompressed video- or film-resolution images, create multilayered 2D and 3D effects, and edit simultaneous streams of standard- or high-definition video in real time."

Just a comment on this. This is never going to replace a Renderfarm running PRman in this current incarnation. I have no idea why you're even posting this, although I have my thoughts. This is far more akin to an advanced Quantum3D-type device targeted at mid-range 3D (e.g. between content houses like Pixar and the home user).

I mean, if you still have any slim idea that this isn't a Quantum3D-style device and that it's going to render the next Pixar movie, look at the successes:

http://www.sgi.com/visualization/onyx4/success.html

Hell, I thought I was looking at Quantum3D's website. All industrial, aerospace, simulation, defense, medicine... where's Pixar and LOTR:TT?

And the funny part: you go to all the trouble of bolding this part, and they don't even have it listed below, where they have the following uses (which it's much more suited for) expanded upon:

http://www.sgi.com/visualization/onyx4/media.html

What are you trying to prove or say? ATI hardware (and for the nVidia haters - neither has nVidia) has not rendered a Pixar-caliber movie, nor will it with the current parts on the market - how they had "a hand" in "rendering" the LoTR:tTT movies I'm still trying to figure out. And this part you keep linking to is never going to replace a Linux renderfarm, just as Q3D's parts have yet to at any comparable period in 3D.
 
Vince said:
DaveBaumann said:
Vince, again, we've already been there.

Ok, since you're obviously moving the conversation away from how you said ATI had a "hand" in rendering LOTR:TT (which is blatantly false), I really don't need to continue this.

Vince, look at the previous page, I've already said:

DaveBaumann said:
I said they were used in the production pipeline - not that they produce the final image. At the moment they are talking about acceleration of pre-production, which can dramatically increase the workload output (and have a direct effect on the final rendered image).

How many times do you want to cover the same ground?

This is never going to replace a Renderfarm running PRman in this current incarnation. I have no idea why you're even posting this, although I have my thoughts. This is far more akin to an advanced Quantum3D-type device targeted at mid-range 3D (e.g. between content houses like Pixar and the home user).

Ultimately this type of system probably will replace renderfarms, since they are just renderfarms of a different kind. The parallels with Quantum may be there in that this is a scalable system; however, Quantum is still pretty much based in the fixed-function era and has no programmability, floating-point internal precision, or floating-point input or output, which this does.

However, you've answered the question: you don't believe that this will be used in media/film production -- despite the fact that it's clearly one of the markets SGI is aiming it at.
 
Vince, I understood Dave’s post and subsequent responses to mean that they used the SGI workstation for creating and proofing scenes. Nothing more, nothing less. I don’t see why this point is escaping you. Dave nowhere suggests that it is used to render the final images.
 
DaveBaumann said:
I said they were used in the production pipeline - not that they produce the final image. At the moment they are talking about acceleration of pre-production, which can dramatically increase the workload output (and have a direct effect on the final rendered image).

How many times do you want to cover the same ground?

Then don't say ATI had "a hand" in rendering a movie? If you haven't caught on, he was talking about current high-end renderers that use CPUs. And you responded with a comment about how ATI had a hand in rendering LOTR - which is as false as it gets.

And even what it rendered was a cut-down hack, like nVidia's FF:TSW demo. How does that compare to what he was discussing? It's a horrible parallel because (a) it's false, and (b) they didn't render the same thing - the frickin' GSCube was more accurate in what it rendered.

Ultimately this type of system probably will replace renderfarms, since they are just renderfarms of a different kind. The parallels with Quantum may be there in that this is a scalable system; however, Quantum is still pretty much based in the fixed-function era and has no programmability, floating-point internal precision, or floating-point input or output, which this does.

Wow, you're killing me here. Tell me if these solutions have some overlap in their market, ok?

From http://www.sgi.com/visualization/onyx4/success.html:

Creating a New Air Defense System for the 21st Century
The AADC Capability system allows military commanders to monitor the action in real time, using wide-screen, high-definition displays - supplied by SGI - that show the battlespace three-dimensionally. [Full Story] (PDF 104K)

University of Utah's SCI Institute
Much of the SCI Institute's work has been aimed at developing technology that enables engineers and scientists to visualize extremely large data sets. The shared-memory architecture of SGI is a powerful environment for the Institute's ray-tracing software. [Full Story] (PDF 313K)

Saab Aerospace
SGI helped Saab create the fourth-generation, first all-digital fighter aircraft - the Gripen - and do so under strict Swedish government cost constraints. [Full Story] (PDF 353K)


From Quantum3D.com:

SUCCESS STORIES

LOCKHEED MARTIN:
United Kingdom Combined Arms Tactical Trainer (UK CATT) Program

. . . The UK CATT Program enables precision training of the British Army’s personnel, from armored and infantry crews to battle groups with reconnaissance, aviation, artillery, engineering, and air-defense elements, collaborating in a wide variety of tactical situations.


ADACEL INCORPORATED:
United States Air Force (USAF) Tower Simulation System (TSS) Program


. . . Around the world, military and civilian aviation authorities have recognized the importance of providing initial and recurring training for Air Traffic Control (ATC) personnel, employing superior quality synthetic environments to familiarize and prepare ATC personnel for standard, high stress, and emergency situations.


AMRDEC:
Avenger Trainer System Upgraded to COTS-Based Hardware and PC Software


. . . CG2, Inc. and the Aviation and Missile Research Development Engineering Center (AMRDEC), in a collaborative engineering effort, have upgraded the Avenger Training System. Together, this development team ported the existing SGI IRIX-based Avenger Table Top Trainer (TTT) to the low-cost Windows 2000 Platform running on a commercial-off-the-shelf (COTs) PC.


I mean, compare these:



However, you've answered the question: you don't believe that this will be used in media/film production -- despite the fact that it's clearly one of the markets SGI is aiming it at.

Um, yes. I said:

Vince said:
Just a comment on this. This is never going to replace a Renderfarm running PRman in this current incarnation

What are you trying to prove or say? ATI hardware (and for the nVidia haters - neither has nVidia) has not rendered a Pixar-caliber movie, nor will it with the current parts on the market - how they had "a hand" in "rendering" the LoTR:tTT movies I'm still trying to figure out.
 
Then don't say ATI had "a hand" in rendering a movie? If you haven't caught on, he was talking about current high-end renderers that use CPUs. And you responded with a comment about how ATI had a hand in rendering LOTR - which is as false as it gets.

Christ Vince - I said they were being used in production houses more and more, which they are. OK, "a hand in rendering" may have been the wrong term in this context, but I've said time and time again they are used in preproduction (which does have an effect on the final rendered scene one way or the other), but no, you be as obstinate as you like and continue to harp on about it. :rolleyes:

Wow, you're killing me here. Tell me if these solutions have some overlap in their market, ok?

Bravo, I'm amazed you can actually see a pattern - unfortunately it appears that you can't spot the differences as well, or understand why they are important.
 
I get the feeling I didn’t express myself clearly... I’m not saying upcoming Hollywood cg is going to be done realtime.

What I’m saying is that as we go back a few years, the bandwidth bottlenecks grow exponentially more significant, the amount of RAM becomes significantly smaller, more troublesome, and FAR slower, the connections across the distributed solution grow far slower… and the processing power also goes down exponentially…

Given the fact that the perf of what’s described in the patent is similar to modern-day render farms, but without the bottlenecks, latency, or ridiculous requirements (supah high rez, supah prec, supah textures, etc.), all of these factors add up… the orders-of-magnitude differences in speed, perf, bandwidth, etc. add up…

"This 2.8GHz Intel Xeon processor with 533MHz front-side bus speed helps deliver processing power, throughput and headroom for peak performance to your application-intensive blade servers."

"533MHz system bus transfers information from the processor to the rest of the system at a rate up to four times faster than the 133MHz system bus used on Pentium III processors."

Wow, that's amazing...

"Blade servers come with up to 2 40GB ATA 100 HDDs at 5400rpm (other higher-perf. HDDs can be added)."

Well, with all that ridiculous amount of data moving around... it's good to know we have impressive specs...

"The I/O expansion option adds dual-port FC connectivity at up to 1.2 Gb/s…"

Wow, impressive...
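
For what it's worth, here's a rough bytes-per-FLOP sketch of those specs (peak figures only, and the FLOPs-per-cycle number is my own assumption for that class of Xeon), just to show how little data that bus can actually feed the chip:

[code]
# Peak, back-of-envelope figures only; sustained numbers are lower still.
# The FLOPs-per-cycle figure is an assumption about that class of Xeon.

fsb_hz = 533e6                      # quoted front-side bus clock
fsb_width_bytes = 8                 # 64-bit data bus
fsb_bw = fsb_hz * fsb_width_bytes   # peak bytes/s the bus can move

xeon_hz = 2.8e9
flops_per_cycle = 4                 # assumed peak: 4 single-precision FLOPs/cycle
xeon_peak_flops = xeon_hz * flops_per_cycle

print(f"FSB peak bandwidth    : {fsb_bw / 1e9:.2f} GB/s")
print(f"CPU peak throughput   : {xeon_peak_flops / 1e9:.1f} GFLOPS")
print(f"Bytes per FLOP at peak: {fsb_bw / xeon_peak_flops:.2f}")

# The optional 1.2 Gb/s Fibre Channel link is far narrower still.
fc_bw = 1.2e9 / 8                   # bits/s -> bytes/s
print(f"1.2 Gb/s FC link      : {fc_bw / 1e9:.3f} GB/s "
      f"({fc_bw / xeon_peak_flops:.3f} bytes per FLOP)")
[/code]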
"Onyx4 UltimateVision is the emerging choice for high-end TV and film production and post-production, real-time broadcast effects, theme parks, and real-time processing of high-definition satellite images. Graphics and digital media-optimized processing, together with an architecture created for high throughput, allows users to work with high-definition uncompressed video- or film-resolution images, create multilayered 2D and 3D effects, and edit simultaneous streams of standard- or high-definition video in real time."
... sales are gonna come down once ps3 is unveiled...

As for the real-time...
Beholdeth

Say, since I've heard some frames took about an hour... let's see: 60 min originally, aka Fellowship... one order of magnitude perf increase (for the RotK modern render farm)... 6 min... no gpus? 1-3 secs... no gigantic bottlenecks, with giga-textures, giga rez... 0.0333 sec... hmmmm...
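
Here's that chain written out, just so the arithmetic is explicit (every factor is a guess implied by the numbers above, nothing measured):

[code]
# Just the arithmetic above made explicit. Every factor is a guess implied
# by the numbers in the post, not a measured figure.

seconds_per_frame = 60 * 60         # reportedly ~1 hour/frame for Fellowship

steps = [
    ("RotK-era farm, ~10x the raw power (guess)",       10.0),
    ("the 'gpus' step implied above (guess)",           200.0),
    ("the 'no bottlenecks' step implied above (guess)", 50.0),
]

t = float(seconds_per_frame)
print(f"start: {t:10.4f} s/frame")
for label, speedup in steps:
    t /= speedup
    print(f"after {label:50s}: {t:10.4f} s/frame")
[/code]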

Let's go down to old CG... say the Toy Story 2 systems; they must've been severalfold weaker than what was used for Fellowship...



 