How can Nvidia be ahead of ATI but 360 GPU is on par with RSX?

Status
Not open for further replies.
zidane1strife said:
Even if MGS4 doesn't come out next year, a demo could actually make it next year.
Late next year, with the game showing up in Q1 2007. With hardware as complex as the PS3 and Xbox 360, don't expect epic blockbuster titles to get the standard 18-month treatment of the rest of the cookie-cutter games. It's not unusual for these games to take 24-36 months to complete. Compare the gestation period of MGS2, GT3 and other games with new engines and non-existent assets. I can almost guarantee that we won't see MGS4 until 2007. I'll say the same thing about Halo 3. If you're expecting Halo 3 to show up Fall 2006, you're in for a rude awakening.
Subsurface scattering, significant amounts of deformable, self-shadowing facial skin creases, HDR, a very complex high-poly model akin to what you'd see in a fighting game. Round gun nozzles, realistic smoke, high numbers of very detailed self-shadowed characters (basically 3 or more high-detail characters in some scenes, e.g. Otacon, the robot, Solid). 60fps? Some say it's 1080p, dunno if that's true either. MGS4 is not even on a final kit. The textures may be weak if you consider the environments (it's not the only next-gen title with that issue, and it's unknown if they're placeholders), but the Snake model itself has very detailed textures (which can be seen when, for example, you get a close-up of the fingers, face, etc. in HD).
No A.I. and no game logic, along with dramatic camera angles and depth-of-field effects to mask things like poor texture quality, etc. I liken the MGS4 movie to a Burger King commercial. The burgers look damn good, but if you tried to eat the burgers they show you on TV you'd die, because they aren't real. They are spray-painted and use artificial materials such as plastic and all manner of chemicals to make the product look pretty and shiny. That's all you saw with MGS4. In my opinion, Killzone got me more excited. The Killzone target renders had MUCH, MUCH more going on as far as explosions and such. So it's not the best non-playable demonstration of the PlayStation 3's supposed abilities that I've seen. The demo was beautiful, but there's nothing I could point to and say, "Wow! I haven't seen that before!"
That's the thing: we're told current X360 assets weren't designed with Xenos in mind, but with the X800-X850 in mind, and that it's easy to port/dev for the X360 platform. Yet we're seeing several devs seemingly aiming for 30fps, and some are said to be aiming to lower AA to 2x rather than 4x (I've not read the particular interview/article where the AA comment was made) to increase performance. Something doesn't add up.
You act like devs can just push a magic button and make games run faster, add more polygons and add more effects. It doesn't work like that. In terms of architecture the alpha kits are 180 degrees different than the beta and final kits. Performance doesn't always scale linearly (is that a word?) with processor speed. In fact, the performance may even drop in some cases! We're talking an entire architectural shift going from alpha to beta.

It's not like PS3 devs, who showed "games" developed on 6800s in SLI or a 7800 GTX (in the case of the latter, that gives them roughly 75% of PS3 performance). In fact, the most impressive Xbox 360 game at E3 was Gears of War. It looked leagues above everything else. We wondered why. We soon learned that the game wasn't developed using alpha kits, but rather using *gasp* 6800s in SLI. Epic did this because they wanted immediate results and they wanted to reach around 80% of their target performance right away. The best looking Xbox 360 games you will see are the games that started YESTERDAY... not two years ago. I think some of you guys are being unfair and just downright unreasonable. I mean, Gears of War is the proof in the pudding as far as what happened.
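The non-linear scaling point above is essentially Amdahl's law applied to clock speed: if part of a frame is memory- or bus-bound, only the compute-bound fraction gets faster with a higher clock. A toy sketch of that, with purely illustrative numbers (not measurements from any actual kit):

```python
def effective_speedup(clock_ratio, compute_bound_frac):
    """Toy model: only the compute-bound fraction of frame time
    scales with clock speed; memory/bus-bound work stays fixed."""
    return 1.0 / (compute_bound_frac / clock_ratio
                  + (1.0 - compute_bound_frac))

# A 2x clock bump on a workload that is only 60% compute-bound
# yields well under a 2x frame-rate gain (about 1.43x here).
print(effective_speedup(2.0, 0.6))
```

That is why an alpha-to-beta kit transition can even regress performance: the architectural shift changes which fraction is compute-bound in the first place.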
 
Titanio said:
It's not a defense, it just happens that it can cover some concerns raised re. RSX - and then a whole lot more too. When you have a CPU sitting at the end of a very fat pipe that happens to be very good at "graphics", it's difficult to ignore. Its potential is a lot greater than meeting some checklist.

This harks back to the debate over context, and discussing these things as systems. Personally I don't think you can meaningfully discuss any one component in complete isolation in these systems. They do tend to stand tall on their own, thank goodness, but that's useless without context.

I fully agree with this
 
Alpha_Spartan said:
along with dramatic camera angles and depth-of-field effects to mask things like poor texture quality, etc.
Hey, whoa, hey! Hold your horses, mister! Just skimming your post (MGS discussions are so last week IMO ;)) I have to pull you up on this one. If you have DOF, what's the point of using high-res textures in the background? That's a waste of resources. And you want DOF for 'realism', even if you've got infinitely high-res textures. Claiming it's an effect just to disguise low-grade graphics is kinda 'out there'.
 
hadareud said:
That was what I was trying to say. It just seemed to me that a few people used the cell as a "defense" for RSX (not that it needs defending, it seems a very capable GPU) ...

Apart from the plain stuff, the Cell helps feed the GPU with things which weren't possible with the GPU alone. It doesn't have to be graphics, but the little things that help in perfecting what the GPU renders: stuff such as real-time ray tracing, light-source calculation, transmission and dynamics. It's supposed to complement the RSX in things not possible with the GPU, not that the RSX couldn't do most of the job. Things that even the Xenos could never do.
 
scooby_dooby said:
ya that's all there's ever been, a very vague "we have been working with..." statement.

A lot of people have taken that as proof that the RSX has been in development for 2-3 years.
It prolly means that Sony asked Nvidia for help with OpenGL. Or something else similarly mundane.

Jawed
 
mckmas8808 said:
OMG!! :mad: Please don't ever, ever, ever say that again here. B3D is way too smart to actually believe that. We know that when taped out the RSX would have been in development for 3 years!! Stop with the TXB put-down-Sony hype. The buck stops here.

"We know"????? We don't know. For all intents and purposes, we know that it was a late move. The only thing that suggests otherwise is a one-off snippet in a PR blurb that mentions that they had been working together for 2 years. Given the things I've worked on together with people in the past, that could mean anything from Nvidia trying to sell Sony stuff for 2 years, to Sony wanting to work with the NV tools, to wanting a patent cross-license, to that thing that time with that guy, etc.

The bare reality is that the RSX is at the very least HEAVILY based on G70. The more likely scenario is that the RSX is pretty much a G70 with a different interface on a totally new and foreign process. Depending on process library commonality, you are looking at anywhere from 6 months to 2 years just to port the design to the new process.

I know you want the PS3 to succeed and be totally kick-ass and the greatest thing since sliced bread, but tone it down and take a step back.

Aaron Spink
speaking for myself inc
 
Titanio said:
I'm sorry, but you seriously think Sony is reserving certain resources for certain areas of a game? What?
I'm saying that perhaps Sony's tools have an "ideal" way of doing things and any deviation by developers would lead to unexpected or undesirable results. With the PSX developers had to use what Sony gave them.
Why would it be a majority? Why would one or two SPUs used for graphics, for example, mean the majority of its power was being spent on graphics?
I didn't make any definitive comments about Cell's implementation. The only definitive comments I made were in regard to the Emotion Engine. I don't believe Sony showed the Cell-only renders at E3 for nothing. It's still the star of the show, and I really believe that Sony wants to use it for rendering like they originally planned, albeit in a reduced role.
The assumption that SPUs do one thing and never do anything else is also very prevalent and misleading btw.
Are you attributing that assertion to me, because I sure didn't make it.
 
hadareud said:
That was what I was trying to say. It just seemed to me that a few people used the cell as a "defense" for RSX (not that it needs defending, it seems a very capable GPU) ...
If Sony hadn't shown those Cell renders (i.e. The Getaway), then I doubt people would be taking this angle. I for one, don't believe Sony showed them in vain. Maybe that's why the RSX is taking a while to develop. Perhaps they have a relationship that none of us really anticipated.
 
Shifty Geezer said:
How much effort is it to do a process shrink of an existing chip? My uneducated assumption is that they have the layout of transistors, this layout isn't going to change, so they only need to redo the masks. Like I can type a document in Word and then shrink it if I want, changing the font size. Surely they use computers to create their layout for the chips, then connect them to a 'printer' to create the masks needed for fabrication. That's what they should do anyway!

The design is moving from a 110 nm process at one manufacturer to a 90 nm process at another manufacturer.

So let's walk through it.

What are the two standard cell libraries like? Are the aspect ratios the same, do they contain the same cells, do we have any custom cells that require porting?

How do the metal stacks compare? Are there more or fewer metal layers in the 90 nm process? Do they have better or worse RC delays? How drastically has the process shift affected any coupling issues we have? Does our clock network still work? Are we running into any via issues related to the process change?

What performance issues are there? Have our critical paths gotten better or worse? Do we have any race conditions due to cell changes?

What do we need to do as far as pre-silicon validation? What changes in post-silicon validation, DFM and DFT because of the different manufacturing process? What burn-in capabilities do they have?


The easy way to think about it is this: you want to build a house like Steve's, so Steve hands you a copy of his architectural plans. Does this mean the actual building of your house will be faster than Steve's? Does it mean you can start building immediately (he was on a hill, you aren't)? Etc. While you will certainly save time in going back and forth with the architect on the overall design, you'll still have to get site-specific architecture work done and consult with a building contractor. And it will still take roughly the same time, start to finish, once the building process begins.

Aaron Spink
speaking for myself inc.
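For a sense of scale on such a port: the ideal geometric shrink from 110 nm to 90 nm is easy to compute, and it's the optimistic bound. A cross-manufacturer port like the one described above, with different cell libraries and metal stacks, rarely achieves the full benefit (a back-of-envelope sketch, not figures from either fab):

```python
# Ideal geometric scaling for a 110 nm -> 90 nm process port.
# Real cross-fab ports (different cell libraries, metal stacks,
# design rules) rarely achieve this full shrink.
old_node, new_node = 110.0, 90.0
linear_scale = new_node / old_node   # ~0.82 in each dimension
area_scale = linear_scale ** 2       # ~0.67 of the original die area
print(f"ideal linear scale: {linear_scale:.3f}")
print(f"ideal die-area ratio: {area_scale:.3f}")
```

Even in the best case, the die only shrinks to about two-thirds of its original area; the engineering questions above are what determine how close a real port gets, and how long it takes.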
 
Alpha_Spartan said:
I'm saying that perhaps Sony's tools have an "ideal" way of doing things and any deviation by developers would lead to unexpected or undesirable results. With the PSX developers had to use what Sony gave them.

You made the suggestion that devs wouldn't be able to prioritise, for example, between graphics and physics or whatever. "Let's hope they have the freedom to do that". That suggests there would be a predefined allocation of resources for certain tasks. That isn't going to happen (or the probability of such happening is negligible).

Alpha_Spartan said:
I don't believe Sony showed the Cell-only renders at E3 for nothing. It's still the star of the show and I really believe that Sony wants to use it for rendering like the originally planned, albeit in a reduced role.

I think that's quite possible, but comparing such a role to that of the EE in PS2 is highly erroneous. You said that "if the Cell is implemented as a graphics helper that we will have another EE situation" - that's just crazy and wrong on so many levels.

Alpha_Spartan said:
Are you attributing that assertion to me, because I sure didn't make it.

Not specifically at you. It seems to be oft forgotten, people tend to think of specific SPUs as doing a particular thing and nothing else, ever. And that muddies the argument a bit, IMO.
 
It could be that Nvidia initially spent most of their time on their PC GPU part right until the GeForce 7800 was launched. It was not until the launch of the G70 that they dedicated most of their resources to the RSX instead. That could be the main reason for the long RSX hold-up. The work on RSX will prepare them for their next-gen PC GPU, which is expected to be on 90 nm. They knew where most of the revenue would come from, and that's why they put priority on their PC GPU line first.

On the other hand, ATI chose to concentrate on the R500 first, before the R520. Even though they had dedicated teams working on different GPUs, what I meant was resource concentration. Correct me if there's a mistake, but wasn't the R500 taped out first?

Nvidia has been rather quiet recently since they are already comfortable with their position; the RSX could actually be part of their next-gen exercise/research.
 
The Cell rendering demos are great visual demos of Cell's processing capabilities. It makes for a better, more impressive demo than a few slides on FFTs.
 
It was PR. The CELL rendering demos were to increase the CELL-related hype, nothing more. They were basically saying, look how amazing the CELL is!

And it worked, brilliantly.
 
scooby_dooby said:
It was PR. The CELL rendering demos were to increase the CELL-related hype, nothing more. They were basically saying, look how amazing the CELL is!

And it worked, brilliantly.

Oh, and you're trying to tell us that the Getaway London demo and terrain scenes were mocked up? All these game trailers you've seen run on SLIed G70s at most. The RSX isn't even involved, yet it's already equivalent to, if not better than, some Xbox 360 games?
 
CELL floating point usage distribution

Alpha_Spartan said:
I seriously believe that if the Cell is implemented as a graphics helper that we will have another EE situation where the majority of FP processing was devoted to graphics and so in real-world situations the EE fared no better than the Xbox's Celeron in terms of overall performance.

Based on the EA interview:
SPEs = 179.2 Gflops
Graphics = 76.8 Gflops
Other = 102.4 Gflops

Anyone have info on the following?
Xenon = 115 Gflops
Graphics = ?
Other = ?

Regarding PS2, only about half of the floating-point power was used for graphics in nearly all games; the remaining capability went mostly unused.
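Those EA-quoted numbers line up exactly with a whole number of SPEs at 3.2 GHz, if you assume 8 single-precision flops per cycle per SPE (a 4-wide SIMD multiply-add), 7 available SPEs, and 3 of them earmarked for graphics. That per-SPE rate is a common back-of-envelope figure, not something stated in the interview:

```python
GHZ = 3.2                 # assumed SPE clock
FLOPS_PER_CYCLE = 8       # assumed: 4-wide SIMD fused multiply-add
per_spe = GHZ * FLOPS_PER_CYCLE      # 25.6 Gflops per SPE

print(round(7 * per_spe, 1))   # all 7 SPEs   -> 179.2
print(round(3 * per_spe, 1))   # "graphics"   -> 76.8
print(round(4 * per_spe, 1))   # "other"      -> 102.4
```

So the 76.8/102.4 split reads as a 3-SPE vs. 4-SPE allocation rather than a measured workload breakdown.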
 
Hugo-

Why put words in my mouth? Did I say they were mocked up?

I said the purpose of the demos was clear: to build hype.

BTW - I thought those demos were actually using 2 Cell processors and not one, so in a way they are fake anyways.
 
mckmas8808 said:
Still doesn't explain why Sony had a relationship with Nvidia with regards to making a GPU since late 2002.

Saying it a thousand times might work for Karl Rove, but it doesn't make it any more true. There is no data to substantiate your assertion. In fact, there is evidence that would diminish the probability of the veracity of your statement.

Aaron Spink
speaking for myself inc.
 
london-boy said:
That's what they're meant to do.

Right, I'm just saying I don't think they were there to show the technical prowess of CELL as a GPU, so much as to add to the "OMG CELL is so amazing..." mentality for Joe Q. Public.
 
scooby_dooby said:
Hugo-

Why put words in my mouth? Did I say they were mocked up?

I said the purpose of the demos was clear: to build hype.

BTW - I thought those demos were actually using 2 Cell processors and not one, so in a way they are fake anyways.

I said you were trying to tell/convey the message that it was pure hype... I never said you said those words directly. But the Cell was capable of doing those demos, wasn't it?

My sincere apologies if it offended you.
 