MS big screw up: EDRAM

Bill said:
I would maintain the goal for a GPU is simple: put out the best possible graphics per transistor budget.

The problem is that it's a subjective criterion. For instance, if you can have anisotropic filtering or antialiasing, but not both, without dropping below 30fps, which yields the best possible graphics? You will get different responses depending on who you ask. The only thing you can do is attempt to guess which things most people will feel are important and design your hardware to that end.

What do you think are some non-subjective goals for this generation?

Nite_Hawk
 
I don't think most graphics goals are very subjective.

Basically I think, in consoles, AA is going to be fairly trivial.

Beyond that, better graphics is mostly objective.

AA versus AF, both are fairly minor niceties OVERALL. Applying AA to Quake 3 won't match it up to non-AA Far Cry.
 
Bill said:
I would maintain the goal for a GPU is simple: put out the best possible graphics per transistor budget.

And by all accounts, both console's designs (from CPU to GPU) seem to be equally viable.

You mentioned Carmack earlier; his recent comments on both the PS3/XB360 reflect this:

"I make little nitpicky decisions about say, well, I prefer the symmetric approach that MS has over the asymmetric Cell approach, but you can do great games on either one of them, and I make fundamental decisions based on development tools and depth of documentation, which Microsoft has been superior on."

He has his favorite CPU, and judging by your last 35 posts in this thread, you obviously prefer the RSX as a GPU... take a lesson from Carmack: even though he currently prefers one solution, he doesn't just blindly dismiss the alternative because of that.
 
scooby_dooby said:
I'm not. I'm saying Xenos contains a lot of new technology that is going to take a while to get used to, and it's going to take a while for game engines to really support these features.

One has to be realistic and understand that developers have only had this GPU for 4 months. Most games being released, and most game engines in existence, were started in 2004 or earlier, and they can't possibly be making the best use of these features.

With that understanding, it's clear that what we are seeing is NOT Xenos at its best, or even close, and that you shouldn't judge the capabilities of the GPU from sub-par launch games that were developed on X800s.

Of course, that's just common sense.

The problem is that we haven't defined what "Xenos at its best" is. What is Xenos at its best? Is it 720p resolution with lots of AA? Is it HDR with high-quality texture filtering? Is it procedural synthesis? Is it some vague combination of some/all of these?

The final question: Whatever Xenos is when it is at its best, does it match what customers really want? The same questions should be asked of RSX.

Nite_Hawk
 
Bill said:
I don't think most graphics goals are very subjective.

Basically I think, in consoles, AA is going to be fairly trivial.

Beyond that, better graphics is mostly objective.

AA versus AF, both are fairly minor niceties OVERALL. Applying AA to Quake 3 won't match it up to non-AA Far Cry.

Say the difference is Quake 3 at 720p with antialiasing and anisotropic filtering versus Far Cry at 480p without either. The decision becomes a lot more difficult. Far Cry at those settings would look ugly; Quake 3 at those settings looks relatively nice (if outdated). It's a subjective issue.

Nite_Hawk
 
Exactly. We shouldn't muddy the waters.

The same people might say, well, the PS2 can do this or that minor thing better than Xbox. Overall, Xbox was the more powerful console regardless.
 
Ok, Far Cry WITH AA looks much better.

Because the base graphics are better.

Anyway, you would have to prove, say, that PS3 CAN'T do AA.

Or heck, that Xenos even can, considering the early pics.

AA might be easier on PS3 with no EDRAM. What did ERP say? "It's not transparent on Xenos like PC GPU"

It's interesting though: R520 seems to have made the same AA tradeoffs as Xenos, just with different methods.

I.e., ring bus and super-high memory clocks. Roughly the same execution power.

I was looking at R520 benches and noticed it almost never beats 7800GTX without AA+AF. Usually loses by quite a lot.

But I still think on longer shaders, R520 might be superior even without AA.

But is that even the right design call? One might say high-end cards are made to run with AA+AF, but they only do that because no games stress them now.

What about in two years? Will a 7800GTX be in much better shape? Able to run demanding games with no AA/AF while R520 is not?
 
OMG I just got it!!! :D :D :D :D

Bill is posting once every minute so he can get to 100 posts. With that done, he will then try to edit the posts he made that make him look like, well, you know.

The sneaky little guy.........
 
Bill said:
Ok, Far Cry WITH AA looks much better.

Because the base graphics are better.

Anyway, you would have to prove, say, that PS3 CAN'T do AA.

Or heck, that Xenos even can, considering the early pics.

AA might be easier on PS3 with no EDRAM. What did ERP say? "It's not transparent on Xenos like PC GPU"

Why do you put a blank line after each sentence? I've been reading this whole thread and that's the single question I've come up with. Are you typing out your posts in an external text editor that formats the words in a weird way on these boards?

*Walks away confused
 
Bill said:
Ok, Far Cry WITH AA looks much better. Because the base graphics are better. Anyway, you would have to prove, say, that PS3 CAN'T do AA. Or heck, that Xenos even can, considering the early pics. AA might be easier on PS3 with no EDRAM. What did ERP say? "It's not transparent on Xenos like PC GPU"

On equal hardware, you may only be able to play Far Cry without AA at low resolutions, but be perfectly capable of playing Quake 3 at higher resolutions with AA. It really depends on what the hardware was designed to do.

It's interesting though: R520 seems to have made the same AA tradeoffs as Xenos, just with different methods. I.e., ring bus and super-high memory clocks. Roughly the same execution power. I was looking at R520 benches and noticed it almost never beats 7800GTX without AA+AF. Usually loses by quite a lot. But I still think on longer shaders, R520 might be superior even without AA. But is that even the right design call? One might say high-end cards are made to run with AA+AF, but they only do that because no games stress them now. What about in two years? Will a 7800GTX be in much better shape? Able to run demanding games with no AA/AF while R520 is not?

Now *this* is an interesting question. In 2 years, what are the features going to be that matter? Are they AA, or having more pixel pushing power? Another question that we might want to throw in: How much will dynamic branching performance matter?

Nite_Hawk
 
Ignore the guy already, he clearly has no clue about the topic, and doesn't allow himself to be convinced that he's wrong either...
 
Bill said:
PS3 demos.

Underwhelming X360 games.
So you're comparing first-gen XB360 games with PS3 demos, a great many of which were sped up for their movies from minimal framerates, to equate RSX, a GPU that doesn't exist in silicon yet, to a more complex GPU architecture that devs have only had a short while to play with. Do you really think that's a fair, intelligent comparison of different architectures and their pros and cons?

I'm with Laa-Yosh on this one. Bill's found his way onto my ignore list (and strangely whenever I stick someone on my ignore list they tend to get banned shortly after. It's the Black Spot of ignore lists!)
 
Nite_Hawk said:
What is Xenos at its best?

I would say it's when game engines and developers are supporting/understanding the new features in Xenos to the same degree that they are supporting the traditional GPU features found in RSX.

In other words, after they have had a while to get their hands dirty, and play with all the new tricks.
 
dukmahsik said:
I wonder which has a steeper learning curve... Cell or Xenos?

I think a CPU will always have a steeper learning curve than a GPU, as "new" as Xenos might be, it still is going to be supported by ATI and DX libraries and tools. And in the end it "only" does graphics.

With Cell, you can make it do so many things for you that, really, it will never reach its full potential. There will always be someone trying to do something new, and someone making it faster.
 
Not to mention, not only do they have to learn a different approach to writing code for the SPEs, they also have to deal with the issue of actually multi-threading the game engine across 7 processors.

It seems like most of the thinking has been done around extracting power from Xenos; it's just a matter of implementing these techniques in the game engines.

Whereas for Cell, a lot of the thinking still has to be done. Admittedly this applies to the XeCPU as well, but at least that uses a traditional multi-threading approach.
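By "traditional" I just mean the usual shared-memory worker-thread approach. Here's a minimal sketch of that shape, using plain pthreads purely for illustration; nothing about it is XeCPU-specific, and the job/worker counts are made up:

#include <pthread.h>
#include <stdio.h>

#define NUM_JOBS    32   /* made-up number of work items per frame */
#define NUM_WORKERS 6    /* made-up number of worker threads */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int next_job = 0;

static void do_job(int job)
{
    /* stand-in for real work: animation, particles, audio mixing, ... */
    printf("job %d done\n", job);
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);                        /* every thread shares one queue */
        int job = (next_job < NUM_JOBS) ? next_job++ : -1;
        pthread_mutex_unlock(&lock);
        if (job < 0)
            break;                                        /* queue drained, thread exits */
        do_job(job);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NUM_WORKERS];
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(t[i], NULL);                         /* wait for all workers to finish */
    return 0;
}

The hard part isn't that loop, it's deciding how to split an engine into jobs that don't trip over each other; on Cell you additionally have to make each job fit in an SPE's local store.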
 
dukmahsik said:
I wonder which has a steeper learning curve... Cell or Xenos?
Cell, definitely. Xenos still uses shaders in the same way PCs have been using them for years, and there's only a finite set of uses on very definite and well-known data structures. Cell is so open-ended that even the data-structure access can be modelled differently: the interoperability between SPEs, the management of code chunks and buffered data, the creation of algorithms that work in a stream-friendly way... I'd say XeCPU has a higher learning curve than Xenos too. I'm not even sure what developers need to learn about Xenos either. The creation of a predicated engine (which isn't supposed to be too hard) seems about it, apart from thinking up uses for features like tessellation and MEMEXPORT. But for graphics rendering it's a case of writing and compiling your shaders the same as on any other GPU, AFAIK.
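To give a feel for the "stream-friendly" part: a typical SPE job ends up as a loop that pulls a chunk of data into local store, works on it while the next chunk is in flight, and pushes the result back out. A rough double-buffered sketch of that shape is below; dma_get/dma_put/dma_wait are hypothetical stand-ins (here they just copy synchronously) and the chunk size is purely illustrative:

#include <stdio.h>
#include <string.h>

#define CHUNK 4096   /* bytes pulled into "local store" per transfer (illustrative) */

/* Hypothetical stand-ins for the real DMA calls -- they just copy
   synchronously here, because only the shape of the loop matters. */
static void dma_get(void *ls, const char *mem, size_t n, int tag) { (void)tag; memcpy(ls, mem, n); }
static void dma_put(const void *ls, char *mem, size_t n, int tag) { (void)tag; memcpy(mem, ls, n); }
static void dma_wait(int tag) { (void)tag; /* would block until transfer 'tag' completes */ }

static void process(char *buf, size_t n)   /* stand-in for the per-chunk work */
{
    for (size_t i = 0; i < n; i++)
        buf[i] ^= 0xFF;
}

/* Stream 'total' bytes from src to dst through two small buffers, fetching the
   next chunk while the current one is processed. Assumes total is a multiple
   of CHUNK to keep the sketch short. */
static void stream_job(const char *src, char *dst, size_t total)
{
    static char buf[2][CHUNK];                    /* the two "local store" buffers */
    int cur = 0;

    dma_get(buf[cur], src, CHUNK, cur);           /* prefetch the first chunk */
    for (size_t off = 0; off < total; off += CHUNK) {
        int next = cur ^ 1;
        if (off + CHUNK < total) {
            dma_wait(next);                       /* make sure that buffer is free again */
            dma_get(buf[next], src + off + CHUNK, CHUNK, next);   /* fetch ahead */
        }
        dma_wait(cur);                            /* current input chunk has arrived */
        process(buf[cur], CHUNK);                 /* compute overlaps the fetch above */
        dma_put(buf[cur], dst + off, CHUNK, cur); /* write the result back out */
        cur = next;
    }
    dma_wait(0);
    dma_wait(1);                                  /* drain any outstanding writes */
}

int main(void)
{
    static char in[8 * CHUNK], out[8 * CHUNK];
    stream_job(in, out, sizeof in);
    printf("streamed %zu bytes in %d-byte chunks\n", sizeof in, CHUNK);
    return 0;
}

None of that is conceptually hard; the learning curve comes from the fact that almost no existing engine code is organised that way.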
 
- Predicated Tiling and creative usage of the 256GB/s bandwidth to the EDRAM (rough numbers below)
- Creative uses for MEMEXPORT (GPGPU)
- Utilizing the Hardware Tessellator
- Taking advantage of the flexibility of USA (much more vertex processing power)
- Ability to enslave one of XeCPU's cores.

Off the top of my head, those are things on Xenos that will take a little while for devs to come to grips with.
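To put some rough numbers on the tiling point (the formats are my assumption: 32-bit colour plus 32-bit Z/stencil, multiplied per MSAA sample):

#include <stdio.h>

int main(void)
{
    const long width = 1280, height = 720;     /* 720p */
    const long bytes_per_sample = 4 + 4;       /* assumed colour + Z/stencil */
    const long samples = 4;                    /* 4x MSAA */
    const long edram = 10L * 1024 * 1024;      /* 10MB of EDRAM */

    long fb = width * height * bytes_per_sample * samples;
    long tiles = (fb + edram - 1) / edram;     /* round up */

    printf("framebuffer: %ld bytes (~%.1f MB), tiles needed: %ld\n",
           fb, fb / (1024.0 * 1024.0), tiles);
    return 0;
}

Under those assumptions a 720p 4xAA target is around 28MB, i.e. roughly three tiles per frame, which is exactly the kind of restructuring existing engines weren't written around.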
 
london-boy said:
I think a CPU will always have a steeper learning curve than a GPU, as "new" as Xenos might be, it still is going to be supported by ATI and DX libraries and tools. And in the end it "only" does graphics.

With Cell, you can make it do so many things for you that, really, it will never reach its full potential. There will always be someone trying to do something new, and someone making it faster.

I too agree Cell has a steeper learning curve, but it's a very double-edged sword.
 
The learning curve for both processors is rather dependent on what you are trying to do on them. If MS let you code to the metal on Xenos and do lots of wacky/crazy things it would probably be more difficult (but possibly more rewarding) to code for.

Sony seems to downright encourage you to do wacky/crazy things on Cell, so the learning curve is probably going to be rather steep. Still, I imagine you could implement a fairly simple (if slow) engine on Cell (maybe even using the SPEs!) that would be easyish to write and understand.

Nite_Hawk
 