ATI - PS3 is Unrefined

Alpha_Spartan said:
Secondly, looking at Nvidia's role in this "partnership", they seem more like a vendor than a partner. If you look at MS/ATi partnership with Xenos, that's no mere vendor/buyer relationship. MS designed part of the console (the eDRAM), wrote the compilers and is handling the manufacturing. Xenos can't even be attributed to a family of ATi GPUs. Is it an R500 or an R600? It's neither. It's completely different.

The best way I can illustrate it is that MS relationship with ATi in the Xenos design was as close as Sony and IBM's relationship in the Cell design. Sony's relationship to Nvidia with the RSX is about the same as MS relationship with IBM with Xenon's design. It's basically, "What you got? We'll buy it!" rather than "Let's get together and design something from scratch."
(Bold is mine)
Your assumption is incorrect: the Xenos AA custom-memory patent ("Video graphics system that includes custom memory and supports anti-aliasing and method therefor") was filed by ATI in 2000. Also, the ATI-MS partnership only materialized in 2003, which suggests it was not as close as Sony and IBM's relationship on the Cell design, which took five years.
 
dukmahsik said:
"Multi-support development chooses to support the lowest common denominator, therefore there shouldn't be any problems of the sort."
Other than the Tom Clancy's games, where they create different versions for the Xbox/PC and the PS2/GC, UBI games are tailored for the lowest common denominator.
 
geo said:
If anything, I would tend to think it likely that PS3 launch titles will be more fully realized on the graphics end than initial XB360 titles for that very reason. G70 characteristics, strengths, limitations, interfaces, etc being much better understood by devs pre-launch.
Bingo! (Graphically). They'll be starting from a higher base, and it's probably the case that devs will be playing around with how (or, more accurately, where) they utilise their bandwidth.

Vysez said:
That is to assume that things are going to change visually, and radically at that, in the graphical pipeline over the next 5 years. And that would also assume that one of the two machines (CPU+GPU) we're talking about is already capable of providing this "visual" difference.

The thing is that the only real difference I could imagine would be in performance when doing vertex texturing, seeing that one of the machines has an advantage here.
And even then, after a talk with some devs, it might be possible to come up with solutions close to that result using other methods (other than vertex texturing) when doing, for instance, water/wave effects.
Also, Steep Parallax Mapping makes VT redundant in quite a few situations.
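As an aside on the vertex-texturing point above: a minimal C sketch of the kind of non-VT route being alluded to, assuming a CPU-side (or SPE-side) approach where the height field is evaluated on the host and the displaced vertices are uploaded each frame instead of sampling a height texture in the vertex shader. The wave model and the function names are invented for illustration.

Code:
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

/* Simple sum-of-sines height field standing in for the height texture
 * that a vertex-texturing path would sample. */
static float wave_height(float x, float z, float t)
{
    return 0.25f * sinf(0.8f * x + 1.3f * t)
         + 0.10f * sinf(1.7f * z - 0.9f * t);
}

/* Displace a water grid on the CPU; the result would be uploaded as a
 * vertex buffer each frame instead of doing a per-vertex fetch on the GPU. */
static void build_water_mesh(Vec3 *verts, int width, int depth,
                             float spacing, float t)
{
    for (int j = 0; j < depth; ++j)
        for (int i = 0; i < width; ++i) {
            float x = i * spacing, z = j * spacing;
            verts[j * width + i] = (Vec3){ x, wave_height(x, z, t), z };
        }
}

int main(void)
{
    static Vec3 grid[64 * 64];
    build_water_mesh(grid, 64, 64, 0.5f, 1.0f);
    printf("sample vertex height: %f\n", grid[100].y);
    return 0;
}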

I'm not sure there'll be that much in the way of visual difference, but I think there will be gradations in ease of use for developers. However, if there are no changes in dynamic branching capabilities, a difference of 1024 pixels in relation to 64 is probably going to manifest itself in performance should developers be using it.
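To make the batch-size point concrete, here is a toy C simulation of the SIMD granularity effect: a whole batch pays the expensive shader cost whenever any pixel in it takes the expensive side of a branch, so coarser batches waste more work. The 64 and 1024 figures are the batch sizes under discussion; the frame size, cost units, and pattern of expensive pixels are made up for illustration.

Code:
#include <stdio.h>

#define NUM_PIXELS  (1280 * 720)   /* made-up frame size   */
#define CHEAP_COST   1             /* arbitrary cost units */
#define COSTLY_COST  16            /* arbitrary cost units */

/* Made-up workload: one pixel in every 2000 takes the expensive path. */
static int is_expensive(long p) { return p % 2000 == 0; }

/* A whole batch pays the expensive cost if any pixel in it takes the
 * expensive path: that is the branching granularity penalty. */
static long total_cost(long batch)
{
    long cost = 0;
    for (long start = 0; start < NUM_PIXELS; start += batch) {
        int any = 0;
        for (long p = start; p < start + batch && p < NUM_PIXELS; ++p)
            if (is_expensive(p)) { any = 1; break; }
        cost += batch * (any ? COSTLY_COST : CHEAP_COST);
    }
    return cost;
}

int main(void)
{
    printf("batch   64: total cost %ld\n", total_cost(64));
    printf("batch 1024: total cost %ld\n", total_cost(1024));
    return 0;
}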
 
Vysez said:
Other than the Tom Clancy's games, where they create different versions for the Xbox/PC and the PS2/GC, UBI games are tailored for the lowest common denominator.
The original Splinter Cell had specific issues in the PC version alone, in that they used NVIDIA's DST shadowing mechanism which, effectively, was there as a backdoor from the Xbox version; they then ported it but decided not to implement an equivalent shadowing mechanism for non-NVIDIA cards.

Splinter Cell: Chaos Theory also failed to utilise the lowest common denominator by putting most of its feature support in the SM3.0 path when most DX9 cards in use were SM2.0; ATI re-wrote the renderer to provide full support for the SM3.0 path's features on ATI's SM2.0 cards.
 
Dave Baumann said:
I'm not sure there'll be that much in the way of visual difference, but I think there will be gradations in ease of use for developers. However, if there are no changes in dynamic branching capabilities, a difference of 1024 pixels in relation to 64 is probably going to manifest itself in performance should developers be using it.
The thing is, dynamic branching is not "necessary" in order to obtain some visual effects.
It makes things easier from a programming standpoint, but it is not the only solution available to developers who want to achieve certain visual effects at a fixed cost.
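A minimal C sketch of that kind of fixed-cost alternative: evaluate both sides of the condition for every pixel and select between them, so the cost is constant regardless of how well the hardware handles branches. The shading terms and names here are hypothetical; only the select pattern matters.

Code:
#include <stdio.h>

static float lerpf(float a, float b, float t) { return a + (b - a) * t; }

/* Hypothetical shading step: with dynamic branching you might skip
 * expensive_term when lit_mask is 0; here it is always computed and the
 * mask (0.0 or 1.0) selects the result at a constant per-pixel cost. */
static float shade_pixel(float base_term, float expensive_term, float lit_mask)
{
    return lerpf(base_term, base_term + expensive_term, lit_mask);
}

int main(void)
{
    printf("unlit: %.2f  lit: %.2f\n",
           shade_pixel(0.2f, 0.6f, 0.0f),
           shade_pixel(0.2f, 0.6f, 1.0f));
    return 0;
}

Whether that constant cost beats actually taking the branch depends on how often the expensive path could have been skipped, which is where the batch-size question above comes back in.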
Dave Baumann said:
The original Splinter Cell
<snip>
Splinter Cell: Chaos Theory also failed
Ok, so the Splinter Cell series is an example of a game that does not use the lowest common denominator, since the guys over at UBI Montréal chose to rely on Nvidia-only features last-gen.
But in the grand scheme of things, it's only the exception that proves the rule, more than anything. Also, in the case of SC:CT, the main platform of the game was the Xbox, and therefore SM1.1.

The SM3.0 path on the PC, as you know, Dave, was the result of two things: an Nvidia TWIMTBP deal, and the work the team was doing on the next-gen iteration of the engine.
 
AFAIK the UT2007 demo was shown on two GF6800Us running in SLI, not with a 7800GTX.

Second, what does the thing with 1024 vs. 64 pixels have to do with the devs? Isn't that a hardware feature (producing batches of pixels of a given size for further processing)? In what way can devs influence that?

EDIT: I just saw that you were talking about dynamic branching with those batch sizes, never mind.
 
Vysez said:
The thing is, dynamic branching is not "necessary" in order to obtain some visual effects.
It makes things easier from a programming standpoint, but it is not the only solution available to developers who want to achieve certain visual effects at a fixed cost.
Read my paragraph again.
 
If you took them side by side, the R5900i and the Xbox 1 CPU, you would clearly choose the XCPU any day of the week, including the weekends, even though it lacks the MMI instructions, which are so nice to use. Does it mean that a console that uses the R5900i is doomed to be "anything but an absolute trashing"?

So what you are saying then is that you would take the XCPU any day of the week including weekends over Cell. So are you saying that in your estimation the XCPU has a rather large performance advantage over Cell? I thought this discussion had been about GPUs, not processors. Either way though, I'm interested to hear why you think XCPU has such a decided advantage over Cell.
 
BenSkywalker said:
So what you are saying then is that you would take the XCPU any day of the week including weekends over Cell. So are you saying that in your estimation the XCPU has a rather large performance advantage over Cell? I thought this discussion had been about GPUs, not processors. Either way though, I'm interested to hear why you think XCPU has such a decided advantage over Cell.
XCPU, not XeCPU (A.K.A. Xenon).
Panajev was drawing a parallel between the PS2's MIPS core and the Xbox's x86 on one side, and the next-gen consoles' architectures on the other.
Dave Baumann said:
Read my paragraph again.
As if I was reading your posts, anyway.
I just saw the Dynamic branching key-word and said what I wanted to say. :p

But, yeah, I know what you meant.
 
Laa-Yosh said:
I still have a hard time believing that anyone would buy a second HDTV just for gaming...

I find it hard to believe that some people will spend $500 every 10 months on a new GPU for gaming.

*hears whispers in ear*

Oh they do?:oops:
 
Is the PS3 capable of delivering 1080p@30fps to two HDTVs? I know it has two outputs for it, but hardware-wise, is it up to snuff for a decent-looking game?
 
dukmahsik said:
because im a realist

That's fine, but why drag down the optimism of people who feel, or may like to believe, otherwise?

Especially if they aren't trying to pass off that optimism as fact.
 
m1nd_x said:
You guys have to keep in context with the quotes you're taking...

For instance, he was referring specifically to the GPU the entire time. He said the GPU that was in there was an SLI'd prototype of the 7800GTX. So, by saying "high-end pc", it's quite obvious he's still referring to the GPU only. You're arguing semantics, and it's pretty easy to see that this is what he was referring to. Just because the wording isn't perfect does not mean that you didn't know exactly what he was talking about. Keeping it in context and not picking it apart, this is what you should've gotten: UT2007 was a high-end PC GPU demo.

Seeing as there wasn't much CPU use with the demo, because it was a controlled demo, it's obvious that it was there purely to show off the high-end PC GPU, where the RSX is going to be. I don't see why there is so much of an argument over this, because that's what I got when I read it without looking into it too much.

I disagree, somewhat. I honestly do believe that is the point he was trying to make, but he failed.

The Epic demo was running on a PC, and it was done using an early 7800 in SLI mode, so that was a high-end PC demo,

Right, it was running using a PC GPU; however, it's a fact that it was running on a PS3 dev kit using Cell. Even though limited, it still wasn't running on a PC. I understand what you are saying, but he clearly did not refer to the GPU before he made the "running on a PC" statement. Also, he presents two different statements here when he adds a comma and the word "and" to that sentence. I think it's easy to see why people can interpret his comments this way.

I just think he goofed, or maybe was even misquoted; it's not as if reporters and interviewers never make mistakes.
 
Panajev was drawing a parallel between the PS2's MIPS core and the Xbox's x86 on one side, and the next-gen consoles' architectures on the other.

I failed to type XeCPU instead of XCPU in my post; I just refer to the XB1's processor as a Celery. I know what he was drawing out, and he knows the point I was making serves to demonstrate how utterly absurd the notion of a custom design being superior by default is (as his counterpoint also indicates). To insinuate that a custom-built design will be better because it is custom built has been proven wrong; actually, to date it has always been wrong in the console market. Using a tweaked version of high-end PC hardware has proven to be the best solution in the console space to date, and a lengthy argument trying to resolve how customized a chip is going to be is utterly irrelevant in terms of how the end product is going to perform. What the chip's initial target platform was is of little to no consequence in terms of end results. In fact, using what history we have as a guideline, the less tailor-made the solution, the better off it has ended up being.

Is the PS3 capable of delivering 1080p@30fps to two HDTVs? I know it has two outputs for it, but hardware-wise, is it up to snuff for a decent-looking game?

Are the higher clocked 7800GTXs capable of handling 1920x1080@60FPS? :)
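To put rough numbers on that (treating the 7800 GTX figures as approximate: around a 430 MHz core and 16 ROPs):

$$1920 \times 1080 \times 60 \approx 1.24 \times 10^{8}\ \text{pixels/s per display}, \qquad 2 \times 1.24 \times 10^{8} \approx 2.5 \times 10^{8}\ \text{pixels/s for two displays},$$

against a theoretical peak fill rate on the order of $430\ \text{MHz} \times 16\ \text{ROPs} \approx 6.9 \times 10^{9}$ pixels/s. Raw fill is not the limiter; shader load, overdraw, bandwidth and the memory footprint of two 1080p framebuffers are what would hurt.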
 
Dave Baumann said:
If there are no changes in dynamic branching capabilities, a difference of 1024 pixels in relation to 64 is probably going to manifest itself in performance should developers be using it.

So what you are really saying is that Sony and nV are holding back the wide scale adoption of SM 3.0.

j/k
 
Synergy34 said:
Right, it was running using a PC GPU; however, it's a fact that it was running on a PS3 dev kit using Cell. Even though limited, it still wasn't running on a PC. I understand what you are saying, but he clearly did not refer to the GPU before he made the "running on a PC" statement. Also, he presents two different statements here when he adds a comma and the word "and" to that sentence. I think it's easy to see why people can interpret his comments this way.

I just think he goofed, or maybe was even misquoted; it's not as if reporters and interviewers never make mistakes.

Well, I think he meant to say it was running on a PC. If not, he should reword what he meant to say himself, or get somebody on the net to do it.
 
mckmas8808 said:
I find it hard to believe that some people will spend $500 every 10 months on a new GPU for gaming.

*hears whispers in ear*

Oh they do?:oops:

There is a fundamental difference. The devs have to do little to nothing in order to take advantage of the faster graphics cards. OTOH, the devs would have to devote reasonable resources to taking advantage of the second display, which will be utilized by a small minority of the market (if that). Therefore the likelihood of actual games utilizing the dual display interfaces is ~0.

Aaron Spink
speaking for myself inc.
 