XBox 1 Backwards compatibility

DaveBaumann said:
So why use that if you can get better performance out of units dedicated to the task that it's required to do, for similar or smaller die sizes?

First of all, who said it was added performance? In our thought experiment, 16 S|APUs at ISSCC clock would have a higher output than your 48 unified ALUs in the X2 at developer clock. Also, vastly greater usable flexibility, a unified ISA and computational fabric.

And the same way a GPU does it, Dave, that's the point. One day I'll get an actual answer out of you on the difference between a unified Vector|Scalar pathway in an APU and one in an ALU -- and then you can justify all your, frankly, shit comments about how STI would have nothing to add, ad infinitum.
 
And the same way a GPU does it, Dave, that's the point.

But it's not the same way, since it isn't dedicated to a single usage scenario (or, at least, as focused); the range of instructions and capabilities within a graphics core are tuned to the scenario that it is going to be used in most of the time, unlike more "flexible" units. They are always highly tuned towards working through the biggest areas of issue for that scenario, of which texture sample latency has been mentioned a number of times, which more "flexible" units are not going to be tuned to address.
 
DaveBaumann said:
But it's not the same way, since it isn't dedicated to a single usage scenario (or, at least, as focused); the range of instructions and capabilities within a graphics core are tuned to the scenario that it is going to be used in most of the time, unlike more "flexible" units.

Dave, we're all past the "fixed" versus "flexible" mentality. Tell me a new tale; explain to me how. We have, basically, analogous unified SIMD pathways. We have roughly similar complexes built around them. One is a highly tuned SOI processor that clocks almost 10X as high and offers full flexibility, and the other doesn't.

DaveBaumann said:
They are always highly tuned towards working through the biggest areas of issue for that scenario, of which texture sample latency has been mentioned a number of times, which more "flexible" units are not going to be tuned to address.

Which means... nothing. Again, with an SPU complex able to handle 32 concurrent contexts that we know of, what prevents this from being raised to 128? Or, we have a smart DMAC and a full-fledged RISC core sitting right there which is tasked with arbitration -- you might not need it as you would on a GPU.
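As a rough sketch of what I mean (the callables here are hypothetical, not the actual Cell toolchain), an SPU-style unit can hide fetch latency in software with a double-buffered DMA loop instead of extra hardware contexts:

Code:
def process_stream(blocks, dma_start, dma_wait, compute):
    """Double-buffered streaming: overlap the DMA for block i+1 with compute on block i."""
    tags = [None, None]
    tags[0] = dma_start(blocks[0], 0)                   # prime buffer 0
    for i in range(len(blocks)):
        cur, nxt = i % 2, (i + 1) % 2
        if i + 1 < len(blocks):
            tags[nxt] = dma_start(blocks[i + 1], nxt)   # kick off the next transfer early
        dma_wait(tags[cur])                             # only stalls if that transfer isn't done yet
        compute(cur)                                    # runs while the next transfer is in flight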
 
-- you might not need it as you would on a GPU.

And I'm taking a pretty safe bet that the inclusion of NVIDIA in this project is a surefire indication that these are the type of issues that haven't been worked around, or at least not yet (probably because die space isn't an infinite resource just yet).
 
DaveBaumann said:
And I'm taking a pretty safe bet that the inclusion of NVIDIA in this project is a surefire indication that these are the type of issues that haven't been worked around, or at least not yet (probably because die space isn't an infinite resource just yet).

And that type of thinking would be fallacious, Dave.
 
Vince said:
Exactly! See, you didn't even need to post to know the answer -- Post hoc ergo propter hoc

What?

What do you think NVIDIA have invested the majority of their engineering time on for their next architecture? When JHH steps up there and says "we've spent the last 18 months working on our next generation architecture and it will form the basis of our Sony work", what do you think Sony are buying? If it's not the shader core then they may as well scrap the majority of that engineering time, and they could probably have just shopped around to anyone - hell, why not take a bunch of Imageon cores and strap some APU's in there, clearly, that's all that's needed :!:
 
DaveBaumann said:
Vince said:
Exactly! See, you didn't even need to post to know the answer -- Post hoc ergo propter hoc

What?

It means you can't draw a conclusion based on events which aren't known to have a causal connection just based on the sequence they happened in.

DaveBaumann said:
What do you think NVIDIA have invested the majority of their engineering time on for their next architecture? When JHH steps up there and says "we've spent the last 18 months working on our next generation architecture and it will form the basis of our Sony work", what do you think Sony are buying?

Why is it that if I were to quote Ken Kutaragi word for word, literally, like you just did, I'd have 4-5 people on my ass telling me not to buy into the hype and PR?

DaveBaumann said:
If it's not the shader core then they may as well scrap the majority of that engineering time, and they could probably have just shopped around to anyone - hell, why not take a bunch of Imageon cores and strap some APU's in there, clearly, that's all that's needed :!:

We don't know the terms of the contract, nor do we know all of Sony's options. We do know they were working with Toshiba and the bulk of that work was on raster functionality, which is amazing, huh? Once again, we come back to the problem of scaling Cell and the need for a visualization/output block. We then note that Sony, historically, sucks at producing rasterization functionality, as seen in the GS and PSP to some extent -- likely due to IP barriers.
 
It means you can't draw a conclusion based on events which aren't known to have a causal connection just based on the sequence they happened in.

Quite correct, and yet you clearly have. Funny that.

Why is it that if I were to quote Ken Kutaragi word for word, literally, like you just did, I'd have 4-5 people on my ass telling me not to buy into the hype and PR?

I'm not suggesting you take it literally; I'm suggesting you try to look at what is occurring here and open your mind to a slightly different scenario than the one that appears to be quite firmly embedded.

We do know they were working with Toshiba and the bulk of that work was on raster functionality, which is amazing, huh?

"Raster functionality" includes pixel shading.

However, just try this – remove anything else from your mind and tell us where you think NVIDIA's biggest value is right now in 3D graphics hardware knowledge and application...
 
DaveBaumann said:
"Raster functionality" includes pixel shading.

For the X2, "rasterization" includes vertex shading and the entire pipeline! Awesome!

DaveBaumann said:
However, just try this – remove anything else from your mind and tell us where you think NVIDIA's biggest value is right now in 3D graphics hardware knowledge and application...

Without a doubt, their IP portfolio.

PS. And when have I made assumptions like you just did based off the order of events?

PPS. So, what about them unified SIMD|Scalar pathways?
 
Without a doubt, their IP portfolio.

And I would suggest you think about what the primary application and development of that portfolio is now. Hint: It's what they are telling developers will exponentially increase in performance over the coming years.

PS. And when have I made assumptions like you just did based off the order of events?

I thought it went something along the lines of "Cell BS/VS patent released" --> Vince knows the answers to the computing universe! ;)

PPS. So, what about them unified SIMD|Scalar pathways?

That's not going to go anywhere with anyone with the reply "what can't be done with more die space…"
 
DaveBaumann said:
And I would suggest you think about what the primary application and development of that portfolio is now. Hint: It's what they are telling developers will exponentially increase in performance over the coming years.

I did, and their IP portfolio doesn't equate to just your fixation on shaders, likely due in no small part to ATI's influence. Their IP portfolio likely extends not only to their (Sony) work on APU construction and compiler technology, or the fixed-function aspects of rasterization, but to data flow and control, and to software design.

DaveBaumann said:
I thought it went something along the lines of "Cell BS/VS patent released" --> Vince knows the answers to the computing universe!

Cute, I was expecting something that happened. But whatever.

DaveBaumann said:
That's not going to go anywhere with anyone with the reply "what can't be done with more die space…"

That wasn't my answer; you're avoiding the question. The question is die-size invariant and concerns the actual computation pathway. Your question concerning the SPU complex is self-answering, as STI came up with the Synergistic Processor for a reason...
 
I did, and their IP portfolio doesn't equate to just your fixation on shaders, likely due in no small part to ATI's influence.

Vince, this is the industry's "fixation on shaders". Go and read some recent developer documentation from NVIDIA and you'll see it littered everywhere - even down to elements such as steering developers away from the Doom 3 lighting model because it doesn't scale with shaders; they are telling developers to use SLI to test high shader utilisation now for the performance of 12 months away.
 
DaveBaumann said:
Vince, this is the industry's "fixation on shaders".

Welcome to my argument against you and Joe from over a year ago based on sheer computational importance vis-a-vis lithography ability. Now, with your little diatribe in mind, explain to me the difference between the computational ability of the unified SIMD Vector|Scalar datapath in an APU and in an ALU.

Back to step one after a few hours of you dancing around.
 
Tuttle said:
Uh no.


With NVIDIA's only experience in console hardware being the horrendously overpriced and underperforming xbox GPU, you can be certain that Sony has NVIDIA on a tight leash design-wise.

Hmm, well let's see. Overpriced? Underperforming? Compared to what? I'd wager no small amount of money that the production cost for the Nvidia GPU and associated memory is lower than for the Sony GS. Now what MS pays may not be, but that is a different issue.

Performance would have to go to the Nvidia GPU.

The point being that you don't need experience in console hardware design to design graphics, and experience in console hardware design does not equal experience in graphics hardware design.

The only reason that Nvidia is designing anything for PS3 is because Sony realized that its internal designs would not be sufficient. I would suspect that the leash that Sony has on Nvidia is made out of a wet noodle.


Aaron Spink
speaking for myself inc.
 
Speaking of dodging questions, Vince, and bringing those to just about every discussion you want to participate in: you never did answer my question from an older thread.

Vince wrote:
"They're not. They have a synergistic, additive effect on sales potential because the more Cell you own, theoretically, the more value you derive from each purchase."

How is the "more you own, the more value you derive" going to play out for the average consumer that can't afford to have "all new" appliences or devices, and can only scrounge enough money together to get a new game system once every four years? What exactly is this value they are getting?
 
aaronspink said:
Tuttle said:
Uh no.


With NVIDIA's only experience in console hardware being the horrendously overpriced and underperforming xbox GPU, you can be certain that Sony has NVIDIA on a tight leash design-wise.

Hmm, well let's see. Overpriced? Underperforming? Compared to what? I'd wager no small amount of money that the production cost for the Nvidia GPU and associated memory is lower than for the Sony GS. Now what MS pays may not be, but that is a different issue.

Performance would have to go to the Nvidia GPU.

The point being that you don't need experience in console hardware design to design graphics, and experience in console hardware design does not equal experience in graphics hardware design.

The only reason that Nvidia is designing anything for PS3 is because Sony realized that its internal designs would not be sufficient. I would suspect that the leash that Sony has on Nvidia is made out of a wet noodle.


Aaron Spink
speaking for myself inc.

You don't actually believe that, do you? Do you actually think Sony's GS is more costly than the xbox GPU???

After doing little more than giving MS a big fat expensive peecee video card to bolt onto the xbox, NVIDIA will have hopefully been given quite an education in console GPU design over the past two years from Sony.
 
Vince said:
First of all, who said it was added performance? In our thought experiment, 16 S|APUs at ISSCC clock would have a higher output than your 48 unified ALUs in the X2 at developer clock. Also, vastly greater usable flexibility, a unified ISA and computational fabric.

Here's some math:

A texture read will take on the order of 40 ns.

@4 GHz this is 160 cycles.
@0.5 GHz this is 20 cycles.

Assume 1 texture read per 8 shader ops.

ATI R500: 48 * 0.5 = 24 Giga Ops per second. No need to account for wait states because of the multiple contexts provided in hardware.

Vince's PS3 on crack: 16 * 4 / 160 = 400 Mega Ops per second.

The only way that a non graphics optimized pipeline will win is if the number of ops per texture read is extremely high or the latency of the texture read is extremely low.
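Putting the same arithmetic in code so the assumptions are explicit (figures as above; all of them are assumptions, not measurements):

Code:
# Transcription of the figures above; none of them are verified.
texture_read_ns = 40.0

gpu_clock_ghz = 0.5    # "developer clock" for the 48-ALU part
apu_clock_ghz = 4.0    # ISSCC clock for the 16-APU thought experiment

stall_cycles_gpu = texture_read_ns * gpu_clock_ghz    # 20 cycles
stall_cycles_apu = texture_read_ns * apu_clock_ghz    # 160 cycles

# GPU: hardware contexts are assumed to hide the stall completely
gpu_gops = 48 * gpu_clock_ghz                         # 24 Giga Ops/s

# APU complex: the stall is assumed to be fully exposed
apu_gops = 16 * apu_clock_ghz / stall_cycles_apu      # 0.4 Giga Ops/s = 400 Mega Ops/s

print(stall_cycles_gpu, stall_cycles_apu, gpu_gops, apu_gops)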


And the same way a GPU does it, Dave, that's the point. One day I'll get an actual answer out of you on the difference between a unified Vector|Scalar pathway in an APU and one in an ALU -- and then you can justify all your, frankly, shit comments about how STI would have nothing to add, ad infinitum.

I'll provide you your answer: a large number of hardware contexts optimized to cover the texture fetch latency. A building block in the Sony design is not optimized for this workload because it can't handle the texture fetch latency as well as various other operations (sampling, filtering, early Z reject, etc.) that are designed into the GPU.
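For a feel of the numbers (same hypothetical figures as above), the count of resident contexts needed grows with the stall measured in ALU cycles:

Code:
import math

def contexts_needed(stall_cycles, ops_between_texture_reads):
    # one context running plus enough others to cover the stall
    return math.ceil(stall_cycles / ops_between_texture_reads) + 1

print(contexts_needed(20, 8))    # ~0.5 GHz unit, 40 ns fetch -> 4 contexts
print(contexts_needed(160, 8))   # ~4 GHz unit, same fetch    -> 21 contexts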

Aaron Spink
speaking for myself inc.
 