CONFIRMED: PS3 to use "Nvidia-based Graphics processor"

Status
Not open for further replies.
Pardon my ignorance, and apologies if this was discussed before, but this is a massive thread. What are the chances that the GPU has unified shaders, and that any time the load gets a little too hectic it can call upon the massive power of Cell to help out? Perhaps in that configuration it's flexible enough for developers to decide whether they even want the GPU to handle any vertex shaders, and to outsource all that data crunching to the CPU while the Nvidia part does solely pixel shader work?
 
best choice:

the CPU's APUs do B-spline geometry processing and send the results to the GPU;
the GPU has spline and clipping hardware (or apulets) and many pixel engines
 
london-boy wrote:

It's not a disappointment for Sony; it's commendable that they went as far as admitting what was obvious to everyone, which is that ATI and Nvidia are just too good at their own game for Sony, who has a lot less experience in graphics technology. So, what better move than to embrace one of them?

Sony had ideas, they never promised anything, and Nvidia complemented those ideas obviously, so there's the reason for the partnership.

that's basically exactly how I see things ^__^
 
oooh, that's kinda scary. Does it mean that we won't ever see another start-up breaking into the 3D hardware biz? I mean, if even Sony can't come out with a product that can compete... or maybe it's just something internal that was choking their R&D efforts in that regard?
 
sage: there are a TON of patents out there, and perhaps it is too cost-prohibitive for a company interested mostly in USING the tech (i.e. Sony, MS, and Nintendo) to come up with new methods of doing things.

I think a startup of sufficiently talented and experienced individuals might be able to come up with some interesting new ideas. I don't know if we'll see that happen, though.
 
It'd be awfully hard, though, because they own a bunch of patents, some of which may never get used. Design a new piece of video hardware and you never know whose patent you might end up infringing upon.
 
london-boy said:
Just because Vince (grossly ;) ) miscalculated something doesn't mean that Sony has failed to live up to what they promised

Secondly, how do we know it's not? People are speculating too quickly, and the PC people aren't making it easier with their "thoughts" -- and frankly, the things they are saying are asinine. A month ago there was no way a PE could clock anywhere near 4GHz, and there was no way it could have that much SRAM, et al., ad infinitum... give it time.

Let's wait and see before passing a judgement you can't escape from if you're wrong. :)
 
Vince, what perceptions do you have of the processing capabilities of the Toshiba part?
 
DaveBaumann said:
Vince, what perceptions do you have of the processing capabilities of the Toshiba part?

nAo likely knows more than I do on this; he'd be the guy to ask. I'd further state that it doesn't matter and is a strawman in this conversation; the STI group is constrained by IP and hardwired methodology more than they are by computation when it comes to graphics processing. The new bounds on performance are going to be in computation and data flow, something I've been saying for quite a while. This also happens to be something STI is well beyond the competition on; yet, as history shows, Sony (and Toshiba for that matter) has habitually been constrained and has produced visualization parts that have been quite weak in rasterization functionality.

You still, as far as I can tell, have yet to answer my question concerning what an ALU in the X2 system can do that an S|APU clocked almost 10 times as high can't.
 
Vince said:
nAo likely knows more than I do on this; he'd be the guy to ask. I'd further state that it doesn't matter and is a strawman in this conversation;

Given your absolute convictions about all things to do with this project, I would have assumed that you would be well clued up. And, well, I don't think it's a "strawman" conversation; it's one that is probably key to their eventual decision to go with NVIDIA.

You still, as far as I can tell, have yet to answer my question concerning what an ALU in the X2 system can do that an S|APU clocked almost 10 times as high can't.

These matters are obviously not just a case of clock speed - the question is whether it could produce equivalent or better performance on the types of graphics instructions that developers are going to want to throw at it.
 
Dave Baumann said:
Vince said:
You still, as far as I can tell, have yet to answer my question concerning what an ALU in the X2 system can do that an S|APU clocked almost 10 times as high can't.

These matters are obviously not just a case of clock speed - the question is whether it could produce equivalent or better performance on the types of graphics instructions that developers are going to want to throw at it.

Dave, with all due respect, don't answer as if I were ignorant. Re-read my original post and you'll see that my original question addressed precisely what you've thrown back at me, so I'll reiterate without anything you can fallaciously pick out:

  • What can an ALU in the X2 system do that an S|APU can't?
On an architectural level, what can one do that the other can't? And then, after you've addressed the shortcomings of the S|APU's architecture (we'll use the Gschwind patent on unified SIMD/scalar datapaths for reference), factor in that one runs at around 500MHz and the other at around 4600MHz.
 
Vince, would you agree that it is likely that the Xenon/Xbox2 CPU will be pushing dozens of Gflops, whereas the PS3 CPU is probably going to push at least hundreds of Gflops?
 
My conviction isn't with PS3, it's with the STI architecture.

Still, I would have thought it would have been of benefit to you to understand how the graphics processing is going to work, especially seeing as you have so much faith that the architecture would primarily be tasked with large chunks of the graphics processing capabilities. What would your faith in that architecture be if it were proven that "relatively hardwired" shader functionality from a graphics company was used as a major part of the graphics processing in the primary, initial, consumer implementation?

Dave, with all due respect, don't answer as if I was ignorant.

Well, I thought we'd both agree that this was fairly critical. The most obvious answer to that is "perform many of the required graphics functions / instructions in fewer cycles" (and, probably, be able to hide the massive chunks of texture latency very well). The real question is whether the available processing capability of simple processing units running at massively high speeds is able to offset the capabilities of complex processing units running the types of instructions they were specifically designed for. We both know where each other sits on that one, and while we won't find the definitive answer until Sony cares to announce more details, wouldn't you at least acknowledge that adding NVIDIA to this particular mix begins to potentially point things towards my pole?
 
DaveBaumann said:
Well, I thought we'd both agree that this was fairly critical. The most obvious answer to that is "perform many of the required graphics functions / instructions in fewer cycles".

Which isn't a substantive answer. It's talk.

DaveBaumann said:
(and, probably, be able to hide the massive chunks of texture latency very well). The real question is whether the available processing capability of simple processing units running at massively high speeds is able to offset the capabilities of complex processing units running the types of instructions they were specifically designed for.

Wait, when did an S|APU become a "simple" processing unit? I know we both have our respective biases, but come on.

As in the NV40's fragment processing constructs, what can't be done with an S|APU team operating serially, in either a spatial or temporal dimension? Never mind the large clock differential (thank God for Fast14, right?); on an architectural level, where's the difference between an APU and the NV40's ALUs? Granted, we can build different complexes around the APU core (as in the SPU, which addresses the issue you raised earlier of hiding latency), but where's the difference?

DaveBaumann said:
Wouldn't you at least acknowledge that adding NVIDIA to this particular mix begins to potentially point things towards my pole?

:) Absolutely not. I like you, but not at all. Let's see the architecture they produce in Nagasaki and then we'll talk.
 