CONFIRMED: PS3 to use "Nvidia-based Graphics processor"

I'm going to post it here, because that is the most recent 'Cell' thread and there's no need for a new thread about this.
Just a little tidbit (a new patent):
Memory management in multiprocessor system
A system and a method are provided for improving memory management in a multiprocessor system. A direct memory access (DMA) operation is set up for a first processor. A DMA effective address is translated to a virtual address. The virtual address is translated to a physical address, which is used to access a memory hierarchy of the multiprocessor system.
Now we can be 99% sure that an SPU can fetch data from the PU's L2 cache ;)
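
For illustration, here's a minimal sketch in C of the two-stage translation the abstract describes (effective address -> virtual address -> physical address); the page size, table layouts, and function names are my own assumptions for the example, not anything taken from the patent:

```c
#include <stdint.h>

#define PAGE_SHIFT 12u
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1u)

/* Stage 1: look up the effective page in a (hypothetical) segment table
 * to get the virtual page the DMA engine should use. */
static uint64_t effective_to_virtual(uint64_t ea, const uint64_t *segment_table)
{
    uint64_t vpage = segment_table[ea >> PAGE_SHIFT];
    return (vpage << PAGE_SHIFT) | (ea & PAGE_MASK);
}

/* Stage 2: look up the virtual page in a (hypothetical) page table to get
 * the physical frame; the physical address is then presented to the memory
 * hierarchy, which could resolve it in another core's cache. */
static uint64_t virtual_to_physical(uint64_t va, const uint64_t *page_table)
{
    uint64_t frame = page_table[va >> PAGE_SHIFT];
    return (frame << PAGE_SHIFT) | (va & PAGE_MASK);
}

/* The full translation path the abstract describes: EA -> VA -> PA. */
uint64_t dma_translate(uint64_t ea,
                       const uint64_t *segment_table,
                       const uint64_t *page_table)
{
    return virtual_to_physical(effective_to_virtual(ea, segment_table),
                               page_table);
}
```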

ciao,
Marco
 
wco81 said:
Are they only shooting for DD sound?

Because the Blu-Ray and HD-DVD specs are supposed to support DD Plus and DTS+, something beyond the current AC-3 and DTS.

Since the PS3 will have a Blu-Ray drive, which would mean supporting these newer audio codecs and formats, maybe they would target the new audio formats for game audio as well?

I remember seeing a gamesindustry.biz article saying Sony has licensed DTS for the PS3.

So all those partnerships would mean that we shall see Sony, IBM, Toshiba, Nvidia, DD, DTS, and Rambus logos on the PS3??

Regarding why Sony abandoned the internal (Sony+Toshiba) GPU: it could be because they wanted to punish Toshiba for HD-DVD..... j/k :LOL:
 
If these flexible APUs are handling the vertex processing, then one implication could be general tessellation and displacement mapping.
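
For what that could look like, here's a hedged sketch in C of a software displacement-mapping pass of the sort a flexible APU might run over a vertex batch; the data layout and the nearest-neighbour height sampler are assumptions made up for the example, not anything confirmed about the hardware:

```c
#include <stddef.h>

typedef struct { float x, y, z; } Vec3;

/* Nearest-neighbour height sample; assumes u and v are already in [0,1]. */
static float sample_height(const float *map, int w, int h, float u, float v)
{
    int x = (int)(u * (float)(w - 1));
    int y = (int)(v * (float)(h - 1));
    return map[y * w + x];
}

/* Push each vertex along its normal by a height fetched from the
 * displacement map, scaled by 'scale'. 'uv' holds two floats per vertex. */
void displace_vertices(Vec3 *pos, const Vec3 *nrm, const float *uv,
                       const float *dispmap, int map_w, int map_h,
                       float scale, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        float d = scale * sample_height(dispmap, map_w, map_h,
                                        uv[2 * i], uv[2 * i + 1]);
        pos[i].x += nrm[i].x * d;
        pos[i].y += nrm[i].y * d;
        pos[i].z += nrm[i].z * d;
    }
}
```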

If the APUs are using their own instruction set, then it seems like it would be difficult to have a unified shader model.
 
I'm going out on a limb here (not really, seeing how Nvidia's been officially contracted by Sony now, and there is no longer a fully Cell-based EE2), though it's quite obvious that the visual playing field will see basic parity in the coming generation. We may see the Revolution or PS3 with a few additional technical capabilities due to their later release, ipso facto, but we're dealing with less of a recognizable or really pronounced graphical difference than even in this gen.

Fill-rate, texture throughput, RAM speed & latency, more complex pixel & vertex ops, fps, resolutions, floating-point data models, etc, etc. Of course there will be numerical variances & advantages as expected per platform, but all systems will be more than technologically capable of basic reproductions of their competitor's best (minus a feature here or there). It will come down to architectural efficiency (elimination of system bottlenecks), ease of extracting performance (a very forgiving API & flexible toolsets), & cost effectiveness. Gone are the days of budget titles & small development houses, unfortunately. The end software & its sheer diversity will be the measuring stick, & not the hardware capability. That is true today, as the PS2's sustained commercial success shows, and it will be true tomorrow.

Momentum can shift easily in this business; 3-4 very high-profile IPs are all you really need, & sometimes one alone will almost suffice. But you need plenty of supplemental good- to half-decent-quality software to buoy your flagship titles. Multiple sequels continue to sell for a reason.
 
How many resources can ATi put into the next Microsoft and Nintendo consoles? They'll need to share their engineering between MS and Nintendo while still keeping up PC accelerator development. Or maybe the graphics chips in the Nintendo and MS consoles will share much the same technology.
How big are ATi compared to nVidia?

nVidia apparently are making the PS3 graphics solution in collaboration with Sony (and IBM???), and it might include some of the "Cell" technology already developed by STI, so at a quick glance it would look like they can put more effort and resources into the design than ATi.

Of course, if both the ATi and nVidia solutions are heavily based on their next-gen PC architectures, whose development they started prior to the console contracts and which build on current cores, then it would not matter that much.
But if either of the companies is building an architecture that is radically different from its current PC cores, then they would need all the manpower they can get.

Seeing as nVidia has been a bit behind ATi in PC accelerators for about two years, it would indicate they have indeed allocated a significant portion of their resources to developing the PS3 chip(s), and that their next-gen PC tech will be a leap from the already old GeForce basic core, including tech they have developed for the PS3 together with Sony.
 
Vince said:
Excellent post all-around V3. And, finally, someone uses the term DPP in the right context; not that I'd think you wouldn't.

V3 said:
On this board, people have always questioned who was going to design that part, specifically the one with the pixel engine. At first it was assumed it was just going to be another Sony graphics chip; some more patent searches gave Toshiba as a possibility too. Also, rumours from various and questionable sources have pointed to NVIDIA. These rumours have been around for quite some time too.

Exactly. This graphics solution has been stated to be utilized in Sony's and Toshiba's CE-based products. We know that STI is going to use Cell, which was designed to be inherently highly area- and power-efficient, in CE, and I'm failing to see how a separate or monolithic NG-GPU block of logic is helpful. Actually, it's logically inconsistent with every indication seen thus far. The people around here who think this is going to be a PC-GPU derivative akin to the XGPU are, IMHO, going to be proven wrong. I'm much more partial to an advanced ROP/nVidia Pixel Engine integration with an IP-usage scenario. Those who think this is indicative of a failure of Cell will be proven wrong (which is an absolutely asinine stance to take a week after the ISSCC publishings).

People need to slow down and think. This nVidia announcement has served as nothing but a lightning rod for the PC folks to take another shot at nVidia and Cell without ever knowing a single thing about it before Tuesday.

I think you are right. It would seem logical to me that Sony cannot just buy a PC-derivative chip after all the money and effort they have put into the technology department; a more unique chip combining both nVidia's AND Sony's expertise in the field is more plausible. But reading Dave's, Qroach's and JVD's assumptions is disheartening...
 
DeanoC said:
Physics is an interesting question...
The big question is whether physics belongs on the CPU. A few years ago people talked about 'physics accelerators'; in the next couple of years we will get them, it's just that they are badly named...

You have two ways of approaching the physics problem: one is the simple idea of a faster CPU, the other is using a semi-general-purpose chip with specialised functions for numerical operations (like interpolation and predicted reads and writes).


So the ideal situation would be a main CPU that handles serial operations and sends certain memory-intensive, high-bandwidth tasks to a PIM (processor-in-memory) co-processor? If that's the case, Micron's Yukon processor would seem like a good fit. They have stated that their design was meant as a co-processor to the CPU, not a replacement for it.
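
As a rough illustration of the kind of workload that second approach targets, here's a minimal sketch in C (assuming a structure-of-arrays particle layout; this is not any real middleware or the Yukon API) of the regular, bandwidth-bound numerical kernel a PIM-style co-processor could stream through:

```c
#include <stddef.h>

/* Semi-implicit Euler step over a structure-of-arrays particle set:
 * v += (f/m) * dt, then p += v * dt. Long, regular, bandwidth-bound
 * loops like this are exactly what a streaming/PIM-style unit is for. */
void integrate_particles(float *px, float *py, float *pz,
                         float *vx, float *vy, float *vz,
                         const float *fx, const float *fy, const float *fz,
                         float inv_mass, float dt, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        vx[i] += fx[i] * inv_mass * dt;
        vy[i] += fy[i] * inv_mass * dt;
        vz[i] += fz[i] * inv_mass * dt;
        px[i] += vx[i] * dt;
        py[i] += vy[i] * dt;
        pz[i] += vz[i] * dt;
    }
}
```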

Lately, rumours have been cropping up about a third chip in the next Xbox. If true, it's hard to believe a third chip would be a second IBM CPU or ATI GPU.
 
Vince said:
This graphic solution has been stated to be utilized in Sony and Toshiba's CE based products. We know that STI is going to use Cell, which was designed to be inheriently highly area and power effecient, in CE and I'm failing to see how a seperate or monolithic NG-GPU block of logic is helpful.

And what if this is the case, Vince? Given all your commentary about Cell and the lack of need for "shaders", if it turns out to have PS3.0/PS4.0-like shader capability, I would assume that this must be a failure of Cell for you.

I had a developer recently describe the Toshiba part as having "PS2.0-like capability", and it's fairly obvious that NVIDIA's part will be beyond that in shader terms.
 
I seriously think people are overestimating the effect that next-gen consoles will have on content creation, and how much more time and staff next-gen games will need.
Of course they'll require more art, more detail, larger worlds, and longer and more complex code...

But it's not as if the whole world around the next-gen consoles stops, with only the consoles getting more powerful and capable while the content-creation tools stay the same... after all, they all more or less follow the same general technology curve.
The tools for modelling, animation, motion capture, programming... they all evolve too.

To exaggerate: it would take way too much time to model and animate a character like Vin Diesel in that Riddick game with some old 3D software (OK, Vin Diesel was maybe not the best example of an animated character :) )

Edit: Sorry, I'm a bit off-topic again.
 
As far as the GPU in the PS3 goes, it will probably be customized in at least two ways: an XDR interface and eDRAM. Beyond that, I doubt there will be more than minor differences between nVidia's GPU cores for the PC and the PS3.
 
What has kept Ati and nVidia from putting eDRAM on PC cards? Cost?
What keeps nVidia from putting XDR and eDRAM in their next gen PC products?
 
[DREAM MODE]


Let's assume that the BE+NVGPU in the PS3 will be a real monster of a duo, and that we will all see that even current PC GPUs could do SO much more if only they were coupled with a decent CPU (like the BE will be, again, assuming it is THAT good) and with a monstrous amount of bandwidth between them.
If that is the case, who thinks that following the PS3's release, PC architecture will actually change for the better, providing that "jump" ranycat was mentioning, the jump that finally does away with the old, stupid bottlenecks in the PC architecture?
All the manufacturers of PC parts will eventually have to rethink the way PCs have been built for the last 20 years, 'cause speed bumps are not gonna cut it forever.

Not sure I was very clear...
 
The GPU will have to include significant hardwired feature acceleration. No system and toolset anytime soon will have the capability to make hardware specialization for competitive graphics a drag on performance. It takes a lot of expertise in the specific field of graphics to design and implement the algorithms for all of these features under an efficient architecture; it's not like a big bag of FLOPS could take that place -- these graphics companies don't employ highly skilled engineers, huge investments, and years of R&D and patent filings for a job that a giant calculator in the hands of general developers could do just as well. Sony went with nVIDIA for their expertise in graphics all-around.

Sony's brought in nVIDIA to handle large-scale graphics specialization of the GPU chip with their experience and IP/patents. The announcement and expected revenues clearly do not indicate a simple IP deal, and Sony would never announce a company flatly as a 'partner to the development' of a monetarily significant product if it weren't true due to the impact on projected revenues it implies. The words of a press release are chosen carefully.

It's also clear that this will be more customized than the Xbox-GPU deal as Sony clearly states that they are integrating their own system solutions into the GPU and manufacturing it all themselves. However, nVIDIA has revealed frankly that their technology design does originate from a future GeForce PC part, but one that's quite unlike their previous efforts.
 
rabidrabbit said:
What has kept Ati and nVidia from putting eDRAM on PC cards? Cost?
What keeps nVidia from putting XDR and eDRAM in their next gen PC products?

Nothing. Cost is one thing, and the fact that you have somewhat more fixed render-target sizes on consoles makes it easier to target a particular size of eDRAM.
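
To make the render-target point concrete, here's a back-of-the-envelope sketch in C; the resolutions and buffer formats are assumed purely for illustration:

```c
#include <stdio.h>

/* MB needed for one colour buffer plus one depth/stencil buffer. */
static double framebuffer_mb(int w, int h, int color_bytes, int depth_bytes)
{
    return (double)w * (double)h * (double)(color_bytes + depth_bytes)
           / (1024.0 * 1024.0);
}

int main(void)
{
    /* Assume 32-bit colour + 24/8 depth/stencil, 4 bytes per pixel each. */
    printf("640x480:  %.1f MB\n", framebuffer_mb(640, 480, 4, 4));
    printf("1280x720: %.1f MB\n", framebuffer_mb(1280, 720, 4, 4));
    return 0;
}
```

At these assumed formats a 640x480 colour+Z target needs about 2.3 MB while 1280x720 needs about 7 MB, so a known, fixed target resolution lets a console designer size the eDRAM exactly, whereas a PC part has to handle arbitrary resolutions.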
 
Lazy8s said:
The GPU will have to include significant hardwired feature acceleration. No system and toolset anytime soon will have the capability to make hardware specialization for competitive graphics a drag on performance. It takes a lot of expertise in the specific field of graphics to design and implement the algorithms for all of these features under an efficient architecture; it's not like a big bag of FLOPS could take that place -- these graphics companies don't employ highly skilled engineers, huge investments, and years of R&D and patent filings for a job that a giant calculator in the hands of general developers could do just as well. Sony went with nVIDIA for their expertise in graphics all-around.

Sony's brought in nVIDIA to handle large-scale graphics specialization of the GPU chip with their experience and IP/patents. The announcement and expected revenues clearly do not indicate a simple IP deal, and Sony would never announce a company flatly as a 'partner to the development' of a monetarily significant product if it weren't true due to the impact on projected revenues it implies. The words of a press release are chosen carefully.

It's also clear that this will be more customized than the Xbox-GPU deal as Sony clearly states that they are integrating their own system solutions into the GPU and manufacturing it all themselves. However, nVIDIA has revealed frankly that their technology design does originate from a future GeForce PC part, but one that's quite unlike their previous efforts.


I agree with everything; I've always said that this whole 1 TFLOPS story sounded a bit fishy, and that maybe they needed that much power to keep up with other solutions which are designed to do certain things. That's a lot of overhead just to make sure they could keep up.
But now it looks like things are different; we'll have to see what comes out of it.

However, don't they always say that their new chip is unlike anything they've done before, many times more powerful than anything they've done before, and blah blah blah.... :devilish:
 
rabidrabbit said:
What has kept Ati and nVidia from putting eDRAM on PC cards? Cost?
Unless they can cram the whole VRAM on-die, having eDRAM on a PC GPU is not that important.
rabidrabbit said:
What keeps nVidia from putting XDR and eDRAM in their next gen PC products?
Here, the answer is cost.
 
Performance isn't the critical factor in the advancement of the PC platform. Being decades newer in origin, Cell will crush it performance-wise, but that alone is not enough of an incentive to shift the world's computing standard, which encompasses the devices and tools used by just about every profession -- medicine, law, research, manufacturing, banking, design, gaming, etc. The world's computing needs are centered around that old line of PC compatibility, and it's that lineage of programs and toolsets that is key to any advancement in the space.
 
Lazy8s said:
Performance isn't the critical factor in the advancement of the PC platform. Being decades newer in origin, Cell will crush it performance-wise, but that alone is not enough of an incentive to shift the world's computing standard, which encompasses the devices and tools used by just about every profession -- medicine, law, research, manufacturing, banking, design, gaming, etc. The world's computing needs are centered around that old line of PC compatibility, and it's that lineage of programs and toolsets that is key to any advancement in the space.

Oh, I'm fully aware of that, but one day in the future things will have to change.
 
Lazy8s:
The GPU will have to include significant hardwired feature acceleration. No system and toolset anytime soon will have the capability to make hardware specialization for competitive graphics a drag on performance. It takes a lot of expertise in the specific field of graphics to design and implement the algorithms for all of these features under an efficient architecture; it's not like a big bag of FLOPS could take that place -- these graphics companies don't employ highly skilled engineers, huge investments, and years of R&D and patent filings for a job that a giant calculator in the hands of general developers could do just as well. Sony went with nVIDIA for their expertise in graphics all-around.

Well said.
 