CONFIRMED: PS3 to use "Nvidia-based Graphics processor&

I think what he's trying to say is that data would have to go through the main bus between the CPU and the GPU in order to, in his example, "make sure that the waves interact with the position of the player correctly"... And that would cause a bottleneck that wouldn't be there if physics were done completely on the CPU and graphics calculations were the only thing the GPU had to worry about.
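
A minimal sketch of the round trip being described, assuming a hypothetical set of gpu_* helpers (they stand in for whatever upload/readback mechanism the console would actually expose, not a real API):

struct WaveField { float height[64 * 64]; };

// Hypothetical placeholders, not a real console API.
static void gpu_simulate_waves() { /* wave physics runs on the GPU */ }
static void gpu_read_back(WaveField*) { /* results copied back over the shared bus */ }
static void resolve_player_vs_waves(const WaveField&) { /* gameplay logic on the CPU */ }
static void gpu_render(const WaveField&) { /* normal rendering work */ }

// One frame of the loop in question: if the waves are simulated on the GPU
// but gameplay still needs the results, the CPU has to wait for the readback
// before it can continue - the bottleneck being pointed at above.
void frame()
{
    static WaveField waves;
    gpu_simulate_waves();
    gpu_read_back(&waves);            // CPU stalls on bus traffic here
    resolve_player_vs_waves(waves);
    gpu_render(waves);
}

int main() { frame(); }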
 
Re: Cell Graphics

london-boy said:
PZ said:
Also, if the Cell is really supposed to be so super fast (1 Tflop) then why does it need a GPU (and not just another cell chip to act like a GPU)?

If the NV50 is 2x NV40 then it is ~400 Gflops (or so using nVidia flops)

Does this imply that the PS3 CPU is < 400 Gflops?

If it really could dole out more flops than an NV50 then wouldn't the NV50 become a decelerator to such a massive CPU?

I get the impression that a lot of people got this whole Cell CPU thing a bit wrong, in terms of what it might do and what it might not, compared to what a GPU has to do.

I think this GPU will most likely process data like nothing Nvidia has ever made before.

1. stream processing, unlike PCs
2. Linux-based OS
3. cluster processing

On the first point, if Cell lives up to the specs, what we know about current GPUs based on PC (static) architecture won't apply. I think Sony will have Nvidia design its GPU from a second Cell chip (meaning that there will be at least 2 Cell chips in PS3). Remember Sony's long-term goal of lowering fab costs while using its Cell chip to gain market share from the competition. It's easier to do that when the chips you fab are all built off the same framework - Cell. Sony is wise in remembering developers' problems in understanding the EE/GS. Nvidia is keenly aware of the tools used by these programmers and their mindset. It's a great bridge between these two vastly different communities (consoles, movie production, PC, etc.)

On the second point, I think IBM, Nvidia, Sony and others are compelled for numerous reasons to bring the OS to the forefront, all in an effort to wean consumers and the development communities off Microsoft's OS.

On the last point, cluster processing of graphics is not something that has ever been done in mainstream PC computing. Sony's GPU must complement this fundamental application of the Cell design. That's why I think Nvidia is partnered with Sony to turn a Cell chip into a GPU with Nvidia's goodies added. :!:
 
london-boy said:
I think what he's trying to say is that data would have to go through the main bus between the CPU and the GPU in order to, in his example, "make sure that the waves interact with the position of the player correctly"... And that would cause a bottleneck that wouldn't be there if physics were done completely on the CPU and graphics calculations were the only thing the GPU had to worry about.

That's an assumption about what a CPU and a GPU are; what if the GPU can read/write the CPU cache?

In most cases even physics on the CPU would have to read/write main RAM, so why should MainRAM->CPU->MainRAM->CPU be any faster than MainRAM->GPU->MainRAM->CPU?
 
No matter HOW much power any of these consoles have under the hood, the fact is that it will be time/cost prohibitive to develop ultra-high-res assets to "truly" take advantage of it for most game projects. That's where I see M$'s XNA truly being the trump card, which of course is their plan. IMO it'll come down to initial hype building a base (are Sony/Nintendo really going to be able to come up with something as good as Live? Halo 2 by itself quadrupled online play, and multiplayer will only continue to grow) and staying power (with consistently good titles).

I foresee a great future in CAD/art design for the truly talented.
 
DeanoC said:
In most cases even physics on the CPU would have to read/write main RAM, so why should MainRAM->CPU->MainRAM->CPU be any faster than MainRAM->GPU->MainRAM->CPU?

Hey, don't ask me, I was trying to explain what Guden's point was.
And I agree that if the GPU-MainRAM-CPU bandwidth is all the same, things could work properly.
It looks like many physics-heavy PC games are CPU bound (obviously), and it would be nice to see what would change if part of the physics calculations were offloaded to the GPU.
On the next-gen consoles we won't have to worry about PCI-Express or whatever bottlenecks PCs have today, so it will be interesting to see what comes out of it.
 
Qroach said:
Darn Deano ;) I was just going to ask the question about a GPU reading from CPU cache
Working on low-level engine code combined with a slow build at the moment. Used to be TomF's excuse for always being first to post on various mailing lists ;-)
 
Nintendo has no online structure yet and I don't know a soul who uses Sony's system.

It takes time and $$ to develop that stuff. M$ has gone about online console gaming properly, imo.
 
To DeanoC

So in the XB leak they say that the GPU can read (write?) from the CPU memory; that would be enough to solve the problem :?:

If so, we must expect the XB2 GPU to be very powerful and have a lot of new components.


OT: each day I think it is more probable that we will see some sort of Fast14, or even some FastMath tech.
 
DeanoC said:
You have to see the big picture and stop thinking about graphics, physics etc. and see it as particular operations, then you see the correlation.

I already said in my post that I see the correlation, but as we'll be introducing extra layers of complexity, I feel it will be better to keep hardware doing what it's intended to do. GPUs get bogged down quickly as workload increases, and there's no reason to believe that won't continue in the future. Adding more strain on them doesn't seem like a particularly smart thing, especially when many people today are very budget-conscious and can't afford to buy anything but pretty pedestrian hardware.

Then there's the feedback issue like I mentioned, but I've gone over that already.

Then there's the additional issue of GPUs not running program code either; instead, the hardware execution units are abstracted away through driver layers and a compiler that's unique to each IHV, and even to each of an IHV's hardware architectures. It means re-inventing the wheel for every new generation of hardware, doing all the bughunting and tweaking and tuning all over again each time the structure of the hardware changes. Inefficient and resource-taxing.

So we can open tin cans with V8-engine tin can openers. I still don't see any particularly compelling reason to do so...
 
DeanoC said:
Not without general purpose automatic access to main memory they don't.
According to the last patent I found, APUs have a general-purpose mechanism to access main memory (and other patents indicate APUs can access the CPU L2 cache too via the same mechanism..).
I have a doubt: what does the word 'automatic' mean in this context?
But then the question is what the difference is between a CELL based MPU and an ATI/NVIDIA GPU based design?
We really don't know. The first thing I'm thinking about is 'heavy' SMT. APUs don't have that feature as far as we know..
It's really easy (too easy) to think that future GPU designs and future CELL implementations are converging towards a very similar goal and thus architecture, but it's also too soon to make such a statement.
At the end of the day we don't know what the APUs real features are..

ciao,
Marco
 
nAo said:
DeanoC said:
Not without general purpose automatic access to main memory they don't.
According to the last patent I found, APUs have a general-purpose mechanism to access main memory (and other patents indicate APUs can access the CPU L2 cache too via the same mechanism..).
I have a doubt: what does the word 'automatic' mean in this context?

Like a simple load or store op..

CELL APUs can only speak to memory through DMA, so you have to set up source, destination and range for each memory transaction. Luckily it appears that CELL has a DMA queue, so instead of stalling, the APU can go do some other stuff until the result turns up. But it's certain to be slower (higher latency) than a simple load/store to a cache hierarchy.
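
A rough sketch of what hiding that latency with the DMA queue might look like, double-buffering transfers so the APU works on one chunk while the next one is in flight. The dma_get/dma_wait/process names are made up for illustration; they stand in for whatever the real DMA interface turns out to be:

#include <cstddef>

constexpr std::size_t kChunkBytes  = 4096;                       // bytes per transfer (illustrative)
constexpr std::size_t kChunkFloats = kChunkBytes / sizeof(float);

// Hypothetical DMA interface: set up source, destination and size, tagged so
// the APU can keep computing and only wait when it actually needs the data.
static void dma_get(float*, const float*, std::size_t, int) { /* enqueue a transfer */ }
static void dma_wait(int) { /* block until the tagged transfer completes */ }
static void process(float*, std::size_t) { /* real work on local data */ }

static void stream_from_main_memory(const float* main_mem, std::size_t total_floats)
{
    float bufA[kChunkFloats];
    float bufB[kChunkFloats];
    float* buffers[2] = { bufA, bufB };

    const std::size_t chunks = total_floats / kChunkFloats;
    if (chunks == 0) return;

    // Prime the pipe: start fetching chunk 0 into buffer 0.
    dma_get(buffers[0], main_mem, kChunkBytes, /*tag=*/0);

    for (std::size_t i = 0; i < chunks; ++i)
    {
        const int cur = static_cast<int>(i & 1);
        const int nxt = static_cast<int>((i + 1) & 1);

        // Kick off the next transfer before touching the current chunk,
        // so the DMA latency overlaps with useful computation.
        if (i + 1 < chunks)
            dma_get(buffers[nxt], main_mem + (i + 1) * kChunkFloats, kChunkBytes, nxt);

        dma_wait(cur);                         // pay whatever latency is left
        process(buffers[cur], kChunkFloats);
    }
}

int main()
{
    static float data[kChunkFloats * 4] = {};
    stream_from_main_memory(data, kChunkFloats * 4);
}

With a deeper queue the same idea extends to more buffers; the point is just that the setup cost of each transfer only hurts if nothing else is queued up behind it.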

Cheers
Gubbi
 
Kutaragi Walked Right Into The Trap Set By Ms

The adoption of nVIDIA's GPU for PSX3 is a major blow to PSX3's prospects, since this action forces SCEI to play the game by MS's terms. Why? Because

1. MS wrote VertexShader and PixelShader specification.
2. VS and PS are optimized for DirectX/XNA, not OpenGL.
3. But PSX3 cannot use DirectX. Must use Embedded OpenGL.
4. Embedded OpenGL that SCEI hopes to use is still in early stage.
5. Compared to Embedded OpenGL, DirectX is a mature technology.
6. Developers have been coding DX shaders for four years now. Developers are unfamiliar with Embedded OpenGL shaders.
7. PSX3 no longer enjoys a rendering performance advantage over Xbox Next.

Kutaragi Ken is adopting a rendering engine optimized for the rival's API because he had no choice.

The Vertex and Pixel Shaders of nVIDIA GPUs perform best with DirectX9. But Kutaragi Ken must use a less efficient API instead.
 
SegaR&D said:
Kutaragi Walked Right Into The Trap Set By Ms

The adoption of nVIDIA's GPU for PSX3 is a major blow to PSX3's prospects, since this action forces SCEI to play the game by MS's terms. Why? Because

1. MS wrote VertexShader and PixelShader specification.
2. VS and PS are optimized for DirectX/XNA, not OpenGL.
3. But PSX3 cannot use DirectX. Must use Embedded OpenGL.
4. Embedded OpenGL that SCEI hopes to use is still in early stage.
5. Compared to Embedded OpenGL, DirectX is a mature technology.
6. Developers have been coding DX shaders for four years now. Developers are unfamiliar with Embedded OpenGL shaders.
7. PSX3 no longer enjoys a rendering performance advantage over Xbox Next.

Kutaragi Ken is adopting a rendering engine optimized for the rival's API because he had no choice.


:oops: :oops: *LB FREEZES WITH FEAR, AS IF HE JUST SAW A GHOST*


EDIT: And also, the whole referring to Ken Kutaragi as if he's making PS3 himself, with his own hands... Added to the PSX3 thing, it reminds me of someone who shouldn't be here.
 
But it's certain to be slower (higher latency) than a simple load/store to a cache hierarchy.
But the high latencies apply doubly so for the GPU - the question here is only how easy it will be to hide those latencies on the APU, as opposed to the mostly automated process on the GPU.
 
SegaR&D is the biggest liar in the forum.

From my point of view I believe that PS3 will be a little more powerful than Xenon in the CPU part (on paper a PE is more powerful than the Xenon CPU), but we don't know how powerful the two GPUs will be.

The other problem is saying "1 teraflop is necessary for physics".

How many PC conversions will use the excellent teraflop performance for physics and AI?

Do you believe that multiplatform games will be superior on PS3 compared to the competitors?

For both questions, I believe not.
 
SegaR&D said:
Kutaragi Walked Right Into The Trap Set By Ms

The adoption of nVIDIA's GPU for PSX3 is a major blow to PSX3's prospects, since this action forces SCEI to play the game by MS's terms. Why? Because

1. MS wrote VertexShader and PixelShader specification.
2. VS and PS are optimized for DirectX/XNA, not OpenGL.
3. But PSX3 cannot use DirectX. Must use Embedded OpenGL.
4. Embedded OpenGL that SCEI hopes to use is still in early stage.
5. Compared to Embedded OpenGL, DirectX is a mature technology.
6. Developers have been coding DX shaders for four years now. Developers are unfamiliar with Embedded OpenGL shaders.
7. PSX3 no longer enjoys a rendering performance advantage over Xbox Next.

Kutaragi Ken is adopting a rendering engine optimized for the rival's API because he had no choice.

The Vertex and Pixel Shaders of nVIDIA GPUs perform best with DirectX9. But Kutaragi Ken must use a less efficient API instead.

1. Clowns will eat you if you go to sleep.
2. A squirrel is named rocky but wears flight goggles.
3. Lady bugs look pretty under a microscope.
4. Though very large, the chocolate cup cake still has frosting.
5. Gloves usually have 5 fingers.
6. Elephants dream about toasters.
7. The fastest ship is twice as fast as the slowest ship when the slowest ship is going twice as fast as it was when the fastest ship was going one and a half times faster than an average ship.

Nite_Hawk
 
Oooops.

I forgot a thing.

To SegaR&D

For your information, Microsoft didn't invent shaders.
 