Next gen consoles: hints?

If the hardware allowed it, isn't it more desirable to actually take those high-res models that they used to generate the normal maps for the poly bump character and use them in the game?

Also, aren't we supposed to be seeing per-pixel accurate displacement mapping with VS3.0/PS3.0? Wouldn't that generate an enormous quantity of polygons too?

Take a look at the detailed bumps on something like the characters in the second 3DMark test - something that's simple for a fragment shader, but what kind of poly counts would be needed to do that with geometry?
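For reference, the per-pixel work being "emulated" is roughly this, written as plain C pseudocode (sample_normal_map and vec3 are made-up stand-ins, not any real shader API):

```c
/* Rough per-pixel normal-map lighting in plain C, for illustration only.
 * sample_normal_map() and vec3 are made-up stand-ins, not a real shader API. */
typedef struct { float x, y, z; } vec3;

/* Pretend this fetches a tangent-space normal baked from the high-res mesh. */
extern vec3 sample_normal_map(float u, float v);

static float dot3(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* One fragment: the minute detail comes from the normal map, not from polygons. */
float lit_intensity(float u, float v, vec3 light_dir_tangent_space)
{
    vec3 n = sample_normal_map(u, v);           /* per-pixel normal */
    float ndotl = dot3(n, light_dir_tangent_space);
    return ndotl > 0.0f ? ndotl : 0.0f;         /* simple diffuse term */
}
```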
 
DaveBaumann said:
I'm sure this is going to be desirable in some areas but disadvantageous in others (i.e. take a per pixel lit poly bump character – "emulation" of detail is very easy through fragment shaders, getting that same level of minute detail via polys will require an enormous quantity).

There are several ways to compress/encode a high-resolution mesh.
At this time the only viable route is via normal maps, but in the future there will be, imho, more viable and efficient solutions (given a programmable architecture).
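To give a flavour of what I mean by encoding a mesh as coarse geometry plus a refinement rule, a minimal sketch (midpoint split only; a real scheme such as Loop or Catmull-Clark also smooths the new positions and stores per-level displacement detail):

```c
/* Coarse mesh + refinement rule as an encoding: one midpoint split of a
 * triangle into four. A real scheme (Loop, Catmull-Clark) also smooths the
 * new positions and adds per-level displacement offsets for the detail. */
typedef struct { float x, y, z; } vec3;

static vec3 midpoint(vec3 a, vec3 b)
{
    vec3 m = { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
    return m;
}

/* out[] receives 4 triangles (12 vertices) derived from one input triangle. */
void subdivide_once(const vec3 tri[3], vec3 out[12])
{
    vec3 ab = midpoint(tri[0], tri[1]);
    vec3 bc = midpoint(tri[1], tri[2]);
    vec3 ca = midpoint(tri[2], tri[0]);

    out[0] = tri[0]; out[1]  = ab;     out[2]  = ca;     /* corner triangles */
    out[3] = ab;     out[4]  = tri[1]; out[5]  = bc;
    out[6] = ca;     out[7]  = bc;     out[8]  = tri[2];
    out[9] = ab;     out[10] = bc;     out[11] = ca;     /* center triangle */
}
```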

ciao,
Marco
 
There are several ways to compress/encode a high-resolution mesh.

How would compression help in an environment such as PS3's?

At this time the only viable route is via normal maps, but in the future there will be, imho, more viable and efficient solutions (given a programmable architecture).

Such as?
 
Akira said:
Faf, I get my numbers from here
Thanks. But I wasn't really questioning the numbers, but rather the other part of your comment.

aaaaa000 said:
So how does DX specify in any way how computation resources are "consolidated"?
Indirectly it does - graphics hw in the PC space is designed to run the DX spec as optimally as possible.

DaveBaumann said:
On the flipside - given the direction Sony appears to have taken so far is there not the possibility that they are trading off programmability at one end for quite fixed function at the other (raster) end? In which case, is one any more desirable than the other?....
...Why not a fragment shader?
This is the fundamental difference I keep harking back to - I'm trying to find out if PS3 has any dedicated fragment processing abilities, and from what I've seen that doesn't really seem to be the case.
If we are to believe the patent, Visualizer's core building blocks are APUs, just like in BE. And the way they are laid out in the pictures indicates they would be in place where you normally do what you call fragment shading.
Does that count as dedicated enough?

I am not pretending to know whether this will indeed be a part of PS3 though; if you know different, feel free to correct me.
 
If the hardware allowed it, isn't it more desirable to actually take those high-res models that they used to generate the normal maps for the poly bump character and use them in the game?
There is a danger in going down this route: you can end up with polygon-level aliasing. If you have finely expressed detail that is smaller than a single pixel on screen, you effectively sample random polygons out of this detail, which will create aliasing artifacts.

LODs mitigate this, but don't completely solve it: if you consider a sphere, polygons near the edge of the circle (that is, the rendered image on the screen) are always sampled at a higher resolution than those near the middle of the circle.
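To make the LOD point concrete, a rough sketch of picking a level from projected edge size (all the numbers and names here are illustrative, not from any real engine):

```c
/* Rough sketch: choose a mesh level from the projected size of a typical
 * edge. Because silhouette-facing and camera-facing polygons foreshorten
 * differently, one level per object can't keep every polygon near a pixel. */

/* Approximate screen-space length (pixels) of a world-space edge of length
 * `edge` at view depth `z`, for a screen `height` pixels tall. */
static float projected_pixels(float edge, float z, float fov_y, float height)
{
    return (edge / z) * (height / fov_y);   /* small-angle approximation */
}

/* Walk to coarser levels until a typical edge covers at least one pixel. */
int pick_lod(float edge, float z, float fov_y, float height, int max_lod)
{
    int lod = 0;
    while (lod < max_lod && projected_pixels(edge, z, fov_y, height) < 1.0f) {
        edge *= 2.0f;   /* assume edges roughly double per coarser level */
        lod++;
    }
    return lod;
}
```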
 
nAo said:
Multiresolution techniques integrated with subdivision surfaces.

How would this be more efficient?

Fafalada said:
aaaaa000 said:
So how does DX specify in any way how computation resources are "consolidated"?
Indirectly it does - graphics hw in the PC space is designed to run the DX spec as optimally as possible.

And given the criteria vendors are weighing at any one time, it may be more optimal to have separate pools of more dedicated resources, or it may be deemed more optimal to have pools of more generalized resources.

Fafalada said:
If we are to believe the patent, Visualizer's core building blocks are APUs, just like in BE. And the way they are laid out in the pictures indicates they would be in place where you normally do what you call fragment shading.
Does that count as dedicated enough?

Where is the Visualizer and do you have any idea of the functionality of the APU’s?
 
Where is the Visualizer and do you have any idea of the functionality of the APU’s?

I'll go one step further and show you the PS3 Sony is planning on building, from the Rambus/Toshiba/Sony contract.

[Image: fig6.jpg - patent diagram showing the Broadband Engine and the Visualizer]


Questions. Ask.
 
I'd imagine the APUs on the Visualizer are the same as the APUs on the CPU. That is to say that they should be flexible enough to do whatever you want with them...
 
No such thing as Cell.

Cell is like X86, an architecture. The CPU based on 'Cell' that Sony and Toshiba are building is called the Broadband Engine.

In the diagram above, the CPU on the left is the Broadband Engine; on the right are 4 Visualizers, which combine to form a single chip, much like 4 PEs form a Broadband Engine.

So you can call the 4 VS a Visualizer I suppose, or the GPU, whatever you want to call it.


Edit: I understand what you meant now; sampling takes place in the Pixel Engines.
 
Dio said:
If the hardware allowed it, isn't it more desirable to actually take those high-res models that they used to generate the normal maps for the poly bump character and use them in the game?
There is a danger in going down this route: you can end up with polygon-level aliasing. If you have finely expressed detail that is smaller than a single pixel on screen, you effectively sample random polygons out of this detail, which will create aliasing artifacts.

LODs mitigate this, but don't completely solve it: if you consider a sphere, polygons near the edge of the circle (that is, the rendered image on the screen) are always sampled at a higher resolution than those near the middle of the circle.

REYES :D
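(REYES dices everything into micropolygons smaller than a pixel before sampling, which is exactly what sidesteps the problem. Very rough sketch of the split-or-dice decision, with invented helper names:)

```c
/* Rough sketch of the REYES split/dice decision: keep splitting surface
 * patches until their projected size is small, then dice into micropolygons
 * smaller than a pixel. All helper names here are invented stand-ins. */
typedef struct Patch Patch;   /* opaque stand-in for a surface patch */

extern float projected_size_in_pixels(const Patch *p);
extern void  split_patch(const Patch *p, const Patch **a, const Patch **b);
extern void  dice_and_shade(const Patch *p);

void reyes_process(const Patch *p, float shading_rate /* ~1 pixel */)
{
    if (projected_size_in_pixels(p) <= shading_rate) {
        dice_and_shade(p);    /* micropolygons now sub-pixel: no geometric
                                 detail left to alias against the pixel grid */
    } else {
        const Patch *a, *b;
        split_patch(p, &a, &b);
        reyes_process(a, shading_rate);
        reyes_process(b, shading_rate);
    }
}
```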
 
Paul said:
In the diagram above, the CPU on the left is the Broadband Engine; on the right are 4 Visualizers, which combine to form a single chip, much like 4 PEs form a Broadband Engine.

So you can call the 4 VS a Visualizer I suppose, or the GPU, whatever you want to call it.

So there is some arbitrary split between what you are deeming the "CPU" and the Visualiser then?

Edit: I understand what you meant now; sampling takes place in the Pixel Engines.

So, the APU’s that are believed to be for fragment level processing are before the texture samplers?
 
So there is some arbitrary split between what you are deeming the "CPU" and the Visualiser then?

In reality BE is not a CPU, it's basically a VPU; or will be transformed into such. Hence BE's APUs aren't general purpose; they are VUs, and this is how Sony can claim such high floating point performance for the thing.

VS is PE based.

So, the APU's that are believed to be for fragment level processing are before the texture samplers?

Pixel Engines will most likely be doing simple operations: update and check stencil, update and check Z, alpha, write pixel.

The programmability comes from the APUs; this is where you write your shaders.

But to answer your question... I would assume, at least from just looking at the diagram, that the fragment processing is before the texture sampling.
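If you want that list of simple operations made concrete, something like this per pixel (formats and test order are pure guesswork on my part, nothing from the patent):

```c
/* Guesswork sketch of the fixed-function per-pixel operations: stencil
 * test, Z test, alpha blend, write. Formats and order are assumptions. */
#include <stdint.h>

typedef struct {
    uint8_t  stencil;
    float    z;
    uint32_t color;    /* packed RGBA8 */
} PixelBufs;

/* Returns 1 if the fragment survives the tests and gets written. */
int pixel_engine(PixelBufs *dst, uint8_t stencil_ref, float frag_z,
                 uint32_t frag_color, float alpha)
{
    if (dst->stencil != stencil_ref) return 0;   /* stencil check */
    if (frag_z >= dst->z)            return 0;   /* Z check (less = pass) */
    dst->z = frag_z;                             /* Z update */

    /* Alpha blend, out = src*a + dst*(1-a), per 8-bit channel. */
    uint32_t out = 0;
    for (int shift = 0; shift < 32; shift += 8) {
        float s = (float)((frag_color >> shift) & 0xffu);
        float d = (float)((dst->color >> shift) & 0xffu);
        out |= ((uint32_t)(s * alpha + d * (1.0f - alpha)) & 0xffu) << shift;
    }
    dst->color = out;                            /* pixel write */
    return 1;
}
```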
 
functionality of the APU’s?

Well, they can do:

FP: vector or scalar
Integer: vector or scalar

They can feed themselves from the DMA-able DRAM using the DMAC present in every PE.

They can do a bit of I/O work with external devices ( they can for example set specific flags in the DRAM sandbox and in their Local Storage so that when the I/O device sends data, that data is automatically transferred into the Local Storage, etc... ).

Each APU has its RAM ( the Local Storage ), its Register File, its Program Counter and its execution units.

The PU can control the APUs through APU RPCs: these RPCs can write the program to be executed into the Local Storage, and modify the Stack, the Heap and the Program Counter.

The PUs are there to run the OS, help with message passing between APUs in the same PE and in different PEs, and do most of the I/O work.

The texture sampling should be done by silicon present in the area where you see the Pixel Engine and the Image Cache.
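As a rough sketch of that PU -> APU RPC flow (everything here is guessed from the patent language; sizes and structure are made up, this is not a real API):

```c
/* Guessed-from-the-patent sketch of the PU -> APU RPC flow: write the
 * program into the APU's Local Storage, set the Program Counter, kick it. */
#include <stdint.h>
#include <string.h>

#define LOCAL_STORAGE_SIZE (128 * 1024)   /* made-up figure */

typedef struct {
    uint8_t  local_storage[LOCAL_STORAGE_SIZE];  /* APU-private RAM */
    uint32_t program_counter;
    int      running;
} Apu;

/* The PU's RPC: load a program into Local Storage and start the APU. */
int apu_rpc_load_and_start(Apu *apu, const void *program, uint32_t len,
                           uint32_t entry_point)
{
    if (len > LOCAL_STORAGE_SIZE || entry_point >= len) return -1;
    memcpy(apu->local_storage, program, len);  /* write program via RPC */
    apu->program_counter = entry_point;        /* set the PC */
    apu->running = 1;                          /* kick the APU */
    return 0;
}
```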
 
Paul said:
In the diagram above, the CPU on the left is the Broadband Engine; on the right are 4 Visualizers, which combine to form a single chip, much like 4 PEs form a Broadband Engine.

Paul said:
In reality BE is not a CPU, it's basically a VPU; or will be transformed into such. Hence BE's APUs aren't general purpose; they are VUs, and this is how Sony can claim such high floating point performance for the thing.

Sorry, so which is it? Is it a CPU or some kind of visual processing unit? If it’s the latter then presumably there must be a CPU as well?

Pixel Engines will most likely be doing simple operations: update and check stencil, update and check Z, alpha, write pixel.

Well, this would be fairly fundamental for a pixel engine to do.

The programmability comes from the APUs; this is where you write your shaders

Obviously.

But to answer your question... I would assume, at least from just looking at the diagram, that the fragment processing is before the texture sampling.

How does that work then? Surely you wouldn't want reading texture information to be the last thing you do in a programmable fragment processing pipeline – not least because texture information doesn't necessarily just equate to colour values.
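For example, a bog-standard dependent read, in plain C terms (all the functions here are invented stand-ins, not a real API):

```c
/* Toy dependent-read example: a texture fetch feeding further fragment
 * math, which then drives another fetch. All functions are invented
 * stand-ins, not a real API. */
typedef struct { float x, y, z; } vec3;

extern vec3 fetch_normal_map(float u, float v);   /* 1st texture read */
extern vec3 reflect_dir(vec3 view, vec3 normal);  /* pure math */
extern vec3 fetch_environment(vec3 direction);    /* dependent 2nd read */

vec3 shade_fragment(float u, float v, vec3 view)
{
    vec3 n = fetch_normal_map(u, v);   /* texture data used as a normal... */
    vec3 r = reflect_dir(view, n);     /* ...feeds shader math... */
    return fetch_environment(r);       /* ...which addresses another texture */
}
```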
 
It's a vector processing unit; this is what a VPU is (should have made that clear).

It's not general purpose, which is why I tend to refrain from calling it a CPU, although it IS a CPU, just a specialized one...

I know you understand what I'm saying, it's just odd to explain.

How does that work then? Surely you wouldn't want reading texture information to be the last thing you do in a programmable fragment processing pipeline – not least because texture information doesn't necessarily just equate to colour values.

The patent doesn't even go into any specifics about the VS; it mentions its basic structure, but says nothing about its functions or how it works.

We'll find out how it works better later on I guess.

However, it is my opinion that this system has REYES rendering in mind...

Anyway I gotta run hence the short response, don't wanna be late for work.
 
DaveBaumann said:
Sorry, so which is it? Is it a CPU or some kind of visual processing unit? If it’s the latter then presumably there must be a CPU as well?

Dude, don't be so thick-headed, just look at the diagram (and pretend that is the basic PS3 architecture; something we don't really know for sure yet).

The CPU is on the left. The GPU is on the right.

How does that work then? Surely you wouldn't want reading texture information to be the last thing you do in a programmable fragment processing pipeline

What makes you think that is the case?

Again, look at the diagram. Note the black arrows that are quite visible in it. They point in both directions. I guess that is to signify it's not some kind of assembly-line setup where one APU pipes into the next until a pixel pops out at the bottom.

Really, just look at the diagram. It explains just about every question you've asked so far.

*G*
 
Paul said:
It's a vector processing unit; this is what a VPU is (should have made that clear).

It's not general purpose, which is why I tend to refrain from calling it a CPU, although it IS a CPU, just a specialized one...

I know you understand what I'm saying, it's just odd to explain.

How does that work then? Surely you wouldn't want reading texture information to be the last thing you do in a programmable fragment processing pipeline – not least because texture information doesn't necessarily just equate to colour values.

The patent doesn't even go into any specifics about the VS; it mentions its basic structure, but says nothing about its functions or how it works.

We'll find out how it works better later on I guess.

However, it is my opinion that this system has REYES rendering in mind...

Anyway I gotta run hence the short response, don't wanna be late for work.

Saying that the APUs are pure vector processors is incorrect though: I agree that it is not what we usually call a General Purpose Processor, as it is obviously optimized to run tasks with inherent parallelism.

Still, it can run any other kind of task: each APU can process 1 FP/FX scalar instruction or 1 FP/FX vector instruction... of course the vector processing case yields the maximum efficiency, but the flexibility of the APU is not compromised.

I have seen the EE's VUs described as 16-bit CPUs (that cannot DMA data by themselves) with a 128-bit FP SIMD engine, micro memories and registers.

The APUs seem to be on a higher level than the VUs, sort of expanding the concept, and we have IBM's work on them that shows it: judging by how Toshiba and Sony liked the VUs in the EE (Toshiba is the author of both VUs), it is not surprising to see the APUs used in CELL and pushed to be basically its building block.
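To illustrate the scalar/vector point (plain C stand-in, not actual APU code):

```c
/* Plain C stand-in for the scalar-vs-vector point: the same unit can issue
 * one 4-wide op or one scalar op; the wide case just does 4x the work. */
typedef struct { float v[4]; } vec4;

/* "Vector instruction": one issue, four results. */
vec4 vadd(vec4 a, vec4 b)
{
    vec4 r;
    for (int i = 0; i < 4; i++)
        r.v[i] = a.v[i] + b.v[i];
    return r;
}

/* "Scalar instruction": same unit, one result per issue. Still works,
 * just a quarter of the peak throughput. */
float sadd(float a, float b) { return a + b; }
```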



Interesting Link:


http://www.forbes.com/forbes/2002/0415/207_2.html

The project has been run in deep secrecy for a year. James Kahle, IBM's lead architect for the chip, says the Cell design borrowed from IBM's Power4 chip (used in high-power servers) but evolved differently. "We started with a blank sheet of paper and asked, 'How do people interact with a machine?'" A hint: They want to talk to it rather than pound on a keyboard, and they demand undivided attention.

Although Kelly and Kahle are coy about the details, Cell will be a parallel processor on a chip, with multiple components that can be reprogrammed--on the fly--to handle any task.
 