pixelbox said:
1. Just in case you missed this: OK, so I thought fillrate was based on resolution (how many pixels a GPU can draw). So what, does fillrate refer to every pixel/texel a GPU can draw?
It refers to the number of pixels it can draw (peak) per unit of time. In the case of Xenos, it's got 8 rasterizers and it's clocked at 500MHz (or close enough anyway), giving a theoretical max fillrate of 4Gpix/s. Considering the eDRAM framebuffer, I'm guessing it would probably be possible to come reasonably close to this figure when drawing simple polygons.
Likewise, from what we've guessed regarding RSX in PS3, and given some preliminary performance claims from Sony, if it's more or less a modified PC GPU (perhaps a big if), it would have 24 rasterizers running at 550MHz, giving a highly theoretical fillrate of 13.2Gpix/s. Considering its framebuffer bandwidth, each pixel filled couldn't even consume 2 bytes of bandwidth, which is obviously impossible. Modern GPUs don't even support accelerated 8-bit-per-pixel display modes, I might add...
This just goes to show, by the way, that fillrate is a highly archaic method of measuring performance. If PS3 did have over 13Gpix/s of fillrate, it could redraw a 1080p screen 106 times per frame at 60fps. A ridiculous and uselessly high figure.
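Just to make that arithmetic explicit, here's a minimal C sketch reproducing the numbers above. The Xenos figures are the ones discussed here, the RSX figures are this thread's guesses, and the ~22.4GB/s framebuffer bandwidth is my own assumption for the sake of the example, not a confirmed spec:

```c
#include <stdio.h>

int main(void)
{
    /* Xenos figures as discussed above; RSX figures are the thread's guesses. */
    double xenos_rast = 8.0,  xenos_clock_hz = 500e6;
    double rsx_rast   = 24.0, rsx_clock_hz   = 550e6;   /* speculative */

    double xenos_fill = xenos_rast * xenos_clock_hz;    /* 4.0  Gpix/s */
    double rsx_fill   = rsx_rast   * rsx_clock_hz;      /* 13.2 Gpix/s */

    /* How many times could that repaint a 1080p frame at 60fps? */
    double pixels_1080p = 1920.0 * 1080.0;
    double overdraws = rsx_fill / 60.0 / pixels_1080p;  /* ~106 */

    /* Bandwidth per filled pixel, assuming ~22.4GB/s of framebuffer
       bandwidth (an assumption for illustration only). */
    double bytes_per_pixel = 22.4e9 / rsx_fill;         /* ~1.7 bytes */

    printf("Xenos peak fill: %.1f Gpix/s\n", xenos_fill / 1e9);
    printf("RSX peak fill:   %.1f Gpix/s\n", rsx_fill / 1e9);
    printf("1080p redraws per frame at 60fps: %.0f\n", overdraws);
    printf("Bytes of bandwidth per filled pixel: %.1f\n", bytes_per_pixel);
    return 0;
}
```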
2. What is this 2-way/dual-issue stuff you speak of?
It refers to the ability of a CPU to execute two separate instructions at once, usually with some restrictions applied, such as the data these instructions refer to having to be available at the time of execution, etc.
Also, not all CPUs are symmetrical, so not every instruction can be executed by either unit; this is quite common actually, and includes the SPUs in Cell for example, where one pipe deals with (most) maths instructions and the other with loads/stores and other stuff.
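To illustrate (loosely, in C rather than actual SPU assembly), the difference between work a dual-issue core can pair up and work it can't looks something like this:

```c
#include <stdio.h>

/* Loose illustration only -- this is C source, not SPU assembly.  On a
   dual-issue core like the SPU, one pipe handles (most) arithmetic and
   the other handles loads/stores and such, so independent work can be
   started in the same cycle. */
float demo(const float *buffer, int i, float x, float y, float z)
{
    /* Independent pair: a multiply (arithmetic pipe) and a load
       (load/store pipe) -- a dual-issue core could start both at once. */
    float a = x * y;
    float b = buffer[i];

    /* Dependent pair: the add needs the multiply's result first, so the
       two cannot issue together no matter how many pipes there are. */
    float c = x * y;
    float d = c + z;

    return a + b + d;
}

int main(void)
{
    float buf[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    printf("%g\n", demo(buf, 2, 2.0f, 3.0f, 1.0f));
    return 0;
}
```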
3. Are VLIW/SIMD different modes of a processor, or types of code?
The latter. VLIW (Very Long Instruction Word) is when the compiler re-arranges the code and attempts to bunch up as many instructions as possible into pre-built blocks that can all be executed at once, in parallel, in the CPU. The CPU therefore doesn't need any out-of-order execution hardware, as that bit has already been taken care of by the compiler. Theoretically, the transistors saved can then be spent on more parallel execution units for greater performance. In reality, VLIW has been a disappointment when used for general-purpose processors (think: Itanium, a.k.a. the 'Itanic').
SIMD is merely performing a Single Instruction on Multiple pieces of Data, such as multiplying 11, 8, 23 and 7 each by 5, for example.
This is different from the above in that VLIW bunches up many, usually different, instructions.
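For the curious, that multiply-by-5 example maps directly onto SIMD hardware. Here's a minimal sketch using x86 SSE intrinsics (just one possible SIMD instruction set, picked purely for illustration):

```c
#include <stdio.h>
#include <xmmintrin.h>  /* SSE intrinsics */

int main(void)
{
    /* One instruction, four pieces of data: multiply 11, 8, 23 and 7 by 5.
       (_mm_set_ps takes its arguments in reverse lane order.) */
    __m128 data   = _mm_set_ps(7.0f, 23.0f, 8.0f, 11.0f);
    __m128 five   = _mm_set1_ps(5.0f);
    __m128 result = _mm_mul_ps(data, five);   /* a single MULPS does all four */

    float out[4];
    _mm_storeu_ps(out, result);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);  /* 55 40 115 35 */
    return 0;
}
```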
4. What is the purpose of pipelines in GPUs?
Well, a pipeline is merely a fancy-schmancy term for an assembly line. That's what it does really: it breaks down a big task (such as drawing a shaded pixel of a polygon into a framebuffer) into many little tasks so that it can be finished in a step-by-step fashion. Pipelining in microprocessors has been around for decades really, originally in CPUs and then in other devices as well, as the microchip became more widely used. Superpipelining in CPUs was when true assembly-line execution became possible; originally, one instruction had to move fully through the pipeline before the next could enter. Superpipelined CPUs (now a redundant term, as all chips feature this these days) feed new instructions into the pipe as soon as a slot opens up.
And what's their tie to fillrates, i.e. the 16 pipelines in PS2? I always thought of pipelines working like pistons in an engine with the gasoline vapour as data; would I be wrong in this case?
Heh, well, the analogy is a bit crude, but essentially correct I suppose. I'd say, though, that data isn't really the gasoline, because it's not data that drives the GPU; it's the other way around. So say it's a 16-cylinder air compressor instead.
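To make the tie to fillrate concrete: if each pixel pipeline can retire one pixel per clock once it's full (the usual simplification), then peak fillrate is simply pipelines x clock. A quick back-of-the-envelope sketch, using toy numbers plus the PS2's roughly known figures:

```c
#include <stdio.h>

int main(void)
{
    /* Toy numbers: a 5-stage pixel pipeline drawing 1000 pixels. */
    const long stages = 5;
    const long pixels = 1000;

    /* Unpipelined: each pixel occupies the whole pipe before the next enters. */
    long serial_cycles = pixels * stages;

    /* Pipelined (assembly line): a new pixel enters every cycle, so once the
       pipe is full, one pixel comes out the other end per cycle. */
    long pipelined_cycles = pixels + (stages - 1);

    printf("serial:    %ld cycles\n", serial_cycles);     /* 5000 */
    printf("pipelined: %ld cycles\n", pipelined_cycles);  /* 1004 */

    /* That one-pixel-per-pipe-per-clock throughput is the tie to fillrate:
       peak fillrate = pipes x clock, e.g. the PS2's 16 pixel pipelines at
       roughly 147MHz give the often-quoted ~2.4 Gpix/s. */
    printf("PS2 GS peak fill: ~%.1f Gpix/s\n", 16.0 * 147.456e6 / 1e9);
    return 0;
}
```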