mckmas8808
Legend
[rant] This right here, guys, shows the real truth about jvd and scooby. The man said it's hardware and software, so to completely call him a BSer or a liar is uncalled for. This is exactly what I was talking about earlier. Regardless of what Ken would have said, you two guys would still be saying negative stuff about it.
Just be honest and say that you're going to wait until the console releases before you believe anything. With every interview given, jvd either ignores it or says that it isn't right. [/rant]
A couple of things that are interesting and need to be talked about more are these points.
KK: The Cell is bi-endian (it can switch between big-endian and little-endian byte ordering), so there are no problems.
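As a side note on what bi-endian means: the two modes differ in which byte of a multi-byte value is stored first. A minimal C snippet (generic, nothing Cell-specific) that reports which ordering the running CPU uses:

```c
#include <stdio.h>
#include <stdint.h>

/* Inspect how the current CPU orders the bytes of a 32-bit value.
 * A bi-endian chip like Cell's PPE can be configured to run in
 * either mode; this just reports whichever mode is active here. */
int main(void) {
    uint32_t value = 0x01020304;
    uint8_t *bytes = (uint8_t *)&value;

    if (bytes[0] == 0x04)
        printf("little endian: least significant byte stored first\n");
    else if (bytes[0] == 0x01)
        printf("big endian: most significant byte stored first\n");
    return 0;
}
```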
In fact, some of the demos at E3 were running without a graphics processor, with all the rendering done by just the Cell. However, that kind of usage is a real waste.
The Cell has an architecture where it can do anything; its SPEs can be used to handle things such as displacement mapping.
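For a rough idea of the kind of vertex work an SPE could take on, here's a sketch of a displacement pass: each vertex gets pushed along its normal by a height sampled from a map. This is my own scalar illustration, not anything from the interview; real SPE code would be vectorized and work out of local store, and all names here are invented.

```c
#include <stddef.h>

typedef struct { float x, y, z; } vec3;

/* Hypothetical displacement pass: offset each vertex along its
 * normal by a per-vertex height value from a displacement map.
 * 'scale' controls how strong the displacement is. */
void displace_vertices(vec3 *positions, const vec3 *normals,
                       const float *heights, size_t count, float scale)
{
    for (size_t i = 0; i < count; i++) {
        float h = heights[i] * scale;
        positions[i].x += normals[i].x * h;
        positions[i].y += normals[i].y * h;
        positions[i].z += normals[i].z * h;
    }
}
```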
Prior to PS3, real-time rendered 3D graphics might have looked real, but they weren't actually calculated in a fully 3D environment. That was OK for screen resolutions up until now, and even today most of the games for the Xbox 360 use that kind of 3D. However, we want to realize fully calculated graphics in fully 3D environments. In order to do that, we need to share data between the CPU and GPU as much as possible. That's why we adopted this architecture. We want to make all the floating-point calculations, including their rounding, identical on both sides, and we've been able to make them almost identical. As a result, the CPU and GPU can use each other's calculated figures bidirectionally.
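To see why matching rounding matters when the CPU and GPU share results, here's a minimal C demo (standard <fenv.h>, nothing PS3-specific) showing the same operation landing on different values under different IEEE 754 rounding modes. Without identical rounding, figures computed on one side wouldn't reproduce bit-for-bit on the other.

```c
#include <stdio.h>
#include <fenv.h>

#pragma STDC FENV_ACCESS ON

/* The same division produces slightly different last bits under
 * different IEEE 754 rounding modes. If a CPU and GPU disagree
 * on rounding, shared values won't match exactly. */
int main(void) {
    volatile float a = 1.0f, b = 3.0f;  /* volatile: force runtime division */

    fesetround(FE_DOWNWARD);
    float down = a / b;

    fesetround(FE_UPWARD);
    float up = a / b;

    fesetround(FE_TONEAREST);
    printf("round down: %.9g\nround up:   %.9g\ndiffer: %s\n",
           down, up, down != up ? "yes" : "no");
    return 0;
}
```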
A lot of VRAM will especially be required to control two HDTV screens at full resolution (1920x1080 pixels). For that, eDRAM is no good.
If we had tried to fit enough eDRAM [to support two HDTV screens] onto a 200-to-300-square-millimeter chip, there wouldn't have been enough room for the logic, and we'd have had to cut down on the number of shaders. It's better to use the logic in full, and to add on a lot of shaders.
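Some back-of-the-envelope arithmetic makes the point concrete: two 1920x1080 screens with double-buffered 32-bit color plus a depth/stencil buffer come to roughly 47 MB, far beyond what eDRAM of that era could share a die with the shader logic. These assumptions are mine, not Kutaragi's:

```c
#include <stdio.h>

/* Rough framebuffer budget for two full-HD screens.
 * Assumptions (mine, for illustration): RGBA8 color,
 * double buffering, 32-bit depth/stencil per screen. */
int main(void) {
    const long width = 1920, height = 1080;
    const long bytes_per_pixel = 4;   /* RGBA8 */
    const int screens = 2;
    const int color_buffers = 2;      /* double buffered */

    long color = width * height * bytes_per_pixel * color_buffers * screens;
    long depth = width * height * bytes_per_pixel * screens;  /* 24/8 depth-stencil */

    printf("color: %.1f MB\n", color / (1024.0 * 1024.0));   /* ~31.6 MB */
    printf("depth: %.1f MB\n", depth / (1024.0 * 1024.0));   /* ~15.8 MB */
    printf("total: %.1f MB\n", (color + depth) / (1024.0 * 1024.0));
    return 0;
}
```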
When fixed-pixel devices become the default, it will be the age when TVs and PCs will merge, so we want to support everything perfectly.
To me this is the only quote that could be PR. But I would hope that we could all be adults and talk about this like adults. Does he have a point? The debate should be about that, not about how he bashes MS.

KK: For example, some question where the results from the vertex processing will be placed, and how they will be sent to the shader for pixel processing. If one point gets clogged, everything is going to get stalled. Reality is different from what's painted on canvas. If we're taking a realistic look at efficiency, I think Nvidia's approach is superior.
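What he's describing is basically a bounded-buffer problem: vertex results reach the pixel stage through a fixed-size on-chip buffer, and if either stage outpaces the other, the whole pipeline waits. Here's a toy C model of that handoff (all numbers and names invented for illustration, not how any real GPU is coded):

```c
#include <stdio.h>

#define QUEUE_SIZE 4  /* small buffer between vertex and pixel stages */

/* Toy model of a vertex->pixel handoff through a fixed buffer.
 * When the queue is full the vertex stage must stall; when it is
 * empty the pixel stage sits idle. Here the vertex stage produces
 * 2 results per tick while the pixel stage consumes only 1, so
 * the queue clogs and stalls accumulate. */
int main(void) {
    int queued = 0, vertex_stalls = 0, pixel_idle = 0;

    for (int tick = 0; tick < 10; tick++) {
        for (int p = 0; p < 2; p++) {
            if (queued < QUEUE_SIZE) queued++;
            else vertex_stalls++;   /* producer blocked: buffer full */
        }
        if (queued > 0) queued--;
        else pixel_idle++;          /* consumer starved: buffer empty */
    }

    printf("vertex stalls: %d, pixel idle ticks: %d\n",
           vertex_stalls, pixel_idle);
    return 0;
}
```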