Where is this coming from?! AFAIK there is no GPU architecture that works on some delta principle, rendering only what has changed like some funky video compressor. In theory it's possible if you did a complete vertex and texture comparison before rendering a pixel, but the cost would be little different from actually rendering the pixel. And in a worst-case scenario, like an FPS where you rotate the camera, every pixel is going to change, meaning such tests are pure overhead and every pixel has to be rendered from scratch. Rendering only changes is a good idea for certain rendering styles, or when you know you are limiting changes, but it's not a sound basis for a GPU architecture that has to work with every kind of game and viewpoint. And SGX doesn't do this; they have a standard TBDR. I also question that each core drives one quarter of the screen; workload should be distributed as needed. From the IMGTEC SGX543MP gumph:

The NGP GPU only updates what has changed on screen which results in a power savings and adds to the performance.
Thus if most of the screen is a static wall, with lots of interesting stuff happening in the bottom right corner, rather than 3 cores sitting idle, they'll share the workload.

no fixed allocation of given pixels to specific cores, enabling maximum processing power to be allocated to the areas of highest on-screen action
Yes, the Cell BE and RSX are being used to edit 4k
Sony has kept maximum power consumption to within 330 W (at 100 V AC). In the case of a rack that holds forty BCU-100 units, for example, maximum power consumption would be 13.2 kW.
TBR does not enable higher resolution. TBR is a memory saving technique. It may surprise you to know that ATI's GPUs divide the screenspace into tiles.
This is common to all modern GPUs. Even RSX and Xenos have z-culling to avoid rendering triangles that aren't visible. Again it isn't something that directly increases display resolution.
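The tiling step mentioned above can be sketched in a few lines. This is a purely hypothetical illustration (the tile size, data layout, and function names are my own, not any vendor's): triangles are binned into screen-space tiles so that each tile can later be resolved entirely in on-chip memory.

```python
# Hypothetical sketch of screen-space tile binning (not any vendor's
# actual implementation; tile size and structures are illustrative).

TILE = 32  # tile size in pixels; real GPUs use various sizes

def bounding_tiles(tri, width, height):
    """Yield (tx, ty) indices of every tile a triangle's bounding box touches."""
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    x0, x1 = max(0, min(xs)), min(width - 1, max(xs))
    y0, y1 = max(0, min(ys)), min(height - 1, max(ys))
    for ty in range(int(y0) // TILE, int(y1) // TILE + 1):
        for tx in range(int(x0) // TILE, int(x1) // TILE + 1):
            yield tx, ty

def bin_triangles(triangles, width, height):
    """Build a per-tile triangle list; each tile is then rendered
    entirely on-chip, so external memory sees each pixel only once."""
    bins = {}
    for tri in triangles:
        for key in bounding_tiles(tri, width, height):
            bins.setdefault(key, []).append(tri)
    return bins

# A small triangle in the top-left corner lands only in tile (0, 0).
tris = [[(0, 0), (20, 0), (0, 20)]]
print(bin_triangles(tris, 256, 256).keys())  # dict_keys([(0, 0)])
```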
It was semi-unique at the time PowerVR first launched a GPU back in the late '90s. It was a 3D rendering design path meant to be compatible with Microsoft's backing of the Talisman rendering push (http://en.wikipedia.org/wiki/Microsoft_Talisman ). That was to be a tile-based approach to accelerated 3D rendering. However, that died out in no small part due to the massive drop in memory prices around 1997-98.
Hence PowerVR found themselves with a very memory efficient way of rendering that was suddenly rendered relatively irrelevant to the market at large.
It remains especially attractive in the handheld (and smartphone space) due to the memory savings achieved with a full TBR.
That only indirectly helps you with power (one or two fewer memory chips will be fairly insignificant in overall power use) but helps significantly where space on the PCB is at a premium.
As well, many of the original benefits of PowerVR's original TBR chip have been incorporated into all modern GPUs. It really isn't all that unique anymore. The only real benefit left is that it allows the use of slightly less memory, and that is insignificant in anything but a handheld/smartphone, where PCB space is at a premium.
Regards,
SB
Traditional architectures, like the 3Dfx Voodoo2, Riva TNT and others, work on a per-polygon basis. This means that their pipeline will take a triangle and render it, take the following triangle and render it, take the next triangle and render it, and so on; they do not know what is still to come. PowerVR uses an overview of the scene to decide what to render; traditional renderers just rush into it and do a lot of unnecessary work. The following figure shows this:
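The contrast the article describes can be shown with a toy model (entirely illustrative, nothing like real hardware): an immediate-mode renderer shades every fragment that passes the depth test at the moment it arrives, even if a later triangle overwrites it, while a deferred renderer resolves visibility first and shades each pixel once.

```python
# Toy single-pixel model of overdraw. Each "triangle" is just a depth
# value covering the same pixel; we count fragments actually shaded.

def immediate_mode(depths):
    """Shade in submission order; the depth test rejects occluded
    fragments, but anything that passes is shaded even if overdrawn later."""
    shaded, z = 0, float("inf")
    for d in depths:        # triangles arrive one by one
        if d < z:           # passes the depth test at that moment
            z = d
            shaded += 1     # shading work done (possibly wasted)
    return shaded

def tile_based_deferred(depths):
    """Resolve visibility for the whole tile first, then shade only
    the single visible fragment: one shading operation per pixel."""
    return 1 if depths else 0

# Back-to-front submission is the worst case for an immediate renderer:
layers = [5.0, 4.0, 3.0, 2.0, 1.0]
print(immediate_mode(layers))       # 5 shading ops, 4 of them wasted
print(tile_based_deferred(layers))  # 1 shading op
```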
I assume you mean this:
http://pro.sony.com/bbsccms/ext/ZEGO/files/BCU-100_Whitepaper.pdf
The connection between Sony selling server cluster components for video editing, and the fact that it uses the same components as PS3 is somehow relevant according to you?
Okay, but you also need to accept you are wrong about the NGP only updating parts of the screen that have changed, which is where rpg.314 was saying no GPU works that way.

And Shifty, I can make a mistake and I was wrong about cores being assigned a specific screen region, I misread.
You are talking TBR and I'm quoting TBDR. The PowerVR apparently does both.
"The heavy bandwidth savings is the key advantage of a TBDR." http://www.beyond3d.com/content/articles/38/
Bandwidth is not memory. This does help to increase efficiency.
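To put rough numbers on the bandwidth claim, here is a back-of-envelope comparison of external framebuffer traffic. Every figure here is an assumption chosen for round numbers (overdraw factor, bytes per pixel, per-fragment traffic model); real savings depend entirely on the workload.

```python
# Back-of-envelope external-bandwidth comparison (all figures assumed,
# for illustration only; real savings depend on the workload).

width, height, fps = 1280, 720, 60
bpp_color, bpp_depth = 4, 4      # bytes per pixel
overdraw = 3.0                   # assumed average fragments per pixel

# Immediate-mode renderer: depth and colour live in external memory,
# so each fragment costs roughly 1 depth read + 1 depth write + 1
# colour write out on the bus.
imr_bytes = width * height * fps * overdraw * (bpp_depth * 2 + bpp_color)

# TBDR: depth and colour stay in on-chip tile memory; external memory
# sees only the final colour write per pixel.
tbdr_bytes = width * height * fps * bpp_color

print(f"IMR : {imr_bytes / 1e9:.1f} GB/s")   # IMR : 2.0 GB/s
print(f"TBDR: {tbdr_bytes / 1e9:.1f} GB/s")  # TBDR: 0.2 GB/s
```

Under these assumptions the external traffic differs by roughly an order of magnitude, which is the sense in which bandwidth (not memory capacity) is the key saving.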
I can be misled by what I read, so correct what I cited. This may be a perfect example of the many misunderstandings I've experienced, and is why I generally cite, and use RED to emphasize. It's also why I've become wordy.
And Shifty, I can make a mistake and I was wrong about cores being assigned a specific screen region, I misread.
Okay, but you also need to accept you are wrong about the NGP only updating parts of the screen that have changed, which is where rpg.314 was saying no GPU works that way.
I have to agree with Laa-yosh here. A lot of your technical contributions are ill-informed. It's all very well wanting to learn, but I suggest taking smaller steps, and not making large claims about a GPU, say, when you don't really appreciate the underlying tech beyond what you have gathered from a Wikipedia article. Instead you should ask more questions. This thread has seen another 'jeff derailment' stemming from some simple posits by you that are so out of left field that a lot of noise has been generated in response. There was no need to bring up 4k TVs when this thread isn't about next-gen hardware, nor to go on to supposed NGP efficiencies when this thread isn't about NGP's hardware.
This generation of machines is now in its harvest period, and the situation is not one where next-generation consoles are eagerly anticipated. Game publishers and developers have their own circumstances at stake too. On the current generation of machines, advances in hardware have already bloated title development costs, so even the major studios cannot run a large number of title lines in development. With rising costs, a costly title that fails in the market is ruinous.

In this situation, if machines with new architectures appeared, developers would have to study them all over again, rebuild engines and tools, and repeat that whole cycle, so the burden would only increase further. Considering the business side, many game developers would honestly prefer the current machines to be left alone.
Thread title: "Business ramifications." Not "Reasons for MS and Sony postponing next-gen hardware."

You apparently want to limit the scope of the discussion to "Business". Fair enough....
Thread title: "Business ramifications." Not "Reasons for MS and Sony postponing next-gen hardware."
This is a business thread about how Sony, MS, and Nintendo, plus whatever other parties try to get involved, are going to do about the gaming industry over the coming years, what opportunities, how the field and players may be changing.
This is very different from the claim that "The PowerVR does not calculate/render portions of the screen that are hidden."
The NGP GPU only updates what has changed on screen which results in a power savings and adds to the performance.
Are they editing 4k video anywhere close to 60fps?

Let me ask a few questions. Can the PS3 display 4K resolution after an appropriate firmware update? Yes, the Cell BE and RSX are being used to edit 4k, and the HDMI 1.4 spec supports it.
I am not aware of any other solution on the market that does TBDR without resorting to gymnastic stunts of the semantic persuasion.

There has never been a fully TBDR-based graphics unit that has outperformed the top-performing 3D graphics units of its time, at least that I can recall. Nor is PowerVR the only company that utilises TBDR rendering techniques.
Many techniques, like hierarchical Z, have been incorporated, but there would still be some gap, which is why you often see a Z pre-pass or outright deferred shading in software.

As well, as I've stated, most of the benefits of TBDR have been incorporated into traditional GPUs.
TBH, I am not aware why exactly PVR was forced out of the market at that time.

It may be possible that with enough funding and enough R&D it could be faster than current traditional 3D GPUs, but we'll never know. It's always been speculated that it "could" be, but it never has been in the past 13+ years, even at the time when Series 1, 2, and 3 were attempting to go head to head with the best chips available on PC.
TBH, I am not aware why exactly PVR was forced out of the market at that time.
The CE industry appears to like to double resolution, but the pixel counts grow faster than that: 720p to 1080p is 2.25x the pixels, and 1080p to 4K is another 4x, so each step increases a console's workload by well over 2x; it's almost a geometric increase in workload. Given GPUs are no longer able to geometrically increase in performance each generation, the issue we see was inevitable.
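The arithmetic is easy to make exact: the "doubling" is roughly per linear dimension, but fill and shading work scale with total pixel count.

```python
# Exact pixel-count ratios between the standard resolutions.
res = {"720p": (1280, 720), "1080p": (1920, 1080), "4K": (3840, 2160)}
px = {name: w * h for name, (w, h) in res.items()}

print(px["1080p"] / px["720p"])   # 2.25x the pixels of 720p
print(px["4K"] / px["1080p"])     # 4.0x the pixels of 1080p
```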
I recall that the actual PVR products have always been at least one step behind in both features and performance, so the enthusiast market never really picked them up. Why they weren't included in OEM PCs is another question, though that might be related to the former issue...
POWERVR’s tile-based rendering allows for great performance and image quality on any platform. Its intelligent architecture minimises external memory accesses and thus manages to break the traditional memory bandwidth barrier.

Great features like multitexturing, internal true colour, bump mapping and texture compression enable current games to reach an unrivalled image quality, even at 16-bit colour depth. This intelligent architecture is affordable thanks to the correct choice of features and a cost-effective design.

POWERVR has already solved the memory bandwidth problem while immediate-mode renderers are still struggling with it. Future hardware will have to go tile-based if it wants to stay competitive, as adding more pipes, memory and even chips will only succeed in dramatically increasing the cost. POWERVR is already proving today that tile-based rendering is a solution for 3D graphics; in the future it will be the only one…
However, we have seen two viable, working tech demos from none other than Polyphony Digital which suggest that Kutaragi's distributed computing dream could manifest in some form in PS3 games. Back in October 2008, prototypes of Gran Turismo 5 surfaced showing the game running at 3840x2160 on a massive, so-called "4K" display. Another demo showed GT5 running at the conventional 1080p resolution, but this time operating at a staggering 240 frames per second. The secret behind this achievement? Distributed computing: in this case, GT5 was running across four PS3s, synchronised and talking to each other using the gigabit LAN port.
In the case of the 4K resolution demo, each console updated a quarter of the screen at 1080p and outputted it at 60Hz.
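The numbers in the two demos line up: four 1080p quadrants exactly tile a 3840x2160 frame, and four consoles at 60 Hz would account for the 240 fps figure (presumably frame-interleaved rendering, though the article doesn't spell that out).

```python
# Sanity check on the two demo configurations described above.
full_4k = 3840 * 2160    # pixels in the "4K" demo frame
quadrant = 1920 * 1080   # pixels rendered per PS3 in that demo

assert full_4k == 4 * quadrant   # four 1080p quadrants tile a 4K frame
print(4 * 60)                    # four consoles at 60 Hz -> 240 fps
```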