Business ramifications of a 2014 MS/Sony next gen *spawn

I think if Nintendo releases a new console they will need a new idea, like the Wii motion controller or the touch screen on the DS. Then they will simply choose HW that can support this new idea.
 
The NGP GPU only updates what has changed on screen which results in a power savings and adds to the performance.
Where is this coming from?! :oops: AFAIK there is no GPU architecture that works on some delta principle, rendering only what has changed like some funky video compressor. I guess in theory it's possible if you did a complete vertex and texture comparison before rendering a pixel, but the cost would be little different to actually rendering the pixel. And in a worst-case scenario, like an FPS where you rotate the camera, every pixel is going to change, meaning such tests are pure overhead and every pixel will have to be rendered from scratch. The idea of rendering only changes is a good one for certain rendering styles, or when you know you are limiting changes, but it's not a sound basis for a GPU architecture that has to work with every kind of game and viewpoint. And SGX doesn't do this; they have a standard TBDR. I also question that each core drives one quarter of the screen. Workload should be distributed as needed. From the IMGTEC SGX543MP gumph:
no fixed allocation of given pixels to specific cores, enabling maximum processing power to be allocated to the areas of highest on-screen action
Thus if most of the screen is a static wall, with lots of interesting stuff happening in the bottom-right corner, rather than 3 cores sitting idle they'll all share the workload.
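To make that concrete, here's a minimal toy sketch (a hypothetical scheduling model, not IMG's actual hardware) contrasting a fixed per-core split of the screen with cores pulling tiles from a shared pool; with a busy bottom-right corner, the shared pool spreads the expensive tiles across all four cores:

```python
# Toy comparison (hypothetical, not the SGX543MP's real scheduler): fixed per-core
# screen slices vs. cores pulling tiles from a shared pool (dynamic load balancing).
from collections import deque

def fixed_split(tiles, num_cores):
    """Give each core a fixed contiguous slice of the tile list."""
    chunk = len(tiles) // num_cores
    return {core: tiles[core * chunk:(core + 1) * chunk] for core in range(num_cores)}

def dynamic_split(tiles, num_cores, cost):
    """The least-loaded core repeatedly takes the next tile from a shared queue."""
    queue = deque(tiles)
    work = {core: 0 for core in range(num_cores)}
    assigned = {core: [] for core in range(num_cores)}
    while queue:
        core = min(work, key=work.get)
        tile = queue.popleft()
        assigned[core].append(tile)
        work[core] += cost(tile)
    return assigned

# 8x8 grid of tiles; all the "interesting stuff" is in the bottom-right quadrant.
tiles = [(x, y) for y in range(8) for x in range(8)]
cost = lambda t: 10 if (t[0] >= 4 and t[1] >= 4) else 1
print({c: sum(cost(t) for t in ts) for c, ts in fixed_split(tiles, 4).items()})          # badly skewed
print({c: sum(cost(t) for t in ts) for c, ts in dynamic_split(tiles, 4, cost).items()})  # roughly even
```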
 
Yes, the Cell BE and RSX are being used to edit 4k

I assume you mean this:

http://pro.sony.com/bbsccms/ext/ZEGO/files/BCU-100_Whitepaper.pdf

So the fact that Sony sells server cluster components for video editing, and that those happen to use the same components as the PS3, is somehow relevant according to you?

Sony has kept maximum power consumption to within 330 W (at 100 V AC). In the case of a rack that holds forty BCU-100 units, for example, maximum power consumption would be 13.2 kW.

It's not like they are using your regular PS3 configuration for those tasks. I doubt some editing studio has a rack of 1 unit as their hardware...
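As an aside, the quoted rack figure is just the per-unit number multiplied out:

```python
# The quoted whitepaper arithmetic: 330 W per BCU-100, forty units per rack.
watts_per_unit = 330
units_per_rack = 40
print(f"{watts_per_unit * units_per_rack / 1000:.1f} kW per full rack")  # 13.2 kW
```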

All that is beside any relevant point anyway. The demand for 4K resolution in home applications is going to be really low for a long time, and almost useless for anything but projector-size screens.

I'd take a 4K projector though, or at least something over 1080p, as an excuse to build a multi-GPU setup :)
 
TBR does not enable higher resolution. TBR is a memory saving technique. It may surprise you to know that ATI's GPUs divide the screenspace into tiles.

This is common to all modern GPUs. Even RSX and Xenos have z-culling to avoid rendering triangles that aren't visible. Again it isn't something that directly increases display resolution.
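As a toy illustration of what dividing screenspace into tiles means in practice (the 32-pixel tile size here is an assumption for illustration, not any particular GPU's value), binning a triangle to the tiles its bounding box touches looks roughly like this:

```python
# Toy screen-space binning (illustration only; tile size is an assumed parameter).
TILE = 32  # tile size in pixels

def tiles_touched(tri, width, height):
    """Return (tile_x, tile_y) indices of tiles overlapped by the triangle's bounding box."""
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    x0, x1 = max(0, min(xs)), min(width - 1, max(xs))
    y0, y1 = max(0, min(ys)), min(height - 1, max(ys))
    return [(tx, ty)
            for ty in range(int(y0) // TILE, int(y1) // TILE + 1)
            for tx in range(int(x0) // TILE, int(x1) // TILE + 1)]

# A small triangle near the top-left of a 1280x720 screen only lands in a few tiles,
# so only those tiles' bins need to reference it when the tiles are later rendered.
print(tiles_touched([(10, 10), (100, 40), (50, 90)], 1280, 720))
```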

It was semi-unique at the time PowerVR first launched a GPU back in the late 90's. It was a 3D rendering design path that was meant to be compatible with Microsoft's backing of the Talisman rendering push (http://en.wikipedia.org/wiki/Microsoft_Talisman ). That was to be a tile-based approach to accelerated 3D rendering. However, that died out in no small part due to the massive drop in memory prices around 1997-98.

Hence PowerVR found themselves with a very memory-efficient way of rendering that had suddenly become relatively irrelevant to the market at large.

It remains especially attractive in the handheld (and smartphone space) due to the memory savings achieved with a full TBR.

That only indirectly helps you with power (one or two less memory chips will be fairly insignificant in overall power use) but helps significantly where space on the PCB is at a premium.

As well, many of the original benefits of PowerVR's original TBR chip have been incorporated into all modern GPUs. It really isn't all that unique anymore. The only real benefit is that it allows the use of slightly less memory, and that is insignificant in anything but a handheld/smartphone where PCB space is at a premium.

Regards,
SB

You are talking TBR and I'm quoting TBDR. The PowerVR apparently does both.

"The heavy bandwidth savings is the key advantage of a TBDR." http://www.beyond3d.com/content/articles/38/

Bandwidth is not memory. This does help to increase efficiency.

Traditional architectures, like 3Dfx Voodoo2 - Riva TNT and others, work on a per-polygon basis. This means that their pipeline will take a triangle and render it, take the following triangle and render it, and take again the following triangle and render it... this means that they do not know what is still to come. PowerVR uses an overview of the scene to decide what to render; traditional renderers just rush into it and do a lot of unnecessary work. The following figure shows this:

[Figure from the cited article (Trad.gif): the traditional, immediate-mode pipeline rendering triangles one by one.]
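In case the figure doesn't load, here's a toy software model of the difference being described (pure illustration, not how any real chip is built): the immediate-mode loop spends shading work on fragments that are later overdrawn, while the deferred loop settles visibility per pixel first and shades each visible pixel once.

```python
# Toy model (illustration only): immediate-mode shading vs. deferred shading.
def immediate_mode(triangles, pixels):
    shaded = 0
    depth = {p: float("inf") for p in pixels}
    for tri in triangles:                       # triangles processed as they arrive
        for p in tri["covers"]:
            if tri["z"] < depth[p]:
                depth[p] = tri["z"]
                shaded += 1                     # shading spent even if overdrawn later
    return shaded

def deferred_mode(triangles, pixels):
    nearest = {}
    for tri in triangles:                       # pass 1: visibility only, no shading
        for p in tri["covers"]:
            if tri["z"] < nearest.get(p, (float("inf"), None))[0]:
                nearest[p] = (tri["z"], tri)
    return len(nearest)                         # pass 2: shade each visible pixel once

# Worst case for an immediate-mode renderer: five full-screen layers drawn back to front.
pixels = range(100)
tris = [{"z": z, "covers": list(pixels)} for z in (5, 4, 3, 2, 1)]
print(immediate_mode(tris, pixels), "fragments shaded vs", deferred_mode(tris, pixels))  # 500 vs 100
```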


I can be misled by what I read, so correct what I cited. This may be a perfect example of the many misunderstandings I've experienced, and is why I generally cite and use RED to emphasize. It's also why I've become wordy.

And Shifty, I can make a mistake and I was wrong about cores being assigned a specific screen region, I misread.
 
I think your understanding of these technologies is very superficial; I recommend a lot more reading before jumping to conclusions...
 
When the average person needs to sit 5' away from a 60 inch display I'm sure the adoption of 4k will take off.

It might make it as a checkbox on some next gen consoles, but I wouldn't hold your breath on any actual support.
 
And Shifty, I can make a mistake and I was wrong about cores being assigned a specific screen region, I misread.
Okay, but you also need to accept you are wrong about the NGP only updating parts of the screen that have changed, which is what rpg.314 was saying: no GPU works that way.

I have to agree with Laa-yosh here. A lot of your technical contributions are ill informed. It's all very well wanting to learn, but I suggest taking smaller steps, and not making large claims about a GPU, say, when you don't really appreciate the underlying tech beyond what you have gathered from a Wikipedia article. Instead you should ask more questions. This thread has seen another 'jeff derailment' stemming from some simple posits by you that are so out of left field that a lot of noise has been generated in response. There was no need to bring up 4K TVs when this thread isn't about next-gen hardware, nor to go on to supposed NGP efficiencies when this thread isn't about NGP's hardware.
 
You are talking TBR and I'm quoting TBDR. The PowerVR apparently does both.

"The heavy bandwidth savings is the key advantage of a TBDR." http://www.beyond3d.com/content/articles/38/

Bandwidth is not memory. This does help to increase efficiency.

I can be misled by what I read, so correct what I cited. This may be a perfect example of the many misunderstandings I've experienced, and is why I generally cite and use RED to emphasize. It's also why I've become wordy.

And Shifty, I can make a mistake and I was wrong about cores being assigned a specific screen region, I misread.

You got me, in my haste to post, I typed TBR instead of TBDR. And yes it helps with memory bandwidth more than memory required. Both situations benefit from the reduced data required, thus lowering bandwidth requirements as well as memory requirements.
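A back-of-the-envelope sketch of that bandwidth point; every number below (resolution, formats, overdraw factor) is assumed purely for illustration, not taken from any real part:

```python
# Rough framebuffer-traffic comparison (assumed numbers, toy model only).
width, height, fps = 1280, 720, 60
bytes_color, bytes_depth = 4, 4          # 32-bit colour + 32-bit depth
overdraw = 3.0                           # assumed average overdraw factor

# Immediate mode: external colour+depth read-modify-write scales with overdraw.
imm = width * height * fps * overdraw * (bytes_color + bytes_depth) * 2   # read + write
# TBDR-style: colour/depth live in on-chip tile memory; roughly one external colour
# write per visible pixel per frame (scene/parameter-list traffic ignored here).
tbdr = width * height * fps * bytes_color

print(f"immediate ~{imm / 1e9:.1f} GB/s vs tile-based ~{tbdr / 1e9:.2f} GB/s (toy model)")
```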

And I never said it didn't increase efficiency; that's the whole schtick with TBDRs. But it does NOT directly increase either the speed of the solution or the resolution at which it can render 3D scenes.

There has never been a fully TBDR based graphics unit that has outperformed the top performing 3D graphics units of the time, at least that I can recall. Nor are PowerVR the only company that utilises TBDR rendering techniques.

As well, as I've stated, most of the benefits of TBDR have been incorporated into traditional GPUs.

It may be possible that with enough funding and R&D it could be faster than current traditional 3D GPUs, but we'll never know. It's always been speculated that it "could" be, but it never has been in the past 13+ years, even at the time when Series 1, 2, and 3 were attempting to go head to head with the best chips available on PC.

It is absolutely an efficient way of rendering, but it most certainly isn't the fastest way of rendering, nor always the best way of rendering. And its architecture most certainly isn't what helps to achieve "higher display resolution," as you bolded.

If you can still track them down I'd suggest looking at reviews and overviews throughout the early years of PVR (1998-2001) when it was all the rage.

It's quite possible that, had Microsoft's Talisman initiative won out over the rendering paradigm we have now, things may have turned out differently, but it didn't. And even then it's arguable whether things would be drastically different now. As mentioned both here and in the various PVR articles (Wikipedia among them), many of the advantages that PVR had back then have been incorporated into modern GPUs, or influenced key design changes in modern GPUs, thus negating most of the advantages in their TBDR chips that could have led to a more performant chip.

Regards,
SB
 
Adding 4K upscaling through HDMI should be a one-time, trivial programming effort not reflected in per-unit costs, so I don't see why not. There's 3D support and very few people have 3D sets. It's easy to add such features in software, and I'm sure the checkbox would be worth the minimal software effort to put it in.
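A sketch of why the upscaling step itself is trivial: 1080p maps exactly 1:2 onto 2160p in each axis, so even naive pixel doubling works (a real scaler would at least filter; this is only to show the scale of the effort).

```python
# Naive 1080p -> 2160p upscale by pixel doubling (illustration of the 1:2 mapping only).
def upscale_2x(frame):
    """frame is a list of rows, each row a list of pixel values."""
    out = []
    for row in frame:
        doubled = [p for p in row for _ in (0, 1)]   # repeat each pixel horizontally
        out.append(doubled)
        out.append(list(doubled))                    # repeat each row vertically
    return out

frame = [[0] * 1920 for _ in range(1080)]            # dummy 1080p frame
big = upscale_2x(frame)
print(len(big[0]), "x", len(big))                    # 3840 x 2160
```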
 
Okay, but you also need to accept you are wrong about the NGP only updating parts of the screen that have changed, which is what rpg.314 was saying: no GPU works that way.

I have to agree with Laa-yosh here. A lot of your technical contributions are ill informed. It's all very well wanting to learn, but I suggest taking smaller steps, and not making large claims about a GPU, say, when you don't really appreciate the underlying tech beyond what you have gathered from a Wikipedia article. Instead you should ask more questions. This thread has seen another 'jeff derailment' stemming from some simple posits by you that are so out of left field that a lot of noise has been generated in response. There was no need to bring up 4K TVs when this thread isn't about next-gen hardware, nor to go on to supposed NGP efficiencies when this thread isn't about NGP's hardware.

I'm not trying to make technical statements. Large claims? When discussing the reasons for Sony and MS to shelve the next-generation game consoles, having to support the next generation of TVs is part of the issue, and the search for faster, cost-effective GPUs to do so has failed, e.g. Larrabee.

Why?

The industry is coming up against a physics limit, and improvements in GPUs and CPUs are becoming linear, not geometric. (I cited this.)

How can this be overcome?

For the GPU, processes that were discovered years ago but dropped, because brute force worked and/or memory became cheaper, may be used now. PowerVR appears to have used TBDR because power efficiency and brute force do not coexist in handhelds.

Is the above accurate?

Shifty, you chose to close the thread I started. The issues I was interested in, which were mentioned and highlighted in the interview, were:

1) There are no plans to reduce the die size of the PS3 beyond the current 45nm, perhaps waiting for the 20/22nm process. What else could be changed once 20nm is available?

2) Plans are to upgrade the OS Software and features of the PS3 to make it last longer. What upgrades are coming?

3) SCEI suspended PS4 development. They were working on a PowerPC-based system at IBM Rochester after the Larrabee (GPU) fallout, then shelved it. Why? The statement from the author is that they are betting the farm on handhelds being the future. See my message #62, which both supports handhelds and higher energy efficiency, and gives the physics reason GPUs are having problems improving.

You apparently want to limit the scope of the discussion to "Business". Fair enough....

Start with why the PS4 was shelved: GPU or market change (both mentioned). Is there any hope for cost-effective GPUs in the near future? When? Coming from whom? Any process that might allow a more efficient GPU (TBDR)?

Market change: the market moving to handhelds is in the article but is, for the most part, skipped in this thread.

From ALStrong: "The stable architecture may be just what everyone needs so devs can worry about fixing production issues rather than dealing with scaling up art budgets and readjusting again just so they can compete with everyone there."

And from the article, I post this in support of Laa-Yosh's statement, "So there's absolutely no incentive on anyone's side to pursue another large jump in resolution," which I said is probably true for developers as well as Sony and MS, and is possibly why the next-generation game machines have been shelved; it's mentioned in the article cited by Shifty that started this thread.

The current generation of consoles is in its harvest period; it is not a situation where the next generation is being eagerly anticipated. The circumstances of game publishers and developers are also at stake. On current-generation machines, advances in hardware have bloated title development costs, so even major studios cannot run a large number of titles in development. With costs rising, a failure in the market is a costly one.

In this situation, if a new-architecture machine appeared, developers would have to study the architecture, engines and tools all over again and repeat that cycle, and the burden would increase further. Many game developers, considering the business side, would really rather the current machines were frozen and left alone.

So while I believe 4K TV is coming and the PS4 would have to support it in some way, I agree that it would be extremely expensive to do so.

And as a result, "Plans are to upgrade the OS Software and features of the PS3 to make it last longer"

The CE industry appears to like to double resolution: 720p to 1080p is about 2X and 1080p to 4K is again 2X, but doing so increases the work of a game console by more than 2X; it's almost a geometric increase in workload. Given GPUs are no longer able to geometrically increase in performance each generation, the issue we see was inevitable. It is no longer easily possible to keep up, both from the hardware side and also with CGI, as mentioned by Laa-Yosh.

Supporting OpenGL (with the same level of gameplay) and 1080P in the PS3 is not quite possible. I believe a PS3.5 rather than a new generation PS4 would be needed.
 
The transistor limits for even the current generation PC GPUs are more than enough for any next-gen console and 1080p with lots of AA. Memory bandwidth is going to be a far more serious bottleneck.
 
You apparently want to limit the scope of the discussion to "Business". Fair enough....
Thread title: "Business ramifications." Not "Reasons for MS and Sony not releasing or postponing next-gen hardware."

This is a business thread about what Sony, MS, and Nintendo, plus whatever other parties try to get involved, are going to do in the gaming industry over the coming years: what the opportunities are, and how the field and the players may be changing.
 
Thread title: "Business ramifications." Not "Reasons for MS and Sony not releasing or postponing next-gen hardware."

This is a business thread about what Sony, MS, and Nintendo, plus whatever other parties try to get involved, are going to do in the gaming industry over the coming years: what the opportunities are, and how the field and the players may be changing.

Sorry, I assumed you closed the thread I started from the same article because it was a duplicate; your citing the same article supported that. And the article you cite is only about 20% about the business and 80% about technical news.
 
Let me ask a few questions. Can the PS3 display 4K resolution after an appropriate firmware update? Yes, the Cell BE and RSX are being used to edit 4k, the HDMI 1.4 spec supports it.
Are they editing 4K video anywhere close to 60fps?

The HDMI spec supporting 4K display is orthogonal to the technical challenge of efficient storage, compression, decompression and distribution of such a massive amount of data.
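To put rough numbers on "massive amount of data": the uncompressed rates below assume plain 24-bit RGB with no chroma subsampling; real workflows compress heavily, but the ratio between 1080p and 4K is the point.

```python
# Raw (uncompressed) video data rates, assuming 24 bits per pixel.
def raw_rate_gbps(width, height, fps, bits_per_pixel=24):
    return width * height * fps * bits_per_pixel / 1e9

print(f"1080p60: {raw_rate_gbps(1920, 1080, 60):.1f} Gbit/s")   # ~3.0 Gbit/s
print(f"4K60:    {raw_rate_gbps(3840, 2160, 60):.1f} Gbit/s")   # ~11.9 Gbit/s
```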
 
There has never been a fully TBDR based graphics unit that has outperformed the top performing 3D graphics units of the time, at least that I can recall. Nor are PowerVR the only company that utilises TBDR rendering techniques.
I am not aware of any other solution on the market that does TBDR without resorting to gymnastic stunts of the semantic persuasion.

As well, as I've stated, most of the benefits of TBDR have been incorporated into traditional GPUs.
Many techniques, like hierarchical Z, have been incorporated, but there would still be some gap, which is why you often see a Z pre-pass or outright deferred shading in software.
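For anyone unfamiliar with the term, a toy model of a Z pre-pass (in a real engine this is two draw passes using the hardware depth test, not a software loop; this only shows where the saving comes from):

```python
# Toy Z pre-pass (illustration only): depth-only pass first, then shade survivors.
def z_prepass(triangles, num_pixels):
    depth = [float("inf")] * num_pixels
    for tri in triangles:                 # pass 1: depth only (cheap, no shading)
        for p, z in tri:
            if z < depth[p]:
                depth[p] = z
    shaded = 0
    for tri in triangles:                 # pass 2: shade only fragments matching the depth buffer
        for p, z in tri:
            if z == depth[p]:
                shaded += 1               # stand-in for an expensive shader invocation
    return shaded

# Five full-screen layers over 100 pixels: 100 expensive shades instead of up to 500.
tris = [[(p, z) for p in range(100)] for z in (5, 4, 3, 2, 1)]
print(z_prepass(tris, 100))
```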

It may be possible that with enough funding and R&D it could be faster than current traditional 3D GPUs, but we'll never know. It's always been speculated that it "could" be, but it never has been in the past 13+ years, even at the time when Series 1, 2, and 3 were attempting to go head to head with the best chips available on PC.
TBH, I am not aware why exactly PVR was forced out of the market at that time.
 
TBH, I am not aware why exactly PVR was forced out of the market at that time.

I recall that the actual PVR products were always at least one step behind in both features and performance, so the enthusiast market never really picked them up. Why they weren't included in OEM PCs is another question; that might be related to the former issue though...
 
The CE industry appears to like to double resolution: 720p to 1080p is about 2X and 1080p to 4K is again 2X, but doing so increases the work of a game console by more than 2X; it's almost a geometric increase in workload. Given GPUs are no longer able to geometrically increase in performance each generation, the issue we see was inevitable.

1080p has 2.25x the pixels of 720p and 4K has 4x the pixels of 1080P.
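The arithmetic behind those ratios, for reference:

```python
# Pixel counts of the resolutions being discussed.
res = {"720p": (1280, 720), "1080p": (1920, 1080), "4K": (3840, 2160)}
pixels = {name: w * h for name, (w, h) in res.items()}
print(f"1080p / 720p  = {pixels['1080p'] / pixels['720p']:.2f}x")   # 2.25x
print(f"4K    / 1080p = {pixels['4K'] / pixels['1080p']:.2f}x")     # 4.00x
```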
 
I recall that the actual PVR products were always at least one step behind in both features and performance, so the enthusiast market never really picked them up. Why they weren't included in OEM PCs is another question; that might be related to the former issue though...

This might be dated, but it covers brute force vs. smarter processes and is the best description of PowerVR's TBDR I've seen:

http://www.altsoftware.com/embedded-products/supplements/PowerVR_MBX.pdf

POWERVR’s tile-based rendering allow for great performance and image quality on any platform. Its intelligent architecture minimises external memory accesses and thus manages to break the traditional memory bandwidth barrier.

Great features like multitexturing, internal true colour, bump mapping and texture compression enable current games to reach an unrivalled image quality, even in 16 bits colour depth. This intelligent architecture is affordable thanks to the correct choice of features and a cost-effective design.

POWERVR has already solved the memory bandwidth problem when immediate mode renderers are still struggling with it. Future hardware will have to go tile-based if they want to stay competitive, as adding more pipes, memory and even chips will only succeed in dramatically increasing the cost. POWERVR is already proving today that tile-based rendering is a solution for 3D graphics; in the future it will be the only one…

4K is another animal altogether, and a common working video memory might not be possible because of memory bandwidth. In that case, might the screen be separated, as an extension of the tiling method, with more on-chip memory for each GPU? Something similar was apparently done with the GT5 distributed-processing demo.

http://www.kickarss.com/sony-playstation/ps3-news/142-distributed-computing-could-boost-ps3-power

However, we have seen two viable, working tech demos from none other than Polyphony Digital which suggest that Kutaragi's distributed computing dream could manifest in some form in PS3 games. Back in October 2008, prototypes of Gran Turismo 5 surfaced showing the game running at 3840x2160 on a massive, so-called "4K" display. Another demo showed GT5 running at the conventional 1080p resolution, but this time operating at a staggering 240 frames per second. The secret behind this achievement? Distributed computing: in this case, GT5 was running across four PS3s, synchronised and talking to each other using the gigabit LAN port.


In the case of the 4K resolution demo, each console updated a quarter of the screen at 1080p and outputted it at 60Hz.

Too expensive of course.
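For what it's worth, the quadrant arrangement in that demo falls straight out of the numbers: four 1080p viewports tile a 3840x2160 frame exactly. A quick sketch (the specific offsets are my assumption for illustration, not a detail from the article):

```python
# Four 1080p quadrants tiling a 3840x2160 frame, one quadrant per console (offsets assumed).
QW, QH = 1920, 1080
quadrants = {f"PS3 #{i}": ((i % 2) * QW, (i // 2) * QH) for i in range(4)}
for name, (x, y) in quadrants.items():
    print(f"{name}: renders x {x}-{x + QW - 1}, y {y}-{y + QH - 1}")
assert 2 * QW == 3840 and 2 * QH == 2160
```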
 