Business ramifications of a 2014 MS/Sony next gen *spawn

You can't just clock any design to whatever the process will allow. You generally need to add more pipeline stages to a CPU to make it clock higher, which increases the branch misprediction penalty, which in turn requires a more sophisticated branch prediction unit. All of this makes your chip bigger, and now you have to worry about clock distribution and skew, etc.
Those issues apply to ALL ISAs. Which among those are especially problematic for ARM?

Also, I'd imagine that clock distribution/skew wouldn't be their biggest headache @3GHz.

By the time ARM has reworked their architecture to run at high clock speeds, they may not be more power efficient than the competition, depending on how they resolved their issues and did the design.
Again, on what grounds do you believe that? Put another way, if x86 is less power efficient at the moment, then what changes (on x86's or ARM's side) do you expect in the future that will close the gap with ARM?

If anything, x86's semantics make an OoO implementation harder and, by extension, less power efficient as well. And it still has its decoder overhead.
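
To put rough numbers on that misprediction cost, here's a back-of-the-envelope CPI model; the branch frequency, predictor accuracy, and penalties below are illustrative assumptions, not measurements of any real core:

```python
# Rough CPI model for the pipeline-depth vs. branch-penalty trade-off.
# All numbers are illustrative assumptions, not data from any real core.

def effective_cpi(base_cpi, branch_freq, mispredict_rate, penalty_cycles):
    """CPI including the average cost of branch mispredictions."""
    return base_cpi + branch_freq * mispredict_rate * penalty_cycles

# A shallow pipeline: short misprediction penalty.
shallow = effective_cpi(base_cpi=1.0, branch_freq=0.20,
                        mispredict_rate=0.08, penalty_cycles=8)

# A deeper, higher-clocked pipeline: bigger penalty, so it needs a
# better predictor just to keep the same effective CPI.
deep = effective_cpi(base_cpi=1.0, branch_freq=0.20,
                     mispredict_rate=0.08, penalty_cycles=20)

print(f"shallow pipeline CPI: {shallow:.3f}")   # 1.128
print(f"deep pipeline CPI:    {deep:.3f}")      # 1.320
```

The deeper pipeline only wins if its clock speed gain outruns that CPI loss, which is exactly why a beefier (and bigger, hungrier) predictor becomes mandatory.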
 
2014? Really?

We are already 5 years in with ~200 million consoles sold, and this gen is supposed to last another 3 years. I understand that we haven't really reached the low-end price range, but the market will be totally saturated before Sony/MS/Nintendo gets there.

There is no reason for Nintendo to sustain this generation if the Wii continues to see diminished sales. They didn't make a huge upfront investment with a loss-making console. The longer Sony or MS tries to extend the generation, the easier it is for Nintendo to up the graphics performance of their console while maintaining a cost level similar to the Wii at launch.

The big mobile GPU companies are all hyping their new tech due within the next 12 months as "console level graphics" GPUs. The overall gaming market is not standing still, and the current gen consoles will look antiquated by 2014. We should see cheap media-based HTPCs that can do PS3 graphics with integrated GPUs before 2014 arrives.
 
Hardware sales might flatten out soon, but there's always software and I'd guess everyone makes more money there. And most people expect this gen to surpass the previous one because of expanding the market - ex-PC gamers and completely new audiences. We may easily see a total of 250+ million units in the end.
 
Those issues apply to ALL ISAs. Which among those are especially problematic for ARM?
Sure they do; however, Intel's been working on solving those problems for over a decade with billions in R&D every year. It's not the ARM architecture that's the problem, it's the lack of experience with high-performance, high-frequency designs and implementations.

Again, on what grounds do you believe that? Put another way, if x86 is less power efficient at the moment, then what changes (on x86's or ARM's side) do you expect in the future that will close the gap with ARM?
Intel's experience with better implementations, and its better process compared to all others who make ARM CPUs, tells me that.

If anything, x86's semantics make an OoO implementation harder and, by extension, less power efficient as well. And it still has its decoder overhead.
You're not going to get arguments from me against how horrible the x86 architecture is from a design point of view. However, the design still needs to be implemented on an actual chip, and I see Intel having much more experience and better implementations than others when it comes to high-performance CPUs. Couple that with their process advantages and they might indeed beat ARM in that market.
 
The NGP gpu doesn't work that way. In fact, no GPU that I know of works that way.

http://en.wikipedia.org/wiki/PowerVR

The PowerVR chipset uses a method of 3D rendering known as tile-based deferred rendering (often abbreviated as TBDR). As the polygon generating program feeds triangles to the PowerVR (driver) it stores them in memory in a triangle strip or an indexed format.

Unlike other architectures, polygon rendering is (usually) not performed until all polygon information has been collated for the current frame. Furthermore, the expensive operations of texturing and shading of pixels (or fragments) are delayed, whenever possible, until the surface visible at a pixel is determined: hence rendering is deferred.
In order to render, the display is split into rectangular sections in a grid pattern. Each section is known as a tile. Associated with each tile is a list of the triangles that visibly overlap that tile. Each tile is rendered in turn to produce the final image.

Tiles are rendered using a process similar to ray-casting. Rays are cast onto the triangles associated with the tile and a pixel is rendered from the triangle closest to the camera. The PowerVR hardware typically calculates the depths associated with each polygon for one tile row in 1 cycle.

This method has the advantage that, unlike a more traditional z-buffered rendering pipeline, no calculations need to be made to determine what a polygon looks like in an area where it is obscured by other geometry. It also allows for correct rendering of partially transparent polygons, independent of the order in which they are processed by the polygon-producing application. (This capability was only implemented in Series 2 and one MBX variant. It is generally not included for lack of API support and cost reasons.) More importantly, as the rendering is limited to one tile at a time, the whole tile can be in fast on-chip memory, which is flushed to video memory before processing the next tile.

Under normal circumstances, each tile is visited just once per frame.
PowerVR is not the only pioneer of tile based deferred rendering, but the only one to successfully bring a TBDR solution to market. Microsoft also conceptualised the idea with their abandoned Talisman project. Gigapixel, a company that developed IP for tile-based deferred 3D graphics, was purchased by 3dfx, who were subsequently purchased by Nvidia. Nvidia currently has no official plans to pursue tile-based rendering.
Intel uses a similar concept in their integrated graphics solutions. However, their method, coined zone rendering, does not perform full hidden surface removal (HSR) and deferred texturing, therefore wasting fillrate and texture bandwidth on pixels that are not visible in the final image.
Recent advances in hierarchical Z-buffering have effectively incorporated ideas previously only used in deferred rendering, including the idea of being able to split a scene into tiles and of potentially being able to accept or reject tile sized pieces of polygon.
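
For anyone who finds the quoted description abstract, here's a toy sketch of the two-pass binning-then-deferred-shading idea; the tile size, data layout, and shading are made up for illustration, and real PowerVR hardware does this with dedicated units, not per-pixel loops:

```python
# Toy sketch of tile-based deferred rendering (TBDR), for illustration only.

TILE = 32  # assumed tile size in pixels

def bin_triangles(triangles, width, height):
    """Pass 1: build, for each tile, the list of triangles overlapping it."""
    tiles = {}
    for tri in triangles:
        x0, y0, x1, y1 = tri["bbox"]  # screen-space bounding box
        for ty in range(y0 // TILE, y1 // TILE + 1):
            for tx in range(x0 // TILE, x1 // TILE + 1):
                tiles.setdefault((tx, ty), []).append(tri)
    return tiles

def render_tile(tri_list):
    """Pass 2: resolve visibility for the whole tile in on-chip memory,
    then shade each pixel once, only for the front-most triangle."""
    depth = [[float("inf")] * TILE for _ in range(TILE)]
    front = [[None] * TILE for _ in range(TILE)]
    for tri in tri_list:                  # hidden-surface removal first
        for x, y, z in tri["fragments"]:  # fragment coords local to this tile
            if z < depth[y][x]:
                depth[y][x] = z
                front[y][x] = tri
    # Texturing/shading is deferred until visibility is known, so each
    # pixel is shaded at most once, no matter how deep the overdraw.
    return [[tri["color"] if tri else (0, 0, 0)
             for tri in row] for row in front]
```

The key difference from an immediate-mode z-buffer is in render_tile: shading happens only after visibility for the whole tile is resolved, so overdraw costs depth tests but no texture fetches or shader work.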
 
Hardware sales might flatten out soon, but there's always software and I'd guess everyone makes more money there. And most people expect this gen to surpass the previous one because of expanding the market - ex-PC gamers and completely new audiences. We may easily see a total of 250+ million units in the end.

We are already at ~200 million consoles. 50+ million more over the next three years, spread across three different consoles, isn't what I'd call a worthwhile endeavor to stretch out profit-taking.

Expecting the 360 or PS3 to hold up well until the middle or end of 2014 is like expecting the Xbox1/PS2 to have held up well until 2009.

Trying to hold out until 2014 will only give someone else a massive opportunity to jump in in 2012 and cause major market disruption.
 
We are already at ~200 million consoles. 50+ million more over the next three years, spread across three different consoles, isn't what I'd call a worthwhile endeavor to stretch out profit-taking.

Expecting the 360 or PS3 to hold up well until the middle or end of 2014 is like expecting the Xbox1/PS2 to have held up well until 2009.

Trying to hold out until 2014 will only give someone else a massive opportunity to jump in in 2012 and cause major market disruption.

We're actually at about 180 million: 85 million Wii, 50 million 360, and 45 million PS3. 2011 should push us well past the 200 million mark. And the extra time is all about selling more software, not hardware.

There's still room for some price drops in the next couple years as well.

Who is going to jump in?
 
The only system selling significantly more than 10 million per year has been the Wii, and they're slowing down. 250 million total sales for this generation means they could launch in September-November 2013. I think that's still reasonable, but prolonging to 2014 is really too much.

2012 might still see a new Halo game from 343, Doom 4 from id, something from Blizzard (they're looking into porting Diablo 3), maybe GTA V (unless it's somehow released this year), so there's still going to be plenty of gaming to do ;) and also no reason to push new hardware to the market.
 
Sure they do; however, Intel's been working on solving those problems for over a decade with billions in R&D every year. It's not the ARM architecture that's the problem, it's the lack of experience with high-performance, high-frequency designs and implementations.

And none of it is a guaranteed deal breaker. The industry has been fabbing ~3 GHz chips for almost 7 years now. There's a lot of accumulated knowledge in terms of patents and papers, not to mention personnel migration, which puts an efficient high-speed implementation of the ARM ISA within reach, especially when you consider the handcuffs its practically sole competitor has to wear.

Intel's experience with better implementations, and its better process compared to all others who make ARM CPUs, tells me that.
Fair enough.
 
We're actually at about 180 million: 85 million Wii, 50 million 360, and 45 million PS3. 2011 should push us well past the 200 million mark. And the extra time is all about selling more software, not hardware.

There's still room for some price drops in the next couple years as well.

Who is going to jump in?

More than likely one or more of the current players. There is no sense in Nintendo accepting a long, sustained drop in revenue. We are currently a year off the peak highs of the current gen, with another 3 years of declining revenue if no one launches until 2014. With another 2-3 years required to ramp up console production, if none of the three launches before 2014, we won't see a return to 2009 levels until 2016-2017. Nintendo didn't spend billions of dollars on a loss-making console. Transitioning to a new console to boost sales is easiest for Nintendo because they don't have to worry about still trying to make up for their initial investment.

Also, the 360 is the premier product of MS's EDD division, the division that is MS's most promising in terms of breaking out and becoming a major contributor to MS's bottom line. MS share prices have been stagnant because MS has had difficulty finding revenue and profit generators outside of the Windows ecosystem. Nintendo and/or Sony stalling until 2014 to introduce their next gen consoles gives MS a better opportunity in 2012 than existed in late 2005, as the 360 will be 6 years old instead of 4, providing MS with a two-year window instead of one. Launching alone in 2012 would give MS and EDD a big boost, with its strongest chance of being #1 next gen. What's the point of passing on that chance to drag out 360 profits? MS makes 4-6 billion dollars a quarter; EDD profits or losses don't affect MS's bottom line enough for them to feel a need to pass up this opportunity just to scrape up the remaining profits potentially available to the 360 with no new consoles until 2014.
 
More than likely one or more of the current players. There is no sense in Nintendo accepting a long, sustained drop in revenue. We are currently a year off the peak highs of the current gen, with another 3 years of declining revenue if no one launches until 2014. With another 2-3 years required to ramp up console production, if none of the three launches before 2014, we won't see a return to 2009 levels until 2016-2017. Nintendo didn't spend billions of dollars on a loss-making console. Transitioning to a new console to boost sales is easiest for Nintendo because they don't have to worry about still trying to make up for their initial investment.

Oh well, I could see Nintendo launching before 2014, but there's not much chance of Sony IMO. I thought you were talking about someone new.

Also, the 360 is the premier product of MS's EDD division, the division that is MS's most promising in terms of breaking out and becoming a major contributor to MS's bottom line. MS share prices have been stagnant because MS has had difficulty finding revenue and profit generators outside of the Windows ecosystem. Nintendo and/or Sony stalling until 2014 to introduce their next gen consoles gives MS a better opportunity in 2012 than existed in late 2005, as the 360 will be 6 years old instead of 4, providing MS with a two-year window instead of one. Launching alone in 2012 would give MS and EDD a big boost, with its strongest chance of being #1 next gen. What's the point of passing on that chance to drag out 360 profits? MS makes 4-6 billion dollars a quarter; EDD profits or losses don't affect MS's bottom line enough for them to feel a need to pass up this opportunity just to scrape up the remaining profits potentially available to the 360 with no new consoles until 2014.

Last quarter EDD made over $600 million. Profit for the current fiscal year will probably be over $2 billion, and calendar year 2010 was their best year ever. The goal isn't to be #1; the goal is to be profitable. They're just now reaping the rewards after 5 years; the last thing they want to do is kill that momentum with their own new product and its 10 launch titles or whatever they could manage.

I wouldn't be surprised if Nintendo is the first to launch, but their waning sales are still making them a lot of money. I'd fully expect the other 2 to follow within a year or so; whoever launches first won't gain more than a year.
 
a) What's the point of putting up a link and then quoting the whole damn page back to me, in the same post, without specifying which bits YOU are referring to?

b) Where does that page say anything about PowerVR tracking visibility ACROSS frames to perform some kind of HSR?

Edited and Bolded the parts that apply.

I said: "The NGP GPU only updates what has changed on screen which results in a power savings and adds to the performance. It also tiles the screen and each of the 4 GPUs is responsible for a 1/4 rectangle of the whole screen. Processes for screen generation can help achieve a higher display resolution."

The PowerVR does not calculate/render portions of the screen that are hidden. It can accept or reject rendering to a tile (my read on that was that it can choose not to render a portion of the screen that is hidden or has not changed). The point was: processes for screen generation/rendering can help achieve a higher display resolution.

Also, per the Wiki article, this appears to be unique at this time. It's mentioned in another news article as a "secret" method allowing the PowerVR to equal a PS3's graphics power.

The processes allow the PowerVR to be more efficient. That was the point, and the only point, I was trying to make. Brute power is reaching a physical limit; other methods are being explored, especially for efficiency-driven, battery-powered platforms. See cite in message 63
 
If you have the hardware for 3-D, you just about have the hardware for 4K. The HDMI 1.4 specs reflect this.

I assume you mean 1080p in 3D? 4K is still a lot more than 2x 1080p (it's 4x the pixels), so I don't understand what you mean by "you just about have the hardware for 4K".

I don't want to sound rude or portray myself as any sort of expert in this topic, but I find very large parts of your arguments in this thread to be completely false or based on misinformation.

edit: your post about TBDR only calculating what has changed is wrong. The article talks about not calculating stuff that's not visible on the screen. How do you translate that into "not changed on the screen"?
 
I assume you mean 1080p in 3D? 4K is still a lot more than 2x 1080p (it's 4x the pixels), so I don't understand what you mean by "you just about have the hardware for 4K".

I don't want to sound rude or portray myself as any sort of expert in this topic, but I find very large parts of your arguments in this thread to be completely false or based on misinformation.

It appears to be more that he has a certain view of how things should be and then attempts to interpret the available information in such a way as to reflect that view. I don't think he's deliberately trying to spread false or misleading information.

He is correct that HDMI 1.4 can support 4K at 24 Hz. So technically it's enough for a 4K movie feed, but entirely insufficient for gaming.
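
The raw pixel rates make the distinction obvious. A quick sketch, assuming 3840x2160 for "4K" and ignoring blanking intervals:

```python
# Pixel-rate comparison behind "4K at 24 Hz suits film, not gaming".
# Blanking intervals are ignored for simplicity.

def pixel_rate(w, h, hz):
    return w * h * hz

film_4k  = pixel_rate(3840, 2160, 24)  # 4K film feed
game_2k  = pixel_rate(1920, 1080, 60)  # common 60 Hz gaming target
game_4k  = pixel_rate(3840, 2160, 60)  # what 4K gaming would need

print(f"4K @ 24 Hz:    {film_4k/1e6:.0f} Mpix/s")   # ~199 Mpix/s
print(f"1080p @ 60 Hz: {game_2k/1e6:.0f} Mpix/s")   # ~124 Mpix/s
print(f"4K @ 60 Hz:    {game_4k/1e6:.0f} Mpix/s")   # ~498 Mpix/s
# HDMI 1.4's 4K modes top out at 24-30 Hz, which suits 24 fps film
# but falls well short of the 60 Hz most games target.
```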

Regards,
SB
 
Edited and Bolded the parts that apply.

I said: "The NGP GPU only updates what has changed on screen which results in a power savings and adds to the performance. It also tiles the screen and each of the 4 GPUs is responsible for a 1/4 rectangle of the whole screen. Processes for screen generation can help achieve a higher display resolution."

TBR does not enable higher resolution; TBR is a memory-saving technique. It may surprise you to know that ATI's GPUs also divide screen space into tiles.

The PowerVR does not calculate/render portions of the screen that are hidden. It can accept or reject rendering to a tile (my read on that was that it can choose not to render a portion of the screen that is hidden or has not changed). The point was: processes for screen generation/rendering can help achieve a higher display resolution.

This is common to all modern GPUs. Even RSX and Xenos have Z-culling to avoid rendering triangles that aren't visible. Again, it isn't something that directly increases display resolution.

Also, per the Wiki article, this appears to be unique at this time. It's mentioned in another news article as a "secret" method allowing the PowerVR to equal a PS3's graphics power.

The processes allow the PowerVR to be more efficient. That was the point, and the only point, I was trying to make. Brute power is reaching a physical limit; other methods are being explored, especially for efficiency-driven, battery-powered platforms.

It was semi-unique at the time PowerVR first launched a GPU back in the late 90s. It was a 3D rendering design path that was meant to be compatible with Microsoft's backing of the Talisman rendering push (http://en.wikipedia.org/wiki/Microsoft_Talisman). That was to be a tile-based approach to accelerated 3D rendering. However, that died out in no small part due to the massive drop in memory prices around 1997-98.

Hence PowerVR found themselves with a very memory-efficient way of rendering that was suddenly rendered relatively irrelevant to the market at large.

It remains especially attractive in the handheld (and smartphone) space due to the memory savings achieved with a full TBR.

That only indirectly helps you with power (one or two fewer memory chips will be fairly insignificant in overall power use) but helps significantly where space on the PCB is at a premium.

As well, many of the original benefits of PowerVR's first TBR chip have been incorporated into all modern GPUs. It really isn't all that unique anymore. The only real benefit is that it allows the use of slightly less memory, and that is insignificant in anything but a handheld/smartphone, where PCB space is at a premium.
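
To put rough numbers on the memory savings, here's a quick sketch; the 32x32 tile and per-pixel byte counts are assumptions for illustration, not PowerVR's actual parameters, and I've used the NGP's 960x544 screen as the example target:

```python
# Rough arithmetic behind "TBR saves memory": the working set is one
# on-chip tile instead of full-screen color + depth buffers.
# Tile size and per-pixel byte counts are illustrative assumptions.

BYTES_COLOR, BYTES_DEPTH = 4, 4   # RGBA8 color, 32-bit depth

def full_screen_working_set(w, h):
    return w * h * (BYTES_COLOR + BYTES_DEPTH)

def tile_working_set(tile=32):
    return tile * tile * (BYTES_COLOR + BYTES_DEPTH)

print(f"960x544 full buffers:  {full_screen_working_set(960, 544)/1024:.0f} KiB")
print(f"one 32x32 tile buffer: {tile_working_set()/1024:.0f} KiB")
# ~4 MiB of bandwidth-hungry external traffic vs. an 8 KiB on-chip
# buffer; the final color is still flushed out once per tile, but depth
# traffic and overdraw writes never leave the chip.
```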

Regards,
SB
 
I assume you mean 1080p in 3D? 4K is still a lot more than 2x 1080p (it's 4x the pixels), so I don't understand what you mean by "you just about have the hardware for 4K".

I don't want to sound rude or portray myself as any sort of expert in this topic, but I find very large parts of your arguments in this thread to be completely false or based on misinformation.

edit: your post about TBDR only calculating what has changed is wrong. The article talks about not calculating stuff that's not visible on the screen. How do you translate that into "not changed on the screen"?

Let me ask a few questions. Can the PS3 display 4K resolution after an appropriate firmware update? Yes: the Cell BE and RSX are being used to edit 4K, and the HDMI 1.4 spec supports it.

I'm not sure a 2x Blu-ray player is fast enough, and you are correct that it would need double the video frame buffer memory of a 3-D Blu-ray player. What would the cost of a 4K Blu-ray player be, $30-$50 more than a 3-D player?

When you compare the cost of a 4K player, $180 (estimate), to the cost of a 4K display, $10,000, the difference supports the claim about the player: "you just about have the hardware for 4K". A 4K DLP, once TI releases a new mirror-array chip, could be only $3,000-$5,000.
 
Let me ask a few questions. Can the PS3 display 4K resolution after an appropriate firmware update? Yes: the Cell BE and RSX are being used to edit 4K, and the HDMI 1.4 spec supports it.

I'm not sure a 2x Blu-ray player is fast enough, and you are correct that it would need double the video frame buffer memory of a 3-D Blu-ray player. What would the cost of a 4K Blu-ray player be, $30-$50 more than a 3-D player?

When you compare the cost of a 4K player, $180 (estimate), to the cost of a 4K display, $10,000, the difference supports the claim about the player: "you just about have the hardware for 4K". A 4K DLP, once TI releases a new mirror-array chip, could be only $3,000-$5,000.

Let me ask you a question. Let's assume a 3 m (about 10 feet) viewing distance. How big does the screen have to be before the quality difference from 1080p to 4K becomes obvious and desirable (not something you can spot from still images after comparing them for 2 minutes)? Do you expect people to buy such big screens? There is also the WAF (Wife Acceptance Factor), which already limits people to suboptimal screen sizes with current 1080p panels. What if the viewing distance is 4 m, ~13 feet?

More is not instantly better in this case. It's the same as buying a Ferrari to commute to work and still being stuck in the traffic jams with all the Toyotas. You don't get to work any faster, but sure, you look cooler doing the commute.

I'm willing to bet 4K is a niche of niches that makes absolutely no sense for home movies for the next 5 years due to screen size, distribution media, price, availability of material, and so on. For games, 4K makes even less sense, as it's better to get better pixels rather than more pixels.

If this is anything to go by, then a 70" screen would only start to be optimal at a 10-foot viewing distance with 1080p movies (again, not still images but movies): http://carltonbale.com/1080p-does-matter

If one extrapolates from that article and diagram, 4K would require gigantic screens before the difference becomes really desirable for the masses.
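
You can sanity-check that extrapolation with basic visual-acuity math. A rough sketch, assuming 20/20 vision resolves about 1 arcminute (a common rule of thumb) and 16:9 panels; these are estimates, not perceptual data:

```python
# Back-of-the-envelope acuity check for the 1080p-vs-4K argument.
# Assumes 20/20 vision resolves ~1 arcminute and 16:9 panels.

import math

ARCMIN = math.radians(1 / 60)  # 1 arcminute in radians

def min_diagonal_inches(distance_ft, horizontal_pixels):
    """Smallest 16:9 diagonal at which every pixel subtends >= 1 arcmin,
    i.e. the size below which extra resolution is wasted at this distance."""
    distance_in = distance_ft * 12
    pixel_pitch = distance_in * math.tan(ARCMIN)   # resolvable pixel size
    width = pixel_pitch * horizontal_pixels
    return width * math.hypot(16, 9) / 16          # width -> 16:9 diagonal

print(f"1080p at 10 ft: >= {min_diagonal_inches(10, 1920):.0f}\" diagonal")
print(f"4K at 10 ft:    >= {min_diagonal_inches(10, 3840):.0f}\" diagonal")
# Roughly 77" before 1080p is fully resolved at 10 feet, and about
# twice that before 4K pays off.
```

That lands right around the linked chart's ~70" figure for 1080p, and implies roughly double that screen size before 4K fully pays off at the same distance.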
 