Post Xbox One Two Scorpio, what should Sony do next? *spawn* (oh, and Nintendo?)

I wanted to say that an APU which costs twice as much (accounting for yield ratio, die size, etc.) might produce only a 20% increase in performance.

$400 for 4TF or $500 for 4.8TF.

Not quite following you on that first line, sorry. You seem to be saying that an APU costing twice as much could only offer a 20% increase in performance. Could you run the maths by me?

Your numbers seem to be saying that doubling the power would incur nearly five times the cost on die alone. Given that AMD are currently selling an 8TF GPU ...

Edit: are you referring to something like this?

http://users.wpi.edu/~mcneill/handouts/old_handouts/iceconomics.pdf

Remember that all these consoles have redundant CUs, that only a portion of the chip is covered with CUs, and that X1 has a large area of SRAM.
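To put some very rough numbers on the die-cost side, here's a quick Python sketch of the kind of yield maths that handout walks through. Every input is assumed (wafer cost, defect density, die areas), so the point is only the shape of the curve: cost per good die grows faster than linearly with area.

```python
import math

WAFER_COST = 6000.0        # assumed cost of a processed 300 mm wafer, in dollars
WAFER_DIAMETER_MM = 300.0
DEFECT_DENSITY = 0.2       # assumed defects per cm^2, purely illustrative

def dies_per_wafer(die_area_mm2):
    """Rough gross die count: wafer area over die area, minus an edge-loss term."""
    r = WAFER_DIAMETER_MM / 2.0
    return (math.pi * r * r / die_area_mm2
            - math.pi * WAFER_DIAMETER_MM / math.sqrt(2.0 * die_area_mm2))

def yield_fraction(die_area_mm2):
    """Simple Poisson yield model: Y = exp(-area * defect_density)."""
    return math.exp(-(die_area_mm2 / 100.0) * DEFECT_DENSITY)

def cost_per_good_die(die_area_mm2):
    return WAFER_COST / (dies_per_wafer(die_area_mm2) * yield_fraction(die_area_mm2))

for area in (180, 360):  # a smaller APU vs one with twice the die area (made-up sizes)
    print(f"{area} mm^2: ~${cost_per_good_die(area):.0f} per good die")
```

With those made-up inputs, doubling the die area roughly triples the cost per good die; redundant CUs and harvesting soften that in practice, which is part of why real console APUs don't follow the naive curve.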
 
$299 wouldn't do much imo. $249 I would be happy with. A $199 slim Xbox One would probably be the biggest news out of E3.

That, compared to a 4TF machine Sony will announce, would simply mean MS gets the bottom feeders while Sony takes the core gamers. The Xbox brand, now as the low powered cheap alternative, would take a hit too.

The business has changed a bit; simply getting consoles out the door doesn't equal success. If someone buys both a PS4 and XB1, but buys all of their multiplatform games on PS4 because they play better, that's a win for Sony.

I simply believe that surrendering the top power-spot to Sony or putting price over performance would be repeating the mistakes of this gen.
 
In the wake of the rumours about Sony launching a new system, the reactions everywhere are turning into fan fiction; people are not reasoning but grounding their expectations in their sweet dreams instead of market dynamics or analysis.
Sony does not plan on leaving the PS4 behind. The PS4 Neo is meant for pretty specific situations, and looking at the rumoured specs and the name, delivering a higher-end VR experience seems to be their goal. They have the cheapest VR headset and they may want to improve how well it fares by offering the most bang for the buck in that segment. That is a profitable segment, one I raised before when nobody wanted to discuss anything beyond spouting GFLOPS figures, and Sony may ask good money for the system.
Now moving to MSFT: people are spouting figures about something quite different from what Sony is doing, which mostly amounts to abandoning XB1 owners. If MSFT were to do something akin to Sony, what would they push? They have no VR system of their own and the others are extremely costly. Say they build one: it would need to run on the XB1, and from there it makes no sense to go with more than a doubling of GPU power and a lesser increase in CPU power. What would be the point?

People are getting irrational, and the mods may want to consider giving some guidelines for the frantic conversations going on here. Anyone can throw random GFLOPS figures around; the thing is, you need a business case to risk the model which has sustained consoles for eons, and the most likely one for Sony is a better VR experience. What would MSFT's be? If people want to throw FLOPS figures around there is the Tech section for that. Here it is the more general side of things, and so... business.
People are acting as if they know Sony will risk its dominance by breaking the unity of the PS4 experience, which is far from a given. What they seem to be doing is offering a better system to the people who have enough money to pay for Morpheus, which costs quite a few bucks to begin with. FLOPS have nothing to do with that.
Sony imho is having its Kinect moment, but handling it in a different manner: they are pushing something cutting edge, but I would be extremely surprised if it were aimed at the masses. They are trying to milk (for now) what could be a pretty niche but high-margin market.
MSFT has other ways to address its performance disadvantage against Sony and to try to stop the massive bleeding in market share they are suffering in Europe, and the significant one in the US. Winning is beyond them for this round; I mean, how could they win by, in an unprecedented move, leaving behind all the customers that spent money on a system only a couple of years ago, a system they like and which does the job pretty well?
 
That, compared to a 4TF machine Sony will announce, would simply mean MS gets the bottom feeders while Sony takes the core gamers. The Xbox brand, now as the low powered cheap alternative, would take a hit too.

The business has changed a bit; simply getting consoles out the door doesn't equal success. If someone buys both a PS4 and XB1, but buys all of their multiplatform games on PS4 because they play better, that's a win for Sony.

I simply believe that surrendering the top power-spot to Sony or putting price over performance would be repeating the mistakes of this gen.

I agree. A price-friendly slim is just not going to generate any sort of momentum towards sales. Everyone knows it's still the same Xbox One, with the same narrative of weakness and the poorest versions of multiplatform games. I don't see how it generates enough excitement to become the biggest story coming out of E3.

I'm going to go out on a limb and say MS knows better than to repeat this strategy. I know people love to underestimate consumers as uninformed, but they know enough to ask basic questions about performance specs. If MS wants to change the conversation about the Xbox brand they're going to have to start with a powerful performance gain.
 
In the wake of the rumours about Sony launching a new system, the reactions everywhere are turning into fan fiction; people are not reasoning but grounding their expectations in their sweet dreams instead of market dynamics or analysis.
Sony does not plan on leaving the PS4 behind. The PS4 Neo is meant for pretty specific situations, and looking at the rumoured specs and the name, delivering a higher-end VR experience seems to be their goal. They have the cheapest VR headset and they may want to improve how well it fares by offering the most bang for the buck in that segment. That is a profitable segment, one I raised before when nobody wanted to discuss anything beyond spouting GFLOPS figures, and Sony may ask good money for the system.
Now moving to MSFT: people are spouting figures about something quite different from what Sony is doing, which mostly amounts to abandoning XB1 owners. If MSFT were to do something akin to Sony, what would they push? They have no VR system of their own and the others are extremely costly. Say they build one: it would need to run on the XB1, and from there it makes no sense to go with more than a doubling of GPU power and a lesser increase in CPU power. What would be the point?

People are getting irrational, and the mods may want to consider giving some guidelines for the frantic conversations going on here. Anyone can throw random GFLOPS figures around; the thing is, you need a business case to risk the model which has sustained consoles for eons, and the most likely one for Sony is a better VR experience. What would MSFT's be? If people want to throw FLOPS figures around there is the Tech section for that. Here it is the more general side of things, and so... business.
People are acting as if they know Sony will risk its dominance by breaking the unity of the PS4 experience, which is far from a given. What they seem to be doing is offering a better system to the people who have enough money to pay for Morpheus, which costs quite a few bucks to begin with. FLOPS have nothing to do with that.
Sony imho is having its Kinect moment, but handling it in a different manner: they are pushing something cutting edge, but I would be extremely surprised if it were aimed at the masses. They are trying to milk (for now) what could be a pretty niche but high-margin market.
MSFT has other ways to address its performance disadvantage against Sony and to try to stop the massive bleeding in market share they are suffering in Europe, and the significant one in the US. Winning is beyond them for this round; I mean, how could they win by, in an unprecedented move, leaving behind all the customers that spent money on a system only a couple of years ago, a system they like and which does the job pretty well?

VR is not much more than an expensive tech demo as of right now and I don't think it is the main driving point for new consoles in 2016/2017. Weak consoles released 2.5 years ago and 14nm nodes coming online are the reason.
 
VR is not much more than an expensive tech demo as of right now and I don't think it is the main driving point for new consoles in 2016/2017. Weak consoles released 2.5 years ago and 14nm nodes coming online are the reason.
What is weak about them? Sorry, I don't see it; the PS4 is doing a better job of running games at the screen's native resolution than any console has in a long, long while.
I've no next-gen console, but from all I read both systems offer a solid experience. Super-high-end PCs are still having trouble running the last Batman, if you see what I mean.
Specs are a non-issue, and the cumulative sales for both systems are painting a pretty lovely picture of how the systems are perceived by their intended customers.

EDIT
To make the point further, both the 360 and PS3 ended up on the wrong side of technology really fast, with Nvidia's move to a unified architecture as well as AMD's. The GHz race on the CPU also stopped soon after, with manufacturers giving up on speed-demon designs. Yet it was the longest generation ever, and those two systems literally butchered the assets of countless games, etc.
 
Maybe it's about both? What's good for high frame rate VR is also good for chasing up the 4K ladder.
There may be an idea of future-proofing the consoles a little bit, but I don't think it is the main driving point. Though are the PS4 Neo specs even going to future-proof against VR demands for long?

Either way, I still believe we would see new consoles even if VR wasn't a thing right now.
 
What is weak about them? Sorry, I don't see it; the PS4 is doing a better job of running games at the screen's native resolution than any console has in a long, long while.
I've no next-gen console, but from all I read both systems offer a solid experience. Super-high-end PCs are still having trouble running the last Batman, if you see what I mean.
Specs are a non-issue, and the cumulative sales for both systems are painting a pretty lovely picture of how the systems are perceived by their intended customers.

Well Xbox One in particular is arguably "weak". It runs Battlefront at 720p. PS4 runs it at 900p, I believe.

With new AMD and Nvidia cards coming out next month they are only going to fall further behind.
 
Maybe it's about both? What's good for high frame rate VR is also good for chasing up the 4K ladder.
So different requirements (edit: "demands" was not the right word for what I meant), and studios in effect dealing with three (maybe four) different SKUs:
PS4 Neo VR mode // PS4 Neo 4K-ish (not reachable) mode // base PS4 // and maybe a PS4 Neo "PS4 mode" that may require some relatively trivial tweaks.

That sounds like quite some pain and extra work.
 
People are getting irrational and mods may consider giving some guidelines to the frantic conversations going on here...
It's not a tech thread and I don't think anyone has real numbers to refer to, so I don't really see the need for moderation. Everyone can take ideas at face value. It's also a broad topic so far left-field and right-field are still worth considering.
 
It's not a tech thread and I don't think anyone has real numbers to refer to, so I don't really see the need for moderation. Everyone can take ideas at face value. It's also a broad topic so far left-field and right-field are still worth considering.
I'm not calling for pruning, there is a button for that ;) I'm just trying to get people to bring more arguments into their posts than a thinly disguised "because I want it / want to buy it".
 
Maybe it's about both? What's good for high frame rate VR is also good for chasing up the 4K ladder.
The VR headsets are opting for more modest resolution screens with very high refresh rates, low persistence, and measures to reduce the screen door effect.

The higher resolution, generally reduced refresh rates, and an emphasis on clarity at a distance versus anti-screen door manipulation of the image may not play well with that.

The lower refresh rates or variable refresh rates don't get an anti-nausea benefit from time-warping. Monitors are also generally rectangular and the distortion to maximize resolution near the fovea makes less sense.
Some things like multi-resolution rendering where geometry can be transformed once and rasterized at different resolutions in less-important regions of the screen might help both, but VR leverages it with certain simplifying assumptions about how the user's vision is constrained that a monitor might not get to rely on.
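As a rough illustration of how much that kind of multi-resolution split can save (the region split and scale factor below are assumed, not any particular vendor's numbers), here's a quick Python estimate of shaded pixels with a full-resolution centre and a half-resolution periphery:

```python
def shaded_pixels(width, height, center_fraction=0.5, periphery_scale=0.5):
    """Pixels shaded when a central region keeps full resolution and the
    surrounding region is rasterized at periphery_scale in each dimension."""
    center = (width * center_fraction) * (height * center_fraction)
    periphery = (width * height - center) * periphery_scale ** 2
    return center + periphery

full = 2160 * 1200                      # e.g. a 1080x1200-per-eye HMD, both eyes
multi_res = shaded_pixels(2160, 1200)
print(f"full res: {full / 1e6:.2f} Mpix, multi-res: {multi_res / 1e6:.2f} Mpix "
      f"({100 * (1 - multi_res / full):.0f}% fewer shaded pixels)")
```

With those assumptions you shade a bit under half the pixels of brute-force full resolution; whether that's acceptable on a flat monitor, where the viewer's gaze isn't constrained the same way, is exactly the caveat above.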
 
The VR headsets are opting for more modest resolution screens with very high refresh rates, low persistence, and measures to reduce the screen door effect.

The higher resolution, generally reduced refresh rates, and an emphasis on clarity at a distance versus anti-screen door manipulation of the image may not play well with that.

The lower refresh rates or variable refresh rates don't get an anti-nausea benefit from time-warping. Monitors are also generally rectangular and the distortion to maximize resolution near the fovea makes less sense.
Some things like multi-resolution rendering where geometry can be transformed once and rasterized at different resolutions in less-important regions of the screen might help both, but VR leverages it with certain simplifying assumptions about how the user's vision is constrained that a monitor might not get to rely on.

But wouldn't the same CUs that are used for time warping, the same ROPs that are used for rendering at high refresh rates etc, just be standard GPU fare that can also be used to render more conventional graphics at higher resolutions and lower refresh rates? AMD don't seem to be designing different architectures for VR and standard displays.

While the same approaches to software might not transfer across, wouldn't faster GCN 1.3 based kit give both a huge boost?
 
Well Xbox One in particular is arguably "weak". It runs Battlefront at 720p. PS4 runs it at 900p, I believe.

With new AMD and Nvidia cards coming out next month they are only going to fall further behind.
Well, I concede that I strongly dislike it (edit: I mean the XB1 design), though changing that means leaving the owners of the XB1 behind. The system is losing momentum (my read of the market, I may be wrong). If MSFT were to come out with something better "all the time", not the "special purpose" type of hardware Sony seems to be cooking, I would not be far from thinking that the XB1 did not last as long as the first Xbox, and I can only imagine the reactions on the web and from the customer base: "you got xboned..."

It is my bet and mine alone: I think they are better off staying clean and true to the adopters of the system and going with a short generation (as close to 5 years as they can), while preserving their image.
 
But wouldn't the same CUs that are used for time warping, the same ROPs that are used for rendering at high refresh rates etc, just be standard GPU fare that can also be used to render more conventional graphics at higher resolutions and lower refresh rates?
The front end above those elements deals with different scenarios and could optimize more for one than the other.
Facilitating some kind of time warp generally compromises overall efficiency. More can get done in the absence of it, and it primarily exists because of how unforgiving VR is in terms of latency and how janky the GPU pipeline is in terms of timing.

Accepting some jankiness and not letting compute partially obstruct the primary rendering process allows for more to get done, or possibly accomplishing as much more efficiently. Queues can be deeper without being negated by a hard frame rate floor, and the GPU can run ahead more to fill in occupancy slots rather than having to worry about what might need to be kicked out or blocked by a screen shift.
VR needs to prioritize response time, which has implications in terms of hurting utilization and from that the effective resource budget.
By way of weak analogy, it's like having two processors that can be designed however they want, but one must run at 6 GHz and it is allowed to provide output that is periodically nonsensical in the context of the other.
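To put the response-time point in more concrete terms (the reserved slice below is an assumed figure, just for illustration), here's a small Python sketch of how much of the frame budget a fixed high-priority timewarp/late-latch reservation eats as the target refresh climbs:

```python
# Assumed: a fixed per-frame slice reserved for high-priority reprojection/late-latch work.
TIMEWARP_RESERVE_MS = 1.5

for hz in (30, 60, 90, 120):
    budget = 1000.0 / hz                      # total frame budget at this refresh rate
    usable = budget - TIMEWARP_RESERVE_MS     # what's left for the main render
    print(f"{hz:>3} Hz: {budget:5.2f} ms/frame, "
          f"~{usable:5.2f} ms for the main render "
          f"({100 * TIMEWARP_RESERVE_MS / budget:4.1f}% reserved)")
```

The same fixed reservation that is noise at 30 Hz becomes a meaningful chunk of the budget at 90-120 Hz, which is the utilization cost being described.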

AMD don't seem to be designing different architectures for VR and standard displays.
In numerical terms, VR as a niche is almost nonexistent. The market for existing GPUs is barely profitable for AMD as-is.
AMD's best-case solution for VR currently is a dual-card system, which isn't the most efficient way.
That method does minimize disruption for a standard synchronization model that also does better with inter-frame synchronization and dependences, whereas VR has opportunities for intra-frame data sharing or scheduling since it's two views of the same time step that can have a lot of common work.

While the same approaches to software might not transfer across, wouldn't faster GCN 1.3 based kit give both a huge boost?
If it's better generally, it should help both. The more specialized optimizations can be non-transferrable, and we haven't reached a point where we have an embarrassment of riches in terms of performance that we can discount them.
 
The front end above those elements deals with different scenarios and could optimize more for one than the other.
Facilitating some kind of time warp generally compromises overall efficiency. More can get done in the absence of it, and it primarily exists because of how unforgiving VR is in terms of latency and how janky the GPU pipeline is in terms of timing.

Accepting some jankiness and not letting compute partially obstruct the primary rendering process allows for more to get done, or possibly accomplishing as much more efficiently. Queues can be deeper without being negated by a hard frame rate floor, and the GPU can run ahead more to fill in occupancy slots rather than having to worry about what might need to be kicked out or blocked by a screen shift.
VR needs to prioritize response time, which has implications in terms of hurting utilization and from that the effective resource budget.
By way of weak analogy, it's like having two processors that can be designed however they want, but one must run at 6 GHz and it is allowed to provide output that is periodically nonsensical in the context of the other.


In numerical terms, VR as a niche is almost nonexistent. The market for existing GPUs is barely profitable for AMD as-is.
AMD's best-case solution for VR currently is a dual-card system, which isn't the most efficient way.
That method does minimize disruption for a standard synchronization model that also does better with inter-frame synchronization and dependences, whereas VR has opportunities for intra-frame data sharing or scheduling since it's two views of the same time step that can have a lot of common work.


If it's better generally, it should help both. The more specialized optimizations can be non-transferrable, and we haven't reached a point where we have an embarrassment of riches in terms of performance that we can discount them.

Thanks!

I didn't appreciate the extent to which there are VR customisations that could allow for more efficient (and better) VR performance beyond just throwing more/faster hardware at the problem (as is currently the case in the PC space).

Do you think it likely that Sony have added some such customisation to the upcoming PS4 Neo? Do you think it would be worth it, given that a big part of the push for Neo seems like it will be for enhanced versions of conventional games?
 
Thanks!

I didn't appreciate the extent to which there are VR customisations that could allow for more efficient (and better) VR performance beyond just throwing more/faster hardware at the problem (as is currently the case in the PC space).

Do you think it likely that Sony have added some such customisation to the upcoming PS4 Neo? Do you think it would be worth it, given that a big part of the push for Neo seems like it will be for enhanced versions of conventional games?

Being able to process geometry once and then use it at varying resolutions rather than submitting it again is something Nvidia offers.
That's not VR-only, but an optimization like reducing the effective resolution for geometry outside of the highest-acuity field of view works best with the assumption with current VR headsets that a user's eyes are focused on a specific point of the screen--which is more readily reinforced with a headset versus a monitor on a desk.

In the future, foveated rendering and eye tracking could reduce the amount of work being done that cannot be readily seen. I am not sure about the response time floor for tracking eye movement versus head movement. There are some interesting phenomena where the brain itself might be optimizing its perception during such transitions, which might be something useful for VR.
A screen might not be able to capture that, and unlike a headset it might have multiple viewers.

A significant amount of the work done for each eye might also be similar, which, if the GPU work were not split across boards, might be able to be communicated fast enough to be useful. There are papers about textureless rendering in the API forum that show how to avoid overdraw and significantly save on bandwidth by avoiding redundant work, which at least from an efficiency view takes a step back with dual cards since a lot of work gets done 2x. If there were a way to profile a delta for each primitive to determine how much or how little it varies between views, maybe redundant work rather than just overdraw could be avoided. It would seemingly be easier to use if on the same board, but I am speculating past where I've read much on the topic.

What Sony could have tweaked is unclear to me. Like you said, there's a legacy link to established user hardware.
 
Being able to process geometry once and then use it at varying resolutions rather than submitting it again is something Nvidia offers.

I'd assumed that having multiple render targets would allow you to do this - that you could submit geometry once and then process it through different paths with different shaders and resolutions. This was probably a mistake though.

I used to like the PowerVR way of doing things. Storing transformed geometry, allocating it to tiles (that might perhaps relate easily to per-eye FOV), then acting on it however you see fit seems like it ought to have the potential to offer something to VR. Do you think tile-based deferred rendering could offer anything to VR?

That's not VR-only, but an optimization like reducing the effective resolution for geometry outside of the highest-acuity field of view works best with the assumption with current VR headsets that a user's eyes are focused on a specific point of the screen--which is more readily reinforced with a headset versus a monitor on a desk.

In the future, foveated rendering and eye tracking could reduce the amount of work being done that cannot be readily seen. I am not sure about the response time floor for tracking eye movement versus head movement. There are some interesting phenomena where the brain itself might be optimizing its perception during such transitions, which might be something useful for VR.
A screen might not be able to capture that, and unlike a headset it might have multiple viewers.

I thought before Kinect 2 came out that a sufficiently advanced system might allow for tracking of both eyes, and in combination with shutter glasses allow a TV to effectively become a window into - and out of - a game world. I never saw anything to suggest that might be practical though, unfortunately.

A significant amount of the work done for each eye might also be similar, which, if the GPU work were not split across boards, might be able to be communicated fast enough to be useful. There are papers about textureless rendering in the API forum that show how to avoid overdraw and significantly save on bandwidth by avoiding redundant work, which at least from an efficiency view takes a step back with dual cards since a lot of work gets done 2x. If there were a way to profile a delta for each primitive to determine how much or how little it varies between views, maybe redundant work rather than just overdraw could be avoided. It would seemingly be easier to use if on the same board, but I am speculating past where I've read much on the topic.

What Sony could have tweaked is unclear to me. Like you said, there's a legacy link to established user hardware.

John Carmack has touted VR as being a great use case for dual GPUs. I can see that as being true on a basic level, but single cards would seem to win out with more complex implementations once they are introduced.

I read yonks ago, when looking at some ideas on vision, that beyond around 50 meters the human vision system loses the ability to see in 3D, but we don't notice because we learn all kinds of visual cues that we progressively switch to relying on to infer depth. If you think about the amount of detail in most games, e.g. FPS or driving or ... whatever GTA is ... that appears beyond this point, I think your comments about re-using information (perhaps directly or after a simple transformation for stuff near the limit of 3D vision) make a great deal of sense.
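That lines up with a quick toy calculation of stereo disparity versus distance (the HMD numbers below are assumed, roughly first-generation headset territory): once the per-eye parallax falls under about a pixel, there's very little to lose by transforming that geometry once and sharing it between the eyes.

```python
import math

IPD_M = 0.064            # assumed interpupillary distance (baseline between the eyes)
EYE_WIDTH_PX = 1080      # assumed horizontal resolution per eye
H_FOV_DEG = 90.0         # assumed horizontal field of view per eye

# Pinhole-style focal length in pixels for the assumed per-eye view.
focal_px = (EYE_WIDTH_PX / 2) / math.tan(math.radians(H_FOV_DEG / 2))

def disparity_px(distance_m):
    """Horizontal parallax between the two eye views for a point at this distance."""
    return IPD_M * focal_px / distance_m

for d in (1, 5, 20, 50, 100):
    print(f"{d:>4} m: {disparity_px(d):6.2f} px of disparity")
```

With these assumptions the disparity drops below a pixel somewhere past roughly 35 m, which is in the same ballpark as the 50 m figure you mention.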
 
I'd assumed that having multiple render targets would allow you to do this - that you could submit geometry once and then process it through different paths with different shaders and resolutions. This was probably a mistake though.
In some way Nvidia markets the acceleration of this as an element of their platform. At least in part this is based on the location on-screen and is apparently more automatic.

I used to liked the Power VR way of doing things. Storing transformed geometry, allocating it to tiles (that perhaps might perhaps easily relate to per-eye FOV) then acting on it however you see fit seems like it out to have the potential to offer something to VR. Do you think tile based deferred rendering could offer anything to VR?
One of the side effects of doing things efficiently that way is that the deferred work that builds up an acceleration structure itself takes non-trivial time at these response scales. Several milliseconds can go into building a G-buffer, and in comparison Sony's top PSVR frame rate gives a little over 8 ms. I think on balance the efficiency of avoiding waste usually wins out, but it could be an interesting exercise to work out which techniques, or to what extent techniques, can still be budgeted for once Amdahl's law starts to appear. Slightly more dependences risk injecting small delays that can be shrugged off until you only have 4 ms to work with.

Scrambling to get optimal usage out of a GPU with 33 ms at 4K+ versus a tenth of that time can lead to sacrificing what is most efficient architecturally.
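As a back-of-the-envelope illustration of the Amdahl's law point (the 2 ms fixed cost below is an assumed figure, standing in for something like a G-buffer or binning setup pass that doesn't scale with extra hardware), here's how quickly a small serial slice caps the benefit of a faster GPU as the frame budget shrinks:

```python
# Assumed: a fixed per-frame cost that does not speed up with a wider/faster GPU.
FIXED_MS = 2.0

def best_case_frame_time(budget_ms, gpu_speedup):
    """Frame time if everything except the fixed slice scales perfectly."""
    scalable = budget_ms - FIXED_MS
    return FIXED_MS + scalable / gpu_speedup

for budget in (33.3, 16.7, 8.3):        # 30 Hz, 60 Hz, 120 Hz frame budgets
    t = best_case_frame_time(budget, gpu_speedup=2.0)
    print(f"{budget:4.1f} ms frame: 2x GPU gives {budget / t:.2f}x overall")
```

With that assumed 2 ms slice, a doubled GPU is worth nearly 1.9x at a 30 Hz budget but only about 1.6x at a 120 Hz one, which is the "small delays you can't shrug off anymore" effect.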

I thought before Kinect 2 came out that a sufficiently advanced system might allow for tracking of both eyes, and in combination with shutter glasses allow a TV to effectively become a window into - and out of - a game world. I never saw anything to suggest that might be practical though, unfortunately.

I think I found a link to the concept I mentioned earlier:
https://en.wikipedia.org/wiki/Saccadic_masking

The brain literally stops paying attention for a fraction of a second, and it might be possible based on screen velocity to drop the frame rate down to 0-1 frame in 200ms. Although optimally, we'd need brain tracking to know when the brain really flips the switch.
That might require futuristic implants or a high-speed dedicated processing and signalling path. The lag time would otherwise be too long.

John Carmack has touted VR as being a great use case for dual GPUs. I can see that as being true on a basic level, but single cards would seem to win out with more complex implementations once they are introduced.
It definitely seems to be one of the few use cases where there aren't immediate and rampant problems with multi-GPU.
 