PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

In addition to Google there are OpenDNS and Level 3 public DNS servers. If you try to look up the Sony server address via them you will get different IPs. So it can in fact influence your speed; I tried it.
 
In addition to Google there are OpenDNS and Level 3 public DNS servers. If you try to look up the Sony server address via them you will get different IPs. So it can in fact influence your speed; I tried it.

Again, it doesn't matter which IP address you are served initially; services like Akamai will refer you to a local server based on your actual IP.

If you're in Europe and use an Australian-based DNS server, it will serve you a server address local to Australia. As soon as you access that IP, it'll refer you to a server that's more local to you.

I can only type this so many times :(
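If you want to check this yourself, here's a minimal sketch (assuming the dnspython package; the hostname is a placeholder, not a real Sony/Akamai name) that queries the same name against the three public resolvers mentioned above:

# Minimal sketch: query one hostname against several public resolvers.
# Requires the dnspython package (pip install dnspython).
import dns.resolver

RESOLVERS = {
    "Google":  "8.8.8.8",
    "OpenDNS": "208.67.222.222",
    "Level 3": "4.2.2.2",
}

HOSTNAME = "cdn-host.example.com"  # placeholder hostname

for label, server in RESOLVERS.items():
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    try:
        answers = resolver.resolve(HOSTNAME, "A")
        ips = ", ".join(rr.address for rr in answers)
    except Exception as exc:
        ips = f"lookup failed ({exc})"
    print(f"{label:8s} -> {ips}")

The A records you get back will often differ per resolver, but whichever edge IP you end up connecting to can still redirect you to a node chosen from your actual client address, which is the point being made above.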
 
Does anyone think the PS4 video share quality could be improved in the future? What are the bottlenecks? Is it RAM, bandwidth, or processing power? I can easily imagine a future in which the Xbox One and PS4 video share quality is the butt of jokes from these YouTube stars.
 
Does anyone think the PS4 video share quality could be improved in the future? What are the bottlenecks? Is it RAM, bandwidth, or processing power? I can easily imagine a future in which the Xbox One and PS4 video share quality is the butt of jokes from these YouTube stars.

It's mostly bandwidth. Many people with DSL, myself included, have moderate downstream but much slower upstream speeds. You really need 4-6 Mbit/s for a good quality 720p stream, but the more the better.

For comparison, the bitrate of Blu-ray movies varies between 15 and 40 Mbit/s, and lots of folks have only a fraction of that.
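To put rough numbers on that, here's a quick back-of-the-envelope comparison of bits per pixel at the bitrates mentioned above (the resolutions and frame rates are assumptions for the sake of illustration):

# Rough bits-per-pixel comparison for the bitrates discussed above.
def bits_per_pixel(bitrate_mbps, width, height, fps):
    return (bitrate_mbps * 1_000_000) / (width * height * fps)

cases = [
    ("Share stream (assumed 720p30 @ 4 Mbit/s)",   4, 1280,  720, 30),
    ("Share stream (assumed 720p30 @ 6 Mbit/s)",   6, 1280,  720, 30),
    ("Blu-ray (assumed 1080p24 @ 15 Mbit/s)",     15, 1920, 1080, 24),
    ("Blu-ray (assumed 1080p24 @ 40 Mbit/s)",     40, 1920, 1080, 24),
]

for label, mbps, w, h, fps in cases:
    print(f"{label}: {bits_per_pixel(mbps, w, h, fps):.3f} bits/pixel")

Even the low end of the Blu-ray range gives roughly twice the bits per pixel of a 4 Mbit/s 720p30 stream, which is why upstream bandwidth is the limiting factor for most people.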
 
You really need 4-6 Mbit/s for a good quality 720p stream, but the more the better.

...actually, with ~6 Mbit/s you can get around HD quality (a bit less, anyway), but then you'd pay for it with very long (non-realtime) encoding times.
 
...actually, with ~6 Mbit/s you can get around HD quality (a bit less, anyway), but then you'd pay for it with very long (non-realtime) encoding times.
PlayStation 4's solution is, obviously, realtime. If it weren't realtime you couldn't stream at all, or you'd have to buffer for minutes or hours. It does the best it can with the available upstream bandwidth, but the lower it is, the less wiggle room there is. And it will always sacrifice video quality before audio quality.
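To illustrate why a slower-than-realtime encoder can't feed a live stream, here's a trivial sketch (the 0.5x encode speed is just an assumed figure):

# If the encoder runs slower than realtime, the viewer's delay grows without
# bound: after N seconds of gameplay only N * speed seconds have been encoded.
def streaming_delay(elapsed_seconds, encode_speed):
    """Seconds behind live after `elapsed_seconds` of play, with an encoder
    running at `encode_speed` times realtime (assumed < 1.0)."""
    encoded = elapsed_seconds * min(encode_speed, 1.0)
    return elapsed_seconds - encoded

for minutes in (1, 10, 60):
    delay = streaming_delay(minutes * 60, encode_speed=0.5)  # assumed 0.5x realtime
    print(f"After {minutes:3d} min of play: {delay / 60:.1f} min behind live")

The backlog keeps growing for as long as you play, which is why offline-quality compression isn't an option for live streaming.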
 
They have already increased the streaming quality once before. They can probably do it again.

It is not that they cannot increase the streaming quality; it is that many users have upload speed limits with ADSL over copper cables (so the ISP saves on bandwidth).
And also, in order to reach the same compression quality as movies (i.e. better quality per bitrate), you'd likely have to give up realtime compression.
 
The streaming quality is limited by the hardware encoder IIRC, which is 720p and 30fps. Not sure of the bitrate limit, but it's probably around 6-8 Mbps, so the streaming can't get any better than that. You won't be seeing 1080p60 streaming on PS4 unless they release a new SKU with that feature.
 
It is not that they cannot increase the streaming quality; it is that many users have upload speed limits with ADSL over copper cables (so the ISP saves on bandwidth).

That's a bit unfair; physics simply makes it impossible to do better than ADSL unless you shorten the loop length or go to fiber. If an ISP could just swap out equipment and provide higher bandwidth, most would. Since they have to build out new POPs and maybe even replace copper, it's just not economically viable. Have they even recouped the cost of their DSL deployment yet?

DSL technologies with higher speeds than ADSL2+ have shorter range, so unless you live close to the central office, they won't help you.

Going to fiber is another matter entirely when it comes to price.
 
It is not that they cannot increase the streaming quality; it is that many users have upload speed limits with ADSL over copper cables (so the ISP saves on bandwidth).
And also, in order to reach the same compression quality as movies (i.e. better quality per bitrate), you'd likely have to give up realtime compression.

How do you know that? Don't the PS4 and X1 use "standard" GCN parts? Can't a PC use VCE to do more than 720p/30?
 
The streaming quality is limited by the hardware encoder IIRC, which is 720p and 30fps. Not sure of the bitrate limit, but it's probably around 6-8 Mbps.
But that's really good quality for 720p30! 1080p30 is good quality at those bitrates. If people were seeing 6 Mbps streams, unless the encoding was terrible, they wouldn't be grumbling about the quality. ;)
 
The streaming quality is limited by the hardware encoder IIRC, which is 720p and 30fps. Not sure of the bitrate limit, but it's probably around 6-8 Mbps, so the streaming can't get any better than that. You won't be seeing 1080p60 streaming on PS4 unless they release a new SKU with that feature.

We do not know this for sure. Unless you have secret insider info?
 
FYI, I can choose 50 Mbit/s, 60fps, 1080p on my Kaveri rig using the Raptr app (the AMD Gaming Evolved thing), which I believe uses the hardware encoder on Kaveri.
Edit: basically what I'm saying is that the PS4 should be capable of a similar feat, since I don't think the hardware encoder is that much different. Of course, choosing encoding settings for the lowest latency (for Remote Play or game sharing) is a bit different from pure recording, but I think the limitation is mostly there to reduce streaming bandwidth. They provide a setting that they think is acceptable for most people's Internet connections.
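If you want to try something similar on a PC, here's a minimal sketch (assuming an ffmpeg build with AMD AMF support; the file names are placeholders, and this goes through ffmpeg rather than whatever encoder path Raptr actually uses):

# Minimal sketch: hardware-encode a 1080p60 capture at ~50 Mbit/s on an AMD APU/GPU.
# Assumes an ffmpeg build with AMD AMF support (h264_amf); file names are placeholders.
import subprocess

cmd = [
    "ffmpeg",
    "-i", "capture_1080p60.mkv",   # placeholder input file
    "-c:v", "h264_amf",            # AMD hardware H.264 encoder
    "-b:v", "50M",                 # ~50 Mbit/s, matching the Raptr setting above
    "-r", "60",                    # 60 fps output
    "-c:a", "copy",                # pass audio through untouched
    "output_1080p60.mp4",          # placeholder output file
]
subprocess.run(cmd, check=True)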
 
The streaming quality is limited by the hardware encoder IIRC, which is 720p and 30fps. Not sure of the bitrate limit, but it's probably around 6-8 Mbps, so the streaming can't get any better than that. You won't be seeing 1080p60 streaming on PS4 unless they release a new SKU with that feature.
Or people just use capture cards.
 
As we know, the Xbox One has two graphics command processors. We do not have confirmation for the PS4, but apparently it has only one, although some people on the Xbox SDK thread are claiming that a second one exists.

What are the advantages of two graphics command processors, and how do Sony's known solutions compare to them?

Well, in the "Candidate features for future OpenGL 5 / Direct3D 12 hardware and beyond" document, Christophe Riccio points out the advantages of using multiple command processors:

Quoting him:

"There is a lot of room for tasks parallelism in a GPU but the idea of submitting draws from multiple threads in parallel simply doesn’t make any sense from the GPU architectures at this point. Everything will need to be serialized at some point and if applications don’t do it, the driver will have to do it. This is true until GPU architectures add support for multiple command processors which is not unrealistic in the future. For example, having multiple command processors would allow rendering shadows at the same time as filling G-Buffers or shading the previous frame. Having such drastically different tasks live on the GPU at the same time could make a better usage of the GPU as both tasks will probably have different hardware bottleneck."

The advantages seem obvious: near-total utilization of the GPU pipelines.

Does PS4 have a similar solution?

As I said, no second GCP is confirmed. But here are some quotes from Gamasutra on Mark Cerny's words about something that looks designed to achieve similar results.

Quoting from Gamasutra:

"The reason so many sources of compute work are needed is that it isn’t just game systems that will be using compute -- middleware will have a need for compute as well. And the middleware requests for work on the GPU will need to be properly blended with game requests, and then finally properly prioritized relative to the graphics on a moment-by-moment basis."

This concept grew out of the software Sony created, called SPURS, to help programmers juggle tasks on the CELL's SPUs -- but on the PS4, it's being accomplished in hardware.

The team, to put it mildly, had to think ahead. "The time frame when we were designing these features was 2009, 2010. And the timeframe in which people will use these features fully is 2015? 2017?" said Cerny.

"Our overall approach was to put in a very large number of controls about how to mix compute and graphics, and let the development community figure out which ones they want to use when they get around to the point where they're doing a lot of asynchronous compute."

Cerny expects developers to run middleware -- such as physics, for example -- on the GPU. Using the system he describes above, you can run at peak efficiency, he said.

"If you look at the portion of the GPU available to compute throughout the frame, it varies dramatically from instant to instant. For example, something like opaque shadow map rendering doesn't even use a pixel shader, it’s entirely done by vertex shaders and the rasterization hardware -- so graphics aren't using most of the 1.8 teraflops of ALU available in the CUs. Times like that during the game frame are an opportunity to say, 'Okay, all that compute you wanted to do, turn it up to 11 now.'"


After reading this, it seems to me that the PS4 tries to achieve the same kind of optimization by doing something different.

Is one method better than the other? And most importantly, does that really matter after a certain level of optimization?
For you to understand my question, let me quote from "our very own" sebbbi on the DX 12 thread:

"Good example of this is shadow map rendering. It is bound by fixed function hardware (ROPs and primitive engines) and uses very small amount of ALUs (simple vertex shader) and very small amount of bandwidth (compressed depth buffer output only, reads size optimized vertices that don't have UVs or tangents). This means that all TMUs and huge majority of the ALUs and bandwidth is just idling around while shadows get rendered. If you for example execute your compute shader based lighting simultaneously to shadow map rendering, you get it practically for free. Funny thing is that if this gets common, we will see games that are throttling more than Furmark, since the current GPU cooling designs just haven't been designed for constant near 100% GPU usage (all units doing productive work all the time)."

What's your take on this?
 
There's an image from VGLeaks that clearly shows a second graphics command processor, but it's labelled as a system component for use by the OS.
 
What's your take on this?
The Tomorrow Children developer (Q-Games) is using asynchronous compute on PS4.

See their presentation: http://fumufumu.q-games.com/archives/Cascaded_Voxel_Cone_Tracing_final.pdf

Quote from Page 82:
Most of our Screen Space (and Voxel Space) shaders have been moved to Compute.
• Frame is pipelined. Post processing overlaps Gbuffer fill for the next frame.
• Massive win compared to just graphics pipe.
• ~5ms back on a 33ms frame from using Async Compute.
• Everyone should do this!
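A rough way to picture what that pipelining buys, as a sketch with invented pass timings chosen only so the totals line up with the ~5 ms / 33 ms figures quoted above:

# Toy two-frame pipeline (invented timings): post-processing of frame N runs
# as async compute while the graphics queue starts the G-buffer fill of frame N+1.
GBUFFER_MS = 8.0   # graphics-queue work at the start of a frame (assumed)
SHADING_MS = 20.0  # remainder of the graphics frame (assumed)
POST_MS    = 5.0   # post-processing moved to compute (assumed)

# Graphics-only pipe: everything is serial on one queue.
serial_frame = GBUFFER_MS + SHADING_MS + POST_MS

# Pipelined: the previous frame's post-processing overlaps the next frame's
# G-buffer fill, so it drops off the critical path while POST_MS <= GBUFFER_MS.
pipelined_frame = GBUFFER_MS + SHADING_MS + max(0.0, POST_MS - GBUFFER_MS)

print(f"Serial frame:    {serial_frame:.1f} ms")
print(f"Pipelined frame: {pipelined_frame:.1f} ms "
      f"(~{serial_frame - pipelined_frame:.1f} ms back per frame)")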
 