PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

Well they did.

Or more likely they were misinterpreted and badly paraphrased by the Ars reporter, who understands what's going on about as much as you do.

If it's achieving the maximum graphics power of 1.843 TFLOPs then there's no room for computational tasks. Period.
 
What if the GPU is 70 percent utilized doing rendering alone but 90 percent utilized with added compute jobs? The same 70 percent would still be dedicated to rendering as before, so there would be no loss of rendering power, just added compute resources.

Is this possible? Could this be what they are referring to?
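
To put numbers on that (purely hypothetical figures, just restating the scenario above in code):

```cpp
#include <cstdio>

// Hypothetical utilization model: rendering alone leaves ~30% of the GPU's
// cycles idle (stalls, sync points, etc.), and compute jobs are scheduled
// into those idle slots. All percentages here are assumptions, not specs.
int main() {
    const double peak_tflops    = 1.843; // quoted PS4 GPU peak
    const double render_share   = 0.70;  // assumed utilization, rendering only
    const double combined_share = 0.90;  // assumed utilization with compute added

    std::printf("rendering: %.3f TFLOPs (unchanged)\n",
                peak_tflops * render_share);
    std::printf("compute:   %.3f TFLOPs (reclaimed from idle cycles)\n",
                peak_tflops * (combined_share - render_share));
}
```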
 
I know they said that they carefully balance the two processors to make that happen, but if you think about it, why would they have to modify the APU for the CPU to have enough room for computational tasks? And why would they have to architect the CPU to take full advantage of compute at the same time that the GPU does graphics? Isn't that the way things already are?

The big advancement for Orbis and likely Durango is that they make significant strides in eliminating a number of long-standing obstacles to GPU compute, which the PC and even current APUs still struggle with. The consoles go some way towards eliminating barriers to communication and data sharing that exist because the GPU and CPU were historically physically separated.

It's likely that the next generation of APU will actually move somewhat further, and even past that there are additional steps on the horizon.
Since one of the biggest problems for compute was that communication was so expensive, the changes that make the APU more flexible and responsive make compute a cost-effective choice for the GPU.

However, a lot of the other things they are talking about besides that really are things that already exist. The concurrent execution and programmable compute focus of the GPU are not new, but we see press blurbs acting like Sony invented the wheel again.
Why? Maybe because they want people excited about buying the best thing ever?

What if you told them to get specific and provide concrete examples and show their math?
What if you asked how their press statements line up with reality? They'll tell you they were being general, or they had to simplify things, or they were using figurative language, or just restating what the consumer wants.


What if the GPU is 70 percent utilized doing rendering alone but 90 percent utilized with added compute jobs? The same 70 percent would still be dedicated to rendering as before, so there would be no loss of rendering power, just added compute resources.

Is this possible? Could this be what they are referring to?

They are saying nothing and everything. They say enough to cover plausible scenarios, but the literal meaning of the words could be stretched to include the impossible or unrealistic.
Why should they care if people get excited over a misconception if they're still excited?
 

No, they didn't. You are either blatantly ignoring your own quote so you can troll, or you are just blindly cherry picking what you want to see. Here, I'll bold the important part for you...

The system is also set up to run graphics and computational code synchronously, without suspending one to run the other. Norden says that Sony has worked to carefully balance the two processors to provide maximum graphics power of 1.843 teraFLOPS at an 800MHz clock speed while still leaving enough room for computational tasks. The GPU will also be able to run arbitrary code, allowing developers to run hundreds or thousands of parallelized tasks with full access to the system's 8GB of unified memory.

Hint. If they are using maximum graphics power of 1.843 TFLOPs from the GPU then the ONLY compute that is left over is from the CPU. You know that bolded part? The one that mentions two processors?

If you are using maximum (100%) of the CUs on the GPU for graphics there is nothing (0%) left over for compute no matter what you do.

No matter how much you might wish it to be so, the CUs on the GPU cannot do more than 100% work. So if there's 10% compute being done that means only 90% (1.659 TFLOPs) can be used for graphics. If there's 20% being used for compute that's only 80% (1.474 TFLOPs) that can be used for graphics. And this assumes that they can maintain 100% utilization, which is not going to happen most of the time.
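
For reference, the arithmetic spelled out (a trivial sketch; only the 1.843 peak comes from the Norden quote):

```cpp
#include <cstdio>

// The zero-sum budget view: any share of the 1.843 TFLOPs peak spent on
// compute is unavailable for graphics in the same instant.
int main() {
    const double peak_tflops = 1.843;
    for (double compute_share : {0.0, 0.10, 0.20}) {
        std::printf("compute %2.0f%% -> graphics %.3f TFLOPs\n",
                    compute_share * 100.0,
                    peak_tflops * (1.0 - compute_share));
    }
}
```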

However many FLOPs are being used for compute will NOT be able to be used for graphics.

Regards,
SB
 
Gaffer rykomatsu translated the Watch Cerny interview:


Part 1:

Focusing on the “positive aspects” and Moving to the x86 Architecture
Cerny states that he started thinking about a "next generation console" in Fall 2007. SCE would have gone into basic R&D regarding next-gen technologies soon after the PS3's release, and this falls in line with when Cerny started investigating.

Cerny: I had started discussions regarding the next generation following PS3 in 2007. At that time, I was investigating what should be done for next generation [technologies]. It was at that time, I wondered if we couldn't use the x86 architecture for the next generation. I used the entirety of Thanksgiving weekend looking into this (lol). For Americans, this holiday is extremely important. But, that's how I sacrificed (lol) the holidays to think about the future and what possibilities this might bring for our organization.

After that, I went to Phil Harrison since he was at the top of the game development division. I was also introduced to Masayuki Chatani who was SCE’s CTO at that time and was directing the next-gen project. What was surprising was that he said “yes” to me being involved with the next generation console.

Moving to the x86 architecture also means losing backwards compatibility with the PS3. The basis of Cerny's vision was to use this x86 architecture. This is a huge tradeoff, but SCE accepted this vision.

Cerny: We struggled with this point. As a matter of fact, this was the major point I thought about during Thanksgiving. What to do with the current CPU and x86…

We decided to focus on the "positive aspects" arising from switching to x86. x86 has instruction sets which are of significant importance for games: multimedia instruction sets, specifically the existence of SSE 4.1 and 4.2. And of course, the existence of an APU gives us the ability to come close to the results obtained from the SPUs.

The decision to move to x86 had an extremely complex set of requirements. Of course there’s issues of backwards compatibility and issues from the vendor’s side as well. But that said, I believe the biggest topic for us was how much affinity the developers would have for this change. In the past 3 years, there have been a large number of refined tools and technologies released for the x86 architecture. If another architecture had been selected, it probably would have been even more problematic. The x86 architecture is well known and development is relatively easy.

Ito: Backwards compatibility, particularly in Japan, is something that is brought up strongly and frequently, so we thought long and hard about this. Realistically, to support backwards compatibility with the PS3, the CELL Broadband Engine would have needed to be part of the new console. Currently, it's not possible to simulate this via software. If CELL were the only requirement, that wouldn't have been much of an issue. We would also need to support the supporting hardware indefinitely. We can freely manufacture CELL if the decision is made that it is needed. However, that's not the case with the supporting hardware. There are parts which will become difficult to obtain, since 7 years is already considered to be long in the IT industry…

Using this opportunity, we decided to stop going down this path, and as Mark said, to focus our efforts on simplifying developer efforts.

Essentially, SCE's thinking was that, when considering the burden of sustaining and maintaining PS3 (and prior) hardware long-term, they also saw the need to transition to a "more ordinary" platform. It can be interpreted that x86 offered easier development opportunities and also provided a way to get away from a "proprietary" track, leading to this decision.
Now, SCE hasn't come to a conclusion regarding the BC problem. It's said that long-term use of the Cloud is part of their vision, but more accurately, SCE is evaluating various content in various forms, including sustaining BC.

GPU Customization with use of GPGPU in Mind. Difference in Launch Title Numbers
Use of the x86 architecture also means that, externally, it becomes difficult to see the difference from PC development. How does Cerny think about how to showcase the difference and value of the PS4?

Cerny: Our primary target is to provide a powerful system that developers are familiar with. It goes without saying an x86 CPU has high familiarity. From a power perspective, and providing new possibilities, it will become more important to realize technologies benefiting from a GPU. GPUs increase graphics performance and have been used in that manner traditionally. But, the computing capabilities of the GPU will be harnessed in various areas in manners we can't even begin to think of now.

This essentially means the PS4 will be a console that not only focuses on CPU performance, but also on GPU performance…essentially a realization of a console using a GPGPU. In fact, at the PS4 press conference, a physics demo using the GPGPU was shown, and the PS4 has an added value proposition of having a high-performance GPGPU as a core feature of the platform. For that purpose, the PS4 CPU and GPU have a few proprietary tricks up their sleeves.

Cerny: The GPGPU for us is a feature that is of utmost importance. For that purpose, we've customized the existing technologies in many ways.

Just as an example…when the CPU and GPU exchange information in a generic PC, the CPU inputs information, and the GPU needs to read the information and clear the cache, initially. When returning the results, the GPU needs to clear the cache, then return the result to the CPU. We’ve created a cache bypass. The GPU can return the result using this bypass directly. By using this design, we can send data directly from the main memory to the GPU shader core. Essentially, we can bypass the GPU L1 and L2 cache. Of course, this isn’t just for data read, but also for write. Because of this, we have an extremely high bandwidth of 10GB/sec.

Also, we’ve also added a little tag to the L2 cache. We call this the VOLATILE tag. We are able to control data in the cache based on whether the data is marked with VOLATILE or not. If this tag is used, this data can be written directly to the memory. As a result, the entirety of the cache can be used efficiently for graphics processing.

This function allows for harmonization of graphics processing and computing, and allows for efficient functioning of both. Essentially "harmony" in Japanese. We're trying to replicate the SPU Runtime System (SPURS) of the PS3 by heavily customizing the cache and bus. SPURS is designed to virtualize and independently manage SPU resources. For the PS4 hardware, the GPU can also be used in an analogous manner, as with x86-64, to use resources at various levels. This design has 8 pipes and each pipe(?) has 8 computation queues. Each queue can execute things such as physics computation middleware, and other proprietarily designed workflows. This, while simultaneously handling graphics processing.
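
If it helps, here's one way to picture that "8 pipes, 8 queues each" arrangement; every name and type below is hypothetical, not an actual PS4 API:

```cpp
#include <array>
#include <cstdint>
#include <deque>

// Hypothetical sketch of the "8 pipes, 8 queues per pipe" arrangement the
// translation describes: 64 independent compute queues that the front-end
// can drain alongside graphics work, SPURS-style. Illustrative only.
struct ComputeJob {
    void (*kernel)(void*);  // the work to run, e.g. a physics middleware step
    void*         args;
    std::uint32_t priority; // queues let workloads be prioritized independently
};

struct ComputePipe {
    std::array<std::deque<ComputeJob>, 8> queues; // 8 queues per pipe
};

struct AsyncComputeFrontEnd {
    std::array<ComputePipe, 8> pipes; // 8 pipes

    void submit(unsigned pipe, unsigned queue, const ComputeJob& job) {
        // In hardware, jobs from these queues would be interleaved with the
        // graphics pipeline rather than suspending it.
        pipes.at(pipe).queues.at(queue).push_back(job);
    }
};
```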



Part 2

GPU Customization with use of GPGPU in Mind. Difference in Launch Title Numbers (cont’d)
Cerny: In the next few years, we'll also be supporting a different approach.

We have our own shader APIs, but in the future, we’ll provide functions which will allow deeper access to the hardware level and it will be possible to directly control hardware using the shader APIs. As a mid-term target, in addition to common PC APIs such as OpenGL and DirectX, we’ll provide full access to our hardware.

Regarding the CPU, we can use well known hardware, and regarding the GPU, as developers devote time to it, new possibilities which weren’t possible before will open up.

The properties of the CPU and GPU are quite different, so at the current stage, if you were to use a unified architecture such as HSA, it would be difficult to efficiently use the CPU and GPU. However, once the CPU and GPU are able to use the same APIs, development efficiency should increase exponentially. This will be rather huge. Thus, we expect to see this as somewhat of a long-term goal.

Regarding easier development, talking about the action game KNACK

Cerney: I’ve spoken with a lot of developers, but most of the developers are saying that creating a game is considerably easier.

Working on a game myself, I feel that is true. KNACK is still in development, but the PS4, compared to the PS3, really makes game development easy.

This will also lead to the main difference with the PS3 era. The main difference is, we will have many titles for launch. Because game development is easier, there shouldn’t be a barrier as there had been previously. PS3 had the image that it was difficult to develop for. Even the PS2 wasn’t that easy. PS4 has a PC CPU and a GPU that’s been enhanced from a PC so the game lineup should become very rich.

The most important difference is, it won't take as much technical training, so developers can focus more on the game-play aspects. That's ideal, isn't it? As a result, [gamers] should see a world with a richer gaming experience.
Regarding 4KTV, he is a little noncommittal

Cerny: Hm… (lol). Personally, I'm very interested in 4K.

We’re still in the initial stages of supporting 4Kx2K in games. Our focus is to provide for a solid FullHD experience. We can secure the display buffer for Game and OS separately, and can provide for independent scaling of both as well. (Regarding 4K) We can provide an extremely smooth user interface.

If we consider purely memory bandwidth, with 4K, securing 2 displays' worth of display buffer requires 10GB/sec. That's just for simply displaying.
This is our simple answer for why we're focusing on just the FullHD experience.
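
One back-of-envelope reading of that 10GB/sec figure (assumptions mine, not an official breakdown): two 4K buffers at 60Hz, each both written and scanned out, lands in the right ballpark:

```cpp
#include <cstdio>

// Back-of-envelope: 3840x2160 at 4 bytes/pixel, 60Hz, two display buffers
// (game + OS), each both written and read out. All assumptions are mine.
int main() {
    const double bytes_per_frame = 3840.0 * 2160.0 * 4.0;        // ~33.2 MB
    const double per_stream_gbs  = bytes_per_frame * 60.0 / 1e9; // ~1.99 GB/s
    const double total_gbs = per_stream_gbs * 2 /*buffers*/ * 2 /*write+read*/;
    std::printf("~%.1f GB/s before any overhead\n", total_gbs);  // ~8.0 GB/s
}
```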

PS4 will read CDs, but will not play back audio CD music.

Realizing Energy Efficiency and Smoothness using a Second Custom Chip with Embedded CPU
Cerny: The second custom chip is essentially the Southbridge. However, this also has an embedded CPU. This will always be powered, and even when the PS4 is powered off, it is monitoring all IO systems. The embedded CPU and Southbridge manages download processes and all HDD access. Of course, even with the power off.

Ito: The second custom chip also takes into consideration environmental problems. For background downloading, if the main CPU needs to be started every time, energy consumption increases significantly, so we run this with the second chip. Particularly in Europe, there are strict energy consumption regulations, so handling consumption in this manner is also one of our goals.

Cerney: There’s also network bandwidth considerations. Background downloading allows for smooth downloading of large files even when bandwidth is limited.

More importantly, this helps reduce the time required until a game can be played. Simultaneously, this also allows for decreased initial downloads. Only the first few GB are downloaded during the initial play session and while the game is being played, the remaining portions will be downloaded. Of course, even with the power off, the remaining download will continue. So, the primary goal is to decrease the amount of download time before initial play.

Cerny: The data is logically divided into a few chunks, and uploaded [to the server by the dev?] with a specially annotated script. Further, based on how the script is written, additional customization is possible. For example, downloading the single-player portion or multiplayer portion first… Related to this… system memory has increased by 16x since the PS3, but the BD drive transfer speed has only increased a few fold. Because of this, using a similar technique, it's possible to copy just the important parts from the BD to the HDD and start the game. By doing this, it's possible to load more smoothly directly from the faster HDD. Of course, it's possible to stream the data from a ginormous BD and play a game as well.
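
Purely as illustration, that "specially annotated script" might boil down to a chunk manifest along these lines; the format and names are entirely hypothetical, since Sony hasn't published it:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical "play as you download" manifest: the developer divides the
// game into chunks and declares which must be present before play begins.
struct Chunk {
    std::string path;       // e.g. "levels/sp_intro.pkg" (made-up name)
    std::uint64_t bytes;
    bool required_for_boot; // the "first few GB" downloaded up front
    int  priority;          // e.g. single-player before multiplayer
};

struct DownloadManifest {
    std::vector<Chunk> chunks;
    // The Southbridge's embedded CPU could walk this list in priority order,
    // continuing in the background (even "powered off") once the boot set
    // is on the HDD.
};
```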

Note: Commentary removed again... getting late, so sorry for typos.

Part 3 (last one)
Built-In Video Encoder for Video Sharing and Vita Remote-Play
Cerny: The PS4 has a dedicated encoder for video sharing and such. There are a few dedicated encoder and decoder functions which are available and use the APU minimally. This is also used for playback of compressed in-game audio in MP3 and for audio chat.

When the system is fully on, the x86 CPU core controls the video sharing system. However, the Southbridge has features to assist with network traffic control.

Cerny: While investigating the initial hardware design, we had been thinking about what aspects will become important in the future. All hardware components have been prepared with an eye toward enveloping the gamer and realizing a wonderful user experience.

Our team thought deeply about the concept of "computer entertainment". People from other Sony groups participated and we investigated this from many different angles. Since we are the "game" people, we have UI specialists, and Richard Marks (father of SCE's natural UI design) was involved as well. This multi-faceted team spent a few weeks discussing how amazing a user experience we could realize.

So, how does this affect games?

Ito: For example, even without a PS4, this time, we can use the PlayStation App to see game details (content?). Even without a PS4, you would be able to experience “man, this game looks fun” or “this game might be pretty good”. And with this, it is our hopes that the allure of the PS4 will be evident.
Of course, if a gamer shares images on Facebook, you can see this from Facebook without the PlayStation App.

Regarding Vita Remote-Play

Cerny: Vita Remote Play is special.
Smartphones and tablets can be used to see game information regarding the PS4 in various places, so it can be experienced in various places. In addition to this, this type of content can be seen on the PC as well using a web platform.

The PS4 has video encoding hardware, and this is used with the Video [Miracast?-type] feature. Vita's control inputs are sent to the PS4, and using these functionalities, with minimal overhead and no pain, it becomes possible to remotely play PS4 games. At least, this is what we're aiming for, and compared to the PS3 era, we're aiming for significantly wider support of remote play. Of course, this function applies to games using the DualShock. Games that use the camera (such as PS Move and PS4 location recognition) cannot utilize remote play.
Leaving that aside, Vita's remote play was developed to provide as close to perfect PS4 gameplay within a household as possible. This requires connection to a WiFi network and should be used in a low-latency environment. The thinking here is also that even if someone else is using the TV, you can continue playing PS4 games.

Use of Real Names for Gaming without Walls, Use of a BSD base for a Rich OS Layer
Cerney: We’re investigating using the network to switch control between players. Just keep in mind, this doesn’t mean these types of features will all be present on Day 1. Please understand that we’re preparing and investigating these as features supportable on the platform.
Either which way, use of social interactions to stimulate gameplay should become a huge weapon in our arsenal.

Just my image of things, but… think of it like being in a living room. When you're playing a game with your friends, there's no physical wall to prevent interaction. It's kind of like having hundreds of friends gaming with you nearby, but getting that feeling while you're sitting at the end of a network. We want to act as the facilitators to enable this; to enable the feeling of actually meeting and enjoying gaming with your real-world friends around the world. Over the course of a few years, we'll support all the features required to achieve this goal.
For this purpose, the PS4's OS layer is very rich compared to the PS3 or Vita.

Cerny: The OS is based on BSD. I believe this is the first game console using this architecture.
From the OS side, the PS4 will allow use of many features simultaneously. Our goal is something like the following:

Send a video of the game you’re playing, and return to the game immediately to continue playing. Then, watch a game play video from your friend, then switch to video chat with that friend right away. If you see interesting DLC that your friend has, move to the store, then be able to ask your friend if it’s the right one.
In this manner, we envision it to be possible to come-and-go between and use many features. Even for multi-player games, you would be able to move the game to the background without quitting, and go through a similar routine.

The facilitation by the OS will allow for a rich set of actions.

Regarding the “real name” policy

Cerny: The concept of aliases for online games is the current paradigm. For example, in a multiplayer deathmatch, it's better to have an alias, right? However, when coop aspects or communications leveraging social interactions are brought into the game, it's better not to have an alias. For example, let's say you gift an item you earn in a game to a friend. That type of interaction should evoke a completely different feeling than when you're playing the game.

Of course, we respect the desire to use an alias for game play as well. We support that type of gaming too. Having a deathmatch using aliases is possible on the PS4 as well. But at the same time, we want to support aspects which increase the fun of gaming with real world friends. For example, wouldn’t it be wonderful to meet a college friend you haven’t seen in ages?

Cheers :)
 
No, they didn't. You are either blatantly ignoring your own quote so you can troll, or you are just blindly cherry picking what you want to see. Here, I'll bold the important part for you...


The system is also set up to run graphics and computational code synchronously, without suspending one to run the other. Norden says that Sony has worked to carefully balance the two processors to provide maximum graphics power of 1.843 teraFLOPS at an 800MHz clock speed while still leaving enough room for computational tasks. The GPU will also be able to run arbitrary code, allowing developers to run hundreds or thousands of parallelized tasks with full access to the system's 8GB of unified memory.

Hint. If they are using maximum graphics power of 1.843 TFLOPs from the GPU then the ONLY compute that is left over is from the CPU. You know that bolded part? The one that mentions two processors?

If you are using maximum (100%) of the CUs on the GPU for graphics there is nothing (0%) left over for compute no matter what you do.

No matter how much you might wish it to be so, the CUs on the GPU cannot do more than 100% work. So if there's 10% compute being done that means only 90% (1.659 TFLOPs) can be used for graphics. If there's 20% being used for compute that's only 80% (1.474 TFLOPs) that can be used for graphics. And this assumes that they can maintain 100% utilization, which is not going to happen most of the time.

However many FLOPs are being used for compute will NOT be able to be used for graphics.

Regards,
SB

Why would they be talking about running Compute on the CPU after all the talk about GPGPU computing?


My Speculations:

Yes, the two processors are being used together to run graphics and compute on the GPGPU efficiently. The CPU and GPGPU will use the compute pipelines in a way that's like the SPEs connected to the PPU on the PS3 to run compute tasks, and it's using an asynchronous compute architecture so that it can run compute code at the same time that the GPU is doing graphics.


Everyone is saying that everything I'm saying is wrong but none of you have said anything that matches up with what Mark Cerny or Chris Norden said.
 
His statement contradicts itself. The obvious interpretation is that it was a mistake in phrasing and we need not keep looking for mystery hardware that reconciles the statement.
 
I just hope the tech guys at Sony will get a little more specific concerning their PS4 hardware once Microsoft finally does their official reveal for the next Xbox.

Speculation has been going round in circles ever since last month's Playstation event. Time to give us some more (and especially less vague) insight to chew on :smile:
 
This is the first time I've heard of a Southbridge that has its own CPU which can take control of the system when the central processing chips are powered down. Very interesting.
 
Gaffer rykomatsu translated the Watch Cerny interview:

Erm, is Cerny for real?

In the past 3 years, there have been a large number of refined tools and technologies released for the x86 architecture. If another architecture had been selected, it probably would have been even more problematic. The x86 architecture is well known and development is relatively easy.

Ummmm. Yeah. Hate to break it to you Cerny, but there have been a large number of refined tools and technologies for x86 released for much longer than the most recent 3 years. :p

Regards,
SB
 
Why would they be talking about running Compute on the CPU after all the talk about GPGPU computing?


My Speculations:

Yes, the two processors are being used together to run graphics and compute on the GPGPU efficiently. The CPU and GPGPU will use the compute pipelines in a way that's like the SPEs connected to the PPU on the PS3 to run compute tasks, and it's using an asynchronous compute architecture so that it can run compute code at the same time that the GPU is doing graphics.


Everyone is saying that everything I'm saying is wrong but none of you have said anything that matches up with what Mark Cerny or Chris Norden said.

Really, I just quoted your quote of what Chris Norden said.

For the GPU you cannot get 100% = 100% + something greater than 0%. It isn't going to happen no matter how much you wish it were so. There is a maximum of 100%. If 100% is being used for graphics that leaves 0% for compute.

And you are STILL ignoring that he specifically said two processors. Let me repeat that for you again. Two processors. And in case you forgot already. Two processors.

Regards,
SB
 
Gaffer rykomatsu translated the Watch Cerny interview:

Cerny: The GPGPU for us is a feature that is of utmost importance. For that purpose, we've customized the existing technologies in many ways.

Just as an example…when the CPU and GPU exchange information in a generic PC, the CPU inputs information, and the GPU needs to read the information and clear the cache, initially. When returning the results, the GPU needs to clear the cache, then return the result to the CPU. We’ve created a cache bypass. The GPU can return the result using this bypass directly. By using this design, we can send data directly from the main memory to the GPU shader core. Essentially, we can bypass the GPU L1 and L2 cache. Of course, this isn’t just for data read, but also for write. Because of this, we have an extremely high bandwidth of 10GB/sec.

Also, we’ve also added a little tag to the L2 cache. We call this the VOLATILE tag. We are able to control data in the cache based on whether the data is marked with VOLATILE or not. If this tag is used, this data can be written directly to the memory. As a result, the entirety of the cache can be used efficiently for graphics processing.

This function allows for harmonization of graphics processing and computing, and allows for efficient functioning of both. Essentially "harmony" in Japanese. We're trying to replicate the SPU Runtime System (SPURS) of the PS3 by heavily customizing the cache and bus. SPURS is designed to virtualize and independently manage SPU resources. For the PS4 hardware, the GPU can also be used in an analogous manner, as with x86-64, to use resources at various levels. This design has 8 pipes and each pipe(?) has 8 computation queues. Each queue can execute things such as physics computation middleware, and other proprietarily designed workflows. This, while simultaneously handling graphics processing.


Basically, they're making a poor man's Cell, only easier and x86 style...
 
Ummmm. Yeah. Hate to break it to you Cerny, but there have been a large number of refined tools and technologies for x86 released for much longer than the most recent 3 years. :p
I think Cerny's referring to the rise of APUs while trying to talk layperson talk for the specific target audience of the interview. Also, you're reading an interview that's been translated twice, unless Cerny's fully fluent in Japanese and the interview was originally conducted in that language... Also, you may be over-analyzing what he says. These interviews are basically PR fluff for the most part, especially in the pre-release stage we're in right now. He wants to play up his baby's strengths and make it sound cool and advanced. Fully understandable. :)
 
Everyone is saying that everything I'm saying is wrong but none of you have said anything that matches up with what Mark Cerny or Chris Norden said.
Yes it does, but it doesn't match your interpretation, and you're failing to see the ambiguities of the English comments that allow for more than one meaning despite them being repeated several times.

At this point I'm calling an end to the 1.8 TF compute + graphics discussion as it's just generating noise. There are two understandings for people to choose to believe:

1) Compute (which is non-graphics processing) can be performed on the CPU and GPU, with Sony/AMD making it easier to use compute on the GPU, but that'll eat into the GPU's total graphics rendering performance by consuming logic resources.

2) Compute can be performed in parallel with the GPU rendering functions at no cost to the graphical rendering.

Full explanation of both views has been made. There's no point in repeating these ideas ad nauseam and clogging up the thread when clearly neither side is going to budge one inch.
 
The translation clears up a few things for me.. after reading the google translation I somehow thought they were referring to CPU cache bypass operations but apparently it's for the GPU side (others seemed to have already picked this up). I wonder if it'll be done by address/via MMU or if it'll involve special instructions.

Still not totally clear on what this VOLATILE tag means. Something that's part of the cache wouldn't go direct to memory by definition, unless they're wasting cachelines instead of using TLB attributes to mark memory as uncacheable. I think it means that the writeback will take the non-coherent bus and won't snoop the CPU's caches, and/or won't be updated by modifications from the CPU. Or maybe that it is coherent with changes to main RAM and can be updated, hence actually volatile...
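
For a loose PC-side analogy to cache-bypassing writes, x86 already has non-temporal stores. Whether the VOLATILE tag behaves anything like this is exactly what's unclear, so treat this strictly as an analogy:

```cpp
#include <cstddef>
#include <xmmintrin.h> // SSE intrinsics

// Analogy only: a non-temporal store writes to memory without allocating a
// cache line, so the data neither pollutes nor lingers in the cache. The
// VOLATILE tag sounds like a per-line, GPU-L2 flavor of this idea, but
// that's speculation. dst must be 16-byte aligned for _mm_stream_ps.
void stream_out(float* dst, const float* src, std::size_t n) {
    for (std::size_t i = 0; i + 4 <= n; i += 4) {
        __m128 v = _mm_loadu_ps(src + i); // normal (cacheable) load
        _mm_stream_ps(dst + i, v);        // non-temporal (cache-bypassing) store
    }
    _mm_sfence(); // ensure the streamed writes are globally visible
}
```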
 
Gaffer rykomatsu translated the Watch Cerny interview:

That actually clears up quite a few things, and the upcoming Gamasutra article should provide further detail. However, that interview, and others, make it clear that there's a lot of discussion of "this is what we'd like to do" and not "this is day one." Vita Remote Play seems to be a lock thanks to the PS4 hardware video encoder and Vita hardware video decoder. However, on the feature, multitasking, and social integration side of things it's far more ambiguous. Some of the features they're talking about now were discussed 8 years ago during the PS3 reveal; the F1 reveal with the in-game video chat immediately comes to mind. What I would love to see is a publicly available road-map of planned features and target dates released before launch. Normally, I would seriously doubt that would ever happen with SCE, but with the direction Cerny is taking things with the PS4, who knows. There's a community features suggestion section on the PS Blog for PS3, so maybe this time around they'll be even more open/transparent.

I'm also curious to know how robust the video encoder is. There are 4 key features that seem to rely on it.
  1. Remote Play
  2. Live Stream
  3. Last few minutes of video (for upload)
  4. PS4 Eye Video feed
Can each of those features be used simultaneously? Can a 5-minute clip be uploaded to Facebook while a live stream is also being sent to Ustream for an AR-enabled game, or while doing an in-game video chat? Can any of those features be used while doing Remote Play?

I guess it's a matter of whether the same video encode stream can be used for multiple purposes, while also handling a separate RAW or YUV video stream from the Eye being encoded to a more digestible (to the system) H.264 stream. Can the encoder even handle separate encodes at the same time? I would think the same encoded stream could be sent through WiFi to the Vita while also being sent to Ustream, with the last few minutes of buffer (I'd love SCE to detail the video buffering process for this, namely where it's stored) available to clip and upload, but that still leaves the question of simultaneous Eye use. Although the lack of Remote Play support for Eye-enabled games has already been confirmed, I think it's been said for controller reasons and not necessarily because the encoder can't handle it. Regardless, it will be interesting to see how it all shakes down.
 
unless Cerny's fully fluent in Japanese and the interview was originally conducted in that language...

Mark is fluent in Japanese.
But I agree with the rest of your post, it astonishes me how people look for unlikely magic technology explanations for phrases in interviews.
 