Xbox One's mandatory Kinect, and the PS Eye

Mandatory Kinect for the Xbox One could mean some of its functions trickling down to the PSEye in multiplatform games. Mainly voice recognition, because the PSEye and Kinect 2 are two completely different beasts.
COD: Ghosts, a multiplatform game, for example, is rumored to have the dog controlled via voice commands.

But everything depends on Sony!
Speech recognition on XBO is managed via a dedicated chip and is completely free, i.e. no impact on the GPU or CPU cores.
I suppose the PSEye will not have this kind of chip, and, because the PSEye is completely optional in the PS4 ecosystem, it is safe to assume the PS4 will not feature this kind of dedicated chip on board.
If I remember correctly, speech recognition was quite a heavy task for the first Kinect.
Translated to the PS4 environment, this could mean that 1 CU of the 4 extra CUs would be reserved for speech recognition tasks in multiplatform games.
But in order to work, this is something Sony must decide from the beginning.
For all multiplatform games that support voice commands, 1 CU would have to be reserved for this task whether or not you have a PSEye, and whether or not you ever use the feature.
And so, as I said in the "VS Audio" discussion, I have a strong feeling that the "14CU + 4CU rule" is already established and codified by Sony, at least for third-party developers.

And speaking of this, DF received confirmation some time ago from a developer that Sony mandates some kind of reservation for Remote Play or camera features.
Well, as some fellow forumers have already pointed out, the 14+4 CUs might be an incorrect assumption, because it would make no sense to hold 4 CUs in reserve when they could be used freely.

Other than that, I agree with you, and I think the OP has an excellent point I had never considered: the universal presence of Kinect could also push developers to implement certain mechanics for those who have the PSEye, provided the development tools are good.

Iirc, adding voice commands for Kinect to any game is so easy it takes five minutes to implement, if not less, because of the development tools.
 

If any developer ever told me they could implement anything in 5 minutes, I'd be having a conversation with them about their future at the company after I'd finished laughing.
 
:eek: Okay... Perhaps I missed the word "possibly"? And considering there are some games out there that seem to have been programmed in five minutes or less, because they are *slightly* bad...

Perhaps I misunderstood it, but I remember reading somewhere that to add voice commands for Kinect, developers just had to write the text associated with each voice command, which of course should speed things up.

On second thought, it might not be that easy after playing Skyrim with Kinect support, but we are talking about more than a hundred voice commands there.

Even so, just being able to use plain text to build something can make the difference between a poky development and one that zips along.
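If the tooling really is text-driven, the developer-facing part could be little more than a table mapping phrase text to an action. Here's a minimal C++ sketch of such a text-keyed command table (purely hypothetical; the names and the dispatch model are my own, not the actual Kinect SDK):

```cpp
#include <functional>
#include <map>
#include <string>

// Hypothetical command table: the developer only supplies the phrase text.
// A speech-recognition layer (not shown) would call dispatch() with the
// text it recognized.
class VoiceCommands {
public:
    void add(const std::string& phrase, std::function<void()> action) {
        commands_[phrase] = std::move(action);
    }

    // Returns true if the recognized phrase matched a registered command.
    bool dispatch(const std::string& recognized) {
        auto it = commands_.find(recognized);
        if (it == commands_.end())
            return false;
        it->second();
        return true;
    }

private:
    std::map<std::string, std::function<void()>> commands_;
};
```

With a table like this, adding the hundredth phrase costs the same as adding the first, which would explain how a game like Skyrim could expose so many commands.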
 
Sure, it's probably not difficult to add them, but in my experience most engineers couldn't write a program to load a file, sort it, and write it back to disk in an hour, even if you allow them to use the STL. Though most would claim they could.
And there is an enormous difference between writing a trivial app and working in a big code base, where the minimum iteration time can be 10+ minutes.
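For reference, the load/sort/write exercise is only a few lines with the STL; the point stands that typing it is the easy part. A sketch, assuming a plain text file sorted line by line, lexicographically:

```cpp
#include <algorithm>
#include <fstream>
#include <string>
#include <vector>

// Read a text file, sort its lines, and write it back in place.
// Returns false if the file could not be opened.
bool sort_file(const std::string& path) {
    std::ifstream in(path);
    if (!in)
        return false;

    // Pull every line into memory.
    std::vector<std::string> lines;
    for (std::string line; std::getline(in, line); )
        lines.push_back(line);
    in.close();

    // Lexicographic sort via the STL.
    std::sort(lines.begin(), lines.end());

    // Overwrite the original file with the sorted lines.
    std::ofstream out(path, std::ios::trunc);
    for (const auto& l : lines)
        out << l << '\n';
    return true;
}
```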

Look at the stats on the number of lines of code the average engineer generates per day.

Software is hard. And people who don't do it don't get that.
 
Well, as some fellow forumers have already pointed out, the 14+4 CUs might be an incorrect assumption, because it would make no sense to hold 4 CUs in reserve when they could be used freely.

Off topic.

The 14+4 CUs did come from a source that provided reliable info.

Maybe it's not special CUs or some hard reservation of CUs that mandates a 14+4 setup.

Maybe it's the Onion+ bus that dictates that type of setup. It's only 20 GB/s worth of bandwidth, and if it was meant for general graphics use, then why isn't it called Garlic+? Onion is also known as the Fusion Compute Link, and its main purpose was direct data transfer when compute and graphics cores are working together. The label Onion+ does hint at a GPGPU-based solution.

Maybe 4 CUs' worth of GPGPU work will readily saturate the Onion+ bus, so that's where the 14+4 setup comes from.

On topic.

Kinect is a potential dominating control interface for home computing. I think that's why MS is so high on it, yet it's not readily shown off for gaming.
 
Sorry for the noob question, but what does STL stand for? :oops: Your words make a lot of sense except for that part; I can't figure it out on my own. So unless you tell me what it means, my brain will just keep coming up with ever more complex explanations, words, and terms I don't have a clue about.

Other than that, I am certainly aware that you know what you're talking about. It's fairly obvious you have a solid grounding in both the basics and the advanced side of computer science.

Software must indeed be hard, and a clear and clean programming structure -aka great development tools- should help a lot to speed things up without breaking your code.
 
People underestimate the effort needed to develop anything. Look on YouTube for something like Unity tutorials: you'll see people throwing together little controlled characters and interfaces and such in literally minutes. But progressing any of those ideas into a real game takes a helluva lot longer, and that's on a super-easy middleware engine.

Pretty much every library and tool comes with some hours of minimum learning, and then a specific way of doing things that doesn't necessarily fit how you intended to organise your game, so you have to develop a host of workarounds to fit it in. Developer forums are busy with people asking how to use libraries and tools, when a coder has poked away for some time at what seemed an easy project only to hit a wall and need help.

Every library saves the developer having to write their own from scratch, but they all have an operation cost that is far higher than ~zero.
 
Sure, it's probably not difficult to add them, but in my experience most engineers couldn't write a program to load a file, sort it, and write it back to disk in an hour, even if you allow them to use the STL.

...that's because you need to write at least 10 boring tests for that functionality :p
 

Quite off-topic, but without those tests, reading a string with spaces would likely fail. Streams in C++ are a bit trickier than just using foo << bar. Boost has nice archiving support that saves the souls who don't want to know all the little details about streams in C++.
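A quick illustration of the trap being referred to: operator>> stops at the first whitespace, so round-tripping a string with spaces needs std::getline instead:

```cpp
#include <sstream>
#include <string>
#include <utility>

// Reads the same text two ways: operator>> stops at whitespace,
// while std::getline captures the whole line, spaces included.
std::pair<std::string, std::string> read_both(const std::string& text) {
    std::istringstream token_stream(text);
    std::string token;
    token_stream >> token;           // stops at the first space

    std::istringstream line_stream(text);
    std::string line;
    std::getline(line_stream, line); // keeps the spaces
    return {token, line};
}
```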
 
Can you please stop with this 14+4 nonsense! You have posted this argument/statement in multiple threads already. The 18 CUs can be used in any manner/configuration seen fit.

It is just a theory, but I hope it turns out to be true.
I mean, we now have many clues regarding the 14+4 CUs: Cerny interviews, leaks, hardware assumptions & analysis, rumors (also from this board).

If it is true that the PS4 will gain practically nothing, or next to nothing, by using the extra 4 CUs for rendering, then I really hope those 4 CUs can be used for other tasks.
I am sure I am not the only one who would love the PS4 to feature speech recognition, some kind of camera recognition, and fabulous next-gen in-game audio.

In a sense, I am starting to believe these 4 CUs were cunningly added to compensate for the lack of dedicated hardware for audio & the PSEye.
Maybe Sony didn't have the time and/or funds to finance that kind of R&D, or maybe they simply chose to add these extra CUs because general-purpose computing is a perfect fit for their needs and vision.

Sure, as I have said elsewhere, 18 CUs vs 12 CUs just for rendering is a far better marketing & communication tool against MS (and, in that sense, it would be desirable for Sony to keep using it...), but I suspect this theory could be true.
 

I patiently wait for the days when Sony sticks "18CU vs 12CU!!" in an ad.
 
Standard Template Library, a built-in collection of useful functions and containers in C++.
Ah, okay, thanks for the explanation DrJay24. Now I realise I could never try it out, even if only out of pure curiosity. My code would never make it out of oblivion.

Shifty, I will take a look at one of those Unity videos to see what you mean. My only "coding" experience comes from *hacking* some games by modifying values with a very simple text editor like Notepad or Wordpad. Yup, something as simple as that could change a game dramatically -Need for Speed 3: Hot Pursuit and EuroFighter come to mind-. :eek:

I also managed to transform the shareware version of Doom into the full version -without access to the extra levels, though- because I learnt that the only difference was a single letter.

The very header of the full-version WAD started with pwad or something like that, while the shareware version started with swad or so -I can't recall the exact letters-. The only thing I did was change the s in swad to a p, and voila.
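For what it's worth, Doom WADs actually begin with a four-byte magic of "IWAD" (internal, full game) or "PWAD" (patch/add-on), so the letter in question was probably the I/P. Checking it is just a four-byte read; a sketch:

```cpp
#include <fstream>
#include <string>

// Read the four-byte magic at the start of a WAD file.
// Real Doom WADs start with "IWAD" (full game) or "PWAD" (add-on).
std::string wad_magic(const std::string& path) {
    std::ifstream f(path, std::ios::binary);
    char magic[4] = {};
    f.read(magic, 4);
    return std::string(magic, 4);
}
```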

I also learnt that the header of executable files starts with MZ because of a programmer called Mark Zbikowski, who seems to be the original creator of the format. I am really bad at remembering names, but I remember his quite well.

You gave me an idea for a game anyway. :eek: If I ever make a game, I am going to create a main character who is a lag bee -I remember Shifty the laggard- called Shifty Geezer.

Lag bees are awesome.
 
I thought Mark Cerny already explained it's 18 CUs, and that the 14 CU "balance" was a badly interpreted leak?

There's a reasonable balance somewhere that keeps both the CPU and GPU busy without one being a bottleneck for the other. He explained that the graphics pipelines have "holes" in their use of the CU ALUs, making those time slots useful for GPGPU work while the graphics pipeline is busy elsewhere. The modifications they made were specifically to facilitate that. When he said it's not "round", he seems to mean the ratio of CPU power to GPU power: they knowingly biased it toward the GPU so that GPGPU work would rebalance things.

The rumored 2 CPU cores being reserved by the OS, now that's much more interesting. Could they have planned a natural UI with the camera, which would require system-wide resources? ;)
 

That totally makes sense. I wonder if they'd still reserve it for now but later give some of those CPU cycles back to devs.
 
The VGLeaks diagram shows 2 extra pipes outside of the 8 compute pipes: one for VSHELL (the "desktop"), and one for mixed graphics and compute use.

I am not familiar with GCN usage and OS interaction. Is it possible that they assigned 2 cores to look after those 2 pipes part-time? Those 2 cores may also need to help with the other 8 compute pipes?
 
If Kinect proves very successful for the Xbox One, could that lead to multiplats on both platforms, with Microsoft footing the bill for mass adoption of the tech and Sony piggybacking off its success? Or is my line of thinking flawed?
I can't imagine any dev programming multiplats to use the PSEye when it's not bundled with the system.
 