How should devs handle ports between consoles? *spawn

Yeah, the parity axe swings both ways, it doesn't discriminate. It can strike in subtle ways as well; I'm sure the non-HDD 360 SKU caused much grief by year 6, leaving the parity axe to strike both the PS3 and 360+HDD setups. Honestly you aren't hugely concerned with it at the beginning of a gen because you are so busy just trying to ship the damn game, thanks to the joys of console development forcing you to re-write everything from scratch when a new box comes out. It does get irritating after that first game is out, though, when you have more time to actually think about things you want to try and implement.

Yeah, the non-HDD 360 SKUs must have been hurting everyone by the end. I'm surprised MS was able to force all but a tiny number of games to keep the non-HDD SKUs supported right up to the present day.
Those optical only models must have kept bringing in the customers and their dollars at the low end, especially with Kinect. Too bad when you consider what a huge upgrade even a $10 USB pen drive can be...

So yeah, the parity axe swung both ways last gen and it's going to swing both ways again this generation.
 
Wow, makes me wonder how many Wii U ports of 360/PS3 games aren't using more than 512MB of memory. Pretty sad, but when Criterion said using the high-res textures was as easy as flipping a switch, it makes sense that a lot of memory was sitting there vacant. I suppose other developers just never bothered.

First-gen Wii U games were basically struggling to ship, given the troubles with the development kits and the CPU. Perhaps creating Wii U-specific texture settings, and profiling and testing those, would have taken time they simply didn't have.

Unfortunately, there weren't really any second gen WiiU multiplatform titles ...

I think function made a valid point.
Last gen PS3 had the edge on sound output, being able to output 7.1 channels.
This time Xbox has the advantage on sound.

Yep. Audio often seems to be a lower priority. Binaural is awesome. Throw some of that magic out of the pad's headset sockets, plz MSSony.

In the case of audio, it's really a mixture of Sony and MS and PC users that lose out. The axe swings lots of ways ...
 
People here are talking about costs to justify parity. For a game like the new AC, is it so expensive to up the resolution or AA quality, or to use more advanced effects in the PS4 version that were already developed for PC?



I mean, the PC version will, at the higher settings, use more advanced effects. Why not use those already-developed, higher-quality effects to take advantage of the greater power of the PS4?



I really don't understand.


Consoles are locked to 30 or 60 FPS, where no such limitation exists on PC. The game needs to ship with enough frame budget for the worst-case scenario. So with a 30fps target, they really need the game to be running well above that, and only drop to 30 in the worst possible scenario.

If a feature you've enabled puts you below that, you've got to optimize to get it back up there again. So it's not that you can't just enable these features; it's that when you do and there's no space left for them, you have to make space. And making space takes time and money.
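To put rough numbers on that - a minimal sketch of the budgeting arithmetic, assuming a 30fps target and made-up per-system costs (nothing here is from any real engine):

```cpp
#include <cstdio>
#include <map>
#include <string>

// Hypothetical worst-case frame costs in milliseconds; the numbers are
// invented purely to show how quickly a 33.3 ms budget disappears.
int main() {
    const double frame_budget_ms = 1000.0 / 30.0;  // 30 fps target

    std::map<std::string, double> systems = {
        {"gameplay + AI",     8.5},
        {"physics",           5.0},
        {"render submit",     9.0},
        {"audio",             1.5},
        {"shiny new feature", 11.0},  // the thing you'd like to switch on
    };

    double total = 0.0;
    for (const auto& kv : systems) total += kv.second;

    const double headroom = frame_budget_ms - total;
    std::printf("worst case: %.1f ms of %.1f ms budget (headroom %.1f ms)\n",
                total, frame_budget_ms, headroom);

    if (headroom < 0.0) {
        // No space left: optimise one of the systems above, or cut the feature.
        std::printf("over budget by %.1f ms - time and money to claw back\n", -headroom);
    }
    return 0;
}
```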
 
Are you sure about this?

Not a hardware lock but a goal to reach.

It's a goal to ensure there isn't screen tearing. Officially you can run unlocked at 60, but that will result in a feeling of slowdown, and tearing will occur.

I've worked indie, so maybe my response is shallow compared to the graphics engine guys here, but that's generally the rule of thumb for the guys I worked for.
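In practice that "goal" falls out of how you pace the main loop - a purely illustrative sketch (dummy workload, standard library only), not anyone's actual engine loop:

```cpp
#include <chrono>
#include <thread>

// Illustrative 30 fps pacing: do the frame's work, then sleep out the rest of
// the ~33.3 ms slot so frames land on a steady cadence instead of whenever.
int main() {
    using clock = std::chrono::steady_clock;
    constexpr auto slot = std::chrono::microseconds(33333);  // ~1/30 s

    auto next = clock::now() + slot;
    for (int frame = 0; frame < 300; ++frame) {
        // update(); render(); present();  // real work would go here
        std::this_thread::sleep_for(std::chrono::milliseconds(20));  // fake 20 ms frame

        std::this_thread::sleep_until(next);  // burn off any leftover budget
        next += slot;

        // If the frame ran long, reschedule from "now" rather than bursting
        // several quick frames to catch up - that burst is what feels like judder.
        if (clock::now() > next) next = clock::now() + slot;
    }
    return 0;
}
```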
 
Ultimately, the game design (NPCs in AC, for example) cannot be influenced by machine power, or you provide different experiences. The experience has to be the same across machines, with just eye-candy and IQ differing.

Well, they did it for Dead Rising on the Wii. I mean, it's not like reducing the crowd from 1000 to 800 will make the game completely different, yet it'll cut the CPU requirement for that specific task by 20%.

And while we're at it - isn't this the sort of thing the PS4 CUs are supposed to be used for?
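Mechanically, that sort of scaling is cheap to express - the argument here is about whether you should, not whether you can. A throwaway sketch with invented caps and a hypothetical Platform enum:

```cpp
#include <cstdio>

// Hypothetical per-platform crowd caps: the gameplay systems stay identical,
// only the number of background NPCs simulated changes. Numbers are made up.
enum class Platform { PS3, X360, WiiU, XB1, PS4, PC_High };

int MaxCrowdNpcs(Platform p) {
    switch (p) {
        case Platform::PS3:
        case Platform::X360:
        case Platform::WiiU:    return 800;   // ~20% off the headline figure
        case Platform::XB1:     return 1000;
        case Platform::PS4:     return 1000;  // or more, if the design tolerates it
        case Platform::PC_High: return 1200;
    }
    return 800;  // safe default
}

int main() {
    std::printf("PS4 cap: %d NPCs, last-gen cap: %d NPCs\n",
                MaxCrowdNpcs(Platform::PS4), MaxCrowdNpcs(Platform::X360));
    return 0;
}
```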
 
If it's something that can scale without affecting the game, like from 1000 NPCs to 800, why bother with the larger number to begin with? Why not just keep it at 800 for everyone? And if the difference is significant enough that you don't want to limit the more powerful machine - say 200 NPCs versus 1000, and you don't want high-end PCs stuck with only 200 NPCs when they can cope with far denser crowds - then you materially change the game: it becomes far easier to manoeuvre through the less dense crowd on the lower-spec machine, and that changes the options available to the player.
 

I'm sure the 1000-NPC figure is a bullet point, and 800 might have been a sweeter spot for the consoles.

Talking about last gen - the issue was more that the machines were equally powerful on paper, meaning people had a 'reason' to moan when the PS3 wasn't as good as the X360. However, as a PS3 gamer I (eventually) accepted that Sony messed up with Cell, making the PS3 harder to dev on, and would accept that some games had sacrifices vs the X360.

This gen the PS4 is apparently quite a bit more powerful and easier to program for, so again it seems PlayStation gamers feel they have a 'genuine reason' to complain about parity.
 
Not a hardware lock but a goal to reach.

It's a goal to ensure there isn't screen tearing. Officially you can run unlocked at 60, but that will result in a feeling of slowdown, and tearing will occur.

I've worked indie, so maybe my response is shallow compared to the graphics engine guys here, but that's generally the rule of thumb for the guys I worked for.

So you are saying that if you do not reach a locked 30/60 fps you will tear?
 
However, as a PS3 gamer I (eventually) accepted that Sony messed up with Cell, making the PS3 harder to dev on, and would accept that some games had sacrifices vs the X360.
Cell wasn't really the problem. RSX just wasn't all that hot, and Cell was needed to do a lot of work, which entailed a crazy workload for the devs. If RSX had been Xenos, Cell could have done a lot of CPU work instead and contributed what it was supposed to, which was 'richer worlds'. Although chances are devs would have just used it for prettier graphics. ;) That's an interesting point about async compute though. It's basically processing power that can't be used for graphics, slotting in between the graphics workloads, so it'll be used for something else. A lot of that might end up being rather visual though.
 
So you are saying that if you do not reach a locked 30/60 fps you will tear?


It has to divide evenly into, or be a multiple of, the display's 60Hz refresh. So 120 and 240 would also be fine. Unless you have Nvidia G-Sync.

It comes down to your display's refresh rate. If you put out a new frame while your monitor is mid-refresh, the screen ends up showing pieces of two frames, and you get the tear.

It's a problem that's only come with flat panels really.

Consoles have adaptive Vsync to help with this, however. It will cap at the desired number if you run above it, and it will let the frame rate go unlocked if it drops below. Or you can run full Vsync to remove the screen tearing, but then you get weird judder and latency when the frames dip below or rise above the desired frequency.

Here's the wiki: http://en.m.wikipedia.org/wiki/Screen_tearing
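The "adaptive Vsync" behaviour described above boils down to a per-frame decision like this - a sketch of the policy only (invented function names, no real SDK calls, numbers just for illustration):

```cpp
#include <cstdio>
#include <initializer_list>

// If the frame finished inside the refresh interval, wait for vblank and get a
// clean, capped frame; if it ran long, present immediately and accept a torn
// frame rather than stalling for a whole extra refresh (which is the
// full-Vsync judder/latency case).
bool WaitForVBlank(double frame_time_ms, double refresh_interval_ms) {
    return frame_time_ms <= refresh_interval_ms;
}

int main() {
    const double refresh_ms = 1000.0 / 60.0;  // 60 Hz panel, ~16.7 ms per refresh

    for (double frame_ms : {12.0, 16.0, 20.0, 35.0}) {
        std::printf("frame took %4.1f ms -> %s\n", frame_ms,
                    WaitForVBlank(frame_ms, refresh_ms)
                        ? "sync to vblank (clean, capped)"
                        : "present now (may tear, less lag than waiting)");
    }
    return 0;
}
```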
 
Is it? I thought it was the opposite.
Why? :???:

It's a problem that's only come with flat panels really.
Tearing exists on CRTs for exactly that reason - the output image changed mid-scan. It was even 'used' on the Amiga, which could adjust an output value each scanline, giving rise to its famous gradient backdrops. The Amiga also had a genlock to synchronise with the display signal. The only possible reason tearing was less common on older consoles running on CRTs was because they didn't use it, and/or maybe the hardware locked the output buffer during sync so it couldn't be changed (?). On PC, tearing abounded, one minute from too-slow frames and the next from too-fast ones, with multiple new frames in the same refresh being particularly jarring.
 
Why? :???:

Tearing exists on CRTs for exactly that reason - the output image changed mid-scan. It was even 'used' on the Amiga, which could adjust an output value each scanline, giving rise to its famous gradient backdrops. The Amiga also had a genlock to synchronise with the display signal. The only possible reason tearing was less common on older consoles running on CRTs was because they didn't use it, and/or maybe the hardware locked the output buffer during sync so it couldn't be changed (?). On PC, tearing abounded, one minute from too-slow frames and the next from too-fast ones, with multiple new frames in the same refresh being particularly jarring.


Yea, I had a mental lapse there, thinking about latency/processing time. Correct, CRTs have the same issue.
 

I read somewhere that the Xbox One is easier to develop for, mainly due to the SDK; the API is really similar to what you would find on a PC.

I do not understand why PS4 users complain about parity; the HW is basically the same, the PS4 is marginally better.
Ain't it enough that some titles already show that difference?

One more important thing: how much effort was spent on learning a new architecture (PS3) and trying to achieve parity with the Xbox 360?
How much effort is being put into the Xbox One version today?
 
I read somewhere that the Xbox One is easier to develop for, mainly due to the SDK; the API is really similar to what you would find on a PC.
I don't think Goonergaz was saying PS4 is easier to program for than XB1. But none are difficult at this point. PC and XB1 share the same general API calls as I understand it, but PS4 isn't particularly hard to adapt to at that level (and even has tools to make the transition from DX easier, I believe). PS4 also uses Windows PCs running Visual Studio, the same development environment as PC and XB1 if you're coding directly. OTOH XB1 has the ESRAM you have to design around, so even if the tools are more familiar, it isn't an easy platform to port to. PC, meanwhile, has to worry a bit about split RAM pools, so in that regard PS4 is actually easier than PC. At least on paper. CPU BW contention may actually make that a false positive.

Basically though, none of them is a dog, unlike the PS3 last gen, which needed considerable effort to get anything decent from it. All use the same processes, the same CPU code, and the same general API calls, even if they have different names. The issues aren't coding at this point* but engineering to the different platforms, and in that respect XB1 offers a deviation from the other two.

* Barring any SDK headaches which are quite possible.
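As a toy example of what "engineering to the platform" means for the ESRAM case - a hypothetical pool-picking helper, not any real console API (the pool names, sizes and policy are all invented):

```cpp
#include <cstdio>
#include <string>

// The renderer asks for a render target; a per-platform policy decides which
// memory pool it lives in. On a unified-memory machine there is no decision;
// on a machine with a small fast scratch pool you have to budget it.
enum class Pool { Unified, FastScratch, MainRam };

struct TargetDesc { std::string name; int width, height, bytes_per_pixel; };

Pool ChoosePool(const TargetDesc& t, bool has_fast_scratch, int scratch_left_bytes) {
    if (!has_fast_scratch) return Pool::Unified;           // PS4/PC-style single pool
    const int size = t.width * t.height * t.bytes_per_pixel;
    return size <= scratch_left_bytes ? Pool::FastScratch  // fits in the 32 MB-class pool
                                      : Pool::MainRam;     // spill to DDR, eat the bandwidth hit
}

int main() {
    TargetDesc albedo{"gbuffer_albedo", 1920, 1080, 4};    // ~8 MB at 1080p
    Pool p = ChoosePool(albedo, /*has_fast_scratch=*/true,
                        /*scratch_left_bytes=*/20 * 1024 * 1024);
    std::printf("%s -> %s\n", albedo.name.c_str(),
                p == Pool::FastScratch ? "fast scratch" :
                p == Pool::Unified     ? "unified"      : "main RAM");
    return 0;
}
```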
 

I would argue that the hardware in the PS4 is better than marginally better; it's not orders of magnitude better, but it's sure not marginal.
 

The XBO is a pain due to the memory setup; it has DX, but then it has the 32MB (ESRAM) part, which makes it complicated. If you do a few googles there are a few examples where devs have stated PS4 dev time is quicker (I think one quoted 50%).

The GPU in the PS4 is substantially better than the XBO's. As I said before, the raw power of the boxes last gen was similar enough for one to 'expect' similar multi-plats, whereas this gen, due to the better GPU (and it being quicker to program for), it's fair to say users will expect PS4 multi-plats to look better.

Here's some dev quotes:
http://ps4versusxbone.wordpress.com/2013/09/24/developer-comments-timeline/

I meant to add that (by all accounts) Sony also went to devs and asked what they wanted - so again I'm struggling to see why the CPU is already (apparently) limiting games.
 
I meant to add that (by all accounts) Sony also went to devs and asked what they wanted - so again I'm struggling to see why the CPU is already (apparently) limiting games.

Maybe they didn't say Intel when they talked about CPU but only x64... ;)
 