Is UE4 indicative of the sacrifices devs will have to make on consoles next gen?

1. Well, the proof is in the pudding, just look at the video.

Proof of what? Do you have the exact same video running on different hardware with a smoother frame rate? If not, then exactly what do you suppose this is proving?

2. Well, I said in compute tasks.

And I said irrelevant unless those compute tasks need to run synchronously with gameplay and are unable to run on the CPU side of the PCI-E interface. In other words, you're speaking of advantages that may not exist at all. It's possible, I'll grant you that, but then it's also possible that the vast majority of GPGPU requirements for next gen games will be agnostic to PCI-E latency (i.e. not required to run synchronously with gameplay). Or they will be light enough to run on big PC CPU SIMD units, or even IGPs.

3. Much quicker access, much less copying and waiting for data, more options for practical use. Let's write an email to Nvidia's architects, they are wasting resources on a unified address space in Maxwell :D

Quicker access how? How will the GPU be able to access its local memory faster in a unified setup than with a dedicated memory setup? Copying? How much copying is required between GPU memory and main memory in the middle of rendering a scene? What impact does this have? Ask yourself this: how much more performant is something like Llano or Trinity compared to a discrete GPU of similar specification and memory bandwidth? Your argument concludes it would be significantly more so. Let's see if you can find some benchmarks showing that ;)

4"DX11+ GPUs as standard and the ability to construct draw lists over multiple threads thus making submission/draw faster meaning more draw calls and more stuff on screen and looking better.

Okay, so this just became another draw calls argument, the same one you repeated two points later in a different way. As stated in my last post, draw calls are a known and clear disadvantage for the PC. But it's no worse than it was last generation, which for the most part was stuck with the limitations of draw calls under DX9; in fact it's a lot better thanks to DX11, despite the shortfalls you correctly mention here. DX11 at least improves the situation by a fairly significant amount, albeit still nowhere near on par with the console situation. Let's not overstate the issue though, unless you can point to a game from the last generation that suffered on PC in relation to its console counterpart thanks to this limitation. Just the one will do...
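
For anyone curious what that DX11 improvement looks like in practice, here's a minimal sketch of deferred contexts: each worker thread records its own command list and the immediate context replays them, which is how DX11 lets submission cost be spread over several cores. It assumes Windows with the D3D11 headers and d3d11.lib, and the actual resource setup and Draw calls are left out, so it only illustrates the threading pattern, not a working renderer.

Code:
#include <d3d11.h>
#include <thread>
#include <vector>
#include <cstdio>

int main() {
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* immediate = nullptr;
    // Create a hardware device and its immediate context (no swap chain needed here).
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION,
                                 &device, nullptr, &immediate)))
        return 1;

    const int workerCount = 4;
    std::vector<ID3D11CommandList*> lists(workerCount, nullptr);
    std::vector<std::thread> workers;

    for (int i = 0; i < workerCount; ++i) {
        workers.emplace_back([&, i] {
            // One deferred context per thread; recording happens in parallel.
            ID3D11DeviceContext* deferred = nullptr;
            if (SUCCEEDED(device->CreateDeferredContext(0, &deferred))) {
                // A real renderer would bind state and issue Draw*() calls here.
                deferred->FinishCommandList(FALSE, &lists[i]);
                deferred->Release();
            }
        });
    }
    for (auto& t : workers) t.join();

    // Replay on the immediate context; this is the only part that must stay single-threaded.
    for (auto* cl : lists) {
        if (cl) { immediate->ExecuteCommandList(cl, FALSE); cl->Release(); }
    }

    printf("Recorded and executed %d command lists\n", workerCount);
    immediate->Release();
    device->Release();
    return 0;
}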

5. And that's exactly why running both graphics and compute tasks is so slow currently.

What evidence do you have of this apparent slowness? Current PC architectures are plenty capable of running both graphics rendering and GPGPU workloads on the same GPU simultaneously, provided the GPGPU work isn't required to run synchronously with gameplay. The PS4, if indeed it does have more ACEs, will be a little more efficient at this than first generation GCN GPUs, but should be no more so than the second generation, which should be prevalent by the time the console actually launches.

7. Partially agree, but 2GB will be limiting not even from rendering performance but simply from targeting assets etc... And in 2005, 512MB of VRAM was a lot more common than 8GB is in 2013...

Not really. In 2005, only one GPU sported 512MB of memory if I recall correctly, which was the 7800GTX 512MB. By the end of 2013, when these consoles launch, there's no reason why we won't see 8 or even 12GB GPUs available. The only limitation today is memory chip density, which by the end of this year will have doubled over what it is today. And today we have 4 and 6GB cards available (albeit in limited quantities). Simply doubling the density of the chips on those cards gives us 8GB and 12GB cards.

Nevertheless, I still agree that 2GB will ultimately be a limitation of today's cards and will inevitably result in things like texture resolutions having to be reduced or loading times/instances having to be increased amongst other things.

BONUS

1. We all know how much optimisation most PC ports get...

Yes, about the same as last generation. So we all understand that as the console generation progresses, consoles will gradually require less raw power to match PC performance thanks to platform-specific optimisations. However, early on in the generation (when the likes of the 680 will only barely still be relevant), those optimisation advantages are going to be fairly limited, especially when you consider how similar the console architectures are to the PC this generation.

2. When these advanced games ship, Nvidia will have to sell the next round of cards, and I feel that's the perfect excuse to leave the 680 behind in performance, drivers adjusted accordingly...

Well, yes, absolutely. By the time we are even worrying about whether the 680 can keep up with the consoles it will be considered a last generation GPU, and GPUs with a lot more power will be available. Still, one generation old GPUs are still quite well supported, and this still falls in the timeframe of first generation (unoptimised) console games, which it will handle more than fine.

By the time we actually get to fully optimised console games a year or two into the generation, the 680 will indeed be old and falling out of regular driver support. At this point it may even start to struggle to keep up with the latest releases at full settings due to memory size limitations and lack of driver support. But by then even mainstream gamers will be running 860Tis sporting 4-8GB and 50% more raw performance, so I guess it's not that big a deal, eh?

Well, we heard this "similar DX level" line last time too, and quickly DX10 was needed for simple ports...

Really? And just how many of the hundreds of cross platform games required DX10 in the PC version for console parity? And how many achieved it with good old DX9?

Bad performance of some console ports was the result of the painful learning curve of in-order, multithreaded CPUs and of quick PC ports, not the GPU. And I remember good stuff like GRAW and Just Cause too, with graphical features absent in the PC versions running on hardware with more fillrate, texture rate etc. (X1900 XTX...)

GRAW was a completely different game on the PC. JC I'll grant, but it was one of the few exceptions. The general rule was more in line with Oblivion, COD, Quake Wars, HL2:EP2 and all the other big games of the time that ran just as well or better on G7x and R5xx hardware under DX9 as they did on PS360.
 
The way I answer that problem is to find PC hardware similar to an Xbox 360 or PS3 and try running a game like Crysis 3 or Battlefield 3 on it and compare FPS. No one today uses video cards from 2006. That's what optimization gets you.

I think the problem there is the opposite of optimisation, i.e. a complete lack of driver and developer support for the architecture. Thus looking at performance of modern games on those architectures isn't really valid.

A better solution would be to look at the performance of modern games on modern architectures that have both developer and driver support but which have similar raw power to those very old high end GPUs of half a decade ago. For example, Llano.
 
You've clearly missed (or deliberately ignored) my statement: "A 680 is faster than a stock 7970 in benchmarks but being a different architecture we can't compare to Orbis on raw specifications."

If raw paper specs were directly comparable between architectures, the 7970 would be vastly faster than the 680. It clearly isn't, which is why your comparison is meaningless. Orbis and PC-based GCN GPUs use identical architectures and thus are directly comparable in terms of peak theoretical performance. Therefore comparing Orbis to the 7970, which in turn is comparable in real-world performance to the 680, is the most accurate way of comparing Orbis to the 680. Not simply comparing raw paper specs between two incomparable architectures.

How about the API? How many layers of it wasting cycles? The way it constructs, sends and waits for draw calls? An order of magnitude faster link between CPU and GPU resources? How would code taking advantage of that behave when ported to a PCI-style setup? How about those features not even exposed in DirectX that lothes is talking about? I smell potential for a GTA4-like situation here...

Of course it's important. But one benchmark is not absolute proof of anything. Let me ask you this: if Orbis had exactly double everything it currently has, i.e. 16 Jaguar cores, 36 CUs, 16GB GDDR5 at 352GB/s etc., would you deny it would be twice as fast as the current Orbis configuration? Of course not, so why deny that the same architecture on the PC with twice the execution units would be twice as fast if fully utilised?

Really? So exactly the same game running on Orbis and PC will stress Orbis far more than it would, say, a 7850? Would you care to go into more detail explaining the exact reasoning for that?

They are not directly comparable, as above. They would be under the same software stack and APU configuration.


So just to be clear, you expect the PS4 to demonstrate more CPU performance because, despite having a weaker CPU, that CPU will be fully utilised, while a more powerful PC CPU will for some reason be so underutilised that the actual end power output will be lower than the console's? So for some reason the PC game will require more CPU power, the CPU will have that power available, but it will not be used. Would you like to explain that one in more detail?

Well, after seven years we have, for example, Assassin's Creed 3, a console port which is so optimized on the CPU side that a mighty Ivy Bridge drops to 30fps in Boston no matter what the settings... Not to mention Steam stats showing a staggering number of 2-core (and rising) setups... I don't see the first years on the CPU side being that rosy either...


Despite the redundancy we're still talking about 217.6GB/s compared with 176GB/s. Far more if we account for the seeming efficiency of NV memory bandwidth as opposed to AMD. I'm still curious to understand where the advantage is coming from there?

How about waiting for data? Especially in the somewhat memory-bound situation on the PC side at the moment? Especially in light of the heavy CPU-GPU compute juggling in new engines?
 
It will be funny when the great GTX 680 struggles to even start games in a few years...

Unless games move to DX12 exclusivity (which seems unlikely for at least a good few years), what makes you think the 680 will be unable to start games in the near future? The now 7-year-old 8800GTX has only been completely unable to play a grand total of one game since its launch to my knowledge, and that game launched only a couple of months ago. Other than that, despite a lack of both driver and developer support, it's still pushing out more performance than either the 360 or PS3. I'm not saying I expect the same from the 680, since despite its core performance advantages it will certainly be hampered by memory size, but I still don't see it failing completely on next generation games in any time period where it would still continue to have any kind of market relevance.

Obviously 3 years from now (2 years after the consoles' launch) the 680 will be completely irrelevant as a gaming card, having been replaced by the likes of $150 950s in the same performance bracket and 980s at the high end with 2-3x the performance and 8-12GB of RAM.
 
How about the API? How many layers of it wasting cycles? The way it constructs, sends and waits for draw calls? An order of magnitude faster link between CPU and GPU resources? How would code taking advantage of that behave when ported to a PCI-style setup? How about those features not even exposed in DirectX that lothes is talking about?

Consoles have always had these advantages and no-one is denying them this generation. But that wasn't the point of the original statement. The original statement asked specifically what PC Orbis would be equivalent to, accounting for the console efficiency advantages. This is clearly an impossible question to answer as it depends on far too many variables, not the least of which would be the level of optimisation developers spend on both platforms. However, in trying to approach the answer to this question you first have to ask how much more peak performance the PC GPUs have over their console counterparts. This is something I attempted to answer but to which your comments above had no relevance.

I smell potential for a GTA4-like situation here...

A poorly optimised port? Similar to Quake 4? Poor optimisation goes both ways but in itself does not constitute proof of a fundamental weakness of one architecture in comparison to another.

They are not directly comparable, as above. They would be under the same software stack and APU configuration.

You'll need to better clarify which point you're attempting to answer.

Well, after seven years we have, for example, Assassin's Creed 3, a console port which is so optimized on the CPU side that a mighty Ivy Bridge drops to 30fps in Boston no matter what the settings... Not to mention Steam stats showing a staggering number of 2-core (and rising) setups... I don't see the first years on the CPU side being that rosy either...

Really?

http://www.pcgameshardware.de/Assas...ts/Assassins-Creed-3-Test-Benchmarks-1036472/

If people are having problems due to game or driver bugs that's a different matter altogether, but the above Boston benchmark clearly shows this not to be the general case, meaning any instance of performance worse than this must be down to bugs as opposed to a general lack of performance.

How about waiting for data? Especially in the somewhat memory-bound situation on the PC side at the moment? Especially in light of the heavy CPU-GPU compute juggling in new engines?

Waiting for what data? The GPU will not be waiting for data in comparison to its console counterpart if the memory that is serving up that data is higher bandwidth. PCI-E bandwidth simply is not a bottleneck in feeding data through to the GPU and, as stated several times already, the latency of that operation is only relevant if you're performing gameplay-synchronous GPGPU tasks on the GPU. If that's the case then the PC is going to have to find another approach entirely, such as performing the work on the CPU, thus potentially freeing up GPU cycles for graphics rendering. Admittedly though, this introduces a potential bottleneck on the CPU side of the PCI-E interface.
 
Judging by that Infiltrator demo, it might as well have been 720p by how blurred it is after all the temporal AA techniques and post fx blur. The rez sacrifice will probably be worth it. So yeah, near the end of the gen I expect to see games running slightly above 720p to get more eye candy.
Don't need to go to 720p to get good AA.

Infiltrator is how stuff SHOULD LOOK. I'll take slightly blurry, zero temporal aliasing over oversharp, insanely aliased crap like the PS4 Elemental demo any day.
 
Proof of what? Do you have the exact same video running on different hardware with a smoother frame rate? If not, then exactly what do you suppose this is proving?

It is proof that the 680 might not be the golden ticket in all this discussion about cutting down, comparisons etc., especially as we take the discussion to the optimisations available and other marketing forces in play when these projects ship.


And I said irrelevant unless those compute tasks need to run synchronously with gameplay and are unable to run on the CPU side of the PCI-E interface. In other words, you're speaking of advantages that may not exist at all. It's possible, I'll grant you that, but then it's also possible that the vast majority of GPGPU requirements for next gen games will be agnostic to PCI-E latency (i.e. not required to run synchronously with gameplay). Or they will be light enough to run on big PC CPU SIMD units, or even IGPs.

Not exist at all? The entire industry (at least the AAA-focused part) is preparing to take advantage of that, and it will be an absolute necessity to harness those APU advantages...

Quicker access how? How will the GPU be able to access its local memory faster in a unified setup than with a dedicated memory setup? Copying? How much copying is required between GPU memory and main memory in the middle of rendering a scene? What impact does this have? Ask yourself this: how much more performant is something like Llano or Trinity compared to a discrete GPU of similar specification and memory bandwidth? Your argument concludes it would be significantly more so. Let's see if you can find some benchmarks showing that ;)

CPU-GPU access and algorithms built around that, as above. As for Llano, if even one game had really taken advantage of the APU's uniqueness it would be worth inspecting, but it is the same copy/paste code under the same constrained API. Moreover, its resources and memory setup are rather limiting.


Okay, so this just became another draw calls argument, the same one you repeated two points later in a different way. As stated in my last post, draw calls are a known and clear disadvantage for the PC. But it's no worse than it was last generation, which for the most part was stuck with the limitations of draw calls under DX9; in fact it's a lot better thanks to DX11, despite the shortfalls you correctly mention here. DX11 at least improves the situation by a fairly significant amount, albeit still nowhere near on par with the console situation. Let's not overstate the issue though, unless you can point to a game from the last generation that suffered on PC in relation to its console counterpart thanks to this limitation. Just the one will do...

That remains to be seen. I don't have a link at the moment, but I recall recent tweets from top-tier devs still raving about the issue, which is almost the same despite the promises... And soon the typical draw call count will probably jump again... For example, I would strongly suspect GTA4 and The Force Unleashed; just look at that shakiness on lower-clocked hardware as soon as Euphoria is doing a little more, and imagine all that juggling and waiting between CPU and GPU.


What evidence do you have of this apparent slowness? Current PC architectures are plenty capable of running both graphics rendering and GPGPU workloads on the same GPU simultaneously, provided the GPGPU work isn't required to run synchronously with gameplay. The PS4, if indeed it does have more ACEs, will be a little more efficient at this than first generation GCN GPUs, but should be no more so than the second generation, which should be prevalent by the time the console actually launches.

They are capable of very limited, mostly non-interactive stuff; we will see soon how capable they are and how much effort will be put into that.


Not really. In 2005, only one GPU sported 512MB of memory if I recall correctly, which was the 7800GTX 512MB. By the end of 2013, when these consoles launch, there's no reason why we won't see 8 or even 12GB GPUs available. The only limitation today is memory chip density, which by the end of this year will have doubled over what it is today. And today we have 4 and 6GB cards available (albeit in limited quantities). Simply doubling the density of the chips on those cards gives us 8GB and 12GB cards.

Nevertheless, I still agree that 2GB will ultimately be a limitation of today's cards and will inevitably result in things like texture resolutions having to be reduced or loading times/instances having to be increased amongst other things.

There were 512MB Radeons too, and it's the first half of the year versus the tail end of the second half for 8GB in 2013 (and a $1000 card vs $500). I think the point still stands...

Yes, about the same as last generation. So we all understand that as the console generation progresses, consoles will gradually require less raw power to match PC performance thanks to platform-specific optimisations. However, early on in the generation (when the likes of the 680 will only barely still be relevant), those optimisation advantages are going to be fairly limited, especially when you consider how similar the console architectures are to the PC this generation.

Seems like this time development on real, closer hardware started earlier, without a radical shift in CPU paradigm and with much more feedback from devs. The rewards will come quicker.



Well, yes, absolutely. By the time we are even worrying about whether the 680 can keep up with the consoles it will be considered a last generation GPU, and GPUs with a lot more power will be available. Still, one generation old GPUs are still quite well supported, and this still falls in the timeframe of first generation (unoptimised) console games, which it will handle more than fine.

In theory, yes; we will see how day-to-day deployment and marketing temptation on the PC side work out.



By the time we actually get to fully optimised console games a year or two into the generation, the 680 will indeed be old and falling out of regular driver support. At this point it may even start to struggle to keep up with the latest releases at full settings due to memory size limitations and lack of driver support. But by then even mainstream gamers will be running 860Tis sporting 4-8GB and 50% more raw performance, so I guess it's not that big a deal, eh?

Well, tell that to PC GAF, to whom 2GB is a non-issue until PS5 :D. It depends on whether Nvidia will make these x500 cards like the 7500 or 8500 and push buyers to higher models on a wave of awe at next-gen graphics, or go a little higher. Mainstream cards with 50% more performance than the 680 already in the Maxwell generation? A little too fast ;) and the nodes are slowing...

Really? And just how many of the hundreds of cross platform games required DX10 in the PC version for console parity? And how many achieved it with good old DX9?

DX10? Many, starting with JC2; there is even one with a hard, full DX11 requirement ;) Really, Crytek? A GTX 280 can't run Crysis 3 on console settings? :D

GRAW was a completely different game on the PC. JC I'll grant, but it was one of the few exceptions. The general rule was more in line with Oblivion, COD, Quake Wars, HL2:EP2 and all the other big games of the time that ran just as well or better on G7x and R5xx hardware under DX9 as they did on PS360.

Different, and lacking most of the next-gen features at the time ;) lighting especially. The others were quick PC ports from multithreaded, in-order PPC architectures, targeting 1+GB on the PC side, often running on one or two threads; it's much different this time.
 
Unless games move to DX12 exclusivity (which seems unlikely for at least a good few years), what makes you think the 680 will be unable to start games in the near future? The now 7-year-old 8800GTX has only been completely unable to play a grand total of one game since its launch to my knowledge, and that game launched only a couple of months ago. Other than that, despite a lack of both driver and developer support, it's still pushing out more performance than either the 360 or PS3. I'm not saying I expect the same from the 680, since despite its core performance advantages it will certainly be hampered by memory size. Obviously 3 years from now (2 years after the consoles' launch) the 680 will be completely irrelevant as a gaming card, having been replaced by the likes of $150 950s in the same performance bracket and 980s at the high end with 2-3x the performance and 8-12GB of RAM.

The 680 to Liverpool is more like the X1900 XTX to Xenos than the 8800GTX, and it would be falling behind badly; support or not, that is the reality. As I said before, I have an 8800 (but 320MB only, still a good example in today's 8 vs 2/4GB debate) in my old box and sometimes check how newer games perform; the only playable ones are those on Unreal, here's BLOPS 2 at low settings:

https://dl.dropbox.com/u/18784158/New folder/shot0002.jpg
https://dl.dropbox.com/u/18784158/New folder/shot0001.jpg

A card 2-3 times more powerful than Xenos with a richer feature set, but little younger GAF sees only 1/3 more TFLOPS and thinks they are set.

Consoles have always had these advantages and no-one is denying them this generation. But that wasn't the point of the original statement. The original statement asked specifically what PC Orbis would be equivalent to, accounting for the console efficiency advantages. This is clearly an impossible question to answer as it depends on far too many variables, not the least of which would be the level of optimisation developers spend on both platforms. However, in trying to approach the answer to this question you first have to ask how much more peak performance the PC GPUs have over their console counterparts. This is something I attempted to answer but to which your comments above had no relevance.

Yeah, hard to answer exactly; I kinda read that backwards, it depends case by case. But as you bring up Orbis x2, that Carmack quote and my little story about the 8800GTS creep back in.


A poorly optimised port? Similar to Quake 4? Poor optimisation goes both ways but in itself does not constitute proof of a fundamental weakness of one architecture in comparison to another.

It is not only about the architecture per se. Something wider: efficiency, whether perceived or down to bad ports? Don't you agree there is a LITTLE bit more of that on the PC side? The years go by, but the lack of proper utilisation can still be nasty despite brute-forcing.


You'll need to better clarify which point you're attempting to answer.

Simply, the list of closed-architecture advantages which you agreed consoles have always had, unavailable or impractical on PC, making them punch above their weight.


Really?

http://www.pcgameshardware.de/Assas...ts/Assassins-Creed-3-Test-Benchmarks-1036472/

If people are having problems due to game or driver bugs that's a different matter altogether, but the above Boston benchmark clearly shows this not to be the general case, meaning any instance of performance worse than this must be down to bugs as opposed to a general lack of performance.

That CPU bench is a little basic, without clocks and architectures listed ;) but a maximum of 49fps is a little low for the fifth iteration of the same engine running on CPUs with 2 billion transistors, versus the 160-250 million in the consoles resulting in 25-30fps, don't you agree? "Bugs here, bad port there" comes up almost as some kind of normalisation, a constant in these discussions about performance comparisons.


Waiting for what data? The GPU will not be waiting for data in comparison to its console counterpart if the memory that is serving up that data is higher bandwidth. PCI-E bandwidth simply is not a bottleneck in feeding data through to the GPU and, as stated several times already, the latency of that operation is only relevant if you're performing gameplay-synchronous GPGPU tasks on the GPU. If that's the case then the PC is going to have to find another approach entirely, such as performing the work on the CPU, thus potentially freeing up GPU cycles for graphics rendering. Admittedly though, this introduces a potential bottleneck on the CPU side of the PCI-E interface.

Well, English is not my native language, but I am pretty sure I was clear, twice, that it is data sent from the CPU to the GPU which is an order of magnitude quicker in an APU configuration than far away on the other side of PCI-E, especially when devs come up with crazier and crazier scenarios for it (well, the on-die bandwidth in an APU versus 32 gigs over PCI-E too) and port it properly (or gosh... not) to PC.
 
So John Carmack with SVO + raycasting gets more interesting.

Mega-textures and holographic discs for the win.
 
It is proof that the 680 might not be the golden ticket in all this discussion about cutting down, comparisons etc., especially as we take the discussion to the optimisations available and other marketing forces in play when these projects ship.

A video of the 680 running something with framerate drops is absolutely irrelevant to Orbis's performance. All we can take from that is that the same code running on Orbis would almost certainly perform even worse. But that tells us nothing about anything really since we don't know how optimised the code is at this stage for either architecture.

Not exist at all? The entire industry (at least the AAA-focused part) is preparing to take advantage of that, and it will be an absolute necessity to harness those APU advantages...

Okay, since you seem to know so much about this, please give examples of specific GPGPU-based operations which will be heavily used next generation and which are heavily dependent on CPU<->GPU communication latency. Then explain how much compute capability they are going to require, detailing why it would be impossible to run the operations on, for example, a modern quad-core CPU.

Don't get me wrong, I'm sure there are some, and people like ERP, 3dilettante and Sebbbi would have no problem explaining them to us. But you're acting as though the requirement for these kinds of jobs will be a complete game changer when in fact, I'm betting, you actually have no idea whatsoever how latency-dependent GPGPU can or will be used, and you're just using the argument as a get-out-of-jail-free card to make Liverpool appear more capable in real-world terms than GPUs sporting double (or more of) its execution resources.

CPU-GPU access and algorithms built around that, as above. As for Llano, if even one game had really taken advantage of the APU's uniqueness it would be worth inspecting, but it is the same copy/paste code under the same constrained API. Moreover, its resources and memory setup are rather limiting.

Again, what algorithms are these? Explain them. Give examples of how they benefit the Xbox 360 this generation, since it also sports a unified memory setup. You said "faster memory access". Rather than giving another generic one-line answer with little to no substance, explain exactly what the GPU needs to access in system memory while working on a scene and why access latency is important for these types of operations. Explain why said data cannot be stored in local graphics memory for quicker access there.

They are capable of very limited, mostly non-interactive stuff; we will see soon how capable they are and how much effort will be put into that.

The issue isn't that the GPUs aren't capable; GCN 1.1 GPUs on the PC will likely be just as capable as Liverpool at handling all types of GPGPU operations. The issue is how fast the data can be transferred back and forth from the CPU over PCI-E. If latency needs to be low then there's simply no point in trying to send those jobs to the GPU in a traditional PC setup at all, in which case the CPU would have to handle them (which it may be more than capable of, depending on how heavy the workload is), or a local IGP may take on the work if that's something developers choose to support on systems with that capability.

One thing is for sure though: if these types of operations do become heavily integrated into games next generation then it's not going to be the PC GPUs having a hard time, it will be the CPUs having to handle those operations that will carry the burden.
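
Just to put a rough number on that transfer cost, here is a minimal sketch using the CUDA runtime API from plain C++ (no kernels needed). It assumes an NVIDIA GPU and the CUDA toolkit are installed, and the 64MB buffer size is arbitrary; it times one host-to-device upload over PCI-E and shows the asynchronous pattern where the CPU keeps working while the copy is in flight, i.e. the latency-tolerant case described above.

Code:
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const size_t bytes = 64u << 20;                 // 64 MB test buffer (arbitrary size)
    void* hostBuf = nullptr;
    void* devBuf  = nullptr;
    cudaMallocHost(&hostBuf, bytes);                // pinned memory allows async DMA over PCI-E
    cudaMalloc(&devBuf, bytes);

    cudaStream_t stream;
    cudaStreamCreate(&stream);
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Time one host-to-device upload to get an effective PCI-E throughput figure.
    cudaEventRecord(start, stream);
    cudaMemcpyAsync(devBuf, hostBuf, bytes, cudaMemcpyHostToDevice, stream);
    cudaEventRecord(stop, stream);

    // The copy is now in flight; the CPU is free to do game/frame work here
    // instead of stalling, which is the latency-tolerant case discussed above.

    cudaEventSynchronize(stop);                     // wait only at the point the data is needed
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Uploaded %zu MB in %.2f ms (%.1f GB/s effective)\n",
           bytes >> 20, ms, (bytes / 1e9) / (ms / 1e3));

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaStreamDestroy(stream);
    cudaFree(devBuf);
    cudaFreeHost(hostBuf);
    return 0;
}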


There were 512MB Radeons too, and it's the first half of the year versus the tail end of the second half for 8GB in 2013 (and a $1000 card vs $500). I think the point still stands...

Yes, the X1800XT had 512MB too; that makes two GPUs at the end of 2005 sporting 512MB. At the comparable time period this generation, which is not now but the end of this year when these consoles launch, the question will be how many 8 and 12GB GPUs will be available. Well, let's think about it. There are two models of GPU available today in 4GB configurations that I know of (670 and 680) and another two available in 6GB configurations (Titan and the 7970 GHz Edition). That's four in total using 2Gb chips. By the end of this year, coinciding with the launch of the next round of GPUs, 4Gb chips will be available. So don't you think it's entirely possible that 8 and 12GB GPUs will be at least as prevalent at the end of this year as the two 512MB GPUs were at the end of 2005?

And in fact we aren't talking about requiring 8GB anyway. These consoles reserve a couple of GB for the system, so in terms of pure graphics memory the comparison is at best to 6GB and, as stated above, we already have two 6GB GPUs available on the market today, more than 6 months before the consoles launch. How many 512MB cards were available in April 2005?

Seems like this time development on real, closer hardware started earlier, without a radical shift in CPU paradigm and with much more feedback from devs. The rewards will come quicker.

Fair enough. And since those games are developed on x86-based CPUs and off-the-shelf PC graphics hardware, those optimisation benefits will also translate over to the PC much more easily.

Well, tell that to PC GAF, to whom 2GB is a non-issue until PS5 :D.

Well, I'm not in the business of supporting unsupportable arguments and this isn't GAF, so why are you bringing people's comments from over there into this forum? After taking out the OS reservation (~2GB) and accounting for the fact that all that memory serves as both system and graphics memory, I still think it's entirely possible that Orbis will eventually be pushing around 4GB of pure graphical assets in the future. That will potentially cause problems for 2GB cards. It's possible to work around this somewhat by streaming data in from main memory (bear in mind PCI-E can both read and write data at 16GB/s, so in theory the entire 2GB pool could be refreshed every 1/8th of a second), but I'm not going to argue it's not going to cause some problems further down the line.
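
A quick back-of-envelope check of that streaming claim, using the same ~16GB/s one-way PCI-E figure as above (the frame rate is just for illustration):

Code:
#include <cstdio>

int main() {
    const double pciePerSecGB = 16.0;   // assumed one-way PCI-E 3.0 x16 throughput, as used above
    const double vramGB       = 2.0;    // the 2GB card under discussion
    const double fps          = 60.0;

    // Time to rewrite the whole 2GB pool, and the upload budget per frame at 60fps.
    printf("Full 2GB refresh: %.3f s\n", vramGB / pciePerSecGB);              // ~0.125 s, i.e. 1/8th of a second
    printf("Upload budget per 60fps frame: ~%.0f MB\n",
           pciePerSecGB * 1024.0 / fps);                                      // ~273 MB per frame
    return 0;
}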

It depends on whether Nvidia will make these x500 cards like the 7500 or 8500 and push buyers to higher models on a wave of awe at next-gen graphics, or go a little higher. Mainstream cards with 50% more performance than the 680 already in the Maxwell generation? A little too fast ;) and the nodes are slowing...

This is irrelevant to your original point, which is that the 680 will eventually fall out of driver support. While that's true, it is already significantly more powerful than the GPU portion of Liverpool, so if it eventually has performance problems due to lack of driver and developer support (years from now) it doesn't matter whether the contemporary generation of replacements are 10, 50 or 100% more powerful than it. Even if they were 20% slower they'd still be a lot faster than Orbis and fully supported in the latest drivers.

And I wasn't talking about the Maxwell generation, which is far too early for the Kepler generation to fall out of driver support; I was talking one or two generations beyond that. Maxwell isn't a year or two into the consoles' launch, it's potentially contemporary with the consoles' launch. To put the discussions we are having now into context, the 680 is to the new consoles as Nvidia's GeForce 6 range was to the current generation consoles. At this point in 2005 we were talking about the PS3 being as powerful as, or more powerful than, two 6800 Ultras in SLI. Not quite the same this time, is it?

DX10? Many, starting with JC2; there is even one with a hard, full DX11 requirement ;) Really, Crytek? A GTX 280 can't run Crysis 3 on console settings? :D

As I said, how many of the hundreds of console ports required DX10 just to attain console parity? We aren't talking about a developer's choice to unify development on a single API for ease of development's sake, which is a whole different situation. We are talking about games with both a DX9 and a DX10 path where the DX9 path was missing effects present in the console version which the DX10 path then implemented, thus indicating that DX9 was actually unable to support those features. The answer is very few; I can't think of one, in fact. Many games that had a DX10-only path also implemented other features not present in the console version, JC2 being a prime example of that.

As for the one game requiring DX11 support, are you in all seriousness trying to suggest that DX11 support was required in Crysis 3 to give console parity? And not in fact simply a developer choice to simplify the development process and allow them to properly leverage the advantages of DX11 hardware?

I think we both know that had Crysis 3 been given a DX10 path (which Crysis 2 shows is entirely possible), it would have been running a lot better on the GTX280 than on current generation consoles. Or to put it another way, look how much better Crysis 2 looks at Very High details under DX9 compared with the console versions.

Different, and lacking most of the next-gen features at the time ;) lighting especially. The others were quick PC ports from multithreaded, in-order PPC architectures, targeting 1+GB on the PC side, often running on one or two threads; it's much different this time.

I'm not even going to get into a childish argument over which version of GRAW was better at the time. They were different and each could equally be argued to be better than the other; let's leave it at that.

And no, virtually every cross-platform game for the first 18 months of the current generation's life was not a "quick PC port", because that's what we're talking about here. There were virtually no cross-platform games in that era which did not run as well or better on G7x and R5xx-based hardware of the time.

Maybe you're right and console games will be better optimised from the start of this generation than they were last generation, but as I said above, the fact that these games are running on x86 CPUs and off-the-shelf PC graphics hardware means those optimisations will also carry through to the PC to a much greater extent.
 
Don't need to go to 720p to get good AA.

Infiltrator is how stuff SHOULD LOOK. I'll take slightly blurry, zero temporal aliasing over oversharp, insanely aliased crap like the PS4 Elemental demo any day.

Yeah that's what I mean. The demo still looks good despite it being blurry. So lowering the rez (more blurry) to get more horsepower later on is probably something 'alright'.

Still think they went overboard though. For anyone with FFDshow, activate the video decoder tool, enable Sharpness and Unsharp Mask, setting at 80. Watch the vid again and see all the details come alive! It's glasses on vs off basically.
 
The 680 to Liverpool is more like the X1900 XTX to Xenos than the 8800GTX, and it would be falling behind badly; support or not, that is the reality. As I said before, I have an 8800 (but 320MB only, still a good example in today's 8 vs 2/4GB debate) in my old box and sometimes check how newer games perform; the only playable ones are those on Unreal, here's BLOPS 2 at low settings:

https://dl.dropbox.com/u/18784158/New folder/shot0002.jpg
https://dl.dropbox.com/u/18784158/New folder/shot0001.jpg

A card 2-3 times more powerful than Xenos with a richer feature set, but little younger GAF sees only 1/3 more TFLOPS and thinks they are set.

This is complete rubbish. You are talking about a 7-year-old GPU that is utterly out of all driver and developer support. Its performance in modern games has nothing whatsoever to do with its actual performance potential and everything to do with games and modern drivers being hopelessly unoptimised for it. Yes, 7 years into this new console generation the 680 may also be in a similar state, but who on earth would still actually be using one for PC gaming at that point?

And yes, while the 680 in comparison to Liverpool in raw performance terms may be more akin to the X1900XTX than the 8800GTX in comparison to Xenos, you also have to take into account that the X1900 did not have a unified shader array while the 8800 did, and so in that respect the 680 actually has more in common with the 8800.

And by the way, your "1/3 more TFLOPS" statement above is completely wrong. Firstly it's 2/3rds more, and secondly it's a different architecture and so not comparable in that way. The 680 performs similarly to a 7970, which sports double the flops of Liverpool, and so that would be the most accurate way to state the performance advantage in shader terms.

Yeah, hard to answer exactly; I kinda read that backwards, it depends case by case. But as you bring up Orbis x2, that Carmack quote and my little story about the 8800GTS creep back in.

But as I've already stated, the 2x advantage comes very late in the generation, when all possible console advantages have been exploited. And it's also based on the less efficient DX9 GPU. Add to that the consoles' similarity to PC architecture this generation, making optimisations more relevant to both platforms, and all this comes together to make the 2x figure you quote completely unrealistic within the timeframe that the 680 will be relevant.

Ultimately, Orbis may perform on par with a PC GPU with nearly double its theoretical performance, but that won't be until long after the 680 has faded from relevance and been replaced with much more powerful hardware, even at the mainstream level of the market.

And as stated above, GPUs falling out of driver and developer support like the 8800 is a different argument again and is not representative of console optimisation performance benefits. Better to look at a modern architecture like Llano that sports similar theoretical performance but is still in current support, and see how that performs in comparison.

It is not only about the architecture per se. Something wider: efficiency, whether perceived or down to bad ports? Don't you agree there is a LITTLE bit more of that on the PC side? The years go by, but the lack of proper utilisation can still be nasty despite brute-forcing.

Simply, the list of closed-architecture advantages which you agreed consoles have always had, unavailable or impractical on PC, making them punch above their weight.

Yes, consoles will punch above their weight, that's not in dispute. What is in dispute is by how much. You've already quoted Carmack's 2x advantage. Well, that falls pretty nicely in line with the theoretical peak performance advantage of the 680 over Liverpool (taking the 7970 as a performance proxy).

Then we have to account for the 2x being late in a console's life when all optimisations have been exploited, the comparison being made based on DX9, and the similarity of console and PC architectures this generation, and all that suggests that Orbis won't be getting anywhere near 680 performance during the period when a 680 will be relevant. Two or three years after launch, yes, it may get close, and in some ways it may move past the 680 due to the memory size advantage, but that will be long after the 680 has fallen from relevance.

That CPU bench is a little basic, without clocks and architectures listed ;) but a maximum of 49fps is a little low for the fifth iteration of the same engine running on CPUs with 2 billion transistors, versus the 160-250 million in the consoles resulting in 25-30fps, don't you agree? "Bugs here, bad port there" comes up almost as some kind of normalisation, a constant in these discussions about performance comparisons.

All of the information is in there at the bottom of the graph. But the point is that you said an Ivy was dropping to 30fps, which it clearly isn't. Also, did you miss the part where it said "maxed out details"? What makes you think Assassin's Creed 3 is doing the same work on PC at maximum settings as on the consoles you are comparing to? Are there any CPU benchmarks of AC3 at minimum settings, which are more likely to be comparable to the console settings?

Well, English is not my native language, but I am pretty sure I was clear, twice, that it is data sent from the CPU to the GPU which is an order of magnitude quicker in an APU configuration than far away on the other side of PCI-E, especially when devs come up with crazier and crazier scenarios for it (well, the on-die bandwidth in an APU versus 32 gigs over PCI-E too) and port it properly (or gosh... not) to PC.

Okay, so again, can you point to examples of where this has been heavily advantageous to the 360 this generation, which also has a unified memory space and doesn't suffer the CPU-to-GPU memory copy latency of the PC? You're simply picking up on one feature and claiming it will be a game changer without any specific knowledge of how or why.
 
@pjbliverpool

I honestly admire the amount of text you are able to produce for the sole purpose of convincing others of your POV. Yet I don't feel convinced at all... And bearing in mind that as long as the matter is discussed by a bunch of armchair experts like us it is a pointless discussion, I would like to add one more thing:
I understand the following when comparing the PS4 to a modern high-end PC:

it is a closed box against multiple configs
it is a tailored, close-to-the-metal API against DirectX
it is plenty of fast, easily accessible GDDR5 against two pools of inefficiently managed PC RAM
it is 66% of raw performance for 1/3 the price.
and exclusives

As a person who has been playing video games since 1985, and talking from that perspective, I am certain that the GeForce 680 will not be enough to come anywhere near the quality of next-gen games.
(for the record: I am a proud owner of an i5 / Radeon 7970 setup and I am enjoying the hell out of it)
 
I've spent a lot of time looking at images from the new techdemo, and - at least on the main character - it doesn't seem to use tessellation and displacement that much. Almost all his clothing and gear seem to be created with poly models and normal maps, it's just that there's more of everything - more detailed geometry, higher resolution normal maps and so on. I wouldn't be surprised if the model was 100K+ triangles with not one but multiple 4K texture layouts.
 
I understand the following when comparing the PS4 to a modern high-end PC:

it is a closed box against multiple configs
it is a tailored, close-to-the-metal API against DirectX
it is plenty of fast, easily accessible GDDR5 against two pools of inefficiently managed PC RAM

So pretty much the same as the current generation of consoles. And no-one is denying any of this. Consoles punch above their weight, that's a fact. The question is by how much, and it's pretty reasonable to put an upper limit on this of around 2x when a console is well into its life cycle. Expecting that within a year of launch (by which time the 680 will be a 3-year-old GPU) is little more than wishful thinking.

it is 66% of raw performance for 1/3 the price.

Your calculations are wrong. Compared with a 680 it has 80% of the fill rate, 45% of the texture throughput, 40% of the geometry setup performance, 81% of the memory bandwidth (if we account for CPU bandwidth requirements at 20GB/s) and 60% of the shader throughput. However, since you can't really compare specs like that across architectures (especially shader performance, which is known to be more efficient on NV GPUs), a better comparison would be to use the 7970 as a performance proxy for the 680. That's directly comparable to Orbis, and Orbis features between 49% and 86% of the 7970's performance depending on the metric.
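
For anyone who wants to check those percentages, here's the arithmetic behind them using the commonly quoted paper specs for each part (treat the raw numbers as my assumptions; only the ratios matter for the point being made, and the geometry figure is left out for brevity):

Code:
#include <cstdio>

int main() {
    // Assumed paper specs:
    // GTX 680: 32 ROPs @ 1.006GHz, 128 TMUs, 1536 ALUs (2 ops/clock), 192.2GB/s
    // Orbis:   32 ROPs @ 0.8GHz,    72 TMUs, 1152 ALUs (2 ops/clock), 176GB/s
    //          (156GB/s after the ~20GB/s CPU reservation mentioned above)
    const double gtx680[4] = { 32 * 1.006, 128 * 1.006, 2 * 1536 * 1.006, 192.2 };
    const double orbis[4]  = { 32 * 0.800,  72 * 0.800, 2 * 1152 * 0.800, 156.0 };
    const char* metric[4]  = { "Fill rate (GPix/s)", "Texture rate (GTex/s)",
                               "Shader throughput (GFLOPS)", "Memory bandwidth (GB/s)" };

    for (int i = 0; i < 4; ++i)
        printf("%-28s Orbis %7.1f vs 680 %7.1f -> %3.0f%%\n",
               metric[i], orbis[i], gtx680[i], 100.0 * orbis[i] / gtx680[i]);
    // Prints roughly the 80% / 45% / 60% / 81% figures quoted above.
    return 0;
}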

and exclusives

Why is that relevant to the discussion? Both platforms have advantages outside of pure hardware comparisons but there's no need for us to get into them here.

As a person who has been playing video games since 1985, and talking from that perspective, I am certain that the GeForce 680 will not be enough to come anywhere near the quality of next-gen games.

In what timeframe? Because if it's in the timeframe of the 680's relevancy in the PC market (say a year from console launch) then I'll happily take that bet. Or to put it another way: the A8-3870K (Llano) is roughly to the PS3's RSX what the 7970 is to the PS4's Liverpool. But even with the PS3's advantages of Cell helping with graphics tasks and much higher memory bandwidth, Llano is generally able to keep up with or exceed the PS3's performance in current generation cross-platform games. That's after 7 years of optimisation on the PS3 side.

So what makes you think the 680, with similar performance to the 7970, would not be enough to keep up with the PS4 for the first year or so of its life?

I'll grant that once developers stop targeting its architecture and Nvidia stops optimising the drivers for it, real-world performance will drop off relative to its potential. And at some point down the line the memory limitation may also come into play. But as I said above, this won't be until after that performance bracket has moved down into the mainstream anyway and no-one is using 680s anymore.
 
If they keep improving explosions, I'm afraid it's only a matter of time before Michael Bay is making video games. I don't want that :(
 
But as I've already stated, the 2x advantage comes very late in the generation, when all possible console advantages have been exploited. And it's also based on the less efficient DX9 GPU. Add to that the consoles' similarity to PC architecture this generation, making optimisations more relevant to both platforms, and all this comes together to make the 2x figure you quote completely unrealistic within the timeframe that the 680 will be relevant.

Ultimately, Orbis may perform on par with a PC GPU with nearly double its theoretical performance, but that won't be until long after the 680 has faded from relevance and been replaced with much more powerful hardware, even at the mainstream level of the market.

Can we have a conclusion? Since the PS4 may reach 2x its theoretical performance, we may see the Unreal 4 Infiltrator demo run on PS4 very close to the PC demo. Someday, yes?
 