Is UE4 indicative of the sacrifices devs will have to make on consoles next gen?

The most common video card VRAM amount is 1024 MB, and there's no significantly better GPU in the Steam stats' top 10 compared to what's in Orbis (the #1 entry is actually an Intel HD 3000..). That's PC gaming right now, not some limited-edition $500 GPU with double the memory.
Sony's choice of GDDR5 will certainly fuel some evolution of the standard. 8 Gb chips by 2015-2016 seem likely now.

IMO PC gaming isn't defined by what the average gamer uses, it's defined by what's available to every gamer. The games will just be console ports with enhanced image quality options regardless, so there's little argument to be made about how average hardware power affects PC game development.

The fact is that if you're a PC gamer and you want more power, you buy it (if you want it enough, of course). If you're a console gamer and you want more power, your only option is to become a PC gamer. That's why we focus on the high end systems when making comparisons: those high end systems are open to anyone willing to spend the money on them. Looking at lower end systems is obviously relevant as well, since the barriers to entry are lower at those price points, but looking at what people are already using makes no sense to me. Those PC gamers all have the option of moving to more powerful hardware, and if they choose not to, then they are likely already satisfied with the hardware they have, in which case, why should that concern us? It obviously doesn't concern them.

Or to put it another way: as a gamer, if I want a system that will play all next generation games at 1080p/60fps in full 3D, of what relevance to me is the fact that 99.9% of all the other gaming systems out there (console or PC) are unable to achieve this? My only concern is whether such a system exists and, if so, what that system is. If it's a console, I can look to console gaming; if it's a PC, I need to look to PC gaming.
 
Fully dynamic lighting actually uses less memory than static. Static lighting for big levels needs baked lightmap data for every surface and/or large grids of precomputed light probes. Fully dynamic lighting on the other hand only calculates the lighting for the currently visible viewport. The goal here is to only process things that affect the surfaces seen in the single image (since lighting is recalculated every frame). A good fully dynamic lighting solution thus doesn't want to generate/use huge data structures (and couldn't even afford spending clock cycles and memory bandwidth generating them over and over again every frame).
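To make the scaling argument concrete, here's a back-of-envelope sketch. Every figure (surface area, texel density, G-buffer size) is a purely illustrative assumption, not any real engine's data: baked lighting storage grows with level size, while fully dynamic deferred lighting only needs buffers sized by the screen.

```python
# Back-of-envelope only: every figure below is an illustrative assumption.

def lightmap_memory_mb(surface_area_m2, texels_per_m2, bytes_per_texel):
    """Baked lighting: storage scales with total lit surface area."""
    return surface_area_m2 * texels_per_m2 * bytes_per_texel / (1024 ** 2)

def gbuffer_memory_mb(width, height, bytes_per_pixel):
    """Fully dynamic deferred lighting: buffers scale with screen resolution."""
    return width * height * bytes_per_pixel / (1024 ** 2)

# Hypothetical large level: 200,000 m^2 of lit surface at 16 texels per
# metre (256 per m^2), 8 bytes per texel (HDR lightmap plus directionality).
baked = lightmap_memory_mb(200_000, 256, 8)

# Dynamic lighting at 1080p with a fat 20-byte-per-pixel G-buffer.
dynamic = gbuffer_memory_mb(1920, 1080, 20)

print(f"baked lightmaps: ~{baked:.0f} MB (grows with level size)")
print(f"per-frame G-buffer: ~{dynamic:.0f} MB (fixed by resolution)")
```

Double the level size and the baked figure doubles; the G-buffer stays put, which is the whole point of the comparison.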

Better post effects and more transparencies are mainly consuming clock cycles and memory bandwidth. You don't need more memory for those. You need better performance.

I agree that less static environments would consume more memory. Especially if the environment changes are permanent, and especially if we have more fine grained control over the environment (can break/modify terrain/objects in various ways). Current gen games tend to remove all marks of destruction/bodies/shrapnel very fast. But the biggest thing that more memory brings is ease of development. With small memory you need to stream everything in and out rapidly (just in time). It's hard to create algorithms that predict things properly in all scenarios (since HDD data seeking is slow, and popping isn't desirable). Of course bigger memory also allows more variety in the game world (fewer copies of the same model/texture are required, more unique models/textures are possible).
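As a rough illustration of the prediction problem, here's a minimal sketch (hypothetical tile grid and latency figures, not any shipping engine's scheme) that prefetches world tiles along the player's current velocity, so data is requested for where the player will be once the disk read completes, not where they are now:

```python
# Minimal predictive streaming sketch (all numbers are hypothetical).

def tiles_to_prefetch(pos, vel, tile_size, seek_latency_s, radius=1):
    """Predict where the player will be after one worst-case disk
    seek+read, then return the grid tiles around that point."""
    px = pos[0] + vel[0] * seek_latency_s
    py = pos[1] + vel[1] * seek_latency_s
    cx, cy = int(px // tile_size), int(py // tile_size)
    return {(cx + dx, cy + dy)
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)}

# Player at (100, 100) moving +x at 20 m/s; 64 m tiles; ~0.5 s worst-case
# HDD latency. We prefetch around the predicted position.
needed = tiles_to_prefetch((100.0, 100.0), (20.0, 0.0), 64.0, 0.5)
print(sorted(needed))
```

The hard part the post describes is exactly what this toy version ignores: players turn, teleport, and open doors into unpredicted areas, and then you get pop-in.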

Sebbi, it's always good to hear you talk, but my case is specifically about Rage. I know your engine uses both virtual textures and fully real-time lighting, but my point was specifically about Rage. Their baked lighting and GI does not require an additional lightmap, as the lighting is baked into the diffuse textures themselves: since with their tech even a repeated texture is stored as a duplicate in their global atlas, every texel is indeed unique, so there is no need to separate the lighting from it. Regarding their light probes, if they are not too sparse, then they must just be very bad, because they seem very low frequency to me, and I see lighting on dynamic objects change very little as they move about in space.

If they want many shadowed lights, for example, that does not require tons of memory; but if you don't have to worry about memory, there are many optimizations that can be done by reusing shadow maps from the past frame, which are impossible if you are reusing the same texture for all shadow maps. If we are talking next gen, we could expect some sort of dynamic GI, maybe precomputed or voxel based; both would take memory if they were chosen.
What I'm trying to say is that Rage's engine can do a lot of stuff with more memory. Not just better texel density, which is far from ideal at this point, but more complex materials too. How about multi-layered stuff like wet surfaces or specular coating over glossy surfaces with two normal and specular maps per texel, colored RGB specular, heightmaps for POM or tessellation, SSS maps, animated textures, and again, transparencies, which are bottlenecked by fillrate and bandwidth way before memory, but which still destroy any constant texel/pixel ratio... Jeez, there are a lot of places to spend those GBs on. Good post-processing, although pushing processing time way more than memory, will still require additional data in your G-buffer if you really want to be fancy. That still won't need gigabytes worth of memory, but at 1080p it can add up. Sure, all of that could be sorted out with 4 GB or less anyway, but it comes down to ease of implementation.
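Rough arithmetic on the multi-layer point. All layer formats and the texel count below are my own illustrative assumptions (not id Tech 5's actual data): in a unique-texel world, every extra per-texel layer multiplies the whole world's storage.

```python
# Illustrative per-texel costs (compressed bytes per texel), not id's formats.
LAYERS_BYTES = {
    "diffuse (BC1)": 0.5,
    "normal #1 (BC5)": 1.0,
    "normal #2 (BC5)": 1.0,
    "specular RGB (BC1)": 0.5,
    "gloss/height (BC4)": 0.5,
}

def world_texture_gb(unique_texels, layers):
    """Total storage for a world where every texel is unique."""
    return unique_texels * sum(layers.values()) / (1024 ** 3)

# A hypothetical world with 2 billion unique texels:
base = world_texture_gb(2_000_000_000, {"diffuse (BC1)": 0.5})
rich = world_texture_gb(2_000_000_000, LAYERS_BYTES)
print(f"diffuse-only: ~{base:.1f} GB, multi-layer: ~{rich:.1f} GB")
```

Virtual texturing means only a small resident set is in RAM at once, but the on-disk and streaming budget still scales with the full layer stack, which is where the extra gigabytes get eaten.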
 
Why did the devs ask for 8 GB of RAM? I remember reading headlines about some developers championing/requesting 8 GB.
It makes life easier and allows for more stuff, yada yada. I seriously doubt that the devs asked for GDDR5 specifically, though; they'd be just as happy with the same amount of RAM if it were split 4 GB GDDR5 + 4 GB DDR3. Of course, unified memory also makes life that little bit easier.
 
How does Titan use its 6GB GDDR5 ?

Traditional PC games don't really use the GPU for synchronous physics, so they may only provide a partial picture.
 
8 gigs is nice (7 GB accessible, maybe; we've been told the OS is in fenced-off memory, so that's RAM the game can't touch). It'll give the platform legs. It won't make a crazy difference versus PCs with 3+ GB of VRAM and 16+ GB of system RAM, especially when that VRAM has higher bandwidth and the GPU is faster. In truth, the nature of the APU and compute might deliver better results than the additional 4 GB of RAM, as developers can really design for that hardware synergy now.
The 8 gigs and unified memory is going to lead to an interesting conundrum for PCs that remain on 32-bit Windows.

I think large RAM pools aren't a problem right now because the OS doesn't see the whole memory space of the separate device memory, but that doesn't look like it will fly with an HSA-enabled system.
It seems like it could be worked around with some extra effort, unless the game is fundamentally dependent on low-latency usage of a unified pool of >4GB.
Devs could keep that in mind and leave some performance on the table and accept some additional headaches in porting, but if they do not, the PC market starts with an implicit reduction in importance for high-end games.
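The 32-bit ceiling is easy to quantify. The figures below are the usual Windows virtual-address-space splits (the /LARGEADDRESSAWARE number assumes an increased-user-VA configuration; exact limits vary by OS setup):

```python
# A 32-bit pointer can address at most 2**32 bytes, and the OS reserves
# part of that, so a >4 GB unified pool can't even be addressed at once.

ADDRESS_SPACE = 2 ** 32                 # 4 GiB of virtual addresses
USER_SPACE_DEFAULT = 2 * 1024 ** 3      # default 2 GiB user / 2 GiB kernel
USER_SPACE_LAA = 3 * 1024 ** 3          # ~3 GiB with /LARGEADDRESSAWARE
                                        # and increased user VA

console_pool = 8 * 1024 ** 3            # PS4-style unified 8 GiB pool

print(f"32-bit address space: {ADDRESS_SPACE / 1024**3:.0f} GiB")
print("fraction of an 8 GiB console pool a 32-bit game can even address:",
      USER_SPACE_LAA / console_pool)
```

So a 32-bit port can't simply mirror a console design built around one big pool; it has to window or stream what the console keeps resident.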

A 4 GB PS4 or Durango would have played more nicely. The 8 GB pool could push a bunch of 32-bit holdouts to upgrade, or further diminish the point of even caring about the PC, even as the flexibility of PC hardware appears in consoles.
 
Traditional PC games don't really use the GPU for synchronous physics, so they may only provide a partial picture.
In principle that's true, but they also have honking great CPUs to do physics, working from DDR in parallel to the GPU. And PS4 is still going to be principally limited by cross-platform games not maxing out its hardware. UE4, as a cross-platform engine, is going to work with split-pool 8 GB and eDRAM-augmented 8 GB. As ever, a few first-party titles might extract some fabulous extras, but compared to PC, 8 GB won't net PS4 any obvious killer improvements that'll deal a crushing blow to those 3 GB, far faster GPUs in the gaming rigs. PC users aren't going to look at PS4's incredibly high-res textures (1080p max, meh!) and varied assets and shed tears over their weaker PC versions. 8 GB is mostly going to be a boon to devs, who won't have to care where in RAM assets are, or how to handle delays in copying data from here to there and back again.
 
The 8 gigs and unified memory is going to lead to an interesting conundrum for PCs that remain on 32-bit Windows.
Where that's an issue for cross-platform games, I expect games to eventually become 64 bit only. It happened in the days of 3D; eventually if you wanted to game on PC, you had to get a graphics card and couldn't hold out expecting devs to cater to your outdated machine.

I thought I had heard that the consoles were still running 32 bit OSes, but I can't think where. Maybe MS could introduce Extended Memory for Windows. :yep2:
 
I'd hope devs simply make 64-bit a minimum requirement to force PC gamers to upgrade. According to Steam, 64-bit OSes make up around 70% of the market.
 
Where that's an issue for cross-platform games, I expect games to eventually become 64 bit only. It happened in the days of 3D; eventually if you wanted to game on PC, you had to get a graphics card and couldn't hold out expecting devs to cater to your outdated machine.

I thought I had heard that the consoles were still running 32 bit OSes, but I can't think where. Maybe MS could introduce Extended Memory for Windows. :yep2:

It could go that way, as long as the addressable market for PC gamers that remain maintains the volumes needed to justify the validation and porting effort.
I'm curious where the inflection point to a negative feedback loop may be.

This may only apply to high-end games that can push memory consumption high enough, and maybe AMD is hoping to get its next generation of high-bandwidth APUs out to catch a potential upgrade wave.
However, any bifurcation of the PC segment makes it punch below its weight.
 
It could go that way, as long as the addressable market for PC gamers that remain maintains the volumes needed to justify the validation and porting effort.
I'm operating on the assumption that porting costs next gen are going to be minimal, especially on a middleware like UE4. Perhaps a dev can correct me, but I expect the game code and assets to drop straight in (from console to PC and vice versa), and the only added costs being optional to refine a game to the specifics of its platform. So something like Unreal Tournament or Fat Princess on next gen UE4 will pretty much build for the three platforms at the press of a button. Okay, maybe not so on Durango if there's some fiddly-doodling with memory usage, but certainly on PS4 and PC. Thus a high level game can be targeted at 10 million PS4s and whichever PC gamers care to get with the times and go 64 bit with 2+8 GBs minimum. ;) No need to chase after the median average of Steam users to justify development costs of the PC version.
 
I'm operating on the assumption that porting costs next gen are going to be minimal, especially on a middleware like UE4. Perhaps a dev can correct me, but I expect the game code and assets to drop straight in (from console to PC and vice versa), and the only added costs being optional to refine a game to the specifics of its platform. So something like Unreal Tournament or Fat Princess on next gen UE4 will pretty much build for the three platforms at the press of a button. Okay, maybe not so on Durango if there's some fiddly-doodling with memory usage, but certainly on PS4 and PC. Thus a high level game can be targeted at 10 million PS4s and whichever PC gamers care to get with the times and go 64 bit with 2+8 GBs minimum. ;) No need to chase after the median average of Steam users to justify development costs of the PC version.

At least between the PS4 and Durango, the graphics and CPU manufacturer is the same. For all the weirdness, the GPUs do share a lot of their DNA.
The PC is going to be Intel and Nvidia, with many of the AMD GPUs not being GCN.
This is a variable development cost increase, but the QA and support requirements are a cost adder as well.

It may still be worthwhile after that, unless the expected sales are right at the threshold.
 
Even my parents' new system, which I've just given them (my old dual AMD), runs Win7 64-bit now; there's no reason to keep 32-bit support, especially for games...
 
My old system was replaced before Starcraft 2.

I had to check wikipedia to learn that it happened in 2010, about 3 years ago.
 
That is very interesting to read. For everyone else interested in steam hardware survey:

http://store.steampowered.com/hwsurvey

Only 10% have 2 GB of VRAM. Larger amounts are even rarer, with 3 GB at 0.6%.

While I'm on the other side of this argument, I will say "steam survey" isn't always a good argument to me.

If you go buy a decent gaming GPU today it'll probably have 2GB (or the 3GB AMD cards). 1GB isn't enough really and 4GB is rare. That's where I put the current "standard".

The "current enthusiast" market is different and more topical than the "Steam masses" imo. Generally, triple-A games' recommended specs cater more to the former.
 
I'm operating on the assumption that porting costs next gen are going to be minimal, especially on a middleware like UE4. Perhaps a dev can correct me, but I expect the game code and assets to drop straight in (from console to PC and vice versa), and the only added costs being optional to refine a game to the specifics of its platform. So something like Unreal Tournament or Fat Princess on next gen UE4 will pretty much build for the three platforms at the press of a button. Okay, maybe not so on Durango if there's some fiddly-doodling with memory usage, but certainly on PS4 and PC. Thus a high level game can be targeted at 10 million PS4s and whichever PC gamers care to get with the times and go 64 bit with 2+8 GBs minimum. ;) No need to chase after the median average of Steam users to justify development costs of the PC version.

Despite the focus on the high end, I really hope this next gen has more mid-budget games: better funded than the indie games that turn to Kickstarter, but not AAA titles, which tend to get lost in the pursuit of photorealism.
 
Andrew Lauritzen said:
And no, Uncharted and God of War 3, while impressive on the PS3, do not look anywhere close to a modern high end PC game. If you disagree on that, let's just stop talking now because we're beyond the point where facts are relevant.
Unless you're just referring to computational load, there are no facts being discussed anyway - there are modern high-end games that are on the ugly-stick side of things regardless of what math they crunch through.
 
Unless you're just referring to computational load, there are no facts being discussed anyway - there are modern high-end games that are on the ugly-stick side of things regardless of what math they crunch through.

I tend to agree; the visuals in a game owe as much or more to the artwork and art direction as they do to the flops applied to a pixel - or, for that matter, to the game engine that pushes the polygons through the pipeline.
One of the primary problems with PC titles is that, outside of a very few exceptions, they simply don't have the budget to compete with the production values of high-end console titles.
 