How do next-gen consoles stack up against PCs for VFM? *spawn


In fact it's a situation very comparable to the PS3 launch (with its 78xx-based GPU right before the 8800GTX launched), except the new PC GPU this time will be packing power more akin to an 8800 Ultra (at least), while the consoles, far from having a monster Cell to make up the difference, will have a fairly anaemic tablet-based CPU.
 
This is only relevant when comparing equal box volumes and equal processing power.

Why? My PC sits under a desk, so the size of the case is largely irrelevant. Its performance is far in excess of either new console and it's certainly as quiet as either current-generation console (no idea how loud the new ones will be, obviously).

I'm using all stock components although my GPU is known to be a very quiet model.
 

Dude, the size of the case is not irrelevant; it's called thermodynamics... look it up. If it weren't relevant, every computer on the planet would be tiny and silent.
 

My point wasn't to question the physics of volume vs heat... obviously.

It was to question why the size of the case is something I should be concerned with (within reason). For a console that has to sit under your TV, then yeah, size is important; for a PC that generally sits under a desk, then clearly not so much. Power and noise are far more important factors in that scenario.
 
My point was, if you want to make a fair comparison, you have to factor in box volume. Not everyone wants a big clunky tower for a PC... I personally don't... and since consoles aren't big and clunky, relatively speaking, comparing fan noise should be done at equal internal volume.
 
^I don't buy it. For the most part you can still play current console ports with better-than-console settings and framerates at 720p on a 512MB 8800GT + Core 2 Duo.

If you're strictly comparing to 2005-early 2006 GPUs then yes, because those were on the wrong end of a GPU revolution. 2006-2007 GPUs, however, continue to hold up well.

Well, you guys are already reliving the future RPS article correction :LOL:. I'm talking precisely about the 7790 and 7850 GPUs and the closest from 2005 (X1800 and the castrated 7800). Thus the wrong side of evolution?

No you can't. I have an old box with a Wolfdale @ 3.3GHz and an 8800GTS downstairs, and this is BLOPS 2:

https://dl.dropboxusercontent.com/u/18784158/New%20folder/shot0002.jpg
https://dl.dropboxusercontent.com/u/18784158/New%20folder/shot0001.jpg

I try things sometimes, I will tell you; only Unreal Engine 3 games are somewhat better than on console on this setup.



Late 2014... 1 year after the consoles' launch? Let's see: 1 year after the PS360 launch, in late 2006, GPUs that were directly comparable to Xenos and RSX were going head to head with those consoles and winning (79xx and X19xx), while GPUs significantly more powerful (8800) were wiping the floor with them.

Do you have a good reason for expecting things to be different this time around? Even given that PCs are now using a lighter, more efficient API and the consoles have a more PC-like architecture, meaning more scope for cross-platform optimisations?



Here's a news flash for you: the Xbox 360 had a unified memory setup as well. And both last-generation consoles had superior GPU-CPU communication compared to PCs (at that point, and arguably still today).

And while the memory situation is indeed different to what it was, a 7950 has 1/2 the total memory of the new consoles (after system reservation), not 1/4.



That's an awful lot of creative thinking; perhaps you should write a book. A warning though: fiction often requires some basis in fact, so you may want to work on that side of things a little.


They were winning in some games, mostly PC ports, and probably due to the learning curve of alien CPUs. This mopping of the floor was using a fraction of the fillrate and bandwidth, leaving fancy new features and even texrate completely untouched. What a pity.

7950 already? I thought we were talking about the 7790/7850; still, even with 1/2 it's a little scary. For many generations, a GPU with memory equal to the console's has been a necessity for comfortable PC ports.

Seeing the performance of that "mighty" 8800, sometimes I think about setting up a blog with daily pointing out of the coloured reality, double standards and dissembled truths of some PC fans. I bet it would generate significant clicks from them; console gamers don't care anyway :LOL::devilish:


Precisely. Whereas this time it's the consoles that are on the wrong end of a GPU revolution, using (by the time they launch) a 2-year-old design that is within a couple of months of being replaced by a new generation.

Wrong side of evolution again? I will politely remind you that it doesn't matter, as any real high-end development of commercial software has ended on PC as laptops and tablets deepen their hold; that's the reality. Unless you are content with an order of magnitude better hardware doing simple bandwidth-related tasks and occasionally a few effects with even less impact. Especially you guys should be vocal about it; maybe someone would see the niche and do something special.

In fact it's a situation very comparable to the PS3 launch (with its 78xx-based GPU right before the 8800GTX launched), except the new PC GPU this time will be packing power more akin to an 8800 Ultra (at least), while the consoles, far from having a monster Cell to make up the difference, will have a fairly anaemic tablet-based CPU.

Cell advantages didn't present themselves until 2008 titles, and it was somehow irrelevant by that time, don't you agree? I bet those tiny, straight cores will do a much better job of bringing the goods early, when it matters, and you can see this even now with Dark Sorcerer and Quantum Break, which are years ahead of anything commercially available, just as Timothy Lottes predicted.
 
Well, you guys are already reliving the future RPS article correction :LOL:. I'm talking precisely about the 7790 and 7850 GPUs and the closest from 2005 (X1800 and the castrated 7800). Thus the wrong side of evolution?

No you can't. I have an old box with a Wolfdale @ 3.3GHz and an 8800GTS downstairs, and this is BLOPS 2:

https://dl.dropboxusercontent.com/u/18784158/New%20folder/shot0002.jpg
https://dl.dropboxusercontent.com/u/18784158/New%20folder/shot0001.jpg

I try things sometimes, I will tell you; only Unreal Engine 3 games are somewhat better than on console on this setup.

This means nothing in terms of general performance. Very old PC GPUs like the 7(!)-year-old 8800GTS fall out of both driver and developer support, so regardless of how much power they have, they may fail to make a good showing in modern games. Compare them to a modern GPU of similar theoretical throughput that's still in driver and developer support, like the GT 640, and you'll see it easily exceed console performance, just as you'd expect given the resources available.
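To make "similar theoretical throughput" concrete, here's the sort of back-of-the-envelope I mean, as a rough Python sketch; the core counts and clocks are approximate public specs, and Kepler extracts less from each theoretical flop than G80's hot-clocked shaders did, so treat the output as illustrative only:

```python
# Rough single-precision throughput: shader cores x clock x 2 flops (one MAD).
# Core counts and clocks are approximate public specs, not measurements.
def gflops(shader_cores: int, shader_clock_mhz: float) -> float:
    return shader_cores * shader_clock_mhz * 2 / 1000

print(f"8800 GTX (G80): ~{gflops(128, 1350):.0f} GFLOPS")  # ~346
print(f"GT 640 (GK107): ~{gflops(384, 900):.0f} GFLOPS")   # ~691 on paper,
# though Kepler does less per theoretical flop, landing it nearer the G80
```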

Game performance is a combination of hardware and software. If the 8800 were still the fastest GPU available on the PC today, fully supported by developers and drivers, it would still be running rings around the consoles. The reason it doesn't anymore is that no-one uses such an old design anymore, so why would the vendors or developers want to spend time supporting it?

Yes, the GTX 780 and HD 7970 may suffer a similar fate 5 years from now, but no-one buys a GPU like that expecting to keep it for 5 years when by that time you can buy something twice as powerful for £100 that sucks down half the power and likely has a more advanced feature set. That's simply how PC gaming works.

They were winning in some games, mostly PC ports, and probably due to the learning curve of alien CPUs.

Show me one benchmark, just one, from late 2006 where a console was outperforming a 79xx or X19xx class GPU. I don't want to get into digging up old benchmark results, but there are plenty out there if it comes to it.

This mopping of the floor was using a fraction of the fillrate and bandwidth, leaving fancy new features and even texrate completely untouched. What a pity.

I'm not quite sure what you're getting at with this statement. Yes, the PC GPUs wiped the floor with the console GPUs because they had far greater resources. Why else would this happen?

The point about "texrate" particularly confuses me. The console GPUs of 2005 were at as much of a texture throughput disadvantage compared to 2006 PC GPUs as they were in every other metric. What was your point?

7950 already? I thought we were talking about the 7790/7850; still, even with 1/2 it's a little scary. For many generations, a GPU with memory equal to the console's has been a necessity for comfortable PC ports.

I only ever mentioned the 7950; you may want to re-read the thread. And I'm not sure why it's scary. A PC with that GPU has 50% of the total available memory of the new consoles, memory which on the consoles must be shared between both system and graphics. A PC with said GPU would have another 8-16GB for the system, which would also serve to stream data to the graphics memory. It's not as complete a win as, say, a full 6GB GPU would be, but it's more than enough to maintain graphical superiority (in combination with a far more powerful graphics core) for the next few years - i.e. until these GPUs start falling out of driver/developer support.
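To spell the memory argument out (a quick sketch; the 2GB OS reservation is the assumption being used upthread, not a confirmed figure):

```python
# Memory budgets as argued above; the OS reservation is an assumed figure.
console_total = 8.0                     # GB of unified GDDR5 (PS4)
console_reserved = 2.0                  # GB assumed held back for the OS
console_for_games = console_total - console_reserved  # ~6 GB shared CPU+GPU

pc_vram = 3.0                           # GB on an HD 7950
pc_system_ram = 8.0                     # GB of DDR3 (often 16 on a gaming build)

print(pc_vram / console_for_games)      # 0.5 -> "1/2 the total memory, not 1/4"
print(pc_vram + pc_system_ram)          # 11 GB the PC can stream/cache across
```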

Seeing the performance of that "mighty" 8800, sometimes I think about setting up a blog with daily pointing out of the coloured reality, double standards and dissembled truths of some PC fans. I bet it would generate significant clicks from them; console gamers don't care anyway :LOL::devilish:

The 8800 was vastly superior to Xenos/RSX in virtually every way; ask any developer. And it showed for many years (in many titles it still does). But the simple fact is that PC GPUs only remain in support for so long, and 7 years is well beyond the threshold of either driver or developer support for a PC GPU.

As I said above, compare BLOPS 2 performance on a modern GPU with theoretical performance similar to the 8800GTX's and see how well the consoles fare.

Wrong side of evolution again? I will politely remind you that it doesn't matter, as any real high-end development of commercial software has ended on PC as laptops and tablets deepen their hold; that's the reality. Unless you are content with an order of magnitude better hardware doing simple bandwidth-related tasks and occasionally a few effects with even less impact. Especially you guys should be vocal about it; maybe someone would see the niche and do something special.

It's good to see you're now accepting the technical argument and thus moving your own argument to a business stance. Unfortunately you're pretty far off base on that as well. Future games development will almost certainly be more focussed on leveraging both the PC and console platforms than in previous generations, because games development will be more expensive, thus requiring as large a market as possible, and because cross-development for the PC will be easier than ever thanks to the similar architectures and the relatively low performance bar that PCs need to meet to run the console games.

As for tablet and mobile development: do you honestly think that's more of a threat to PC gaming than it is to home console gaming?


Cell advantages didn't present themselves until 2008 titles, and it was somehow irrelevant by that time, don't you agree?

No, I don't. Cell was advantageous from day one. The advantage it brought grew over time, of course, but why on earth you'd think that no longer mattered in 2008, when its GPU was clearly in the weakest position it had ever experienced, I don't know. Cell's importance in keeping the PS3 relevant as a graphical competitor has only become greater as the console has aged.

I bet those tiny, straight cores will do a much better job of bringing the goods early, when it matters, and you can see this even now with Dark Sorcerer and Quantum Break, which are years ahead of anything commercially available, just as Timothy Lottes predicted.

Yes, unlike Cell those little cores will likely bring most of what they have to the table almost straight away. And it's still vastly less than what a £100 lower-mid-range PC CPU can offer... today. This isn't something I'd personally be bragging about. At least with Cell you had the "untapped potential" argument, which comically turned out to be partially true. What do you have when you're hitting 100% of your potential from day one and it's nowhere near the competition?

Well, I guess you have crazy claims of decent-looking next-gen games being "years ahead of anything commercially available", but I'm asking what you have that's actually real (as opposed to candydrop dreams infused with rainbows).
 
Most people who have a console still need a cheap laptop or PC to do all the stuff you can't do on a console...

So in reality it's £400 for the console plus an extra £200-300 for a decent laptop/PC, giving you at least £700-800 for a decent PC build.

And I've sold my phase-change case and moved over to the mini-ITX form factor, and my computer is small and very powerful.
 
Question about cache size and its benefit for 3D rendering

How is it OT? Discussing the kinds of PC components that would be equivalent to the One, and their pricing, and the technical comparison to the One seems on-topic for a technical hardware investigation thread.

I'll stop now, but I figured this was the best thread to put it on, since none of the others seemed to fit.
Well, it is not that pointing to that article, as you did, was OT; it's that the follow-up had to steer away from the original topic and investigate current and upcoming PCs, which I did and others did too.
Anyway, Shifty vouched for a new thread so the discussion can go on :)
--------------------------------------------------------------------------------------

Anyway, I agree with you that putting together a match for the PS4 and/or Xbox One from off-the-shelf PC parts is doable (damn, I missed the "ps360", laziness...). Though being in the performance ballpark is doable, form factor, power consumption, etc. are out of reach... for now.

I will elaborate further on my previous post before I ask the resident tech heads here a couple of questions.

For example, Kaveri could be an interesting case if AMD, as some rumors hinted, uses GDDR5 with it.
Everybody mostly knows what Kaveri consists of; just for the sake of the conversation:
2 modules / 4 cores, 8 CUs, 8 ROPs and a 128-bit bus.
With GDDR5 that thing could be a "little beast" that would make quite a few budget gamers and people in emerging markets happy.
Not up there with the upcoming generation of consoles, and it should not support more than 4GB of GDDR5, but head and shoulders above the ps360 (from processing power to the amount of RAM) by such a margin that it could almost qualify as in the ballpark of next-gen performance. Definitely not "last-genny" even if not completely next-genny; at any rate, way closer to the latter than the former.
Kaveri should ship at mostly the same time as the next generation of consoles, and (sadly for AMD) I expect it to be quite an affordable chip. GDDR5 (if used) should spice up the price, but it should definitely offer incredible value (as a PC part).
For reference, some estimate the die size of Kaveri at ~220mm², so it is slightly smaller than Trinity (TSMC's process is really dense :oops:).

Whereas Kaveri is yet to be released, Intel just released Haswell and, more relevant to my rant, Crystalwell.
Crystalwell seems to deliver on its promise; Intel claims that to match their set-up it would take a GDDR5 interface of between 100 and 130GB/s.
If the reviews I read are any hint, putting aside what could be lacking in current Intel GPUs, they may not be overdoing it; at least the bottom of that range sounds solid (and that is quite impressive already, imho).
Courtesy of Anandtech:
There’s only a single size of eDRAM offered this generation: 128MB. Since it’s a cache and not a buffer (and a giant one at that), Intel found that hit rate rarely dropped below 95%. It turns out that for current workloads, Intel didn’t see much benefit beyond a 32MB eDRAM
I find that data quite interesting, and actually I wish Intel had released info about how cache hit rates were affected by going even smaller than 32MB.
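A rough way to see why such a high hit rate stands in for raw bandwidth (a sketch with assumed round figures: ~25.6GB/s for the dual-channel DDR3-1600 behind the cache, ~100GB/s aggregate for the eDRAM):

```python
# Effective bandwidth of a cache + DRAM pair; all figures are assumptions.
hit_rate = 0.95      # Intel's quoted floor for the 128MB eDRAM
dram_bw = 25.6       # GB/s, dual-channel DDR3-1600 behind the cache
edram_bw = 100.0     # GB/s, rough aggregate Crystalwell figure

# Only the miss fraction of traffic must reach DRAM, so the pair behaves
# like a flat memory of roughly this bandwidth (capped by the eDRAM itself):
print(min(edram_bw, dram_bw / (1 - hit_rate)))  # ~100 GB/s, in line with
                                                # Intel's 100-130GB/s claim
```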

Now on to the 22nm process and AMD's next APUs.
Estimates have Kaveri at ~220mm² on TSMC's 28nm process. Assuming perfect scaling (good enough as a ballpark figure), AMD could fit within the same silicon footprint: 4 modules / 8 cores, 16 CUs, 16 ROPs.
Whereas such a chip "from a distance" looks pretty close to, say, the PS4, it should fall quite short at rendering (at best, using GDDR5, half the bandwidth and half the ROPs). The point of the matter is that AMD is unlikely to produce such a chip.
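For what it's worth, the ideal-scaling arithmetic behind that ballpark, as a sketch only; real processes don't scale perfectly, and the 220mm² figure is itself a rumored estimate:

```python
# Idealized die-area scaling: area shrinks with the square of the node ratio.
kaveri_area = 220.0                      # mm^2, rumored estimate, 28nm
for new_node in (22.0, 20.0):            # nm targets
    shrink = (new_node / 28.0) ** 2      # area factor for the same logic
    print(new_node, round(2 * kaveri_area * shrink))
# 22nm -> ~272 mm^2; 20nm (a full node) -> ~224 mm^2, i.e. roughly the same
# footprint, which is where the "perfect scaling" ballpark really lands
```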
Actually, I don't expect AMD to go further than 4 cores (/2 modules). As for the iGPU, I don't expect AMD to stuff 1024 Stream Processors (16 CUs) into their next APU either:
1) it could hurt some of their discrete GPU sales.
2) they would not be able to feed it properly (bandwidth constraints).
3) AMD said it wants to leverage gaming; they have discrete parts for higher-end set-ups, so the APU just has to be in the ballpark of the upcoming generation of consoles (/lower end).

Overall I would not be too surprised if AMD goes for a configuration really close to Durango, aka 768 SPs. It sounds "right": it would provide a neat improvement over Kaveri and allow AMD to clock the iGPU low in mobile parts, saving quite some power.
I read a few guesses/rumors about AMD making changes to its CUs, and I thought they could make sense (looking at what Nvidia did); I could see AMD making some improvements in compute density (even normalized to the process used) by lowering their TEX/ALU ratio. It seems to me that AMD has not radically changed its good old SIMD moving from Cayman/VLIW4 to GCN: a Compute Unit / SIMD array is still made up of 4 Stream Cores, each Stream Core made up of 16 Stream Processors (I hope I get their naming policies right).
It would not surprise me too much if AMD were to beef up the front end of their SIMD/CU and go with 6 groups of 16 Stream Processors instead of 4 (8 sounds like pushing it without increasing the texturing power). I would hope that would further increase their compute density (maybe not by much, but, as from VLIW4 to GCN, enough to cover the cost of other improvements; it seems they have not touched their ROPs much in a while, for example) while achieving perfect or close-to-perfect scaling.
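To put numbers on that baseline layout (a sketch of peak math rates under the 4x SIMD-16 arrangement just described; the Kaveri-class clock is an assumption):

```python
# Peak single-precision rate of a GCN CU: 4 SIMD-16s, each lane doing one FMA
# (2 flops) per clock; a 64-wide wavefront issues over 4 cycles per SIMD.
SIMDS_PER_CU, LANES_PER_SIMD, FLOPS_PER_LANE = 4, 16, 2

def igpu_gflops(num_cus: int, clock_mhz: float) -> float:
    return num_cus * SIMDS_PER_CU * LANES_PER_SIMD * FLOPS_PER_LANE * clock_mhz / 1000

print(igpu_gflops(8, 900))    # ~922 GFLOPS for a Kaveri-like 8-CU iGPU at 900MHz
print(igpu_gflops(18, 800))   # ~1843 GFLOPS, the familiar PS4 figure
```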

So, long story made short, a 4-core, 8 "new/wider" Compute Unit follow-up to Kaveri could end up way below Kaveri as far as die size is concerned, which leads me (finally) to my question to our dear resident tech heads :)

It seems that an Intel-like solution (à la Crystalwell) is out of reach (for multiple reasons), though AMD just put together a chip (Durango) that includes 32MB of eSRAM (not a cache, though). It got me wondering about the usefulness of "big" caches for rendering. I would not expect AMD to be able to fit 32MB of L3 on die in its upcoming APUs, but I wonder about the benefits of a lesser amount of cache, say 16MB.

I can't quite work it out myself, but I think there are data available that could allow some people here to make guesstimates about the benefits.
For example, that type of data, courtesy of The Tech Report. There are also interesting posts in that thread, especially sebbbi's (about the amount of texture data accessed per frame, the size of shadow maps, etc., and how that could fit in a high-end Intel CPU's L3 cache).
I can't wrap my head around it myself, not even close, but as silicon budgets grow and exotic memory solutions are yet to come, I wonder about the extent of the dent that "big caches", which used to be reserved to server-CPU-class devices, could put in the "bandwidth wall".

PS: I think this is relevant to the thread as it relates to how good (and competitive with the upcoming consoles) a pretty affordable piece of kit (a 200-250mm² piece of silicon with a dual-memory-channel set-up using DDR3/4) could get in the really near future.
 
Most people who have a console still need a cheap laptop or PC to do all the stuff you can't do on a console...
That comparison always comes up and always has to be knocked down. I already have a laptop. Everyone already has a PC. So I'm not in the market for a new PC. The cost to me of getting a machine to play Tom Clancy's The Division is either a £350 PS4 or a £500+ PC. Only if I'm looking to buy a PC at the same time can I save money. If I have a budget for a £500 PC and a £350 console, I could instead buy an £850 PC and get a machine that plays games better than a PS4, or even a £700 PC and get a slightly better gaming machine and save some money. But that's only true for those who are replacing their existing PC.

If your budget is ~£400 for playing the latest games, and you're not after anything else, PCs need to be compared at direct price.
 

I could still upgrade your PC for £400 and make it faster than the consoles...

Factor in a few games, which are always cheaper on PC, and the PC easily stacks up to a console.
 
Is anyone factoring in the massive price difference of games in favour of the PC?
Because I usually buy a gaming machine for games... so if it costs a lot more to acquire games later on, it might very well cancel out the cheaper "barrier of entry"...
 
They don't, dude... People in this forum seem to ignore used hardware upgrades and the cheaper games on PC, because consoles don't have that ability.

I'm sure if they could upgrade your PS3 to a PS4 for a cheaper price using cheap upgrades, they would soon change their tune.
 

And the PC people consistently ignore form factor, power consumption, noise and ease of use.

Also: Lots of cheap games for consoles.

Cheers
 
Just... Don't.

For the most part, the PC platform means:

*Laptop/Tablet
*Intel GPU or low end external
*4GB of RAM
*Integrated audio
*768P Screen

A console is a defined hardware spec with effective DRM. The PC is an undefined specification, but if you want the lowest common denominator, I gave it to you. Just because people can have the most badass rigs in the world, the world doesn't revolve around how badass those rigs are. The majority of people don't want a desktop PC, so you can't just say 'add $300' to a PC you would already own. The PC people already own is a laptop, and that $300 maybe gives you half the console's performance.

Consoles are good hardware, as surprising as that may be to some. The PS4 is $399, which is probably more comparable to what you have there. $399 gets you some pretty nice hardware: 8GB of GDDR5, an 8-core processor, an HD 7850-class GPU, an audio processor, a nice form factor, a consistent experience, and you don't have to pile on a bunch of hours in your spare time figuring out what to buy, where to buy it and how to get the best bang for your buck. You just bang your bucks on the table and walk out with a console and some games.

So either you want a gaming PC and therefore don't really care about the value of consoles, or you want a console and don't want to know about gaming PCs.

#Dealwithit
 
I read a few guesses/rumors about AMD making changes to its CUs, and I thought they could make sense (looking at what Nvidia did); I could see AMD making some improvements in compute density (even normalized to the process used) by lowering their TEX/ALU ratio. It seems to me that AMD has not radically changed its good old SIMD moving from Cayman/VLIW4 to GCN: a Compute Unit / SIMD array is still made up of 4 Stream Cores, each Stream Core made up of 16 Stream Processors (I hope I get their naming policies right).
It would not surprise me too much if AMD were to beef up the front end of their SIMD/CU and go with 6 groups of 16 Stream Processors instead of 4 (8 sounds like pushing it without increasing the texturing power). I would hope that would further increase their compute density (maybe not by much, but, as from VLIW4 to GCN, enough to cover the cost of other improvements; it seems they have not touched their ROPs much in a while, for example) while achieving perfect or close-to-perfect scaling.
I thought about it a bit more and realized that that part is really unlikely / does not go well with the way GCN works. GCN issues one instruction per cycle to one of those blocks of 16 SPs (working in fact on wavefronts of 64 elements), which executes it at 1/4 the speed (or something like that, my terminology failures aside). So it is more likely that AMD would double the number of those cores and make the front end dual-issue. That means a pretty radical change in the TEX/ALU ratio.
I think it may also mean that AMD would need to double the throughput of its caches (L1 and texture cache), LDS, etc. It would also impact the scalar unit.
Overall... I don't know if it is doable or a good idea; let's see what their top engineers come up with, it should definitely be better than my rant :LOL:

Anyway ignore that part.
 
While it isn't so hard to match the computational power of both consoles, the RAM part is really tricky. Both consoles provide a unified space of, on average, around 6 gigabytes of RAM. To be "safely" next-gen proof, a computer should provide at least 4-8GB of system RAM (easy) and 4GB of video RAM (impossible). Any lesser amount of video memory will choke next-gen ports.
 
And the PC people consistently ignore form factor, power consumption, noise and ease of use.

I'm not sure that's fair; noise is a very real concern for many PC gamers, which is why most GPU reviews measure it. And quietness and performance are not mutually exclusive or necessarily costly.

Power consumption and form factor are indeed often largely ignored, but only because those just as often don't matter in the environments PCs are used in (well, form factor anyway).

Ease of use really only matters to people who aren't already PC gamers. If you're already a PC gamer, then PCs are easy to use. They are far more configurable, which to me is certainly one of the big advantages. It's similar to the arguments between Android and iOS: configurability vs ease of use. I started on iPhone but switched to Android ;)

Also: Lots of cheap games for consoles.

In my experience, even second-hand console games in the shops around here are at best similarly priced and quite often still more expensive than the same PC game bought online (provided you wait until after the initial launch price drops to reasonable levels). I've been pretty surprised, tbh, at the still-high prices of second-hand Wii games (I own a Wii) when I've looked in the past.
 
Consoles are good hardware, as surprising as that may be to some. The PS4 is $399, which is probably more comparable to what you have there. $399 gets you some pretty nice hardware: 8GB of GDDR5, an 8-core processor, an HD 7850-class GPU, an audio processor, a nice form factor, a consistent experience, and you don't have to pile on a bunch of hours in your spare time figuring out what to buy, where to buy it and how to get the best bang for your buck. You just bang your bucks on the table and walk out with a console and some games.

There's no doubt that consoles are excellent value for what you get hardware-wise. You simply cannot match the performance/£ of either console by buying a PC from scratch; the baseline cost of a PC is simply too high.

But there's still an argument to be made. Perhaps I can't get PS4 power for £350, but what if I can get 2x PS4 power for £700? Then things get interesting. It certainly doesn't take long to price up such a system if you know what you're doing (I did it last night in 30 minutes from a single website, just for kicks); obviously if you have no clue about PC technology then it's going to take a lot longer.

Below is what I put together last night, but having thought about it more, I think this was unfair to the PC platform. After all, why am I trying to match technology that won't be available for another 4-5 months, and clearly (based on E3) isn't ready yet, with technology commercially available today? In fact I would take this...

CPU- AMD FX6300 - £93
Mobo - MSI 760GM-P34 (mATX) - £36
RAM - 16GB DDR3 1600Mhz - £99
GPU - Radeon 7950 3GB - £222
HDD - 1TB - £48
OPT - Blu-Ray - £27
OS - Win8 64-bit OEM - £70
Case - mATX + 500W PSU - £24
Wireless 360 Pad - £26
Wireless Key/Mouse - £20
TOTAL = £665
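(For the sceptical, the arithmetic checks out; a trivial sanity check:)

```python
# Sum of the parts list above, and the price relative to a £350 PS4.
parts = {"FX-6300": 93, "MSI 760GM-P34": 36, "16GB DDR3-1600": 99,
         "Radeon 7950 3GB": 222, "1TB HDD": 48, "Blu-ray": 27,
         "Win8 64-bit OEM": 70, "mATX case + 500W PSU": 24,
         "Wireless 360 pad": 26, "Wireless keyboard/mouse": 20}
print(sum(parts.values()))        # 665
print(sum(parts.values()) / 350)  # ~1.9x the PS4's price
```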

And wait 6 months, until roughly the consoles' launch window, for a Maxwell or Volcanic Islands based GPU instead of that 7950. Being conservative, I'll bet you could pick up something that sports 4GB of GDDR5 and has performance more akin to a 7970 for the same price. If we loosen the budget up to £700 (a round 2x PS4), then by the end of this year we might be able to go with that newer GPU as well as an 8-core AMD processor (perhaps even Steamroller-based rather than the Piledriver in the example above).

Whether paying twice as much for 2x the performance is the right decision for you is a matter of preference, obviously, but there can be no denying that the performance/£ of that system would equal the PS4's, which I think is incredibly impressive. Try doing that in late 2005/early 2006!
 
While it isn't so hard to match the computational power of both consoles, the RAM part is really tricky. Both consoles provide a unified space of, on average, around 6 gigabytes of RAM. To be "safely" next-gen proof, a computer should provide at least 4-8GB of system RAM (easy) and 4GB of video RAM (impossible). Any lesser amount of video memory will choke next-gen ports.

It's tricky now, several months before the consoles launch, but we're on the cusp of doubling GDDR5 densities (which is the exact and only reason why the PS4 can pack 8GB of GDDR5). When that happens, double today's memory capacities should become fairly common. The same market segments that feature 2 and 3GB today could easily feature 4 and 6GB at the end of this year/early next. And that means the high-end configurations of those same GPUs, which today feature 4 and 6GB, could be packing as much as 8 and 12GB for a similar cost.
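The density point in concrete terms (a sketch; standard GDDR5 chips present a 32-bit interface, and this ignores clamshell configurations, which double the chip count):

```python
# Card capacity from bus width and per-chip density (non-clamshell GDDR5).
def capacity_gb(bus_width_bits: int, chip_density_gbit: int) -> float:
    chips = bus_width_bits // 32          # one x32 GDDR5 chip per 32 bits of bus
    return chips * chip_density_gbit / 8  # gigabits -> gigabytes

for bus in (256, 384):
    print(bus, capacity_gb(bus, 2), "->", capacity_gb(bus, 4))
# 256-bit: 2GB -> 4GB; 384-bit: 3GB -> 6GB, once 4Gbit chips arrive
```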

While pricing up a PC today that can overpower the yet-to-be-launched next-gen consoles is a fun academic exercise, it wouldn't be very wise to actually build one now, since you're not going to have anything to compete with for another 4-5 months, and by then you'll have better options available to put into that PC.
 