PC gaming after next-gen consoles

How much of that will be directly dedicated to the GPU at any given time?

The original rumour was that 512MB was reserved for the OS and the rest was for games to use. Maybe Sony expanded the memory used by the OS when the main RAM was extended to 8GB, or maybe not. It could even be that the original 512MB rumour was just nonsense.
 
That's a good feeling :) Another advantage to add to the pile, and to slow the obsolescence rate.

To be fair, those discussions were generally in relation to the 8GB on the new Xbox causing a problem while the 4GB on the PS4 wasn't going to be much of a problem. Now they'll both be a problem, but the problem itself hasn't got any bigger.

If anything, it likely means the memory size on PC graphics cards will simply increase faster. You have to remember that, in that regard, there are no significant technical limits. If there's a market for 8 or 12GB GPUs, they will be made available. It may be a while before they are needed, though.
 
I agree, it will probably be a while. Between the relatively high number of existing 32-bit OS installs and the unlikelihood of multiplat devs being willing to utilize all 8GB on the PS4, I don't think the PC side will be in a hurry to catch up to anyone. I'd say it'll end up being a pretty even pace of advancement between the RAM amount of mid-range mass-market GPUs and console RAM pool usage.
 
Were you expecting a $500 console to outdo a $1200 PC? :rolleyes:

The best anyone could ever hope for was comparable quality at a lower resolution. Anyone expecting 4K gaming or a $500 GTX 680 inside a gaming console needs to be sent to the psych ward.

I was hoping for a GTX 680 in the PS4. I think it would have guaranteed that Sony would win the console war. Also, a GTX 680 would not have cost 500 dollars in bulk.

What we have now is a weak GPU with a weak CPU and tons of RAM.

I'm just hoping there is some significant secret sauce in the GPU we don't know about.
 
Reduced resolution will be the secret sauce, I think. That plus some very clean AA techniques and a sharpening filter. Wouldn't really mind, actually... my 720p plasma still looks good, downsampling higher-res stuff or otherwise.
 
Reduced resolution will be the secret sauce, I think. That plus some very clean AA techniques and a sharpening filter. Wouldn't really mind, actually... my 720p plasma still looks good, downsampling higher-res stuff or otherwise.

IMO, 2-4x MSAA + some form of compute/shader-based AA should be mandatory. :p Not like it's going to happen. But that would do far more for graphics IQ this gen than a bump up to 1080p.

Regards,
SB
 
It would be silly to forgo some form of AA these days with the simple-to-implement, and fast, shader options available. These shader techniques were essentially developed for the current wimpy consoles, after all.
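
For anyone wondering what "simple to implement" looks like, here is a minimal sketch of a luma-based post-process edge blend in the spirit of FXAA/MLAA, written as plain C over a greyscale buffer rather than as an actual shader; the function names and the edge threshold are illustrative, not any real engine's API:

```c
#include <stdio.h>

#define EDGE_THRESHOLD 24  /* luma delta that counts as an aliased edge */

/* Clamp-to-border fetch so the filter is defined at the image edges. */
static int px(const unsigned char *buf, int w, int h, int x, int y)
{
    if (x < 0) x = 0; else if (x >= w) x = w - 1;
    if (y < 0) y = 0; else if (y >= h) y = h - 1;
    return buf[y * w + x];
}

/* One post-process AA pass: blur only where local contrast is high. */
void post_aa(const unsigned char *src, unsigned char *dst, int w, int h)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int c = px(src, w, h, x, y);
            int n = px(src, w, h, x, y - 1), s = px(src, w, h, x, y + 1);
            int e = px(src, w, h, x + 1, y), wv = px(src, w, h, x - 1, y);
            int lo = c, hi = c;
            int nb[4] = { n, s, e, wv };
            for (int i = 0; i < 4; ++i) {
                if (nb[i] < lo) lo = nb[i];
                if (nb[i] > hi) hi = nb[i];
            }
            /* High local contrast means an aliased edge: blend it. */
            dst[y * w + x] = (hi - lo > EDGE_THRESHOLD)
                ? (unsigned char)((n + s + e + wv + 4 * c) / 8)
                : (unsigned char)c;
        }
    }
}

int main(void)
{
    /* 4x4 test image with a hard vertical edge down the middle. */
    unsigned char src[16] = { 0, 0, 255, 255,  0, 0, 255, 255,
                              0, 0, 255, 255,  0, 0, 255, 255 };
    unsigned char dst[16];
    post_aa(src, dst, 4, 4);
    printf("edge pixels after AA: %d %d\n", dst[1], dst[2]);  /* softened */
    return 0;
}
```

A real shader version runs per-pixel on the GPU, works on full RGB, and does sub-pixel edge searches, but the core idea is the same: find high-contrast edges and blend only there.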
 
I guess that's a good thing as long as it doesn't impact the DX11 path. I'd have thought a game based on DX11 technology from the ground up would have been very difficult to scale down to DX9, but you obviously know a lot better than me!

I also think a next-gen console engine written to take full advantage of DX11 and compute would be very hard (and pointless) to port to DX9. Of course you can go and cut tons of features, but then the chances are the result will look like an up-port from current gen.
 
I agree, it will probably be a while. Between the relatively high number of existing 32-bit OS installs and the unlikelihood of multiplat devs being willing to utilize all 8GB on the PS4, I don't think the PC side will be in a hurry to catch up to anyone. I'd say it'll end up being a pretty even pace of advancement between the RAM amount of mid-range mass-market GPUs and console RAM pool usage.

The people who are going to buy $60 PC games based on PS4 games are those willing to upgrade their OS, and both Windows 7 and 8 32-bit licences allow you to install the 64-bit versions of the OS.

Once games requiring 64-bit appear, the changeover will be rapid.
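
To make the 32-bit ceiling concrete, here is a small C sketch; the 5GB request is purely illustrative. Built as a 32-bit binary, the request cannot even be expressed as a size_t, which is exactly why games that want to address the consoles' memory pools will push the 64-bit switch:

```c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

int main(void)
{
    /* A 5GB allocation request: routine for a 64-bit process, impossible
     * for a 32-bit one, whose whole virtual address space is 4GB. */
    unsigned long long request = 5ULL << 30;

    printf("pointer size: %zu bits\n", sizeof(void *) * 8);
    printf("SIZE_MAX    : %llu bytes\n", (unsigned long long)SIZE_MAX);

    if (request > SIZE_MAX) {
        /* 32-bit build: SIZE_MAX is ~4GB, so 5GB can't even be typed. */
        printf("5GB does not fit in this process's address space\n");
        return 0;
    }
    void *p = malloc((size_t)request);  /* 64-bit build: usually succeeds */
    printf("5GB allocation: %s\n", p ? "succeeded" : "failed");
    free(p);
    return 0;
}
```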
 
I was hoping for a GTX 680 in the PS4. I think it would have guaranteed that Sony would win the console war. Also, a GTX 680 would not have cost 500 dollars in bulk.

What we have now is a weak GPU with a weak CPU and tons of RAM.

I'm just hoping there is some significant secret sauce in the GPU we don't know about.

The last time the most powerful console "won" a generation, as measured by console units sold, was the SNES back in the early '90s. The N64, Dreamcast, Xbox, and PS3 did not sell the most units.

And have you seen the size of a GTX 680? The thing draws nearly 200W alone, about as much as an entire console's power budget. It was never, ever, ever going to happen.
 
To be fair, those discussions were generally in relation to the 8GB on the new Xbox causing a problem while the 4GB on the PS4 wasn't going to be much of a problem. Now they'll both be a problem, but the problem itself hasn't got any bigger.

If anything, it likely means the memory size on PC graphics cards will simply increase faster. You have to remember that, in that regard, there are no significant technical limits. If there's a market for 8 or 12GB GPUs, they will be made available. It may be a while before they are needed, though.

It could be argued that the problem is bigger now, since both consoles are likely to have a lot of memory and all of that memory (minus the OS reservation) is fast, while the PC world can have a lot of "CPU memory" and a tad less "GPU memory". I think it's going to play a part in how long the consoles can stay relevant.

One of the recurring problems with this generation of consoles was limited media space (360) and limited memory (PS3); these were annoying limitations on texture resolution and the quality of assets. Both next-gen consoles address these issues and, IMHO, solve them. And at least from the start, multi-platform games will have to scale down to the PC and not the other way around. And since the most profitable market has been consoles, they will be the target for developers.

This is great news for the PC platform, bad news for my economy; I am gonna need a bigger boat :)
 
And at least from the start, multi-platform games will have to scale down to the PC and not the other way around. And since the most profitable market has been consoles, they will be the target for developers.

They'll no doubt scale down for slower PCs, but I'd be surprised if the PC version isn't the equal of, or even better than, the console versions at maximum settings. There are certainly GPUs out there that could handle it today, memory requirements and all, and even this generation developers have on occasion offered graphics settings in PC games that are beyond the capacity of current PC graphics memory. Doom 3 and GTA 4 spring to mind.
 
I was hoping for a GTX 680 in the PS4. I think it would have guaranteed that Sony would win the console war. Also, a GTX 680 would not have cost 500 dollars in bulk.

What we have now is a weak GPU with a weak CPU and tons of RAM.

I'm just hoping there is some significant secret sauce in the GPU we don't know about.

A discrete CPU and GPU is not economical for mass production, and it does not enjoy the huge bandwidth of the unified memory of an APU (CPU and GPU on one die).
 
A discrete CPU and GPU is not economical for mass production, and it does not enjoy the huge bandwidth of the unified memory of an APU (CPU and GPU on one die).

It doesn't enjoy the CPU-to-GPU bandwidth, but there's no reason why a discrete setup can't have significantly greater aggregate memory bandwidth to those chips. A high-end single-GPU PC, for example, can sport over 300GB/s combined bandwidth to its CPU and GPU.
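
For what it's worth, the arithmetic behind a figure like that is simple: bandwidth = bus width x per-pin data rate / 8. The parts below are assumptions for illustration, a 384-bit GDDR5 card at 6.0 Gbps per pin alongside dual-channel (128-bit) DDR3-1600:

```c
#include <stdio.h>

/* Peak memory bandwidth in GB/s from bus width (bits) and per-pin rate
 * (Gbps): bits per transfer * transfers per second, divided by 8 for bytes. */
static double bw_gbs(double bus_bits, double gbps_per_pin)
{
    return bus_bits * gbps_per_pin / 8.0;
}

int main(void)
{
    double gpu = bw_gbs(384.0, 6.0);  /* assumed high-end GDDR5 card: 288 GB/s */
    double cpu = bw_gbs(128.0, 1.6);  /* dual-channel DDR3-1600: 25.6 GB/s */
    printf("GPU %.1f GB/s + CPU %.1f GB/s = %.1f GB/s aggregate\n",
           gpu, cpu, gpu + cpu);
    return 0;
}
```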
 
A discrete CPU and GPU is not economical for mass production, and it does not enjoy the huge bandwidth of the unified memory of an APU (CPU and GPU on one die).

A discrete system in a PC has more bandwidth...

200GB/s+ for the GPU and touching 30GB/s for the CPU.

PCIe bandwidth is not a bottleneck, so that's a non-issue.
 
A discrete system in a PC has more bandwidth...

200GB/s+ for the GPU and touching 30GB/s for the CPU.

PCIe bandwidth is not a bottleneck, so that's a non-issue.

From what I gather, the potentially big issue for CPU <-> GPU communication in discrete systems is latency rather than bandwidth. So if that can't be improved, then the only solution is to keep the work that's latency-sensitive on the CPU/APU.

That's why I keep banging on about running GPGPU on integrated GPUs at every opportunity lol.
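
One way to picture the latency problem is as a per-frame budget of dependent round trips, where the CPU must wait for a GPU result before issuing the next step. The 20us round-trip figure below is an assumed ballpark for illustration; real numbers depend on the driver, the bus, and how work is batched:

```c
#include <stdio.h>

int main(void)
{
    const double frame_us      = 1e6 / 60.0; /* one 60fps frame: ~16667 us */
    const double round_trip_us = 20.0;       /* assumed CPU->GPU->CPU latency */

    /* Upper bound on dependent dispatch/readback cycles per frame if the
     * CPU does nothing but wait; chained physics or AI sub-steps eat this
     * budget quickly, while a unified-memory APU pays almost none of it. */
    printf("dependent round trips per frame: %.0f\n",
           frame_us / round_trip_us);
    return 0;
}
```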
 
From what I gather, the potentially big issue for CPU <-> GPU communication in discrete systems is latency rather than bandwidth. So if that can't be improved, then the only solution is to keep the work that's latency-sensitive on the CPU/APU.

That's why I keep banging on about running GPGPU on integrated GPUs at every opportunity lol.

Latency is fine, else it would have been brought up in the PC world ages ago... PCIe 3.0, though, also reduces latency while offering double the bandwidth.

Latency isn't really an issue when you have masses of VRAM, RAM and memory bandwidth.
 
Latency is fine, else it would have been brought up in the PC world ages ago... PCIe 3.0, though, also reduces latency while offering double the bandwidth.

Latency isn't really an issue when you have masses of VRAM, RAM and memory bandwidth.

Well, no: latency is what's preventing gameplay-affecting GPGPU algorithms from being run on discrete GPUs on the PC. That's why the only GPGPU we see in games is aesthetic, like waving flags and smashing glass.

The new consoles should be able to achieve this because of the very low-latency communication between the CPU and GPU, as well as the shared memory space. So PCs need a technical solution to deal with that. AMD has previously suggested that their HSA APUs could be used as dedicated GPGPU processors in combination with discrete GPUs for graphics, which is just what the doctor ordered from a technical point of view as far as I can see. The problem is developer support and how much existing hardware would also allow this. Intel's HD 2000, for example, doesn't support GPGPU at all, with it being emulated in software.
 
Latency is fine, else it would have been brought up in the PC world ages ago... PCIe 3.0, though, also reduces latency while offering double the bandwidth.

Latency isn't really an issue when you have masses of VRAM, RAM and memory bandwidth.

Latency and the lack of unified memory are issues on the PC. Solving this is touted as one of the main advantages of HSA. PC apps today don't have tight interoperability between the CPU and GPU, as the feedback loop is too long. This thread is about games, not general applications, so it's debatable how much of an issue this is for games. There's obviously a lot of rendering that can be done within the current PCIe limits.
 