PC gaming after next-gen consoles

It could be argued that the problem is bigger now, since both consoles are likely to have a lot of memory, and all of that memory (minus the OS reservation) is fast, while the PC world can have a lot of "CPU memory" and a tad less "GPU memory". I think that's going to play a part in how long the consoles can stay relevant.

One of the recurring problems with this generation of consoles was limited media space (360) and limited memory (PS3); these were annoying limitations on texture resolution and asset quality. Both next-gen consoles address these issues and, IMHO, solve them. At least at the start, multi-platform games will have to scale down to the PC and not the other way around. And since the most profitable market has been consoles, they will be the developers' primary target.

This is great news for the PC platform, bad news for my wallet; I'm gonna need a bigger boat :)

I doubt it. A console is a snapshot in time; it won't ever get better (aside from maybe storage space and the size of the unit itself). PCs continue to march forward, and what we have today is not what we will have in the fall when the new consoles launch.

DDR3 will give way to DDR4 for CPUs, super-high-clocked quad cores will give way to 6/8/10/12 cores, 2-6 GB of RAM will give way to even faster and larger memory types for GPUs, and so on and so forth.

Today, with a relatively modest budget, you can throw together a system with a quad-core CPU, 16 GB of RAM, and a graphics card with 3 GB of GDDR5 and come in under $1.5k. Or you can buy a GPU like the Titan.

If anything, I think this gen of consoles will be surpassed way more quickly than last gen, especially with things like Kickstarter out there.
 
Yeah, if I was being budget conscious while still wanting to keep up with consoles, I'd probably pick up an i5 3330. It's still a quad-core Ivy Bridge at 3 GHz but only costs ~£135.
 

Whereas I would put in the extra £25, buy a 3570K, then overclock the hell out of it and blow that 3330 away :cool:
 
I think the point stands that the hardware is there, and if devs take advantage of it, PC games will move beyond console games.
 
It would be silly to forgo some form of AA these days with the simple-to-implement, and fast, shader options available. These shader techniques were essentially developed for the current wimpy consoles after all.

It's not the shader-based AA that I fear won't become standard, but MSAA. None of the purely shader-based AA techniques has been able to do anything even remotely respectable with regard to edge aliasing, which remains one of the easiest to spot forms of aliasing, especially in motion. And when it is beefed up enough to be noticeable, texture quality takes a hit.

Hence, some form of 2-4x MSAA + shader/compute based AA is what I hope becomes the minimum standard. Something like SMAA (although 2x MSAA really isn't enough) which combines the two.

It's the lack of proper MSAA that's been a killer on consoles, IMO. Hopefully it'll become standard next gen, but I'm not holding my breath.

Regards,
SB
 
There are some interesting forms of SMAA:
SMAA 1x is enhanced MLAA (this is all the injector can implement).
SMAA T2x is temporal SSAA 2x + MLAA.
SMAA S2x is MSAA 2x + MLAA.
SMAA 4x is MLAA + TSSAA 2x + MSAA 2x.

I am just really happy that we're mostly done with that early MLAA and FXAA that obliterate texture detail and are so weak in motion.
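To make the family concrete, here is a minimal sketch, in plain Python rather than a real shader and with made-up thresholds and weights, of the luma edge-detect-and-blend stage that MLAA/FXAA/SMAA-style filters share, including the blend step that softens texture detail:

```python
# Rough sketch of the first stage shared by the FXAA/MLAA/SMAA family:
# find edges from local luma contrast, then blend only across those edges.
# Thresholds and weights are illustrative, not taken from any real shader.

def luma(px):
    r, g, b = px
    return 0.299 * r + 0.587 * g + 0.114 * b

def shader_aa_pass(img, threshold=0.1):
    """img: list of rows of (r, g, b) tuples in [0, 1]. Returns a filtered copy."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [img[y-1][x], img[y+1][x], img[y][x-1], img[y][x+1]]
            lumas = [luma(p) for p in neigh] + [luma(img[y][x])]
            if max(lumas) - min(lumas) > threshold:
                # Edge pixel: blend 50/50 with the neighbour average.
                # This is also the step that softens texture detail when
                # the threshold is set low.
                avg = [sum(p[c] for p in neigh) / 4 for c in range(3)]
                out[y][x] = tuple(0.5 * img[y][x][c] + 0.5 * avg[c] for c in range(3))
    return out
```

Flat regions pass through untouched; anything past the contrast threshold gets blended, which is why low thresholds chew up high-frequency texture as well as real geometric edges.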
 
I doubt it. A console is a snapshot in time; it won't ever get better (aside from maybe storage space and the size of the unit itself).

I agree that it's a snapshot; my argument (from the discussion about GDDR5 / potential 8 GB VRAM) is that this snapshot may last longer than the last one. The consoles could look good even compared to PCs for a longer period, simply because they will be able to use assets of much higher quality than we have seen on PCs so far, and it will take some time before 4-6 GB of VRAM is standard on mid- to high-range cards. You can have the fastest GPU and CPU, but if your graphics card can't handle the large textures/assets, you're hitting a brick wall.

Maybe I got it wrong, but that is how I understood it. My example was GTA4, which had its issues on the PC, among others with graphics cards with only 512 MB of RAM.
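A quick back-of-the-envelope on why asset size hits that wall. The formats and sizes below are illustrative, not from any particular game:

```python
# Rough texture memory math: a single base level plus a full mip chain
# (which adds about 1/3 on top of the base). Numbers are illustrative.

def texture_bytes(width, height, bits_per_pixel, mip_chain=True):
    base = width * height * bits_per_pixel // 8
    return base * 4 // 3 if mip_chain else base

# Uncompressed 32-bit RGBA vs. an 8 bpp block-compressed format
# (DXT5/BC3-class) for one 2048x2048 texture.
uncompressed = texture_bytes(2048, 2048, 32)
compressed = texture_bytes(2048, 2048, 8)
print(uncompressed // 2**20, "MiB vs", compressed // 2**20, "MiB")
```

Even block-compressed, a few hundred unique 2048x2048 textures fill a 512 MB card before the framebuffer and geometry get a byte, while a 3-8 GB card has room to keep console-quality assets resident.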
 

PS3 and 360 had more VRAM than all mid-range PCs at the time, and it never caused a problem then.

And GTA4 was just a shit port... Perhaps the worst port of this generation.
 
Maybe this post is better off here.


I also don't see people diving into discussion about how the eSRAM, DMEs, and RAM in Durango work as a whole either, unfortunately.

Yeah, I would like to see this talked about more. I have no idea what the general consensus here is around this particular aspect of Durango.

Is a DME basically an FPGA configured as a DMA engine for a GPU? How old is cudaDMA, which seems to be a software-based solution?

http://research.microsoft.com/en-us/projects/fpga_apps/

This paper was released by MS just last August.
http://research.microsoft.com/pubs/172728/20120628 Speedy PCIe FPL Final.pdf

This paper discusses difficulties and insights related to the implementation of the PCIe protocol on the PC platform in the form of the Speedy PCIe core and offers it as a solution or starting point for future research. The Speedy PCIe core delivers a general purpose solution that solves the problems of high speed Direct Memory Access (DMA) while offering an interface that is generic and adaptable for a large number of applications.
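For context on the bandwidth side of that problem, here is the theoretical ceiling of the link the paper is fighting with, a PCIe 2.0 x16 slot, using the standard published rates:

```python
# Theoretical peak of a PCIe 2.0 x16 link, the slot a GPU or FPGA board
# sits in. Real DMA throughput lands well below this once packet and
# protocol overhead are paid, which is the hard part the paper tackles.
lanes = 16
gt_per_s = 5.0       # PCIe 2.0 signalling rate per lane (GT/s)
encoding = 8 / 10    # 8b/10b line coding: 2 of every 10 bits are overhead

gb_per_s = lanes * gt_per_s * encoding / 8  # bytes per second, per direction
print(f"{gb_per_s:.1f} GB/s per direction before protocol overhead")
```

That 8 GB/s figure is per direction and before transaction-layer packet overhead, so sustained DMA rates are meaningfully lower in practice.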
 
I agree that it's a snapshot; my argument (from the discussion about GDDR5 / potential 8 GB VRAM) is that this snapshot may last longer than the last one. The consoles could look good even compared to PCs for a longer period, simply because they will be able to use assets of much higher quality than we have seen on PCs so far, and it will take some time before 4-6 GB of VRAM is standard on mid- to high-range cards. You can have the fastest GPU and CPU, but if your graphics card can't handle the large textures/assets, you're hitting a brick wall.

I'm not sure how you come to that conclusion. RAM is pretty much the easiest thing to upgrade on a graphics card. If the market demands 4 GB to make games playable, then the vendors can deliver it almost immediately. That's completely different from last generation, where the market demanded GPUs with far more horsepower; that's nowhere near as easy to provide.

The situation this time around is far more skewed in favour of PCs. There are GPUs available today with literally 3x the raw horsepower of the consoles, which are due 9 months from now. Last generation, the situation 9 months before console launch was that the consoles would be able to outperform the most bleeding-edge SLI systems of the day. How do you expect the PS4 to compare with a Titan SLI system? Do you really think 8 GB of RAM, which will likely be available on high-end GPUs by the time these consoles launch, is going to have some kind of game-changing impact on that kind of insane power difference? For example, a dual-Titan system is almost as far above Durango as Durango is above Xenos in raw specs.

Maybe I got it wrong, but that is how I understood it. My example was GTA4, which had its issues on the PC, among others with graphics cards with only 512 MB of RAM.

GTA4 scaled to graphics cards with far more than 512MB (it scaled to about 1.5GB) but that doesn't mean it actually had problems on 512MB cards at appropriate settings. For some reason people seem to think that if you turn the settings up to maximum on a PC game, that's what you should compare to the console version. Certainly in the case of GTA IV, they are quite wrong.
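For scale, plugging in the commonly cited peak-shader figures of the era (approximate marketing numbers, including a rumored value for Durango, so treat the ratios as rough orders of magnitude only):

```python
# Peak single-precision shader throughput in TFLOPS, as commonly cited
# at the time: Xenos (360), the rumored Durango GPU, and one GTX Titan.
# All are peak marketing-style figures; the rumored Durango number is
# unconfirmed, so the ratios are illustrative only.
xenos, durango, titan = 0.24, 1.2, 4.5

durango_vs_xenos = durango / xenos
dual_titan_vs_durango = 2 * titan / durango
print(f"Durango vs Xenos:    {durango_vs_xenos:.1f}x")
print(f"2x Titan vs Durango: {dual_titan_vs_durango:.1f}x")
```

On those figures the two gaps really are in the same ballpark: roughly 5x from Xenos to Durango, roughly 7.5x from Durango to a dual-Titan system.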
 
Titan probably won't have any impact on game development. It costs too much. It'll make some benchmark people excited for awhile though. Nice for niche 3D users too maybe. And Davros' multi monitor setup....
 
Is a DME basically an FPGA configured as a DMA engine for a GPU?

FPGAs aren't cheap; just how much do you think four of them would cost?
 
From what I remember, that latency is part of what prevents CPUs from being used much on the PC at all. Games that are CPU-limited or even attempt to make good use of multiple cores are still fairly rare on PC, and calculations that would sometimes be a fair bit more efficient on the CPU are done on the GPU because the latency is too high.
 
Titan probably won't have any impact on game development. It costs too much. It'll make some benchmark people excited for awhile though. Nice for niche 3D users too maybe. And Davros' multi monitor setup....

I agree it won't have any impact at all on development. My point above was just to illustrate the amount of power available (at the extreme end) on the PC platform, in answer to the implication that PCs may be unable to match the console experience, regardless of CPU/GPU power, because of memory constraints.

It's true that 680s with only 2 GB of memory (hell, even 690s with only 2x2 GB) may indeed struggle to match the console experience once all that console memory starts getting put to good use, but I don't think that will result in developers not offering the higher-quality assets as an option in the PC versions of games. And if they do, it's simply down to the PC gamer to ensure they have sufficient hardware to enable them. And sufficient hardware (in the form of Titan, 6 GB 7970s, and possibly even 4 GB 670s/680s) exists today.

I'm quite certain, though, that the next generation of PC graphics cards will balloon in onboard memory size specifically to deal with this problem. So judging how the PC as a platform will compete with consoles 9 months from now by what's available today is probably a bit pointless anyway. I'm fairly certain we'll be seeing 4 GB variants of 8770s and 760 Tis, for example, while at the high end 6 and 8 GB configs will likely be available. Maybe even more.
 
From what I remember, that latency is part of what prevents CPUs from being used much on the PC at all. Games that are CPU-limited or even attempt to make good use of multiple cores are still fairly rare on PC, and calculations that would sometimes be a fair bit more efficient on the CPU are done on the GPU because the latency is too high.

And vice versa! That's why I'm interested in your previous comments about improvements being made to the interface (for discrete systems). Do you have any suggestions on where I could find further detail on that?
 
Is a DME basically an FPGA configured as a DMA engine for a GPU? How old is cudaDMA, which seems to be a software-based solution?

http://research.microsoft.com/en-us/projects/fpga_apps/

This paper was released by MS just last August.
http://research.microsoft.com/pubs/172728/20120628 Speedy PCIe FPL Final.pdf

There isn't much in the rumors concerning the DMEs that appears relevant to the contents of that paper.
The paper concerns a scheme to make it easier for FPGA designers to use PCIe, since the protocol itself is complex and affected by the complex software and hardware environment in a PC.
And why would PCIe even be involved for Durango, when it's all on one chip?

Nothing about the description of the DMEs indicates they are as flexible as an FPGA, and at least one rumor flat out says they are fixed-function. DME bandwidth is an order of magnitude higher than the best bandwidth for this research paper, and the latencies reported look very bad for what an APU should be capable of doing.

edit:
Just to clarify something: I suspect from the information given so far that the DMEs are an elaboration of the DMA engines already present in GPUs. These are used for handling data transfers over the PCIe bus in GPUs, but they look to be useful for data movement even without the actual bus. Durango's rumored first two DMEs appear to be just that, and the remaining two, with compression/decompression capability, have extra hardware added to them. The limited data path that is shared among the DMEs and the video decode block makes me think they are hanging off the low-bandwidth hub all AMD GPUs have. This hub exists for ease of adding hardware that can function without a direct feed into the high-bandwidth cache system.
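If the leaked figures are anywhere near right, a quick sanity check on what that shared low-bandwidth path means per frame. Both numbers below come from the rumor mill, so this is purely illustrative:

```python
# Rumored figures only: ~25.6 GB/s combined DME bandwidth and 32 MB of
# eSRAM. How long would moving the entire eSRAM contents once take out
# of a 60 fps frame budget?
esram_bytes = 32 * 2**20
dme_bw = 25.6e9              # bytes/s, combined, per the rumors

copy_ms = esram_bytes / dme_bw * 1e3
frame_ms = 1000 / 60
print(f"full eSRAM copy: {copy_ms:.2f} ms of a {frame_ms:.2f} ms frame")
```

About 1.3 ms of a 16.7 ms frame to stream the whole eSRAM once: cheap enough for background shuffling, but not something you'd want to do many times per frame, which fits the idea of DMEs as helpers hanging off a modest hub rather than a high-bandwidth path.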
 