The Next-gen Situation discussion *spawn

Cuz the PC has twice the CPU and GPU grunt? You're trying to make graphics memory space the key because that's all you've got.

Another unsatisfying answer I will add to my list of "failed answers", or "failed apologies". How does twice the CPU grunt have anything to do with making up for insufficient GPU RAM? Same thing with your wrong estimate of twice the GPU grunt: the PS4 is 1.84 TFLOPS and the GTX 680 has only 3 TFLOPS, but anyway...

Considering a current laptop covers most of the specs of the rumoured Orbis (which doesn't ship for 10 months), it's not going to take long to surpass these consoles if someone tries (targeting minimum specs can hold the PC back). It won't take as long as it took to surpass the 360, which was very cutting edge.

I agree with this; that's why I said 6 months to 1 year of console graphics superiority. Once PC developers start to target PCs more powerful than the consoles, things will change, as was the case with the Xbox 360 and PS3 (except for the anomaly of Crysis, lol).
 
This scenario, first pointed out by Shifty Geezers (credit where credit is due), seems less and less plausible over time now that we have rumours suggesting the PS4 will be a more powerful console than the Xbox Next.

Imo it's too soon to make that call, even if there are some clear signs pointing in that direction.
 
Another unsatisfying answer I will add to my list of "failed answers", or "failed apologies". How does twice the CPU grunt have anything to do with making up for insufficient GPU RAM? Same thing with your wrong estimate of twice the GPU grunt: the PS4 is 1.84 TFLOPS and the GTX 680 has only 3 TFLOPS, but anyway...

The 7970 has over 4 TFLOPS and winds up about even with the 680 in gaming performance. If you want to compare flops, at least the 7970 is more in line with what Orbis is packing. And you can't dismiss the other advantages that PCs have, as they free up resources that Orbis may need to find from that GPU.
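
For reference, the TFLOPS figures being traded here are just theoretical peak rates: shader ALU count x 2 ops per clock (multiply-add) x clock speed. A minimal sketch in Python, using the public desktop card specs and the rumoured Orbis configuration (1152 ALUs at 800 MHz is from the leaks, not a confirmed spec):

```python
# Theoretical peak single-precision throughput: ALUs x 2 ops (FMA) x clock.
# The Orbis line uses the rumoured 18 CU / 800 MHz configuration (unconfirmed).

def tflops(alus, ghz):
    return alus * 2 * ghz / 1000.0

print(f"GTX 680       : {tflops(1536, 1.006):.2f} TFLOPS")  # ~3.09
print(f"HD 7970       : {tflops(2048, 0.925):.2f} TFLOPS")  # ~3.79
print(f"HD 7970 GHz Ed: {tflops(2048, 1.050):.2f} TFLOPS")  # ~4.30
print(f"Orbis (rumour): {tflops(1152, 0.800):.2f} TFLOPS")  # ~1.84
```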
 
ERP tells DoctorFouad that a high end PC will be able to handle the same things a console can thanks to its massive raw power, and he quite incredibly responds as if he's supporting his argument that the PC won't be able to match PS4 graphics...?

I posted all my responses, and I didn't get in return any convincing response to my initial question: how the hell could a PC game running on a limited 2 GB GPU look any better in terms of core gaming assets than a game designed with 4 GB of GPU RAM in mind?

- The (rumoured) PS4 doesn't have 4GB of "GPU ram".
- It's already been explained that the PC can use main RAM to store "assets"
- ... and has done for years

1- We should not compare the PS4 to the GTX 680; we should compare the PS4 to an even higher-end future GPU (consuming 500 watts by itself, costing $700)... (strangely enough, those same people are saying that PS4 specifications are basic, nothing special, and no match for even older PC configurations from 2 years earlier... but they don't have the courage to admit that 4 GB of GDDR5 at 192 GB/s is really something special and was unexpected for next-gen consoles) whatever...

- YOU were the one that compared the 360 to a 512 MB X1800 XT.
- YOU were the person that then compared this to the PS4 vs current cards
- YOU said there were no 4 and 6 GB GPUs

If you don't like the PS4 being put up against top end graphics cards from the time of a consoles launch then YOU shouldn't have compared the 360 to the ABSOLUTELY VERY HIGHEST END graphics card of the time of the 360's launch and used it as a benchmark that meant something.

2- PS4 games won't use a higher amount of GPU RAM for graphical assets than a 2 GB GTX 680 would use. Really? Developers will just use the other 1.5-2 GB remaining for doing AI? Seriously...

Where did I say that? Quote it.

I also notice you've now quietly dropped your talk about 3GB graphics cards not being able to handle super awesome PS4 3.5 GB assets.

3- (Your false answer that neglects bandwidth requirements:) the GTX 680 would just stream, or even better use the PC's slow shared DDR3 RAM like any low-budget PC GPU does... but maybe this answer is just a joke... anyway...

Awful.

You said that a PC GPU can't possibly use more than the amount of RAM on the board, which is wrong. PC games can stream data into GPU RAM and have done, in games, for years. PCs can also treat some main RAM as graphics RAM, and have done for years.

If you now want to try and retcon your claim into something else then that's a separate issue. And you're still wrong.
 
I'm extrapolating (only partially pulling things out of my ass!) based on Sebbi's comments on virtual texturing and on texture BW in general:

http://forum.beyond3d.com/showpost.php?p=1681325&postcount=3406

I'm assuming, for the sake of argument, that most PS4 games will use a 1920 x 1080 resolution with 8 texture layers using trilinear filtering, and an average of 1 byte per (trilinear) texture layer. Then I'm multiplying that by 2 or 3 or 4 for no other reason than that I'm scared I've massively undershot.

Thanks! Interesting post by sebbi. According to his post, the worst case is 64 bytes per pixel with trilinear filtering and no reuse. This figure gives 7.4 GB/s for a 1080p, 60 fps game.
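
As a sanity check, here is the arithmetic behind both estimates, a minimal sketch assuming the per-pixel byte counts quoted above (8 layers at ~1 byte each, and sebbi's 64-byte worst case):

```python
# Back-of-the-envelope texture bandwidth for a 1080p, 60 fps game, using the
# per-pixel byte counts assumed in the posts above (hypothetical figures).

PIXELS = 1920 * 1080   # pixels per frame
FPS = 60
GIB = 1024 ** 3        # bytes per GiB

# The post's estimate: 8 trilinear texture layers, ~1 byte per layer per pixel.
base = PIXELS * 8 * 1 * FPS
print(f"8 layers x 1 B/layer : {base / GIB:.2f} GiB/s")               # ~0.93
print(f"with 2-4x margin     : {2 * base / GIB:.1f}-{4 * base / GIB:.1f} GiB/s")

# sebbi's quoted worst case: 64 bytes read per pixel, trilinear, no reuse.
worst = PIXELS * 64 * FPS
print(f"64 B/pixel worst case: {worst / GIB:.1f} GiB/s")              # ~7.4
```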

This is completely excluding the need to sample from buffers, which I expect will use far, far more BW than sampling from assets normally would.

This would explain why developers would have liked the possibility for pixel shaders to read from Xenos EDRAM. However, given how small it was, I guess it wouldn't have been very useful without increasing its size. A more flexible EDRAM setup in Durango would start to be useful with at least 64 MB, in my opinion. What you could do with 64 MB and a flexible EDRAM setup is (please correct me if I'm wrong):
1) repeat up to 4 times
1.1) render to a single 1920x1080p render target with 2xMSAA (8 bytes for color + Z), ~32 MB
1.2) resolve the render target into another chunk of EDRAM (only color, 1 sample per pixel), ~8 MB
2) we end up with up to 4 buffers, ~32 MB
3) reading from the 4 buffers, render the final image into a single 1920x1080p render target with 2xMSAA, ~32 MB
4) resolve the final image to the main memory for display output (during the resolve you can start rendering the next frame since 32 MB are free)
Of course without MSAA and with only two buffers, you could do the same with 32 MB... It seems however that 64 MB would have been the sweet spot.
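
The sizes above follow from simple buffer arithmetic. A small sketch, assuming 4 bytes of colour and 4 bytes of depth per sample as described (hypothetical Durango numbers, not confirmed specs):

```python
# Buffer arithmetic behind the EDRAM scheme above: 1920x1080, 2xMSAA,
# 4 B colour + 4 B depth per sample (hypothetical figures).

MIB = 1024 ** 2
pixels = 1920 * 1080
samples = 2                                # 2xMSAA

msaa_target = pixels * samples * (4 + 4)   # colour + Z for every sample
resolved    = pixels * 4                   # one resolved colour value per pixel

print(f"2xMSAA colour+Z target: {msaa_target / MIB:.1f} MiB")          # ~31.6
print(f"one resolved buffer   : {resolved / MIB:.1f} MiB")             # ~7.9
print(f"four resolved buffers : {4 * resolved / MIB:.1f} MiB")         # ~31.6
# Step 3 renders a fresh 2xMSAA target while reading all four buffers:
print(f"peak footprint        : {(msaa_target + 4 * resolved) / MIB:.1f} MiB")  # ~63.3
```

The peak footprint of ~63 MiB is what makes 64 MB the sweet spot the post describes.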
 
I posted all my responses, and I didn't get in return any convincing response to my initial question: how the hell could a PC game running on a limited 2 GB GPU look any better in terms of core gaming assets than a game designed with 4 GB of GPU RAM in mind?
It cannot. But that's not what'll be happening. No PS4 game will target 4GBs of assets, and no high-end PC when PS4 launches is going to have only 2GBs VRAM.

1- We should not compare the PS4 to the GTX 680
GTX680 is current PC tech. If PS4 can't even compete with that now, how is it going to hold its own in a year's time? That's why GTX680 is being presented - proof that PCs surpass PS4's capabilities.

but they don't have the courage to admit that 4 GB of GDDR5 at 192 GB/s is really something special and was unexpected for next-gen consoles) whatever...
What has courage to do with it? That bears no relevance to relative performance. PS4's 'courageous' 192 GB/s is pitted against a PC's aggregate 300+ GB/s.

2- PS4 games won't use a higher amount of GPU RAM for graphical assets than a 2 GB GTX 680 would use. Really? Developers will just use the other 1.5-2 GB remaining for doing AI? Seriously...
Think this through more carefully. PS4 has a total 4 GBs. 512 MBs is reserved, so that's 3.5 GBs. That includes executables, AI, simulations, etc. It also includes framebuffer operations. These things also eat into the bandwidth, so of that 192 GB/s, maybe 60 GB/s is available for assets (this figure will vary wildly with game and rendering choices).
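
To make that budget concrete, a toy breakdown using the round numbers from this post (all of them rumours or rough guesses, not official figures):

```python
# Toy budget split for the rumoured PS4 memory setup, using the round numbers
# from the post above (rumours/guesses, not official specs).

total_ram_gb  = 4.0     # unified GDDR5 (rumoured)
os_reserve_gb = 0.5     # rumoured OS reservation
game_ram_gb   = total_ram_gb - os_reserve_gb   # 3.5 GB for executables, AI,
                                               # simulation, framebuffers AND assets
total_bw_gbps = 192.0   # rumoured bus bandwidth
asset_bw_gbps = 60.0    # the post's rough guess for asset traffic
other_bw_gbps = total_bw_gbps - asset_bw_gbps  # framebuffer ops, CPU, etc.

print(f"game-visible RAM: {game_ram_gb:.1f} GB of {total_ram_gb:.1f} GB")
print(f"asset bandwidth : {asset_bw_gbps:.0f} GB/s of {total_bw_gbps:.0f} GB/s")
print(f"everything else : {other_bw_gbps:.0f} GB/s")
```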

In a PC, the total RAM is split between main RAM and GPU RAM. That means all the executable and AI and physics stuff is in main RAM*, meaning on a 2 GB card (which won't be the high end when PS4 launches, despite your insistence) you have all 2 GBs for framebuffer operations and assets. On a 4 GB card (far more realistic) you'll have 4 GBs for assets and buffers, some 300 GB/s BW for the GPU, and the main RAM doing everything else.

PS4 has no advantage in any area of hardware. It hasn't a faster bus, or more RAM, or processing power. It cannot bring 4 GBs of assets at 192 GB/s as that's 4GBs total for the entire system.

At the end of the day, your comparison is unrealistic and ignores the realities of the technology involved, which is why people are getting short with you, especially when you're blowing your own trumpet about how right you are about something that's neither proven nor even logical to those more knowledgeable.

I suggest you leave it at that, and in a year's time we can look at real games and real hardware and see. That's when you should be saying 'I'm right!' (if you are) instead of now, when there's nothing proving you're right except contrived arguments.

* simplified of course, as GPGPU is not uncommon for physics.
 
It is unlikely that any PS4 game will use 2.5 GB of "graphical assets" per frame. Texture reads from memory are likely to be no more than several tens of MB per frame, even next generation.

PC GPUs can read / write data to main RAM over PCI-E. For around a thousand years PC GPUs have been able to treat a portion of main RAM as graphics RAM.
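
Some rough numbers on why streaming over the bus is viable: even at 60 fps, a PCI-E link leaves a per-frame upload window far larger than the "several tens of MB per frame" of texture reads estimated above. A small sketch using theoretical peak link rates (real-world throughput is lower):

```python
# Per-frame streaming headroom over PCI-E: why a PC GPU is not limited to its
# on-board VRAM. Link rates are theoretical peaks; real throughput is lower.

FPS = 60
links = {
    "PCIe 2.0 x16": 8.0,    # ~8 GB/s each direction, theoretical
    "PCIe 3.0 x16": 16.0,   # ~16 GB/s each direction, theoretical
}

for name, gb_per_s in links.items():
    mb_per_frame = gb_per_s * 1000.0 / FPS
    print(f"{name}: ~{mb_per_frame:.0f} MB per frame at {FPS} fps")
    # ~133 MB/frame on PCIe 2.0, ~267 MB/frame on PCIe 3.0
```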

I would add that for most types of console games you can't use the entire chunk of RAM even if you wanted to, since you need to reserve some space to buffer assets from the optical disc. So in the case of a 4 GB GDDR5 console, you would carve off a chunk of that to use as a buffer for disc I/O. There are more extreme examples of that going back to the PS2, on a game like God Of War, which I think carved memory in half, using one half for the playable visuals and the other half to keep adjacent level assets loaded. On a PC you would use DDR3 for buffering, leaving the GDDR5 on the GPU all usable for rendering visuals.
 
I posted all my responses, and I didn't get in return any convincing response to my initial question: how the hell could a PC game running on a limited 2 GB GPU look any better in terms of core gaming assets than a game designed with 4 GB of GPU RAM in mind?


It cannot.

Wow, thanks, finally an honest answer.

But that's not what'll be happening.

We will see. ;)

I suggest you leave it at that, and in a year's time we can look at real games and real hardware and see.

I do agree, but we should not have to wait till fall 2013. At E3 2013, with full next-gen console specifications, the first next-gen demos, and especially exclusive Sony next-gen titles, plus developer comments, we will start getting a glimpse of the answer. ;)
 
Wow, thanks, finally an honest answer.
That answer didn't need giving though, which is why no-one bothered.

I do agree, but we should not have to wait till fall 2013. At E3 2013, with full next-gen console specifications, the first next-gen demos, and especially exclusive Sony next-gen titles, plus developer comments, we will start getting a glimpse of the answer. ;)
That'll only show you what the consoles are managing. It won't tell you what a good PC will be capable of at that console's launch and how that compares. Although it shouldn't matter, as current high-end PCs will still be playing the cross-platform games that much better.
 
Isn't the whole console vs exotic PC debate kind of pointless?

No one actually targets exotic PCs for development.

Exactly. Name 5 games from the past year that had significantly improved graphics on the PC, actually utilizing hardware features and speeds 6-7 years more modern than the current consoles.

We're already close to the point of diminishing returns anyway, and there are just not enough PC gamers buying such games to justify such a huge effort. Keep in mind the added asset production costs as well! It's already rare to see console exclusives because studios see the 60-70 million PS3/X360 user bases as a platform that's too small in itself.

There's no financial justification for a truly high-end PC game that can't be ported to consoles, so it won't really happen. Star Citizen is a very rare exception and even there the hardware requirements will be quite modest, certainly not far beyond the upcoming consoles.
 
Exactly. Name 5 games from the past year that had significantly improved graphics on the PC, actually utilizing hardware features and speeds 6-7 years more modern than the current consoles.

There's no financial justification for a truly high-end PC game that can't be ported to consoles, so it won't really happen. Star Citizen is a very rare exception and even there the hardware requirements will be quite modest, certainly not far beyond the upcoming consoles.
That wasn't the argument. DoctorFouad has said that at launch, consoles will have the best graphics beyond all but the most ridiculously expensive, exotic PC. I'm saying that at launch, those PCs will have the same games with the same assets at higher IQ and framerate, possibly with better in-game assets by way of more foliage, higher quality shadows, higher quality lighting, etc. (same as 'extreme' versus 'high' settings in a PC game).
 
Exactly. Name 5 games from the past year that had significantly improved graphics on the PC, actually utilizing hardware features and speeds 6-7 years more modern than the current consoles.

Name me 5 games that run at full HD with 4xMSAA like Microsoft said all 360 games would...

Alan Wake looks stupidly better on PC; the resolution jump alone makes such a difference. Battlefield 3 also looks much, much better on PC. Even BLOPS 2 looks massively better on PC.

Now show me a modern console game that looks better than the STALKER series.

In fact, show me a game that's technically better on console than the now 5-year-old STALKER Shadow Of Chernobyl.
 
Name me 5 games that run at full HD with 4xMSAA like Microsoft said all 360 games would...

Alan Wake looks stupidly better on PC; the resolution jump alone makes such a difference. Battlefield 3 also looks much, much better on PC. Even BLOPS 2 looks massively better on PC.

Now show me a modern console game that looks better than the STALKER series.

In fact, show me a game that's technically better on console than the now 5-year-old STALKER Shadow Of Chernobyl.
This argument isn't really sound. PCs are so much faster than today's consoles (and next gen too) that you could run Agni's Philosophy on top setups if there was a developer in the world willing to make it. PCs are so underutilized that saying "you bump up the res to 1080p and 60fps and it's soooo much better" doesn't really tell the whole story.

Assets are still made with consoles in mind. You mentioned STALKER... I mean, it's a solid-looking game, but apart from the lighting there is absolutely nothing in it that would wow people (especially not regular Joes). And Alan Wake... that game has some horrible facial models and animations, environments look poly-starved, and just bumping the resolution won't give you a next-gen game, it will only give you a better-looking "last decade" game. Which means, as long as devs make games with consoles in mind, PC versions won't look much different to the general public.
 
Name me 5 games that run at full HD with 4xMSAA like Microsoft said all 360 games would...

This argument isn't really sound. PCs are so much faster than today's consoles (and next gen too) that you could run Agni's Philosophy on top setups if there was a developer in the world willing to make it.
That wasn't the argument. The discussion was how consoles compare with high-end PCs at launch. What PCs do or don't do now is immaterial, other than how they'll compare with next-gen consoles at launch.
 
This argument isn't really sound. PCs are so much faster than today's consoles (and next gen too) that you could run Agni's Philosophy on top setups if there was a developer in the world willing to make it. PCs are so underutilized that saying "you bump up the res to 1080p and 60fps and it's soooo much better" doesn't really tell the whole story.

Running a console game at 1080p 60fps is not as easy as you lot think.

BLOPS 2 is rendered at 880x720; that means a PC at 1080p is rendering more than three times the pixels, and then it's rendering the game at double the frame rate.

BLOPS claims 60fps, but during firefights it's only around the ~35fps mark, which makes it in reality no better than any other 1280x720 30fps console shooter.
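
For the record, the pixel-count ratios behind that comparison (880x720 is the reported console render resolution; the PC resolutions are assumed targets for illustration):

```python
# Pixel counts behind the resolution comparison above.

def pixels(w, h):
    return w * h

console = pixels(880, 720)   # BLOPS 2's reported console render resolution
for w, h in [(1280, 720), (1920, 1080)]:
    ratio = pixels(w, h) / console
    print(f"{w}x{h} vs 880x720: {ratio:.2f}x the pixels")
# 1280x720 -> ~1.45x; 1920x1080 -> ~3.27x
```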

A 512 MB 9800GT can play any console port at the same settings or higher, with higher resolution and AA (and I know because I've tested it).

You can get a 9600GT now for around £10.
 
That wasn't the argument. The discussion was how consoles compare with high-end PCs at launch. What PCs do or don't do now is immaterial, other than how they'll compare with next-gen consoles at launch.

Raw-specs-wise they'll be half as fast as a single-GPU PC, and all ports will run with no problems.
 
A custom 7850 might be used for the consoles, but they could fit a better GPU if they wanted to. So far it doesn't seem Agni's Philosophy will be baseline next gen.

Agni was 60fps at 1080p on a GTX 680 with 1.8 GB of VRAM used and unoptimized assets. I don't think it's too much of a challenge with some work.

As far as baseline? Could be... might be more likely if devs do corridor gaming tricks again.
 