Extra memory doesn't increase computational resources or bandwidth. If you want to render more objects or particles (higher object density or longer view range), you need to perform more computations. If you want to use higher-resolution textures or more detailed polygon meshes, you need more bandwidth. And we already know something on Agni's level can be done with 1.8 GB of VRAM usage. The full ~6-7 GB used should be a revelation!
You work for AMD? Sony?

No, I'm not Mark - nor an Epic developer. Ask yourself this: would it be far-fetched for a school teacher to know her City Councilman/woman? Would it be far-fetched for a City Councilman/woman to know a State Senator? Would it be far-fetched for a Police Officer to know an FBI agent? Would it be far-fetched for an FBI agent to know a CIA agent? In other words, I don't have to be in the actual hen house to know what's going on. Let's just say I'm into hardware technology.
Anyhow, I've been lurking around Beyond3D for years, and never cared to register until recently. I'm not here to fuel any one side (PC vs. PS3 vs. Xbox, or Intel vs. AMD vs. Nvidia). I'm here to offer additional information... be it firsthand or secondhand. Sometimes I will have to be vague about things - just because I'm not stupid. Additionally, I will never belittle anyone here; not my thing. I've seen, over the years, people (respected people in the industry) lose their calm over internet nonsense - taking pictures of pistols in their hands and threatening other board members.
Back on topic. I stand by all my comments ...that's all I can say.
More memory can be used to bring more variation for textures and meshes, and bigger (persistent) areas to explore. Clarification: Bigger areas = more objects behind the corner (in the next rooms) or beyond the view distance.
Extra memory is a very good thing for a dynamic, fully destructible game world... as long as you make sure the player cannot destroy too much at once... or you run out of computational resources and bandwidth, and the frame rate plummets.
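The cap described above can be sketched as a simple per-frame budget: queue up destruction events and only process as many per frame as the budget allows, deferring the rest. This is a hypothetical illustration (the class and its names are mine, not from any real engine):

```python
from collections import deque

class DestructionQueue:
    """Hypothetical per-frame cap on destruction work: process at most
    max_per_frame events each tick, defer the rest to later frames so
    computation/bandwidth spikes don't tank the frame rate."""

    def __init__(self, max_per_frame):
        self.max_per_frame = max_per_frame
        self.pending = deque()

    def request(self, event):
        # Player blew something up; record it for processing.
        self.pending.append(event)

    def tick(self):
        # Called once per frame: drain up to the budget, keep the rest.
        processed = []
        for _ in range(min(self.max_per_frame, len(self.pending))):
            processed.append(self.pending.popleft())
        return processed

q = DestructionQueue(max_per_frame=2)
for wall in ["a", "b", "c", "d", "e"]:
    q.request(wall)
print(q.tick())  # ['a', 'b']
print(q.tick())  # ['c', 'd']
```

The extra memory pays for keeping the deferred state around; the budget keeps per-frame compute bounded.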
A lot of devs seem super excited about that, and they say it gives major longevity to the platform. I'm guessing the ability to use much higher resolution textures has something to do with that.
And approximation hardware, I'm guessing - like Ubisoft with Watch Dogs: they were running it on a "comparable" PC. But there is no comparable PC with that kind of HSA APU-like architecture in combination with that kind of RAM configuration (that large an amount of RAM shared by both CPU and GPU at 176 GB/s). In PCs, the GDDR5 speed only benefits the GPU (since it is graphics RAM), and everywhere else the RAM has much lower bandwidth. So really, you'd have to wait for retail units to see what it can really do if they aren't using the actual PS4 dev kits :/
sebbbi is a console dev, and he's just specifically said it's not going to lead to particularly higher resolution textures (over 4GB) because that would also require more bandwidth.
Unreal Engine 4 - Elemental Demo
http://www.gamefront.com/files/21816579/UE4_Elemental_Cine_1080_30_H264.mov
Unreal Engine 4 E3 2012 Elemental demo trailer
http://www.shacknews.com/file/31991/unreal-engine-4-e3-2012-elemental-demo-trailer
Do current PS4 development kits already run close to the metal, or do they run Windows + DirectX ... with initial games being released running through a DirectX emulation layer?
Sure, but I'm just looking at it from the perspective of what they could get up and running fastest... which is probably Windows and DirectX.
Isn't Sony going to be using Open GL, or at least a subset?
Texture bandwidth usage is quite an interesting discussion on its own. As long as you have enough texture resolution to sample everything at a 1:1 texel/screen-pixel ratio, the bandwidth requirement stays exactly the same, no matter how big the textures you are using. Mipmapping takes care of this. The GPU calculates the mip level that provides closest to a 1:1 ratio, and samples only that texture mip level (and the next level for trilinear blending). If there are more detailed mip levels available than are required for 1:1 mapping, these are not sampled at all. So basically, texture resolution beyond the required sampling resolution does not increase the bandwidth usage.
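The mip selection above can be sketched numerically. A minimal model (the function names are mine; real GPUs compute this from screen-space derivatives per the graphics API spec): given how many base-level texels cover one pixel, the GPU picks the mip whose resolution lands closest to 1:1, so the resolution actually fetched is independent of how big the base texture is.

```python
import math

def selected_mip(texture_size, texels_per_pixel):
    """Mip level giving ~1:1 texel/pixel sampling.

    texels_per_pixel: how many mip-0 texels map onto one screen pixel
    along one axis (i.e. the screen-space derivative of the UVs).
    """
    level = max(0.0, math.log2(texels_per_pixel))
    # Clamp to the top of the mip chain (a 1x1 mip).
    return min(level, math.log2(texture_size))

def sampled_resolution(texture_size, texels_per_pixel):
    """Resolution of the mip level actually fetched."""
    return texture_size / (2 ** selected_mip(texture_size, texels_per_pixel))

# Same surface at the same distance: doubling the texture size just
# doubles the texel footprint per pixel, so one mip further down the
# chain is read - the fetched mip has the same resolution either way.
print(sampled_resolution(2048, 8))   # 256.0
print(sampled_resolution(4096, 16))  # 256.0
```

Both cases fetch a 256-texel-wide mip, which is why a 4096 texture costs no more bandwidth than a 2048 one for that surface; only the closest surfaces, where the finer mips actually get sampled, pay for the extra detail.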
If you have various texture layers, can't you just merge them once before you render with them the first time? Or doesn't that work precisely because you want to keep them in memory separately? But then, if you keep them in memory separately and merge them when rendering, is that to conserve memory? Because in that case, using more memory by merging layers before rendering could decrease bandwidth use.
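The trade-off asked about can be illustrated with a toy model (names and numbers are mine, purely for illustration): blending the layers at render time costs one fetch per layer per frame, while baking them once into a composite costs extra memory but only one fetch per frame afterwards.

```python
def blend(dst, src, alpha):
    # Simple alpha blend of two scalar "texels".
    return dst * (1.0 - alpha) + src * alpha

def render_unmerged(layers, alphas, frames):
    """Blend every layer every frame: len(layers) fetches per frame."""
    fetches = 0
    for _ in range(frames):
        texel = 0.0
        for layer, a in zip(layers, alphas):
            texel = blend(texel, layer, a)
            fetches += 1
    return texel, fetches

def render_merged(layers, alphas, frames):
    """Bake once into a composite (extra memory), then one fetch/frame."""
    composite = 0.0
    fetches = 0
    for layer, a in zip(layers, alphas):  # one-time bake cost
        composite = blend(composite, layer, a)
        fetches += 1
    for _ in range(frames):
        fetches += 1                      # single composite fetch
    return composite, fetches

layers, alphas = [0.2, 0.8, 0.5], [1.0, 0.5, 0.25]
t1, f1 = render_unmerged(layers, alphas, 60)
t2, f2 = render_merged(layers, alphas, 60)
print(t1 == t2, f1, f2)  # True 180 63
```

Same final texel, 180 fetches versus 63 over 60 frames: more memory spent on the baked composite, less bandwidth per frame. The catch is exactly the one the question raises - separate layers are kept separate when they animate independently (decals, damage, weather), since a baked composite would have to be re-merged every time any layer changes.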