PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

People need to remember that the high-end AMD graphics cards on PC are back-end limited.

That's why, according to most leaks, they've completely revamped the ROPs and various other parts in the next-generation cards.

The current cards have a pretty weak fill rate and other weak points relative to their shader power, and this shows in some benchmarks; the biggest example is running games with SGSSAA. The Kepler cards trash the AMD cards when that is enabled, despite the AMD cards being at least equal in shader power with higher bandwidth.
 
What are the implications, then, for the next gen boxes?
 
The next-gen boxes will generally be targeting 1080p or 720p, if not lower, and they probably won't jack graphics settings up quite so high.

The high-end cards target resolutions double or triple that, with very high MSAA and lighting settings.

I'm also not sure which Kepler cards this is meant in relation to. For example, the 7970 GHz Edition and the GTX 680 tend to switch places in many benchmarks at more reasonable resolutions, which is not an indication of a strong back-end limitation, much less for games that might target sub-HD resolutions on the consoles.
 
Seems like it reconfirms the clock, anyway.

[image: GQ60CyZ.jpg]
 
Odd that it supports DX11.2 and OGL 4.4 when neither the HD7000 nor the HD8000 series seems to support either, at least according to what I could find on AMD's website.

The best I could find is DX11.1 and OGL 4.2 (the wiki suggests OGL 4.3).
 
Well...
The DX11 maximum texture/buffer resolution is 16384×16384, so with an FP32 RGBA buffer you would use 4 GB (sorry, typing on iOS).
Now show me a real-world game case where 16K×16K textures are required. OK, yes, for shoddy programming, a.k.a. the "this 8 MB is more than enough" attitude; we would have been better served with 4 MB and faster hardware elsewhere. Sorry to keep drumming on about this.
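
To sanity-check that 4 GB figure (this is just my own back-of-the-envelope arithmetic, not anything from the post): 16384 × 16384 texels at 16 bytes per texel (four FP32 channels) is exactly 4 GiB.

```cpp
// Back-of-the-envelope check of the 4 GB figure above: a 16384 x 16384
// surface in an RGBA32F (4 x 32-bit float) format.
#include <cstdint>
#include <cstdio>

int main() {
    const std::uint64_t width         = 16384;
    const std::uint64_t height        = 16384;
    const std::uint64_t bytesPerTexel = 4 * sizeof(float);   // RGBA, FP32 each = 16 bytes

    const std::uint64_t totalBytes = width * height * bytesPerTexel;
    std::printf("%llu bytes = %.1f GiB\n",
                static_cast<unsigned long long>(totalBytes),
                totalBytes / (1024.0 * 1024.0 * 1024.0));     // prints "4294967296 bytes = 4.0 GiB"
    return 0;
}
```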
 
Odd that it supports DX11.2 and OGL 4.4 when neither the HD7000 nor the HD8000 series seems to support either, at least according to what I could find on AMD's website.

The best I could find is DX11.1 and OGL 4.2 (the wiki suggests OGL 4.3).

I don't think it says it supports DX11.2+; it's saying that the feature set of their GPU and API is equivalent to, or beyond, DX11.2. That's why they put the '+'.
 
I can't seem to play that swf video link from my Mac. :-/

Here's the Google translated link for the article:
http://translate.google.com/transla...-Konsolen-220102/Specials/PS4-Inside-1084325/

So, it's like what Deep Down is doing with PRT and sparse cone tracing. Also:

As an example of the possibilities of the PlayStation 4's graphics, Chris Ho presented his project. He combines a voxel representation based on a sparse octree with the Partially Resident Textures that GCN chips offer, and which DirectX 11.2 exposes as Tiled Resources. By storing the data in 3D textures, the CPU-intensive generation of the octree can be avoided without giving up significant advantages. A live demonstration of the technology achieved impressive lighting effects and, without major optimisations, currently runs at around 38 fps for the current animation, according to Ho. Generating the voxel skeleton, however, needs another 45 ms (corresponding to about 22 fps).

22 fps while unoptimized so far.
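
For reference on the Tiled Resources side mentioned above, here's a minimal, hypothetical D3D11.2 sketch of creating a tiled texture backed by a tile pool and mapping a single tile into it. The sizes, format and helper function are my own placeholders, not anything from the article or the demo.

```cpp
// Hypothetical sketch of D3D11.2 Tiled Resources (the PC analogue of PRT).
// Assumes a valid ID3D11Device2* and ID3D11DeviceContext2*; values are placeholders.
#include <d3d11_2.h>

void CreateAndMapTiledTexture(ID3D11Device2* device, ID3D11DeviceContext2* ctx)
{
    // A large texture whose storage is not fully committed up front.
    D3D11_TEXTURE2D_DESC texDesc = {};
    texDesc.Width            = 16384;
    texDesc.Height           = 16384;
    texDesc.MipLevels        = 1;
    texDesc.ArraySize        = 1;
    texDesc.Format           = DXGI_FORMAT_BC1_UNORM;
    texDesc.SampleDesc.Count = 1;
    texDesc.Usage            = D3D11_USAGE_DEFAULT;
    texDesc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;
    texDesc.MiscFlags        = D3D11_RESOURCE_MISC_TILED;      // new in 11.2

    ID3D11Texture2D* tiledTex = nullptr;
    device->CreateTexture2D(&texDesc, nullptr, &tiledTex);

    // A tile pool: physical memory in 64 KB tiles that backs the texture.
    D3D11_BUFFER_DESC poolDesc = {};
    poolDesc.ByteWidth = 256 * 64 * 1024;                       // room for 256 tiles
    poolDesc.Usage     = D3D11_USAGE_DEFAULT;
    poolDesc.MiscFlags = D3D11_RESOURCE_MISC_TILE_POOL;

    ID3D11Buffer* tilePool = nullptr;
    device->CreateBuffer(&poolDesc, nullptr, &tilePool);

    // Map one tile at the texture's origin to the first tile of the pool.
    D3D11_TILED_RESOURCE_COORDINATE coord = {};                 // tile (0,0,0), subresource 0
    D3D11_TILE_REGION_SIZE region = {};
    region.NumTiles = 1;

    UINT rangeFlags      = 0;                                   // an ordinary mapping
    UINT poolStartOffset = 0;
    UINT rangeTileCount  = 1;

    ctx->UpdateTileMappings(tiledTex, 1, &coord, &region,
                            tilePool, 1, &rangeFlags,
                            &poolStartOffset, &rangeTileCount, 0);
}
```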
 
Odd that it supports DX11.2 and OGL 4.4 when neither the HD7000 nor the HD8000 series seems to support either, at least according to what I could find on AMD's website.
AMD's GCN beta drivers for Windows 8.1 support DX11.2. It's understandable that AMD hasn't yet updated their website, since Windows 8.1 is still in beta. We have to wait for October (the Windows 8.1 launch) to have official DX11.2 support.

The OpenGL 4.4 feature list seems to mostly mirror the new DX11.1 & DX11.2 features. There seem to be some new features that are not available in DX11.2 (*), but I cannot confirm this, as I don't have a Windows 8.1 development computer to test the DX11.2 SDK on. It will be a pity if Microsoft doesn't release DX11.2 for Windows 7 as a service pack (like they did with DX11.1 and Vista). DX11.2 has so many good features and broad hardware support (Nvidia, AMD and Intel are all behind it). I don't think many professional dev studios (or professional software houses in general) are going to switch to Windows 8 anytime soon, as many have updated all their development computers to Windows 7 quite recently and are completely happy with it.

(*) OpenGL 4.4 async queries that write directly to GPU buffers are a very interesting feature. You could read the query result directly from a compute shader (without CPU intervention and the implied one frame extra latency), and adapt your algorithm based on it. Or you could use it in combination with indirect draw APIs to control draw call primitive counts (or dispatch thread counts).
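
A rough sketch of how the query-buffer part could look with the GL_QUERY_BUFFER binding point added in OpenGL 4.4 (ARB_query_buffer_object); the occlusion-query scenario and the buffer names are my own illustration, not from the post:

```cpp
// Sketch only: assumes a current OpenGL 4.4 context and loaded entry points
// (e.g. via GLEW). Writes an occlusion-query result straight into a GPU buffer
// so it can be consumed by a compute shader or an indirect draw without a CPU readback.
#include <GL/glew.h>

void queryResultToGpuBuffer()
{
    GLuint query = 0, resultBuf = 0;
    glGenQueries(1, &query);
    glGenBuffers(1, &resultBuf);

    glBindBuffer(GL_QUERY_BUFFER, resultBuf);
    glBufferData(GL_QUERY_BUFFER, sizeof(GLuint), nullptr, GL_DYNAMIC_COPY);

    glBeginQuery(GL_SAMPLES_PASSED, query);
    // ... draw the occluder / proxy geometry here ...
    glEndQuery(GL_SAMPLES_PASSED);

    // With a buffer bound to GL_QUERY_BUFFER, the "pointer" argument is an
    // offset into that buffer: the result lands in GPU memory, not in a CPU variable.
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, reinterpret_cast<GLuint*>(0));

    // The same buffer can now be bound as an SSBO for a compute shader, or
    // copied into an indirect-draw buffer to scale primitive counts on the GPU.
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, resultBuf);
    glBindBuffer(GL_QUERY_BUFFER, 0);
}
```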

Nvidia's comments in the OpenGL 4.4 press release also had very interesting information about the forthcoming "mobile Kepler"-based Tegra:
“We’re also working to bring support to Tegra, so developers can create amazing content that scales from high-end PCs down to mobile devices.”
 
The onus is on the DirectX creators and graphics card developers to bring DX11.2 to Windows 7 too. I don't know if it's going to happen, though. Thanks for the insight, as usual.

It is to be expected; the GPU in the PS4 is theoretically quite a bit more powerful. I am not a hardware engineer, nor am I in that camp, and I don't particularly care about the differences.

The question is... is an AMD representative actually saying that one of the consoles is going to have a great performance advantage because of that? :cool:

It sounds odd to me, since AMD's representatives are supposed to be in a love triangle. :eek:
 
Odd that it supports DX11.2 and OGL 4.4 when neither the HD7000 nor the HD8000 series seems to support either, at least according to what I could find on AMD's website.

The best I could find is DX11.1 and OGL 4.2 (the wiki suggests OGL 4.3).

I would imagine that is because Sony wrote the DirectX 11.2 and OGL layers themselves, on top of their low-level OS calls. At least, that's what I understood from a discussion a little while ago with a developer (I think from Ubisoft/Assassin's Creed).
 
Any news on the new PSEye? I wonder if this time it will perform Kinect-like gesture/body recognition... I use the Move controller and I really like it, but in dancing games I would prefer a controller-free experience.
 
Another German article, interviewing an AMD Senior Product Manager: http://www.heise.de/newsticker/meld...et-Unified-Memory-Xbox-One-nicht-1939716.html

Brief notes

There is a translation on NeoGAF.

Although both upcoming game consoles, the Xbox One and the PlayStation 4, are based on AMD hardware, only the PlayStation 4 incorporates hUMA [heterogeneous Uniform Memory Access] for supporting a shared memory space. This was explained by AMD's Senior Product Marketing Manager Marc Diana to c't [a big German IT magazine] at gamescom. This should put the 3D performance of the PlayStation 4 much farther ahead of the Xbox One than many have expected so far. AMD sees hUMA as a key element for drastic performance improvements in combined processors. AMD's upcoming Kaveri desktop processors support hUMA as well.

Behind the scenes, c't could hear from developers that the 3D performance of the PlayStation 4 is very far ahead of the Xbox One.

Back in April, AMD manager Phil Rogers explained to c't that hUMA improves 3D performance in particular. "Game developers have been eager to use very large textures for years. Until now they had to resort to tricks in order to package parts of larger textures into smaller textures. That is because today a texture has to be located in a special place of physical memory before the GPU can process it. With hUMA, applications can work with textures much more efficiently." AMD will give more details on hUMA at its upcoming developer conference in November.

http://www.neogaf.com/forum/showthread.php?t=657221
 
I would imagine that is because Sony wrote the DirectX 11.2 and OGL layers themselves, on top of their low-level OS calls. At least, that's what I understood from a discussion a little while ago with a developer (I think from Ubisoft/Assassin's Creed).

Uhm. No. Sony did not write any DirectX layers at all.

Besides, layers and APIs are bad, right? Everyone knows the Sony devs use CTM (close-to-the-metal) instead of OpenGL layers.
 
Yes, but this is part of how they support developers: making it easier for them by providing a DirectX-compatible layer. I'll find the quote.
 