PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

Status
Not open for further replies.
Have people gone loopy lately? 4 GB of GDDR5 is stretching the tech already; how does 8 GB of GDDR5 on a motherboard (even with stacking) make sense? If it were possible in a $400 retail machine, then forget the PS4: AMD should build a Jaguar laptop with 8 GB of GDDR5 for $500-$600 in 2014 and kill Intel. Why don't we see more mid-range cards with more than 2 GB of GDDR5? Either the total cost is not what people wish for, or Sony will take a massive bath on 8 GB of GDDR5. And if they somehow get it at a reasonable cost, you will see copycat machines quickly come to market.

That is the thing people don't seem to understand about the current PS4 model: it is commodity parts, not exotic parts. Anyone can build it within the first two years of the machine coming out. With the PS3, no one could really copy them because they used Cell and XDR, but the PS4 is all standard parts, so if it is really as cheap as people say, it will be surpassed within one to two years no matter what. I would gladly welcome 8 GB of GDDR5, but it's not going to happen in 2013.
 
Have people gone loopy lately? 4 GB of GDDR5 is stretching the tech already [...] I would gladly welcome 8 GB of GDDR5, but it's not going to happen in 2013.
8 GB of GDDR5 is nearly impossible. The only tiny hope we have from the vgleaks leak is that they don't mention the type of memory. Given all the other details they gave, that's not a small omission. I'm thinking they could be holding on to that info to release it later... more click money.
 
Have people gone loopy lately? 4 GB of GDDR5 is stretching the tech already [...] That is the thing people don't seem to understand about the current PS4 model: it is commodity parts, not exotic parts. Anyone can build it within the first two years of the machine coming out.

You make the PS4 sound like a machine that anyone can build just because it is based on "standard parts", which is not the case.
If copying consoles were easy, or even legal, we would not have just three major console manufacturers.
All consoles are really built from existing hardware/components, for very obvious reasons, but that doesn't mean anyone can replicate them as you say.
 
Why do people keep saying this? Only because of the x86 ISA? An octa-core Jaguar and a Pitcairn combined into a single HSA APU with 4 GB of GDDR5 RAM doesn't sound standard to me. Show me a single PC game that utilizes HSA-based GPGPU algorithms. ;)

Show me a single console game. You're comparing unreleased products to released ones. Kaveri will be on the market by the time these consoles launch, and I've no idea how cross-platform games that utilise the HSA nature of the new consoles will work on HSA-capable (or similar) PCs. It would be interesting to get some developers' views on that.

As for the PS4 being standard: the parts may not be put together in a standard configuration, but the individual components of the Liverpool (woo hoo!) APU all seem to be standard. Jaguar cores will launch on the PC in quad-core form later this year with Kabini, and that same chip will also feature 2-4 GCN CUs. So basically a (very) mini PS4.

Kaveri is more comparable to the PS4 in overall size/power but will obviously use Steamroller cores rather than Jaguar, given that it's targeted at a higher-end market. On the GPU front it will probably be around half the PS4, with 8 CUs running at a higher clock rate.
 
You make the PS4 sound like a machine that anyone can build just because it is based on "standard parts", which is not the case. [...] This doesn't mean that anyone can replicate them as you say.

Indeed, replicating the original Xbox would have been a piece of cake. I think most PC gamers at the time were running very similar setups, i.e. a big single-core x86 CPU, an nForce motherboard and a GeForce 4 Ti. The only really unique feature was the unified memory.
 
Why do people keep saying this? Only because of the x86 ISA? An octa core Jaguar and a Pitcairn combined in a single HSA APU with 4 GB of GDDR5 RAM doesn't sound standard to me. Show me a single PC game that utilizes HSA-based GPGPU algorithms. ;)
Yeah, as pjbliverpool said, the configuration is not standard but the parts are. This is an AMD machine more than a Sony machine. AMD could release a variant of this configuration for laptops in a couple of years with better components if the market is there for it. No matter how you slice it, it is a $400 machine and you're getting $400 parts.

This idea of future-proofing by using a unique configuration of standard components is flawed. Whatever configuration you use, if it is standard parts, it can be produced more powerfully and cheaply in two years. Only by using unique components such as Cell, Apple's A6, or eDRAM could you really differentiate yourself in a way that is hard to duplicate.
 
I think that's the intention. To use cheaper off-the-shelf parts to lower cost. Give low level APIs to developers to give games the performance leap they need.

The idea is not to overtake PCs and tablets in raw performance. They will have overhead because the user will run many other tasks at the same time. It's to provide a stable, long term platform for the devs to work on.

If Sony decide to update their consoles faster, they don't have to try to recoup their h/w R&D investments every generation. They may need to ensure their low level APIs are "portable" (or at least not hard to port) across generations.

Eventually, they may pursue streaming games more seriously.
 
Show me a single console game. You're comparing unreleased products to released ones.

That's exactly the point. It's not used by developers for PC games or console games. Speaking of "standard" is absolutely inappropriate.

Heterogeneous architectures represent a paradigm shift, just like multicore architectures did a few years ago. In my eyes it's a mistake to reduce HSA to the ISAs it's using (x86 and GCN). HSA is more than the sum of its parts; it allows for completely new algorithms that are impractical on common homogeneous processor architectures. Of course HSA is not as exotic as Cell, which was, by the way, the harbinger of the heterogeneous evolution, but speaking of "standard" is wrong. I'd prefer the term "available": HSA is available, it's not exotic, but it's far from being standard. AMD is obviously using the next-gen consoles to make HSA standard for video games.
 
VGLeaks details Orbis's 'Dual Camera':
http://www.vgleaks.com/orbis-dual-camera-whats-this/

It contains a pair of wide-angle cameras, each with a resolution of 1280 x 720 pixels (720p).

Sound is processed by a 4-microphone array working at 48 kHz.

The device can perform some non gaming tasks such as:

- Recognize the user and log them in to the system

- Video chat

In the gaming field it supports head and hand tracking as new game inputs.
Currently it's not clear whether this device will support body tracking in the future like Kinect, but it's almost certain it will be bundled with every Orbis system.
 
Are there any 120 Hz 720p webcams? I expect Sony to go proprietary with their Exmor R sensor, but 120 Hz might be pushing it. Stereo 3D vision will provide the best AR possible: EyePet 2 will actually exist 'in' the room, hiding behind objects. I hope Sony has the sense to provide a means to capture light probes for realistic lighting.
 
Sounds like it will be an evolution of the Move system; with reliable hand tracking they can lose the orbs at least. Add an analogue stick to each Move 2.0 wand and it would be a decent upgrade, imo. I'm only really interested if VR support on the PS4 is also part of the picture, though.
 
Stereoscopic video doesn't actually provide depth information AFAIK. It would improve something like object tracking, I suppose, but there's no real way to construct a 3D model of the space à la Kinect.
 
It does provide depth information; that's how human vision works. It just requires some processing, more than Kinect's structured-light scanning, and it's not as precise, though for the precision gaming requires it comes down to good enough camera resolution and good enough software to detect the edges of objects. That shouldn't be a problem given Orbis's processing power.
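The processing referred to here is standard pinhole triangulation: once the two images are rectified and a matching feature is found in both, its depth follows from Z = f * B / d, where f is the focal length in pixels, B the distance between the lenses, and d the disparity in pixels. A minimal sketch, with all camera numbers assumed for illustration (they are not actual Orbis specs):

```python
# Depth from stereo disparity via pinhole triangulation: Z = f * B / d.
# Camera parameters below are illustrative assumptions, not Orbis specs.
import math

def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth in metres of a point seen with the given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical 1280x720 camera with a ~60 degree horizontal field of view:
# focal length ~ (width / 2) / tan(FOV / 2).
focal_px = (1280 / 2) / math.tan(math.radians(60) / 2)
baseline_m = 0.15  # assumed 15 cm between the two lenses

for d in (80, 40, 10, 5):
    z = depth_from_disparity(d, focal_px, baseline_m)
    print(f"disparity {d:3d} px -> depth {z:6.2f} m")
```

Note how depth is inversely proportional to disparity: halving the disparity doubles the computed distance, which is why nearby objects are resolved far more finely than distant ones.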
 
Stereoscopic video doesn't actually provide depth information AFAIK.
Your eyes beg to differ. :p It's certainly possible (MS has tech for extracting 3D from 2D too), but I don't know how processor-intensive that would be and whether Sony can pull it off on the PS4. It could be that a scan of the room is performed in a preparation phase, the 3D data extracted, and then compositing performed, with subsequent 3D only needing delta changes to be tracked.
 
Stereoscopic video doesn't actually provide depth information AFAIK. Would improve something like object tracking, I suppose, but there's no real way to construct a 3D model of the space ala Kinect.
Yes, you can do markerless mocap with two or more PS Eyes; there are professional systems doing exactly that. It's better with markers, though: some high-contrast cloth wrist and ankle bands would make it flawless and as easy on the processing as Move, with no need for any smoothing, at 120 Hz and with zero lag.
 
I should have been clearer. Obviously binocular vision allows us to perceive depth, but that's not the same as generating a 3D point cloud for a game to use at 120 Hz. I'm no cognitive scientist, but effectively using the stereoscopic information taken in by our eyes requires a lot of processing; it is the result of years of development and experience, uses many points of reference, and is constantly adjusted and "corrected" as we move through a 3D environment. That's a really difficult problem, especially compared to simply recognizing an object (like a face, Move controller, or AR card), which could see obvious benefits from dual cameras, higher resolution and increased frame rates.
 
Kinect 2 in the 2010 PDF is just a "dual HD camera" too, not Kinect 1 technology. You can build a depth map like Kinect 1's from a dual camera.
 
Yes, you can do markerless mocap with two or more PS Eyes; there are professional systems doing exactly that. [...]

How do you position the cameras? In a prepared space, with two cameras positioned to give perpendicular views of a volume whose distances and topography are all known, it gets much easier. I'm guessing Sony doesn't expect people to mount one camera on the TV and another on the wall next to the couch and then punch in a bunch of measurements. If both cameras are mounted in a single unit that looks like a Kinect, it strikes me as a big challenge.
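The single-housing concern above can be quantified: with both lenses in one unit the baseline B is small, and differentiating Z = f * B / d shows the depth change per pixel of disparity grows as Z² / (f * B), so precision degrades quadratically with distance. A rough sketch with assumed numbers (not actual Orbis specs):

```python
# Depth quantisation of a stereo rig: from Z = f * B / d, a one-pixel
# disparity step corresponds to roughly dZ ~= Z^2 / (f * B) at distance Z.
# All parameters are illustrative assumptions, not Orbis hardware specs.
import math

def depth_step_m(z_m: float, focal_px: float, baseline_m: float,
                 disparity_step_px: float = 1.0) -> float:
    """Approximate depth change for one disparity step at distance z_m."""
    return (z_m ** 2) * disparity_step_px / (focal_px * baseline_m)

focal_px = (1280 / 2) / math.tan(math.radians(60) / 2)  # assumed ~60 deg HFOV
baseline_m = 0.15  # assumed 15 cm lens separation in one housing

for z in (1.0, 2.0, 3.0, 4.0):
    step_cm = 100 * depth_step_m(z, focal_px, baseline_m)
    print(f"at {z:.0f} m, one pixel of disparity spans ~{step_cm:.1f} cm of depth")
```

At typical living-room distances the per-pixel depth step is several centimetres, which is why practical stereo trackers rely on sub-pixel matching, and why a short baseline makes full-body tracking harder than head or hand tracking up close.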
 