Holy Wall of Text: My impressions post-PlayStation 4 (PS4) unveil

Acert93

If you hate to read and/or have already decided which platform to buy, don't bother reading this thread. GO AWAY!

These are my random thoughts concerning the PS4 unveiling. I had a couple of days to pick through some posts, blogs, and news pieces and sample a bit of the media. I didn't really know what thread to toss this in, but since I made an effort to organize my thoughts into Good/Bad/Ugly categories I decided to start a new thread. Maybe as more details are released I will update.

Overall I was quite pleased with what Sony presented. I was pleasantly surprised by the 8GB of GDDR5, and the system looks to be a nice, holistic, and stable platform. Depending on the software (KZ looked horrible imo; Infamous has me interested) and features (free online?) Sony has me interested in switching platforms. This would not be the first time I jumped (I went from Nintendo to Microsoft last go-around) so I am open to being won over. While I thought Sony's presentation skills stunk, the platform itself speaks loving words to core gamers. Seeing industry stalwarts like John Carmack (and to a lesser degree Epic) praising the machine, and the machine hitting many of the "wish lists" from devs at Crytek and DICE, made me happy as it shows Sony has been listening to developers more. As many of their choices reflect many of my own postings over the years here at B3D, I would like to (perhaps fancifully) believe they were listening to gamers like me as well.


The Good

# RAM. I love this design decision as Sony went all in. It is still hard to believe: 8GB of GDDR5 clocking in at 176GB/s. Every generation (every, single, generation, ever) developers have complained about the amount and bandwidth of the system memory. When designing—and cutting corners to keep costs under control—memory has been the biggest victim. Not this time. To put this into perspective, Enthusiast and High End single-chip GPUs clock in at 2GB-3GB of memory, and PC games, bloated OS and all, are not in the practice of using more than 8GB of system memory. Sony aimed for the high end in memory density and then kicked into high gear with a Unified Memory Architecture (UMA) with high bandwidth (for comparison, GPUs in the PS4's class have slightly over 150GB/s). In theory the PS4 has enough system bandwidth to access over half of the memory each frame (30Hz game). Of course that isn't a real world scenario as much of the memory will be "on demand cache" and much of the bandwidth will be used for the relatively small framebuffer, but the point remains: the PS4 has a lot of memory and a considerable amount of bandwidth to feed the APU. The memory is forward looking and the most glaring "next generation" part in the PS4, as it is 16x denser, with almost 4x the bandwidth, than the PS3's. This raises the bar for PC gaming and hedges against the onslaught of high-bandwidth solutions coming in the next couple of years. I don't think the memory will remain cutting edge for long, but it certainly will help the PS4 age gracefully.
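For those who want the napkin math, a quick sanity check on that bandwidth-per-frame claim (the 30Hz budget is the assumption from the paragraph above):

```python
# Back-of-envelope check: how much memory can 176 GB/s touch per frame?
bandwidth_gb_s = 176.0   # announced GDDR5 peak bandwidth
memory_gb = 8.0          # total unified memory
fps = 30                 # assuming a 30 Hz game

gb_per_frame = bandwidth_gb_s / fps
print(f"{gb_per_frame:.2f} GB touchable per frame")       # ~5.87 GB
print(f"{gb_per_frame / memory_gb:.0%} of total memory")  # ~73%
```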

We don't know if this is 16 chips (4Gb) in clamshell x16 mode or 8 chips (8Gb) in x32 mode (which don't appear to be in volume production yet), but all that doesn't really matter. Sure, it is going to be very, very expensive—4GB was going to be expensive. Further, it will make the board design more complex and will almost certainly cause hurdles in size/cost reduction (PCB layout, reworked memory controllers to adjust to fewer chips, lack of GDDR5 volume, etc). Sony surely explored Silicon Interposers and Stacked Memory as methods to address the bandwidth and density issues, yet it seems they must be confident enough in the volume of high-density GDDR5 being produced for GPUs that they went all in. This is an interesting gamble on the production side, but as a gamer this move gets two big thumbs up. And the gamble may not be such a horrible move by Sony. It gives their platform a lot of room to grow and offers enough space to run background tasks. If their goal was to make developers happy, this goes a long way in terms of good will. It will also help address the issue of slow media storage (i.e. RAM serving as a cache for large chunks of the game to reduce subsequent load times). As for cost, if Sony was making a gamer-oriented console they would need to invest somewhere; due to issues discussed later the CPU and GPU were going to see reductions in silicon real estate, and BDR is no longer the expensive Trojan horse it was in 2006. Gone, too, is the esoteric XDR. These changes in emphasis provided an opening to invest more in the long-neglected system memory. If there was one design choice Sony made that clearly signaled they have not abandoned the core gamer, it was their choice in system memory.
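As a side note, here is a quick sketch of why either chip configuration lands in the same place on paper; the 5.5Gbps per-pin data rate is inferred from the announced 176GB/s figure, not something Sony has confirmed:

```python
# Either rumored GDDR5 layout yields 8 GB on a 256-bit bus.
def layout(chips, density_gbit, effective_pins_per_chip):
    capacity_gb = chips * density_gbit / 8
    bus_bits = chips * effective_pins_per_chip
    return capacity_gb, bus_bits

print(layout(16, 4, 16))  # 16x 4Gb clamshell x16 -> (8.0, 256)
print(layout(8, 8, 32))   # 8x 8Gb in x32 mode    -> (8.0, 256)

# A 256-bit bus at an inferred 5.5 Gbps per pin reproduces the headline number:
print(256 * 5.5 / 8, "GB/s")  # 176.0
```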

Larger worlds with more going on in them? Sign me up!

# GPU. OK, I really like Sony's investment in system memory, but let's not play coy: how much silicon Sony invested in the GPU is a good indication of what kind of consumer Sony was chasing with the PS4. The GPU has been asked to do more and more over the years as many tasks traditionally done on the CPU were moved onto the GPU (T&L, Vertex Shaders, etc). NV's Fermi/Kepler architectures and AMD's GCN architecture were full-blown commitments to GPGPU, further burdening the GPU with tasks. This meets an uncomfortable inflection point of diminishing returns. Core gamers don't like to admit it, but diminishing returns have been in our face for a while. As pixel density (resolution) increases, the typical eye notices less and less benefit. The same applies to textures. As geometry increases, what was formerly glaringly obvious (a tire moving from 10 points to 100 looks much smoother) is now not so obvious (the same tire from 100 to 1000). Raw performance may go up 10x but the visual impact is not nearly as impressive. To make a visual leap traditionally associated with "next generation" hardware, Sony was presented with the dilemma of needing both a more powerful and efficient GPU and a shift of more compute tasks onto the GPU. The fact the PS3 is 7 years old would, at first blush, make this appear to be a "solved" problem: surely enough time has passed for manufacturing and architectural advances to wave these issues off. Unfortunately power and heat constraints hit a wall with the last consoles, the power improvements of new processes have been less than impressive, and shrinks have been slow to roll out.

In a nutshell, due to power constraints, limits on cooling and form factor, Sony's desire to move to an APU (SoC), the increased fabrication costs of chips, concerns over the pace and cost of future process reductions, and already being at the "ceiling" for consumer price points in a global recession, it was unlikely that the PS4 GPU would be larger than RSX (over 250mm^2; Xenos was slightly larger). Sony could not solve this problem with the obvious answer (a bigger, faster GPU). The question was how far Sony would retreat from the investment of previous generations. Was Sony going to concede to Nintendo's vision of home entertainment?

A survey of the market shows Nvidia's Kepler is around 300mm^2, AMD's Tahiti GCN over 350mm^2, and Pitcairn over 200mm^2 (212mm^2). This indicated that gamers concerned about Sony's commitment to the progress of core gaming would be using Pitcairn (20CUs) as a general benchmark. Sony did not disappoint: the PS4 has a GPU with 18CUs (1.84TFlops; the APU probably has 20CUs with 2 disabled for yield purposes) and 72TMUs (57.6B texels/s) at 800MHz. This is not the place to gush over AMD's GCN architecture, but needless to say it is a cutting-edge graphics architecture, proven in the PC space, with robust, industry-leading Compute abilities and a solid balance of features, performance, and efficiency (p/Watt, p/Area, p/$). We don't know the full extent of the GPU's customizations, but it is not unlikely that Sony and AMD worked out some HSA features for their APU.
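The headline numbers fall straight out of the CU count and clock, for anyone who wants to check the math (64 ALUs per CU is standard GCN):

```python
# Deriving the quoted GPU figures from the CU count and clock.
cus, alus_per_cu, clock_ghz = 18, 64, 0.8    # GCN: 4x SIMD-16 per CU
gflops = cus * alus_per_cu * 2 * clock_ghz   # x2 for fused multiply-add
print(gflops, "GFLOPS")                      # 1843.2 -> the quoted 1.84 TFLOPS

tmus = 72                                    # 4 TMUs per CU x 18 CUs
print(tmus * clock_ghz, "Gtexels/s")         # 57.6 -> the quoted texel rate
```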

Gamers can be pleased: Sony did not retreat from their GPU investment with the PS4. Gamers are getting a cutting edge, modern GPU architecture along with robust performance.

That isn't to say there aren't some questions or concerns. The big questions are how many ROPs the PS4 will have and what changes to the architecture were made. Was additional cache added? The concerns relate to the nature of the "14+4" division of CUs mentioned in leaks. A real concern has to be latency, with rumors indicating latency in excess of 50% worse than PC GPUs; this rumor has legs due to the memory controller having to service both the CPU and GPU (the CPU is less latency tolerant so it will certainly be the prime client) and any concessions Sony had to make to get so much memory on the PS4.

There are also some saving graces. There will inevitably be those saying the PS4 is far, far behind the PC space and will be outdated before it ships. At first blush this seems true; Pitcairn is only a high end GPU (the PS4 being somewhere between Pitcairn and Cape Verde), behind Enthusiast models like the 7970 and 680GTX. As of this week those models have been "pushed down" the performance ladder by NV's Titan GPU and, of course, there are always multi-board/multi-chip GPUs. And of course there are the Fall 2013 refreshes and the 2014 new models. What is left out of this discussion are two important factors. First, PC GPUs have skyrocketed in size, power, and heat since 2005 when the Xbox 360 was released, whereas console budgets have not expanded (in the cost of the machine or the conditions to keep it cool). Essentially the PC market has stratified at the high end. This is partly due to the problems mentioned earlier regarding performance on new process nodes: size reduction has slowed, and the speed and/or power improvements each node offers have diminished. To continue the pace of progress the PC space could no longer expect a new process to offer the speed bump necessary for a set chip size, so larger, hotter chips have been required. Those tactics don't work in console budgets. The flip side of the coin is install base; high end PC GPUs are a very small segment of the market, with the baseline lagging far behind. This last aspect is being exacerbated by the popularity of APUs (see: Intel, the leading shipper of GPUs), the increased market share of notebooks and netbooks, and now the emergence of tablets. The reality is that between the deceleration of process manufacturing gains and the emergence of low-power computing in the PC space, it is unlikely that the PC gaming space is going to rush past the PS4. In fact, with Sony's decision to go with a robust, feature-rich GPU (DX11.x class) with nearly 2TFlops of performance and 8GB of fast memory, the bar has been raised for the PC space. For those still upset about consoles and how they hold back PC gaming I must ask: how long before at least 30% of Steam survey respondents have either 8 lite x86 cores (or 4 fat x86 cores) paired with a 2TFlops GPU? My guess is quite a few years, which speaks well of Sony's memory and GPU choices. PC gamers can be thankful that Sony not only raised the bar to a reasonable level but also made technology decisions that have good synergy with the PC market.

# Ports. USB3—check. Ethernet—check. Bluetooth—check. WiFi—check. HDMI—check. Optical S/PDIF—check. Sony has a long history of offering a robust number of ports covering a core of current and future-looking technologies, and the PS4 is par for the course. One encouraging change Sony introduced with the PS3 was support for standard USB devices (like FF wheels) and, while not confirmed, it appears Sony may continue that tradition.

# BDR 6x. Games have to be distributed somehow. Optical media stores a lot and is cheap. The downside is it is slow. How slow? While the system memory footprint increased 16x, BDR speed, which was already pathetically slow, only increased at most 3x; the over-5:1 ratio does not speak kindly to how much of a limiting factor optical media has become. Sony has addressed that in a number of ways. Obviously encouraging developers to start gameplay immediately while content is cached to the large system memory reserve is one way. Offering a 6x BDR drive helps too. The aforementioned memory also helps in that it creates scenarios where large chunks of a game can be loaded in the background; initial load times may not be avoidable but subsequent ones could be muted.
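To put numbers on "pathetically slow," here is the napkin math; the 5GB chunk is just a hypothetical figure for a cached install slice:

```python
# 1x Blu-ray is 36 Mbit/s, i.e. 4.5 MB/s.
bd_6x_mb_s = 6 * 4.5                    # 27 MB/s for the PS4's 6x drive
chunk_gb = 5.0                          # hypothetical chunk cached to RAM
seconds = chunk_gb * 1024 / bd_6x_mb_s
print(f"{bd_6x_mb_s} MB/s; ~{seconds / 60:.1f} min to stream 5 GB")  # ~3.2 min
```

Once that chunk is resident in RAM (or mirrored to the HDD), subsequent loads never touch the disc again, which is exactly the muting effect described above.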

# HDD. An HDD standard is a no-brainer as it allows all sorts of downloadable content and distribution as well as improved gameplay (large game saves). The other upside is game caching for improved performance. While HDD technology isn't getting significantly faster generation after generation, it is still faster than optical drives and has a huge advantage in seek time.

# Extra CPU for OS. Hopefully having the OS running on a separate chip will keep dashboard/blade performance responsive while leaving valuable core resources to developers.

# Used Games. I sympathize with developers. A lot. And I am not a fan of the GameStop business model. But the "licensed" software route hurts consumers' ability to rent, trade, and travel with their legally purchased software. And let's be blunt: see how Sony dropped BC? Yep, you just lost all your PSN games. Consoles, good or bad, are still unlike the PC in that the PC is a backwards-compatible platform. Sony and MS can diminish your previous purchases every generation if they wish. Consumers have been unfairly targeted with DRM tactics and it is time someone stood up for them. I am not sure Sony is doing that (their response is vague), but at least it was not a full-blown concession of blocking all second-hand sales.


The OK

# CPU. 8x Jaguar cores with 4MB of L2 cache will be under 80mm^2 on 28nm, not to mention very low power. This is very good. It is also very good that while on-paper peak performance has dropped, real performance in software has on average increased significantly (and those tasks Cell did very well at will be well suited to GPGPU). So this is all Good; why, then, is the CPU in the OK category? Cell was over 230mm^2 (8 cores) and even Xenon was bigger (over 160mm^2, 3 cores with 1MB of total L2 cache), so we are seeing a reduction in silicon footprint and a flat count in cores. The PS4 will be on the market a long time, so it seems curious that a little more budget was not put into the CPU. While the Bulldozer/Piledriver/Steamroller power envelope made those poor candidates for the SoC, I wonder why 12 or 16 Jaguar cores were not chosen. Or a push for a wider SIMD unit (Jaguar runs 256-bit AVX ops as two passes through its 128-bit units, so a full-width implementation would double peak throughput); real-world workloads might see a double-digit improvement for a small cost in chip area. If a custom SIMD design were off the table (cost, time) and the core count could not inflate significantly, I have to wonder why a single "fat" x64 core was not chosen to go along with the sea of smaller Jaguars (e.g. a big-little arrangement of 1 Piledriver and 8 Jaguar cores). Code that is severely restricted by serial performance could be delegated to the fat x64 core while keeping all code on the same general feature set.
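To illustrate the "peak would double" point, a rough sketch; the clock is a placeholder assumption since Sony has not announced one, and the per-cycle figure is Jaguar's commonly cited two 128-bit FP pipes:

```python
# Hypothetical Jaguar peak-FLOPS sketch; the clock speed is assumed, not announced.
cores = 8
clock_ghz = 1.6              # placeholder assumption for illustration only
sp_flops_per_cycle = 8       # Jaguar: 128-bit FADD + 128-bit FMUL pipes
peak = cores * sp_flops_per_cycle * clock_ghz
print(peak, "GFLOPS peak")                         # 102.4
print(peak * 2, "GFLOPS with doubled SIMD width")  # the wish-list scenario
```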


The Unknown

# 14+4. This is still an unknown, and recent information indicates that all CUs can be used for rendering, compute, or any mix (which is good). Hopefully Sony has not reserved or neutered 4 CUs.

# 3D, Split Screen. I cannot focus on 3D content well, but the idea of using 3D display technology to give each split-screen player their own full-screen view is interesting. I was hoping Sony would usher local multiplayer gaming back in with a bullet point on this. That said, the console does not appear fast enough to really push 1080p 3D.

# PC peripherals. PC CPU—check. PC GPU—check. PC ports—check. Sony, just a little FYI: that little thing called a XIM already allows nearly PC-perfect mouse controls on your beloved console. Just support it out of the box, mandate that all FPSes support mice, and then offer a mouse filter option. If not, no biggie.

# ROPs. When the initial leaks indicated the PS4 would have more compute power but the Xbox embedded memory, I proposed that one way the new Xbox could provide more visual punch was pixel fill rate (assuming high bandwidth and fillrate). On the Xbox side that does not appear to be panning out (rumors indicate 16 ROPs and no more bandwidth than the PS4). I have seen 2 rumors for the PS4: it may either have 32 ROPs like Pitcairn or 18 ROPs. I believe the second number was incorrectly deduced by assuming a 7850 starting point and assuming that a reduction in CUs results in reduced ROPs. The bandwidth figures of the PS4 indicate 32 may be likely. If in fact the PS4 has 32 ROPs and the Xbox has 16, this may be a more significant advantage than the compute units.
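The fill-rate gap those rumors would imply is easy to quantify (800MHz assumed for both parts here purely for illustration):

```python
# Peak pixel fill rate = ROPs x clock.
clock_ghz = 0.8   # assumed for both consoles in this comparison
for rops in (32, 18, 16):
    print(f"{rops} ROPs -> {rops * clock_ghz:.1f} Gpixels/s")
# 32 -> 25.6, 18 -> 14.4, 16 -> 12.8
```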

# Online. Pay to Play would sink this major bullet point right past Bad into Ugly territory.

# Camera. How well does it work, and will it be a standard pack-in? If not a standard pack-in, then why even bother, Sony? Here is a tip: list the console at $399 but mandate that all launch consoles include the Camera pack-in for a retail price of $449. In Fall 2014 (if it has justified itself) you can announce that going forward the Camera will ship with every PS4 and include a bigger HDD; this new bundle can be $449 while the smaller-HDD-with-camera bundle moves to $399.

# SIMD. So are the SIMDs beefed up in the Jags?

# OS Features. Based on the Yukon leak it seems the new Xbox will have DVR, local server streaming, and a host of instant OS features built into the platform (costing nearly 3GB of memory). How many of these services and features are built into the PS4? Which ones are planned?

# RAM Reserved. Which leads to memory. It appears MS is aiming high with 3GB reserved. With the HDD being used actively for streaming, this should allow ample DVR space during gaming sessions. The question is how much Sony has reserved—which will impact what features can run in the background.


The Bad

# Move. Sony had a patent for a "break out" controller which was essentially 2 Move wands. I think the Move technology and implementation were solid and a big step above waggle (Wii) without losing core game support (unlike Kinect). As a standard pack-in of 2 wands, this could have been a big perk, integrating motion gaming gracefully into core gaming. I am very disappointed this did not occur.

# Latency. The rumor is the PS4 has higher-than-expected latencies. This makes sense as the CPU and GPU share a memory controller, and the CPU will be slotted as the primary client due to being less latency tolerant. That, mixed with possibly relaxed timing thresholds on the GDDR5, could impact performance.


The Ugly

These are conjectures, not factual statements. But if MS is right about some of these, it will significantly impact how the "positive" impression plays out where it matters: in the marketplace. Make no mistake about it, Sony is in fiscal straits. They NEED the PS4 to be a huge success. Sony is actively betting that each of the following is NOT right.

# Microsoft is right about performance balance. Assuming MS has traded compute units for memory, what if MS is right that latency, specifically low-latency on-chip communication and memory, is the key to extracting the best performance from tomorrow's most important software techniques on modern GPUs? The PS4 will have a bit more latency than its PC counterparts; if MS has actually found a better "bang for transistor buck" by dropping execution units for SRAM, things could get uncomfortable for Sony. E.g. virtual texturing budgets indicate most if not all of a texture budget can fit into 32MB of memory, and 32MB should be large enough for some framebuffer organizations (and hence post-processing). And while only speculated, MS could actually offer more in terms of peak CPU flops (2x more is rumored). This may only net a 5-10% improvement in aggregate performance on game code, but it may alleviate some of the CPU bottleneck (the more smoothly your CPU feeds the GPU the less often the GPU will be idle), it could mean getting more out of each CPU core (a benefit per Amdahl's Law), and it could save some work moving tasks over to the GPU for Compute. Now factor in the advantage of DDR3 for the CPU and MS may have a slightly faster CPU solution and an every-bit-as-good GPU, if their bets are right. This is uncomfortable for Sony because it appears MS's rumored architecture will be cheaper, much cheaper, than the PS4's. If the extra hardware MS has included in their console (DMEs, Display Planes, uber audio processing, blah blah blah) is worth its salt and does anything to help close the gap, the "brute force PC" approach of the PS4, while very accessible and developer friendly, may be more of a paper tiger against what appears to be an outmatched Xbox. We just don't have enough information yet, but the early signs are encouraging: the PS4 is a really solid piece of hardware that should have a nice lifespan.
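On the virtual texturing / framebuffer point, the 32MB figure is at least plausible at 1080p; the G-buffer layout below is a made-up example, not a leaked spec:

```python
# Would a 1080p deferred setup fit in 32 MB of on-die memory? A rough budget.
width, height, bytes_per_pixel = 1920, 1080, 4   # e.g. RGBA8 / 32-bit depth
rt_mb = width * height * bytes_per_pixel / 2**20
print(f"{rt_mb:.1f} MB per render target")                   # ~7.9 MB
print(f"{rt_mb * 4:.1f} MB for 3 G-buffer targets + depth")  # ~31.6 MB, a tight fit
```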

# MS is right about diminishing returns. If Sony and MS have the same CPU architectures, they can run the exact same game at very similar performance levels with the bottleneck shifting to the GPU. The good news for gamers is that GPU performance can scale with features and resolution. If Sony encourages developers to target 1080p, and the next Xbox is truly 33% slower, the performance deficit is easily remedied by resolution scaling. In each of the following scenarios only about two-thirds of the pixels are drawn compared to 1080p (the quick check after the list confirms the math):

1536*864 (80% * 80% = 64%)
1280*1080 (66% * 100%)
1920*720 (100% * 66%)
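A quick check of those figures against the 1080p baseline:

```python
# Pixel counts relative to 1920x1080.
full = 1920 * 1080
for w, h in [(1536, 864), (1280, 1080), (1920, 720)]:
    print(f"{w}x{h}: {w * h / full:.0%} of 1080p")
# 1536x864: 64%, 1280x1080: 67%, 1920x720: 67%
```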

Of course it gets complicated when comparing the entire product (33% less compute and triangle setup, possibly half the pixel fillrate, etc), but the point is diminishing returns may mute the difference in performance. Pixel counters will quickly point out the difference in still shots, but there is a real question of how noticeable these differences will be—and how much they will matter—to the 150M+ target audience. The fact is that while HD display penetration continues to grow at an exceptional rate, many consumers still connect to displays that are not True or Full HD.

# MS is right about consumer desires. Hard core gamers like to pick on Kinect and, considering its inability to interface with the industry's most popular games and genres (you cannot even move in an FPS!), much of the criticism is valid. There is always a concern that a new control method is not a forward or, minimally, lateral move, but a step back. For many genres Kinect is not just a step back but completely unusable. Factor in that it could be laggy, that developers could not figure out good menu interfaces (IMO Ubi's Your Shape 2012 did it right with 3D push buttons), and the general Xbox dashboard lag on top of everything else, and it was a sub-par experience.

I don't own a TV or watch TV, but the idea that the Xbox could record hours of TV (or gaming) and offer robust built-in video editing and Facebook and Youtube integration, all at a cheaper price with the Kinect 2 interface and Skype, could be the kind of market appeal that catches fire if it is intuitive and "just works." We saw it with the Wii—better hardware doesn't always mean better sales—and MS may have struck a balance of supporting all the major PS4 software while appealing to all kinds of non-gamers and media consumers whom the PS4 will be too expensive for and will not be targeting.

# MS is right about price. It appears almost certain that, unless the leaked Xbox specs are incorrect, the PS4 is going to have a much, much higher BOM, maybe in excess of $100. DDR3 is much cheaper than GDDR5, the ESRAM is likely smaller than the 6 compute units it displaces, and the Xbox does not appear to have secondary processors to run the OS. Other items are uncertain (will the PSEye be included with the PS4 as standard? What changes have been made to the Xbox controller?). In the end the Xbox will have a simpler PCB layout, likely draw less power, and require a simpler cooling solution. MS could very well command a $100 advantage at retail for years, which will impact movement into price-sensitive segments. With a lower price ($299?) and the ability to target more casual consumers with mainstream features like background DVR, Kinect 2 media controls, Skype, and the like, price may become a significant factor in market share. If MS is playing all the same game software as the PS4, and 90% of consumers cannot see the difference (due to display limitations or attention acumen), has "better" services and apps/integration, and tops it all off with a commanding price advantage due to BOM, Sony's bet could come back to bite them: servicing the core gaming market, and paying a technological premium for a resolution advantage, may damage their market share and profit margins.

Of course, if PSN online gaming remains free and MS insists on charging for online play, XBL may see an exodus of core gamers who will justify the upfront unit cost by long-term Gold savings—the extra technology under the hood being the cherry on top. And that would be my 2 nuggets of advice for Sony: target 720p to force MS to make visual feature concessions (not resolution concessions, which will be less important/noticeable), and hit MS where it hurts (the pocket) by attracting core gamers with free online play. Sony has already lost on price, but where they can win is on perceived value. Not gutting used games is one way they have maintained the perception of value. Attacking value from the front (a better product on screen) and over the long term (free online, used games) could even sway fickle, price-conscious consumers to see the PS4 as the better value.
 
I'm not sure I agree that the Xbox 720 will have a simpler PCB layout. I think it all hinges on how Sony does the GDDR5 in that regard; the 720 does seem to have more parts (2 memory controllers / pools of RAM, etc).

Based on the leaks I don't think I can agree with the memory controllers/PCB comment; the ESRAM is rumored to be on-die memory, essentially a cache. While a CPU with a last-level cache and such takes additional design effort, it doesn't typically bloat the PCB. The new Xbox, if it has 8GB of DDR3 and 32MB of ESRAM on die, should be a fairly simple design (further, the leaked docs indicate unified memory tables, so the "2 pools" is also a misnomer). Depending on the densities used it, too, could end up with a ton of chips on the PCB, but unless MS has a number of "helper" chips off the SoC it should be a simpler design and also have a simpler cooling solution.

We will see--Sony are pretty keen at designing their PCBs.
 
I find it odd how people claim the supposed GPU advantage would only be a minor increase in resolution. What if the supposed GPU resources were used to render more cars, pedestrians, and whatnot in Grand Theft Auto? The supposed GPU resources could also be used for physics, particles, and whatnot... Then the comparison isn't between resolutions but between the content seen. It's pretty easy to see if the city is barren and void or full of life.

Maybe this is a good thread for this. I'm really happy about the PS4; it sets the bar really nicely. The next-gen consoles and the stuff running on them will be purely awesome. I'm sure both Microsoft and Sony will bring their A game next Christmas. The focus might be different, though (games versus services).
 
I think it's been demonstrated that people are much less affected by horizontal resolution reductions than by vertical ones, so a solution for Microsoft to combat its raw power disadvantage could be to simply reduce horizontal resolution dynamically in order to maintain frame rate (such that the average consumer can't notice). You might have your chart reflect a possible 1280x1080 resolution for difficult content/scenes, with display planes used to maintain native res on overlays and the OS.
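For the curious, a minimal sketch of that dynamic-resolution idea: shed horizontal pixels when a frame runs long, claw them back when there is headroom. The thresholds and step size are arbitrary illustrations, not anything from the leaks:

```python
# Toy dynamic horizontal-resolution controller targeting a 60 Hz budget.
TARGET_MS = 16.7
MIN_W, MAX_W, STEP = 1280, 1920, 64   # vertical res stays at native 1080

def adjust_width(width_px: int, last_frame_ms: float) -> int:
    if last_frame_ms > TARGET_MS * 1.05:   # over budget: drop render width
        return max(MIN_W, width_px - STEP)
    if last_frame_ms < TARGET_MS * 0.85:   # comfortable headroom: restore it
        return min(MAX_W, width_px + STEP)
    return width_px

w = 1920
for frame_ms in (18.0, 18.0, 15.0, 12.0):  # a heavy scene, then relief
    w = adjust_width(w, frame_ms)
    print(w)                               # 1856, 1792, 1792, 1856
```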

In the end it always comes down to price and content. The most powerful machine has lost in a number of console generations. But Sony went from easily outselling the competition 3, then 4, and at some points nearly 5 to 1 (while as dismissive of the competition as ever) to fighting for its life to stay out of last place. This reveal (the jump to 8GB, improvements to the dev environment, and commitment to online infrastructure) is their haymaker, an attempt to simply knock out the competition. As a gamer, I'm super excited and confident that Sony will have excellent game content. As a realist, if the blow doesn't connect, and buyers want and get something else, I'm fearful Sony won't have the cash to stay in the fight. And that would be very sad indeed.

PS. I really enjoy the compact writing style. ;) Great post.

I find it odd how people claim the supposed GPU advantage would only be a minor increase in resolution. What if the supposed GPU resources were used to render more cars, pedestrians, and whatnot in Grand Theft Auto? The supposed GPU resources could also be used for physics, particles, and whatnot... Then the comparison isn't between resolutions but between the content seen. It's pretty easy to see if the city is barren and void or full of life.

Additional GPU resources won't really help you render more objects (though the fill rate might in certain instances); those things are fairly easily scaled in less noticeable ways (usually LOD modifications based on distance, detail level, impostors, etc.).

The big advantage will be in simulations, like physics for smoke, water, cloth, etc. These are lost in the action pretty easily too for your average Joe. Just check the examples we have today by enabling and disabling PhysX.

The reason for focusing on resolution is that it's quite simply the most commonly used and visible performance tuning parameter we have.
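For illustration, a minimal sketch of the distance-based LOD idea described above, with made-up thresholds: distant crowd members get cheaper models (or impostors), so "more objects" doesn't cost proportionally more GPU:

```python
# Toy distance-based LOD selection; the bands and names are hypothetical.
LOD_BANDS = [               # (max_distance_m, model_to_draw)
    (25.0,  "full_detail"),
    (75.0,  "reduced_mesh"),
    (200.0, "impostor_billboard"),
]

def pick_lod(distance_m: float) -> str:
    for max_dist, model in LOD_BANDS:
        if distance_m <= max_dist:
            return model
    return "culled"         # beyond the last band: don't draw at all

print([pick_lod(d) for d in (10, 60, 150, 500)])
# ['full_detail', 'reduced_mesh', 'impostor_billboard', 'culled']
```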
 
Additional GPU resources won't really help you render more objects (though the fill rate might in certain instances); those things are fairly easily scaled in less noticeable ways (usually LOD modifications based on distance, detail level, impostors, etc.).

The big advantage will be in simulations, like physics for smoke, water, cloth, etc. These are lost in the action pretty easily too for your average Joe. Just check the examples we have today by enabling and disabling PhysX.

The reason for focusing on resolution is that it's quite simply the most commonly used and visible performance tuning parameter we have.

Depending on how capable the GPU is and how the CPU-GPU interconnect performs, the better GPU could let you simulate, animate, and draw more objects and particles. It's not like "you can only increase resolution, period."
 
Only one post and I'll leave the thread forever.

This looks like a one-level analysis (a one-level decision tree). In almost all practical situations, business people will do a multi-level analysis. All the points about "MS is right about..." in the OP are iOS and Android strongholds. That means if a vendor matches them in these areas, the vendor has to fight them _and_ do a splendid job in gaming at the same time. There is also an implicit assumption that those guys are standing still (i.e., are you sure they are not going to embrace gaming?).

That's why sometimes in marketing you do the counter-intuitive. Instead of trying to mimic them, you may stand your ground and innovate yourself out of the situation, faster than anyone can catch up. I'm not saying Sony is doing the right thing, or is adopting this strategy. I'm not even saying there is a winning strategy. But there are many approaches to the problem. There is no right or wrong. It's all in the execution.

And btw, the gaming press s*cks. They need to be less arrogant. They seem to think that they are smarter than the vendors and can criticize everyone. Frankly, if you look closer, they are the worst and most untalented crowd in the gaming industry. :runaway:

If the vendors want to fix the industry, they may need to fix the gaming press at the same time, or build a digital infrastructure to work around/sideline them. This is not targeted at any individual, but at the collective whole.

The second thing they need to do is stop using negative guerrilla marketing. It will have a long-term negative impact on the industry.
 
Only one post and I'll leave the thread forever.


The second thing they need to do is stop using negative guerrilla marketing. It will have a long-term negative impact on the industry.

That's probably right up there with asking political campaigns not to run negative ads.
 
One question I have about the PS4 memory: could the current use of GDDR5 be replaced down the road by something cheaper, and if so, could it be that Sony was prepared to eat the cost of GDDR5 for now rather than spend money on R&D?

Also, regarding price: going with a conventional design saved them lots of money in R&D; won't those savings translate into a lower cost for the actual hardware? Sony's financial situation makes me think selling at a loss might be unlikely, but if they have less investment overall to recoup, in theory they could afford to sell hardware at or near cost.

I bring this up because we still know very little about the actual business model of either machine, which makes the price hard to predict.
 
??? anyone got any info on this
It's been dubbed the Other Stuff processor by Npl on this forum. There was a slide shown at the presentation that spoke of a custom second processor that manages stuff like background downloads. "Operating System" processor is ascribing it more importance than it may warrant, but people are going to call it something, and with a CPU, GPU, and DSP already accounted for, it seems natural to throw the other stuff under the OS umbrella. So I think Npl's name very apt! ;)
 
I find it odd how people claim the supposed GPU advantage would only be a minor increase in resolution. What if the supposed GPU resources were used to render more cars, pedestrians, and whatnot in Grand Theft Auto? The supposed GPU resources could also be used for physics, particles, and whatnot... Then the comparison isn't between resolutions but between the content seen. It's pretty easy to see if the city is barren and void or full of life.

Maybe this is a good thread for this. I'm really happy about the PS4; it sets the bar really nicely. The next-gen consoles and the stuff running on them will be purely awesome. I'm sure both Microsoft and Sony will bring their A game next Christmas. The focus might be different, though (games versus services).

The problem, of course, is that for multi-platform titles at least, the weaker system generally sets the bar. ;)

Nice wall of text Acert, I may well post a wall of my own later.
 
The OK

# CPU. Cell was over 230mm^2 (8 cores) and even Xenon was bigger (over 160mm^2, 3 cores with 1MB of total L2 cache), so we are seeing a reduction in silicon footprint and a flat count in cores. The PS4 will be on the market a long time, so it seems curious that a little more budget was not put into the CPU. While the Bulldozer/Piledriver/Steamroller power envelope made those poor candidates for the SoC, I wonder why 12 or 16 Jaguar cores were not chosen?

The PS2 and PS3 had around 500mm^2 of silicon real estate; the PS4 is estimated to be around 300mm^2, which is quite a reduction. I'm still hoping that Sony is going to put two of those SoCs, with 4GB of GDDR5 each, in CrossFire in the PS4, because that's the sort of system I'm expecting if they want another 8 years out of it. Though I guess it's better to get a new console every 4-5 years with lesser specs.
 
??? anyone got any info on this
An extremely low-power ARM core, like the Wii's Starlet, which is active even when the rest of the console is in standby, watching and servicing the Bluetooth and network hardware and attached software stacks, the storage file system and so on, as well as the buttons on the console and its paired controllers.

At least the phat PS3s also have an always-on ARM controller, I believe, but what its capabilities are I'm unsure of. Basically its role is to check the touch buttons on the front panel, eject the disc in the BR drive, and/or kickstart the rest of the console when the appropriate button is pressed...

At some point Sony added remote play to the PS3 (which never actually took off), and that may be facilitated through this processor. Can the PS3 also finish downloads in standby mode? I can't remember; I have some recollection this capability was added to the console many years ago, but it's been so long... My brain's getting all crusty. :LOL:
 
You can set a time for the PS3 to wake up and download patches and updates and sync saves and trophies, but it has to bring the system all the way up. You can also tell it to finish downloads and then shut down. Nothing like the slow-blink low-power mode the 360 enters if you hit the power button before a download completes.
 
Several things I found odd.


First, do you have a link to any article proving that the PS4 has higher latency than expected? I haven't heard that anywhere other than in some forums, and it is basically wishful thinking.

The price.

Anyone who thinks Durango will go on sale for $300 or less may be in for a shock, and is not actually looking at how much the Xbox 360 costs right now: an 8-year-old console that should not be more than $150 by now.

The Xbox 360 has gone down in price $100 in 8 years; the PS3 has gone down $350 while having Blu-ray, which the Xbox 360 lacks. So people should not get their hopes up.

MS: balance over power.

MS did not choose 8GB of RAM over power, and it wasn't seeking balance; they chose Kinect over power. As a result of that, and of including their big PC OS Windows, they basically had no choice but to go with 8GB of DDR3: they want an always-on Kinect and Windows 8 connectivity with peripherals like the Surface and so on.

That is the reason 8GB of DDR3 was chosen. If any company can afford GDDR5 it is MS; they could have chosen it from the start and still had Windows 8, still had Kinect, and they would not need the ESRAM or DMEs.

MS's target is very clear: profits from day one, or close to it. Sony also has a camera, but it is not like Kinect, and it shows.

Sorry if I sound rude, but your thread sounds more like a parade to cheer for MS than an actual PS4 thread. :cry:
 
Sorry if I sound rude, but your thread sounds more like a parade to cheer for MS than an actual PS4 thread. :cry:
It's a bit hypocritical to claim someone is a shill when you bat down any criticisms towards the PS4 while dishing out unsubstantiated claims of your own.
 
Only one post and I'll leave the thread forever.

This looks like a one-level analysis (a one-level decision tree). In almost all practical situations, business people will do a multi-level analysis. All the points about "MS is right about..." in the OP are iOS and Android strongholds. That means if a vendor matches them in these areas, the vendor has to fight them _and_ do a splendid job in gaming at the same time. There is also an implicit assumption that those guys are standing still (i.e., are you sure they are not going to embrace gaming?).

That's why sometimes in marketing you do the counter-intuitive. Instead of trying to mimic them, you may stand your ground and innovate yourself out of the situation, faster than anyone can catch up. I'm not saying Sony is doing the right thing, or is adopting this strategy. I'm not even saying there is a winning strategy. But there are many approaches to the problem. There is no right or wrong. It's all in the execution.

And btw, the gaming press s*cks. They need to be less arrogant. They seem to think that they are smarter than the vendors and can criticize everyone. Frankly, if you look closer, they are the worst and most untalented crowd in the gaming industry. :runaway:

If the vendors want to fix the industry, they may need to fix the gaming press at the same time, or build a digital infrastructure to work around/sideline them. This is not targeted at any individual, but at the collective whole.

The second thing they need to do is stop using negative guerrilla marketing. It will have a long-term negative impact on the industry.

You should read Kotaku's latest article; it was a mess, saying that Killzone did not look much better than a PS3 game. The sad part is that people like this are everywhere, and they call themselves journalists. :rolleyes:
 