Predict: The Next Generation Console Tech

I don't agree, Chef; it's not safe to assume at this point. Every rumor we've got from a non-laughable source points to something conservative, a SoC rather than dedicated chips, with something akin to an HD 6670. At this point the only hope we have for something powerful is the "6x in raw processing performance" claim: putting the 360 at roughly 300 GFLOPS, that would be 1.8 TFLOPS, but sadly it doesn't match the other claims so far. 1.8 TFLOPS is a bit too much to ask from a SoC.

Actually, the 360 was "sold" at 1 TFLOPS in 2005, so "6 times the power of the 360" in real terms, from a gaming journalist's POV, could very well mean 6 TFLOPS. So MS has something amazing to blow the world away with at this E3 (I'm being both sarcastic and... despairing). I don't remember exactly how MS arrived at that number back in 2005; it may have had something to do with texturing operations and a few other things. Anyway, my point is that in our terrible world ruled by marketing guys the same thing could happen again, and that is sadly more predictable than the silicon budget. I would not be surprised if the HD 6670 could indeed achieve 6 TFLOPS using the same "marketing-sauce floating-point operations per second, or MSFLOPS" calculation. I can hear people laughing in the background; go ahead, I'm laughing at the idea too. I know, that's ugly.
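Quick back-of-the-envelope on the two readings of "6x", just to show how far apart they land (the 0.3 TFLOPS figure is an approximation of the 360's programmable peak, the 1 TFLOPS one is the 2005 press figure):

[code]
# Toy arithmetic: "6x the 360" under two different baselines.
real_baseline_tflops = 0.3       # rough programmable (CPU + GPU shader) peak of the 360
marketing_baseline_tflops = 1.0  # the "1 TFLOPS system performance" figure MS quoted in 2005

factor = 6
print(f"6x the real baseline:      {factor * real_baseline_tflops:.1f} TFLOPS")       # 1.8 TFLOPS
print(f"6x the marketing baseline: {factor * marketing_baseline_tflops:.1f} TFLOPS")  # 6.0 TFLOPS
[/code]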

Another thing I don't agree with is your comment on GPGPU as a way to justify a bigger investment in the GPU. If we're considering a low-performance GPU, a really good CPU becomes a most wanted feature.
OK, we might have to deal with a smaller-than-expected increase in IQ, but at least we could get better AI, more physics (the extra RAM will help), better animation (high priority on my list), etc. It may recoup some of the talk about next-gen being about "greater simulation".
A really good OoO quad-core (not even aiming for Intel's level of performance per cycle) with 256-bit-wide SIMD units might sound underwhelming "in real terms" (journalistic peak figures, etc.), but it could prove in the real world to be the true next-gen enabler, along with the extra RAM.
There is a lot to gain from going from in-order to out-of-order execution, from wider SIMD units, and possibly from jumping from 2-way to 4-way SMT. Using Xenon as a reference, we are far from diminishing returns as far as the CPU is concerned.
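To put a number on that, here is a peak-FLOPS sketch; the clock and FMA assumptions are mine, not from any rumor, and OoO/SMT gains by definition don't show up in a peak figure:

[code]
# Peak single-precision FLOPS = cores * clock * SIMD lanes * ops per lane per cycle.
def peak_gflops(cores, clock_ghz, simd_width_bits, ops_per_lane=2):  # 2 ops = fused multiply-add
    lanes = simd_width_bits // 32
    return cores * clock_ghz * lanes * ops_per_lane

# Xenon: 3 cores at 3.2 GHz with 128-bit VMX, counting only FMA throughput.
print(peak_gflops(3, 3.2, 128))   # 76.8 GFLOPS
# Hypothetical OoO quad core with 256-bit SIMD at the same (assumed) 3.2 GHz.
print(peak_gflops(4, 3.2, 256))   # 204.8 GFLOPS
[/code]

Not spectacular on paper, but sustained throughput is where the OoO and wider-SMT parts would pay off.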

I still wish for more, like anybody here I guess, but if we're to discuss the stuff we've heard so far... :-|

I watched some videos on YouTube of what Llano and the HD 6670 can do (especially Me5age's), and I think it's close to good enough for the masses. It's a much-wanted refresh for those obsolete consoles of ours; not the step forward we expected, but the more-than-necessary refresh to keep up with PC tech.
Interestingly, Me5age is using a $300 PC for those videos, consisting of:
- Athlon II X2 250 (2x1 MB L2 cache)
- Foxconn A88GM mobo (710 SB)
- SATA 600 1TB Seagate Barracuda 7200 RPM HDD
- Sapphire Radeon HD 6670 1 GB GDDR5 (128-bit)
- 4GB DDR3 1333 MHz RAM
- TSST corp DVDRW optical drive
- 500W silent flow "No Name" PSU
- Codegen case

That puts things into perspective: using a SoC, a slower and smaller HDD, etc., MS should be able to get the BOM really low. They might actually be able to sell their refresh product with Kinect 2 and an HDD at $300. They should be able to produce the thing in big quantities and try to get their whole user base to transition quickly to the new system (along with their Live fees...). A lot of gamers don't jump into a new gen at release because of too-high prices.

Honestly, like everybody here I assume, I would more than happily welcome a significantly more powerful system, but until further rumors... :-| Still, MS has proved that it can make good hardware decisions, so I expect significant architectural improvements to what would otherwise be a dumb SoC gluing together an off-the-shelf HD 6670 and whatever CPU IBM designed for them.

There should be more info/noise soon enough.

EDIT:
To sum up, I would say that we at least have a 720p-ready system :)
 
Last edited by a moderator:
Prior to this week's rumors, I thought MS was going to do the following:
Release a $399 system in 2013 with a $499 high-end SKU, both of which break even or make a small profit. The system itself is anywhere from 10-15x the 360.

However, the following strategy, which is what I can piece together from all the recent rumors, might be the safest investment for MS:
Release a $299 Kinect 2 system in fall 2012 to compete with the Wii-U. The system is 6-8x more powerful than the 360 and can play the newest 360 games at 1080p/60 fps. It breaks even at release or sells for a small profit.
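For reference, the raw pixel-rate jump from a 720p30 title to 1080p60 is about 4.5x, so a 6-8x machine has some headroom left over (ignoring everything that doesn't scale linearly with resolution and frame rate):

[code]
# Pixels shaded per second at 720p30 vs 1080p60 (simple upper-bound arithmetic).
px_per_s_720p30  = 1280 * 720 * 30
px_per_s_1080p60 = 1920 * 1080 * 60
print(px_per_s_1080p60 / px_per_s_720p30)   # 4.5
[/code]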

Wait until the next major advancement in semiconductor technology (e.g. 3D ICs) before putting out the true Xbox successor in 2014 or later. If Sony makes a push for a monster PS4, react quickly. On the other hand, if it looks like disruptive technology such as OnLive, mobile, etc. gains steam, cancel the project.

MS might have gone for the first option originally, but the Wii-U, contrary to what Mark Rein said, might be making MS worried. I would not be surprised to see the Kinect 2 system at E3, running a forward-compatible version of Halo 4.

To Microsoft, releasing a loss leader like 360 by itself is too risky. One mishap like RROD or $599 again, and the brand might be done for. Even without mishaps, they're susceptible to outside paradigm shifts. An unprofitable system that's becoming more and more irrelevant and outdated is the worst nightmare of all the big three.

Having said all that, I will gladly eat crow if all the recent rumors are MS FUD, and that the next Xbox and PS4 are unprecedented beasts (100 PR TERAFLOPS!) .
 
Last edited by a moderator:
Supposedly...

Pitcairn
???MHz
1408 ALUs
24 ROPs
88 texels per clock (half-rate for FP16)
256-bit GDDR5
245mm^2 @ 28nm

Cape Verde
1GHz
896 ALUs
16 ROPs
56 texels per clock (half-rate for FP16)
128-bit GDDR5
164mm^2 @ 28nm

(Corrections on specs are welcome...)

-----

Cape Verde die has been analysed from photos to be at around 135mm^2 and that should be very close if not spot on.

The source, which I assume is the basis for those Pitcairn and Cape Verde specs, is not legit. We don't have accurate information on them yet, except the Cape Verde die size; however, logically, the possible range for the specs can't IMO be too far from what you wrote up there.
 
I explained in the follow up post.

Yield is not linear with the die size so you can't just blindly add them together. Naturally, two smaller dice are easier to produce than a monolithic die that's equal in area.
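As a quick illustration of that non-linearity, the simple Poisson yield model (the defect density below is a made-up value, purely to show the shape of the effect):

[code]
import math

# Poisson yield model: Y = exp(-D * A), with D = defects per cm^2 and A = die area in cm^2.
def die_yield(area_mm2, defects_per_cm2=0.4):   # defect density is an assumed figure
    return math.exp(-defects_per_cm2 * area_mm2 / 100.0)

print(f"250 mm^2 die: {die_yield(250):.0%} good")   # ~37%
print(f"125 mm^2 die: {die_yield(125):.0%} good")   # ~61%
# A defect scraps 125 mm^2 of silicon instead of 250 mm^2, so the wasted
# area per defect is halved even though defects fall at the same rate.
[/code]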

Eh, coincidence. eDRAM is also a different manufacturing process from standard logic (with a different cost structure as well). You can't just equate 80mm^2 of (mostly) eDRAM to the same area of CMOS logic in terms of cost, TDP*, power consumption, or board complexity.

RSX was also significantly bloated compared to G71 (191mm^2), in no small part due to XDR I/O and redundant hardware being shoved in there to increase yields.


The whole point of looking at the ballpark 180mm^2 was to look for a base die upon which there's the option of eDRAM. Again, as I already mentioned and you hastily skipped, we don't know what the high bandwidth options will be.


-------------

You might even think about how ditching eDRAM and merging the ROPs and Z/Stencil back into the mother die would still have produced a chip that's significantly smaller than RSX and yet it'd still have shading efficiencies and geometry setup advantages and single cycle 4xMSAA. Replace the eDRAM I/O with another 128-bit bus.

*If you consider the TDP as well, I'm not so sure MS would have asked for a 250mm^2 part if they didn't have eDRAM. eDRAM thermals aren't going to be anywhere near that of the main processor logic - the heatmaps of the 65nm eDRAM and 45nm CGPU indicate as much if you really want something tangible, but it should be obvious.

There are going to be more considerations than just overall die area.

eDRAM on a different process = different costs? Two smaller dice cheaper than one bigger die? Sure, but that doesn't make it free.

Granted, yield for two smaller chips is generally better than one larger chip (we went over this months ago) but that doesn't discount the fact that both MS and Sony budgeted ~250mm2 for graphics purposes.

Two 125mm2 dies thrown onto the same package? Possible ... but 250mm2 shouldn't be considered too prohibitive as Sony had no problem going with two(!) ~250mm2 chips in ps3 last time.

EDRAM again? Possible, but then they'd be stuck with a more difficult programming environment (if manual memory management is required for a "scratch pad" type use instead of a simple framebuffer) and the issues of shrinking the EDRAM process again and integrating it within the die.

Redundancy? Sure, depending on yields they may have to cut some of the ALUs... but the overall budget was 250mm2, and it should be more than that for the xb720/ps4 given the greater dependence on GPGPU and lesser dependence on raw CPU power.


Yes, the PS3 had an expensive BOM, but much of it was the Blu-ray drive, mandatory HDD, redundant PS2 hardware, and multiple memory buses. I'd suggest these had more to do with the $500-600 MSRP than the two ~250mm2 dies.

I don't doubt that there are more considerations than the overall die budget, but that is one of them. Bang for the buck, as they say. Kinect 2 may be in their plans for mandatory inclusion with all SKUs; this would likely take budget away (as I stated in my post), but there's no reason for MS to sabotage their rendering hardware just to appease casual gamers who would be far more interested in a $150 Kinect + X360 than in a next-gen system.

Same would apply to Sony.

The only caveat to this would be if they plan a biennial, Apple-style hardware refresh cycle, which could affect next-gen console tech considerably.

Barring such a radical change of direction from Sony/MS, I think my die budgets are much more in line with what to expect.

Edit:

Another angle on this whole approach could be literally the same as what AMD/Nvidia do with their larger chips. If yields are poor for hitting the spec clock speed or for functional units, they create another GPU model number out of binned chips that meet the lower criteria, and salvage the cost. The same could be done in this instance if the GPU to be used in the ps4/xb720 were literally an off-the-shelf part...

Take a Tahiti that can't quite hit the ~900MHz rate but runs fine at 600MHz: candidate for the xb720/ps4.
A Tahiti with only 1800 functional ALUs? = HD7920

Etc.

This would require having the "glue" on the CPU side rather than the GPU side, as is the case in the xb360. But again, the emphasis is more on the GPU going forward, so it will be more important to get as fast a GPU as possible rather than worrying about the CPU; a flexible method such as the one I laid out above would be worth investigating on their end to save costs and be as efficient as possible... assuming TSMC is even having yield issues on 28nm for their 352mm2 Tahiti chips ;)
 
Last edited by a moderator:

Only way a gimped half-gen xb360+ would work is if they adopt a faster hardware refresh schedule. Otherwise it is setting up for failure.

And a reminder on price:
xb360 core/arcade launched for $300 with roughly the same die budget as the $400 "premium" sku and the $600 ps3.

Granted, we all know they lost money on them, but they both designed the hw spec with that die budget in mind along with the intended price range.

Only reason it was so bad for MS was due to RRoD. PS3 had a myriad of other price issues beyond the die budget.
 
Why should we be comparing the die size of Xenos alone when Xenos also had dedicated EDRAM for the sole purpose of assisting graphics throughput?

182mm2 Xenos
80mm2 Edram

262mm2 Dedicated graphics die size for Xbox360
240mm2 Dedicated graphics die size for RSX

In context, I think it's safe to say ~250mm2 was the graphics budget this gen.

Projecting the future GPU budget on this number also assumes that the overall die size budget will scale equally for GPU and CPU which for many reasons I don't think is a reasonable expectation.

GPGPU will see more functions offloaded from the CPU onto the GPU. Also, from discussions that have taken place regarding current and future CPU projections, cramming more into the CPU die budget reaches diminishing returns more quickly than expanding the GPU die budget, which scales more linearly.

Therefore, I think it is safe to assume that whatever the overall die budget is, a larger percentage of that budget will be dedicated to the GPU than ps3/xb360's 51-60%.
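Plugging in the commonly cited launch die sizes (the CPU figures, ~168 mm² for Xenon and ~235 mm² for Cell at 90 nm, are from memory and may be off by a few mm²):

[code]
# GPU share of the total (CPU + GPU + eDRAM) processor die area at launch.
xb360 = {"Xenos": 182, "eDRAM": 80, "Xenon": 168}   # mm^2, approximate
ps3   = {"RSX": 240, "Cell": 235}                   # mm^2, approximate

xb360_gpu_share = (xb360["Xenos"] + xb360["eDRAM"]) / sum(xb360.values())
ps3_gpu_share   = ps3["RSX"] / sum(ps3.values())
print(f"Xbox 360 graphics share: {xb360_gpu_share:.0%}")   # ~61%
print(f"PS3 graphics share:      {ps3_gpu_share:.0%}")     # ~51%
[/code]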


Perhaps the overall die budget will shrink to make room for other standard-inclusion items/features (Kinect 2 / Move 2), but I can't see a good reason for Sony/MS to significantly undermine overall system performance and potential sales to save a few dollars in die space.

Especially with increased competition in the console sector.


- 1400-2000 ALUs
- 250-350mm2 w/ motion control as standard
- 300-400mm2 w/o motion control as standard
- 80-120mm2 CPU

Have you considered inflation? Assuming the same die size as the 360/PS3, those dies are going to cost more now than they did back then, regardless of yields being lower, etc. So essentially they would lose more money, which they don't want to do this time, at the same die size and the same launch prices, which they have said they would like to lower. Any way you slice it, it doesn't add up to a bigger hardware budget this round.
 
So apparently there are only two options: either you build a 250W machine with 500 mm^2 of processors or you're building a "half gen" machine that will only last 2 years.

And die budget is all that matters. Not dollar budget.
 
Power consumption scales roughly with frequency cubed; a 20% clock reduction should result in around half the power consumption. I could see both MS and Sony go for largish dies, but clocked lower than their PC counterparts. The cost of die area falls continuously; the cost of a given cooling solution does not.
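For the arithmetic behind that: dynamic power ~ C·V²·f, and with voltage scaled roughly in proportion to frequency that's ~f³. A quick check of the "20% slower is about half the power" figure:

[code]
# Dynamic power ~ C * V^2 * f; if V scales linearly with f, then P ~ f^3.
def relative_power(clock_scale):
    return clock_scale ** 3

print(relative_power(0.8))   # 0.512 -> roughly half the power at 80% of the clock
[/code]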

An initial size of 250 mm² for the GPU shouldn't be a problem; over a console's lifetime (~7 years) it should see two shrinks. Even if new process nodes fail to materialize, the cost of producing at the same node will continue to fall as capital costs are amortized.

I expect around 500mm² for CPU/GPU/eDRAM; The PS2, the PS3 and 360 were all around that at launch.

Cheers
 
An initial size of 250 mm² for the GPU shouldn't be a problem; over a console's lifetime (~7 years) it should see two shrinks. Even if new process nodes fail to materialize, the cost of producing at the same node will continue to fall as capital costs are amortized.

Nintendo don't appear to have shrunk their processors - at least as far as I can tell - on the N64, GC or Wii. And because they aren't waiting for year 3 or 4 or 5 to hit the break even point (waiting for shrinks and associated cost reductions) they can respond more quickly to changes in the market.

I get the feeling that MS and Sony will be looking at the PS360, looking at the GC and Wii and possibly the WiiU, and thinking that they'd like to be free of such a long bleed period at the start and middle of the generation. Perhaps that will encourage them to be more conservative on die size (or sizes if it's not a SoC) next time, especially as it won't hamper their ability to also be a KinectCasualBingFlix box.

I expect around 500mm² for CPU/GPU/eDRAM; The PS2, the PS3 and 360 were all around that at launch.

If they go for a SoC, or at least a combined CPU and GPU, I had the understanding that it would effectively reduce the total die area that would be viable. What are your thoughts regarding a SoC?
 
Nintendo don't appear to have shrunk their processors - at least as far as I can tell - on the N64, GC or Wii. And because they aren't waiting for year 3 or 4 or 5 to hit the break even point (waiting for shrinks and associated cost reductions) they can respond more quickly to changes in the market.

I suspect the Wii ICs to be pad limited. In that case, they *can't* shrink the ICs. So while Nintendo didn't have a loss leader to begin with, they can't cost-reduce as aggressively as Sony and MS.

Cheers
 
I suspect the Wii ICs to be pad limited.

The CPU was. 19mm^2 is kind of hard to shrink further. No one seems to care about opening up the latest Wii unit 5 years later, so... hard to tell what happened to the GPU and 24MB 1T-SRAM. I'd be a bit shocked if the latter weren't shrunk since it was not an insignificant die area.

[Image: Wii_Hollywood.jpg]
 
Random, off-topic thought:

One of the things that the 360 and PS3 had in common last generation was the use of specialized high speed ram for the entire system, unlike PC's which have mass quantities of slower ram and dedicated high speed ram for the GPU.

It seems like having some slower ram in bulk could be useful for caching of content from optical disc / hdd, but nobody went that way last generation.

Is the cost differential between the fancy ram and the normal stuff too small to make it worthwhile using both types in the next generation? It seems like plentiful caching is only going to get more important as content size increases on storage subsystems that can't get much faster than they were last generation.
 
Power consumption scales roughly with frequency cubed; a 20% clock reduction should result in around half the power consumption. I could see both MS and Sony go for largish dies, but clocked lower than their PC counterparts. The cost of die area falls continuously; the cost of a given cooling solution does not.

An initial size of 250 mm² for the GPU shouldn't be a problem; over a console's lifetime (~7 years) it should see two shrinks. Even if new process nodes fail to materialize, the cost of producing at the same node will continue to fall as capital costs are amortized.

I expect around 500mm² for CPU/GPU/eDRAM; The PS2, the PS3 and 360 were all around that at launch.

Cheers

I'm more cynical; my previous guesses were around that, but with the info we have now I'm guessing the Xbox 720 is a SoC in the 300-350mm2 range. The PS4 is still pretty up in the air, so I'm still holding off on that.

Any guess as to how many ROPs will be in Wii U's GPU? 8? 16?

The GPUs in the rumored range for it all have 8 ROPs, so I'm guessing 8, unless they have a customized chip with 12. 16 would be a waste relative to the rest of the hardware. With the rumored large chunk of eDRAM, 8 should be enough.
 
Random, off-topic thought:

One of the things that the 360 and PS3 had in common last generation was the use of specialized high speed ram for the entire system, unlike PC's which have mass quantities of slower ram and dedicated high speed ram for the GPU.

It seems like having some slower ram in bulk could be useful for caching of content from optical disc / hdd, but nobody went that way last generation.

Is the cost differential between the fancy ram and the normal stuff too small to make it worthwhile using both types in the next generation? It seems like plentiful caching is only going to get more important as content size increases on storage subsystems that can't get much faster than they were last generation.

A PC's biggest advantage is having DIMM slots, so it can shove over 32 memory chips into the system, but at the same time the wire tracing is obviously a lot more complex, and there's going to be higher latency in there too.

For a console it's really a problem of motherboard complexity as well as the CPU/GPU I/O. That much RAM is kind of a big expense for a $399+ machine as well.

Two memory types imply two different types of memory controllers, and if you want both processors to have access to one another's pools, it's just added die space and a shit ton of wiring. It gets messy with high frequency interconnects coming into play.

If there's just a single wide bus on one of the processors, it makes things so much easier to implement. Obviously, there'd need to be a fairly big bus between the two processors with low latency, but that's not particularly difficult...

Devs favour UMA for obvious reasons (contiguous memory space for example, no replicating of data necessary for certain tech implementations etc).

It doesn't really help if you have a ton of RAM with low bandwidth either... So it just ends up being some crazy system of trade-offs that goes far beyond just a simple cost analysis of one ram chip vs another ram chip.

There does exist higher-density, slower RAM, which can cut down on the number of chips quite a bit, but again you still have to worry about what you intend for the system. Split RAM is just awful from a manufacturing and design perspective. But if you want all that RAM, what do you do about bandwidth? eDRAM? An even fatter bus on the processor, which would impose a higher minimum die size (future die-reduction plans going up in smoke)?
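To put numbers on the bandwidth side of that trade-off, the usual peak-bandwidth arithmetic for a 128-bit bus with commodity DDR3 versus GDDR5 (the data rates are just typical parts, picked for illustration):

[code]
# Peak bandwidth (GB/s) = bus width in bytes * effective data rate (GT/s).
def peak_bandwidth_gbs(bus_bits, data_rate_gtps):
    return (bus_bits / 8) * data_rate_gtps

print(peak_bandwidth_gbs(128, 1.333))   # DDR3-1333 on a 128-bit bus: ~21 GB/s
print(peak_bandwidth_gbs(128, 4.0))     # 4 Gbps GDDR5 on a 128-bit bus: 64 GB/s
# Lots of cheap RAM on a narrow bus buys capacity, not bandwidth.
[/code]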


Not an easy thing to nail down.

The GPUs in the rumored range for it all have 8 ROPs, so I'm guessing 8, unless they have a customized chip with 12. 16 would be a waste relative to the rest of the hardware. With the rumored large chunk of eDRAM, 8 should be enough.

8 ROPs would be pretty awful (even with high GPU clocks), as the WiiU has to support both the television and the controller screen. There's no way 8 ROPs will be able to handle 1080p + 480p (whatever the res of the controller screen is) with current-gen graphics requirements. Hell, it would barely handle 1080p unless it were a simple game with next to no blending and really easy culling scenarios.

Without MSAA, the bandwidth requirements aren't that high for pixel throughput unless you're also doing a lot of blending, but then you'd probably be fill limited anyway.
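Some rough fill-rate arithmetic for the dual-screen case (the clock, controller-screen resolution and overdraw figure are all assumptions, purely for illustration):

[code]
# Pixels per second the ROPs must sustain vs. what 8 ROPs can theoretically write.
tv_px_per_s  = 1920 * 1080 * 60      # 1080p60 on the television
pad_px_per_s = 854  * 480  * 60      # controller screen, assumed ~480p60
overdraw     = 4                     # assumed average overdraw / extra passes

required  = (tv_px_per_s + pad_px_per_s) * overdraw
available = 8 * 550e6                # 8 ROPs at an assumed 550 MHz, 1 pixel per ROP per clock

print(f"required:  {required / 1e9:.2f} Gpix/s")    # ~0.60
print(f"available: {available / 1e9:.2f} Gpix/s")   # 4.40 theoretical peak
# The theoretical peak looks comfortable; it's blending, MSAA and bandwidth
# that actually eat into it, as noted above.
[/code]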
 
Yeah, I understand the craziness you'd get into if you try having an entire extra ram controller on the system. It just seems odd having a high capacity system without any disk caching.

I know modern hard drives have on-board ram caches in the tens of megabytes, and Seagate is coming out with hybrid hard drives that have 4 or 8 gigs of flash on board to act as a cache for the spinning platter, so maybe the problem will solve itself as far as the hard drive is concerned.
 
Only way a gimped half-gen xb360+ would work is if they adopt a faster hardware refresh schedule. Otherwise it is setting up for failure.
But the half-gen description is your POV, not reality as a whole. Look at this vid: it's a mod for GTA4 running on a quad-core + Radeon HD 6670 PC, and it illustrates perfectly how old our systems are.
It's a really neat improvement (it could have come earlier though, which is kind of proof to me that the market's usual business model needs to be disrupted in one way or another).
It would be interesting to see a real-world comparison between the PS360, a "non-CPU-limited" PC running an HD 6670, and one running a mid-to-high-end card, let's say a GTX 560. From the vids I watched, I would say that the gap between the 360 and the HD 6670 is way bigger than the one between the HD 6670 and the GTX 560. You get way better filtering, way higher-quality textures, better lighting; in fact you can get the HD 6670 to run at the same settings as the GTX 560, but you'll run the game at 720p with less AA (vs 1920 + a higher AA level).
Once you realize this you can understand MS's (possible) choice: why invest that much when you can get something close (for most gamers) by running at 720p and upscaling the whole thing to 1080p (or leaving devs the choice depending on the game's needs)? I suspect the UVD3 block in the HD 6670 can do a hell of a job at upscaling; I would not be surprised if it does better than the 360.
As for the refresh cadence, well, it's up to MS, but at this point 4-5 years sounds sane depending on how the market evolves.

And a reminder on price:
xb360 core/arcade launched for $300 with roughly the same die budget as the $400 "premium" sku and the $600 ps3.

Granted, we all know they lost money on them, but they both designed the hw spec with that die budget in mind along with the intended price range.

Only reason it was so bad for MS was due to RroD. PS3 had a myriad of other price issues beyond the die budget.
Well, the economy is bad, there are markets console manufacturers should start taking more seriously, and expensive systems won't cut it; there's also the fact that a lot of systems are sold at a lower price than at release. Look at this gen's sales: a lot of them happen at Christmas, with rebates, etc. I bought my 360 for $200 years ago, and it looks like there are many of us. Kinect blurred the line lately as it moved SKU prices higher. Starting at $300 with a single SKU is a strong offering.
At this point I don't think MS can go with multiple SKUs (or only ones with no difference in functionality, read: a bigger HDD). MS managed to get away with it this time around because, in the market it pursued for most of this gen, there really wasn't that strong competition. So just because it worked once doesn't make it a constant, even if there are consistencies. It's pretty clear now that the DVD's size enforced strong restrictions on the quality of assets this gen, and MS managed to get away with it because Sony was weak; the same is true of the lack of a standard HDD.
 
Yeah, I understand the craziness you'd get into if you try having an entire extra ram controller on the system. It just seems odd having a high capacity system without any disk caching.

I know modern hard drives have on-board ram caches in the tens of megabytes, and Seagate is coming out with hybrid hard drives that have 4 or 8 gigs of flash on board to act as a cache for the spinning platter, so maybe the problem will solve itself as far as the hard drive is concerned.
Yeah. It seems an idea to have a lump of RAM as something of an IO device rather than closely bound to the CPU and GPU. Of course, loading that with data will take time, hence the more common request for fast flash. I doubt you could afford both, and flash would be more useful, methinks.
 
Yeah. It seems an idea to have a lump of RAM as something of an IO device rather than closely bound to the CPU and GPU. Of course, loading that with data will take time, hence the more common request for fast flash. I doubt you could afford both, and flash would be more useful, methinks.

That (adding RAM to the IO chain) is what Sony effectively did in later models of the PSP, but that was as much to reduce the power draw of the UMD as for performance, and it was only 32 MB in that case.

I wonder if flash will be cheap and reliable enough for the manufacturers to put 8-16 gigs on board for a cache for both the optical and hard drives. It seems like that'd be cheaper than trying to use an SSD to wholly replace a hard drive?
 
Well, the old VLIW5 architecture offers great bang for the buck as far as compute density is concerned.

On the raw performance side, that is true, but I think that in the long run using GCN would be a much better choice, even if they go the low-power route.
 