Predict: The Next Generation Console Tech

I would not be surprised to see an all-AMD solution in the Nintendo camp.
I'm thinking of something like a Fusion part.
But on the other side, considering Nintendo's heavy involvement in handhelds, I would not be shocked either by a turbocharged system coming from the embedded space (likely a single-chip design too).
The point would be to have engines scalable from their next Game Boy to their next Wii, or simply to pack quite some power in a tiny package.
 
It's almost certain they will all ship with BR drives and modest hard drives in the 500-1000 GB range.

I wonder if they can ship the Xbox 720 with no HDD, and instead have ~32GB or so of flash memory, which should be cheap by 2011. That's enough space to download demos, patches and save-game files, and a portion of it can be dedicated for developers to use as unfragmentable performance cache memory. It would make the console lighter, cheaper and cooler, and it would scale down in cost better over time than HDDs, which are horrible at that.

Keep an empty slot on the console for people to be able to slide in a laptop drive if they so choose. But ship without one, that way they shift the cost burden of the hdd to the users that really want it. This seems like the best of both worlds to me, since devs can still leverage that flash memory to speed up game loading, and users who want to download lots of movies can add their own laptop hdd cheaply bought from Newegg or wherever. Plus, it makes a $299 launch more doable. $399 launch just seems like a mistake to me.
 
I wonder if they can ship the Xbox 720 with no HDD, and instead have ~32GB or so of flash memory, which should be cheap by 2011. That's enough space to download demos, patches and save-game files, and a portion of it can be dedicated for developers to use as unfragmentable performance cache memory. It would make the console lighter, cheaper and cooler, and it would scale down in cost better over time than HDDs, which are horrible at that.

It's not enough space. DD will be a big factor in the next-gen consoles for games, and HD video content will also be a big factor. Unless Sony/MS want to radically change their strategies, they'll have to have an HDD. It's roughly a factor of 20-30x the capacity at the same price.

Keep an empty slot on the console for people to be able to slide in a laptop drive if they so choose. But ship without one, that way they shift the cost burden of the hdd to the users that really want it. This seems like the best of both worlds to me, since devs can still leverage that flash memory to speed up game loading, and users who want to download lots of movies can add their own laptop hdd cheaply bought from Newegg or wherever. Plus, it makes a $299 launch more doable. $399 launch just seems like a mistake to me.

399 has been a pretty damn standard launch price for multiple generations.
 
399 has been a pretty damn standard launch price for multiple generations.

Not really; a $399 launch is new to this generation. Here's a recap of launch prices snagged from the net:

Atari VCS launched in 1977 for $249.99
Nintendo Entertainment System launched in 1985 for $199.99
SEGA Genesis launched in 1989 for $249.99
NeoGeo launched in 1990 for $699.99
Super Nintendo launched in 1991 for $199.99
Jaguar launched in 1993 for $249.99
3DO Interactive Multiplayer launched in 1993 for $699.95
SEGA Saturn launched in 1995 for $399.99
PlayStation launched in 1995 for $299.99
Nintendo 64 launched in 1996 for $199.99
SEGA Dreamcast launched in 1999 for $199.99
PlayStation 2 launched in 2000 for $299.99
Xbox launched in 2001 for $299.99
GameCube launched in 2001 for $199.99

Launching again at $399 means that customers will wait on the sidelines for prices to fall, and developers will stick even longer with PS3/360 while they wait for prices to fall.
 
Launching again at $399 means that customers will wait on the sidelines for prices to fall, and developers will stick even longer with PS3/360 while they wait for prices to fall.
The conclusion stays the same: developers will stick with PS3/360 for a long, long time, as the specs people have been dreaming about so far in this thread can't be realized at $299 for a long time :smile:

I agree with aaronspink that new consoles around 2011 will have HDD and a BD drive, now that Microsoft have greenlighted HDD installation the direction is very clear. A BD drive may become an external option in some regions, but I doubt it for the launch version of these consoles unless they are released in 2015.

On the other hand, most PCs sold in the next 5 years will be mobile netbooks with SSDs, while sales of discrete video cards and high-end PC games keep going down and down; that's because Google Docs is really convenient and battery life or boot time matters for mobile devices. But you don't often turn routers or servers on and off, and current game consoles (even including the Wii, with its 24/7 service) are already meant to be home servers. Now that I've mentioned the Wii: it has a flash drive in its current form, but even Nintendo cannot escape from the HDD in the next generation if they try to sell DVD-sized games or MMOs that require patches in the future.
 
What would you consider not "very powerful"?
Less than 2 "real" TFLOPS?

"not very powerful" is not measured in flops, it's measured in dollars.

Gates contacted Intel because he wanted some Windows to run on the box, no?
For MS, x86 would be a blessing.

If Intel is so keen on forcing Larrabee into a console and is OK with cutting drastically into its margins, well, everything is possible. I mean, how much money is Intel ready to pay to force "x86 everywhere"?
If they want to go x86 they can use something way less 'exotic' and more proven than LRB, in addition to a next-gen GPU.

On the other hand having a single architecture and ISA to support across CPU and GPU (good for MS) and to develop for (good for us game developers) might be an advantage.
 
On the other hand having a single architecture and ISA to support across CPU and GPU (good for MS) and to develop for (good for us game developers) might be an advantage.

I don't know the answer to this Marco, so maybe you can give me some insight here: Would migrating to an x86ish ISA for graphics be beneficial in regards to developers? In theory you could migrate code from the GPU and CPU, and vice versa, depending on available resources. On the other hand DX is familiar and pervasive. You could use a DX like layer for x86 I am sure, but are you thinking of something different? Like a CPU with GPU like vector extensions and using an API (like DX or OpenGL of sorts) that can be executed CPU side or GPU side. Ok, I am fishing ... but I am curious what you have in mind and how it would be easier for developers than, say, x86 CPU and DX** GPU.

** By DX GPU I don't necessarily mean a MS API, but a GPU that is following the current trends and supports the main thrust of the DX evolutionary featuresets.
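To make the question concrete, here's the sort of thing I'm imagining (a rough sketch in plain C with SSE intrinsics; none of it is tied to any real console API): the same multiply-add kernel is written once against the shared ISA, and a scheduler could run it on CPU cores or on Larrabee-style cores depending on what's free, instead of maintaining an HLSL copy and a C copy.

Code:
/* Hypothetical illustration only: one kernel, one ISA. */
#include <xmmintrin.h>
#include <stddef.h>

/* out[i] = a[i] * b[i] + c[i], four floats at a time via SSE. */
static void madd_kernel(float *out, const float *a, const float *b,
                        const float *c, size_t n)
{
    size_t i;
    for (i = 0; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        __m128 vc = _mm_loadu_ps(c + i);
        _mm_storeu_ps(out + i, _mm_add_ps(_mm_mul_ps(va, vb), vc));
    }
    for (; i < n; ++i)                  /* scalar tail for the remainder */
        out[i] = a[i] * b[i] + c[i];
}

In this picture the "DX-like layer" would mostly be deciding where kernels like this run, not translating them into a separate shader ISA.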
 
SPUs arguably currently have a slightly better local store to ALU capacity ratio than current GPUs, but by next generation this gap could very well be gone. It would be a hard choice to either force developers to migrate SPU jobs to the compute shader and toss the SPUs, or offer developers a mess of both.
Unfortunately I have to agree with Aaron on this (that's very rare, it doesn't happen often :) ), CELL is a bit of a dead end architecture-wise. I'm curious to see how much life IBM will be able to breathe into it with CELL2
 
Various thoughts about the previous posts.

First, about the "all-AMD" hypothesis (related to some of Joker's comments):
I guess it's tied to AMD's financial situation.
AMD fares pretty well with its current GPUs; for their own sake, their CPUs had better improve.
It's clear that AMD would be willing to sign a contract even if its profitability is low. My point is that, depending on their overall situation (finances plus how competitive their products are), they could be slightly more greedy. Thus, at least for the CPU, IBM may end up competitive.
Regarding the performance of such a CPU in comparison to, say, a "Cell2", I don't think it would end up within 25%. I mean, given time and good utilization, the current Cell on some demanding workloads could give a run for their money to current CPUs with at least four times Cell's transistor budget.
But I do get Joker454's point, as such a CPU is likely to be included in a GPU-heavy design (in terms of silicon budget) where some number-crunching tasks could be offloaded to the GPU.
And in that case, for a not-that-great CPU, IBM may be more interesting cost-wise.

I see nothing wrong with a launch price of $399. It's clear that manufacturers will subsidize less (if at all) than in the past generation, but that also means they might be willing to drop the price quicker. Even while subsidizing this gen, MS should have reached an affordable price point earlier than they will. The RROD cost them over $1 billion; on ~20 million units that's ~$50 per system of "unexpected" costs.


Regarding the Larrabee pros/cons (related to some of Joshua's comments):
If I understand properly, having only one type of resource will help the learning curve.
Coders in charge of optimization will have to deal with only one architecture. In Larrabee's case the scalar part is already well known, so the focus will be on the SIMD units.
It's new, but Intel claims to have done it right, consulting a bunch of people and coming up with something "compiler friendly".
Larrabee would also benefit from the same advantage the first unified GPUs enjoyed: it will be easier to balance your different workloads on a homogeneous pool of resources.
These factors are likely to help the few dev houses that will try custom solutions, as well as the teams in charge of providing tools, i.e. a software layer close enough to DirectX 11 not to hinder multiplatform game development.

(Related to some of Aaron's comments, and somewhat a response to Joshua too.) Another advantage is that if Intel makes it into the console market (meaning their solution is good enough, so this relies on an "if"), it will be confirmed as an important actor in the GPU space. As Aaron has pointed out (multiple times, I would dare to say), the programming models tied to current GPUs or Cell, for example, are pretty much dead ends.
While Intel could reach high volumes through the IGP/GPU/console/GPGPU spaces, one could notice that while Larrabee requires some extra effort, the gains are likely to last. That's a huge advantage of Larrabee: it's the first "graphics" chip based on an ISA likely to be long-lived (x86 plus a SIMD instruction set). I mean, further iterations of the chip are likely to run given software better without any work from developers.
If Larrabee performs properly (it doesn't need to be the best), I feel its main strength over time will be that it provides a lasting environment to different kinds of developers. And given Intel's overall weight, I expect them to make that clear to everyone / try to force it the same way they did with the SSE instructions, for example.

nAo, as I agree with your response I've nothing relevant to add.
Anyway, thanks for your answer ;)
 
Joshua Luna said:
Would migrating to an x86ish ISA for graphics be beneficial in regards to developers? ... On the other hand DX is familiar and pervasive.
I don't quite get the line of thinking there. Working from the DX level, there's no such thing as a GPU ISA.
And for the few that do go below API level, there's not really a GPU ISA either - there are GPU command lists, which, aside from being different for every piece of hw, are really just a primitive state machine, not a full-blown Turing machine; and there are shader opcodes, which to my knowledge are currently accessible only on one of the 4 platforms out there (counting PC as one), and the way they are used is still a far cry from a GP programming model.

So if we suddenly get this extra layer on the GPU where we DO deal with an ISA, it would only be of interest to very few anyway*. The interesting (and very beneficial) part is the ability to load-balance programmable resources any way the application sees fit, instead of one or another heterogeneous split, and not being limited to any particular hardware (or API) featureset. And the former of these two could very well be an API abstraction; a rough sketch of what I mean follows the footnote.

*These mostly fall under mad-scientist afflicted people (like nAo), and to those, it's a huge boon, because they could finally optimize/write pipelines to their heart's content rather than having their hands tied by hardware design.
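Here's roughly what I mean by load balancing behind an API abstraction; everything in it is made up (a sketch, not any real or proposed API). The application just describes the work, and the runtime decides which pool of programmable resources runs it.

Code:
/* Hypothetical sketch: load balancing hidden behind a tiny job API. */
#include <stddef.h>

typedef void (*kernel_fn)(void *data, size_t count);

enum device_hint { ANY_CORE, PREFER_CPU, PREFER_GPU };

struct job {
    kernel_fn        fn;     /* same compiled kernel, whichever core runs it */
    void            *data;   /* range the kernel operates on                 */
    size_t           count;
    enum device_hint hint;   /* a hint, not a hard split                     */
};

void submit_job(const struct job *j)
{
    /* Stand-in: run immediately on the calling core. A real runtime would
     * queue the job and pick a CPU or GPU core based on current load and
     * the hint, so the heterogeneous split is a scheduling decision. */
    j->fn(j->data, j->count);
}

The point being that the featureset and the split stop being baked into the application.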
 
*These mostly fall under mad-scientist afflicted people (like nAo), and to those, it's a huge boon, because they could finally optimize/write pipelines to their heart's content rather than having their hands tied by hardware design.
:LOL:

EDIT: More seriously, buying studios is not always the best way to keep/attract/gather talent.
As a side effect, the "kind" of people described by Fafalada may want to quit their current studio to have the chance to experiment. It could be a nice way to gather talent/(ninjas :LOL:) without buying studios.

What is your professional opinion about that, guys (as a lot of you are actually in the business)?
 
Unfortunately I have to agree with Aaron on this (that's very rare, it doesn't happen often :) ), CELL is a bit of a dead end architecture-wise. I'm curious to see how much life IBM will be able to breathe into it with CELL2

In post #918 I proposed a method of bringing caches and transparent memory coherence to Cell2. Once this is in place, Cell2 can operate like any other traditional multi-threaded processor, evicting contexts from cache and registers as need be. BC would be handled with cache-line locking for Cell1-style "SPU jobs". SPU jobs would not be the de facto programming model for Cell2, but instead just one option among other programming models. With a cache, the LS is not part of a Cell2 process context.

In Cell2 the SPUs are little more than single-threaded, dual-issue cores that share a hierarchical cache.

1. The LS problem is solved (i.e. the local stores exist physically but not in software).
2. The threading issue is solved (i.e. pre-emptive process scheduling is now a good deal).

The latency issue is improved but not 100% solved. Since the LS is not part of a process context, contexts should be a good deal more lightweight, making pre-emptive scheduling at the process level the standard mode of operation. Prefetching is a software solution that can still be employed; it may be a hassle, but it is not unwieldy or difficult to understand.
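For reference, this is the kind of explicit prefetching I mean, as it's done on Cell1 today: double-buffered DMA into the local store via the MFC intrinsics. The chunk size and process_chunk() are placeholders for whatever the job actually does; it's a sketch, not production code.

Code:
/* Double-buffered streaming from main memory into LS (sketch). */
#include <spu_mfcio.h>
#include <stdint.h>

#define CHUNK 16384                                   /* bytes per DMA       */
static char buf[2][CHUNK] __attribute__((aligned(128)));

extern void process_chunk(char *data, unsigned size); /* placeholder work    */

void stream_from_main_memory(uint64_t ea, unsigned nchunks)
{
    unsigned cur = 0;
    mfc_get(buf[cur], ea, CHUNK, cur, 0, 0);          /* kick off chunk 0     */
    for (unsigned i = 0; i < nchunks; ++i) {
        unsigned next = cur ^ 1;
        if (i + 1 < nchunks)                          /* prefetch chunk i+1   */
            mfc_get(buf[next], ea + (uint64_t)(i + 1) * CHUNK, CHUNK, next, 0, 0);
        mfc_write_tag_mask(1u << cur);                /* wait for chunk i     */
        mfc_read_tag_status_all();
        process_chunk(buf[cur], CHUNK);               /* overlaps with DMA    */
        cur = next;
    }
}

With a cache in Cell2 the same idea degrades gracefully into ordinary prefetch hints rather than explicit transfers.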

To completely solve the latency issue, there seem to be two options to me:
1. Replicate HW contexts in the SPUs.
2. Employ OS-controlled sharing of process state in the kernel/hypervisor.

The first option will put a nice dent in the transistor budget, as well as make Cell hotter and more expensive to produce.

The aforementioned is why I would expect Sony to prefer the latter option, so long as it is transparent and fast. To programmers it would seem no different from using a standard system call to start a new thread in the same process. The OS would track which data is common to threads in the same process... PIDs, program counters, etc.

Since threads in the same process can now share data... or not... faster switches are attainable compared to switches at the process level. Heuristics could attempt to group threads which commonly touch the same data into the same process, but... I'm not confident how well that would work out.

I would hope STI makes these sorts of changes to Cell...or Cell2 would be a non-starter. The changes I am suggesting are not terribly difficult to implement and shouldn't nerf the good things about Cell1.
 
What is the cause for the CELL architecture to end up as a dead end? How can this be possible with the millions, the hours, and the scientific braincells (pardon the pun) of STI? They started from a clean sheet, no? Where did they go wrong? How could they not anticipate the problem with the Local Store? What held STI back from resolving this problem at the design stage?

Must the PS4 be made a unified computing platform? Why not use a CELL2 and a GF11 part? Will that not make the PS4 more powerful than just a unified CPU/GPU?
 
SEGA Genesis launched in 1989 for $249.99
NeoGeo launched in 1990 for $699.99


Sega Genesis launched in September 1989 for $189.99
(ten bucks less than TurboGrafx-16).
Maybe the Genesis test launch was at $249.99 in NYC & L.A. in August 1989,
but I can only speak for what it was in Chicago at full launch.

NEO-GEO (Gold System) launched in late 1990 or early 1991 for $649.99
(console, 2 arcade control sticks, Magician Lord)
NEO-GEO (Silver System) was $399.99
(console, 1 arcade control stick, no game).
This two-SKU approach was in place of an earlier, canceled 1990 plan that called for a $599 price point with a choice of Baseball Stars Professional or NAM-1975.
 
scificube: Perhaps I don't understand your solution, but I really can't see how it's even remotely possible to make the SPUs' local stores disappear as logical entities if you don't also heavily modify the SPU ISA and break backward compatibility, and once you do that you don't need explicit DMA transfers anymore.

If you are willing to do that, and there are zillions of ways to do it, you end up with something that is not really CELL, which to me speaks volumes about its not-so-forward-looking design.
 
I don't know how well you remember 1985, but I'd easily say $199 in 1985 = $399 today.

Sure, but the cost of an NES today is largely irrelevant. What matters is what it cost back then relative to other products at the time. In 1985, PCs, TVs, Walkmans, LaserDiscs, VCRs, etc. were all very expensive. Hi-Fi VCRs were still $700+, and PCs were in the thousands. A $199 NES was an entertainment bargain at that time regardless of what that translates to in today's dollars. Plus, psychology is every bit a part of this, and $199 looked great even in 1985.


liolio said:
I see nothing wrong with a launch price of $399

$399 is doable, but I think whoever launches at $299 and makes devs' lives easy will ultimately win, even if they have the technically lesser box. Microsoft has the advantage here since they are expected to launch a year earlier, and the Xbox brand is no longer a joke, so $299 along with their great developer support is a winning launch combination.
 
Cell is very good at what it does; as long as what you are doing fits its batch-processing model and the local store, it will be hard to beat... the efficiency good programmers can get out of it for some basic stuff like FFT and dense matrix multiplication is unrivalled except by supercomputer architectures, and I very much doubt Larrabee will get anywhere near. It's just not terribly convenient for developers.
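To give an idea of why dense linear algebra maps so well onto it: the SPU's pipelines eat fused multiply-adds, and a 4x4 block of a matrix multiply is basically nothing but those. A naive, unscheduled sketch with the SPU intrinsics (a real kernel would be unrolled, software-pipelined, and fed by DMA into the local store):

Code:
/* Naive 4x4 column-major matrix multiply on an SPU: c = a * b. */
#include <spu_intrinsics.h>

static void mat4_mul(vec_float4 c[4], const vec_float4 a[4], const vec_float4 b[4])
{
    for (int j = 0; j < 4; ++j) {
        /* column j of c = sum over k of (column k of a) scaled by b[k][j] */
        vec_float4 col = spu_mul(a[0], spu_splats(spu_extract(b[j], 0)));
        col = spu_madd(a[1], spu_splats(spu_extract(b[j], 1)), col);
        col = spu_madd(a[2], spu_splats(spu_extract(b[j], 2)), col);
        col = spu_madd(a[3], spu_splats(spu_extract(b[j], 3)), col);
        c[j] = col;
    }
}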
 
What is the cause for the CELL architecture to end up as a dead end?
For a number of reasons. For example, adding hw multi-threading without breaking backward compatibility with older sw would cost an arm and a leg transistor-wise, as you'd need N copies of your 256 KB local store. A 4-hw-thread SPU à la Larrabee would need 4 x 256 KB = 1 MB worth of local memory per SPU.

Supporting a flat memory model on top of the local stores, entirely bypassing explicit DMA, is certainly possible and reasonably feasible, and I expect CELL2 to have something like that.
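Something in that spirit can even be hand-rolled today: a small software cache sitting in the local store that pulls lines in from an effective address on demand. A very rough, read-only sketch (line size, cache size and names are just for illustration; a real version, or whatever CELL2 does in hardware, would be far more involved):

Code:
/* Toy direct-mapped software cache over the local store (read-only). */
#include <spu_mfcio.h>
#include <stdint.h>

#define LINE      128
#define NUM_LINES 256                                 /* 32 KB of LS          */

static char     lines[NUM_LINES][LINE] __attribute__((aligned(128)));
static uint64_t tags[NUM_LINES];                      /* cached line EAs      */
static int      valid[NUM_LINES];

/* Return an LS pointer backing effective address 'ea'. */
static void *cached_load(uint64_t ea)
{
    uint64_t line_ea = ea & ~(uint64_t)(LINE - 1);
    unsigned idx     = (unsigned)(line_ea / LINE) % NUM_LINES;

    if (!valid[idx] || tags[idx] != line_ea) {        /* miss: DMA the line in */
        mfc_get(lines[idx], line_ea, LINE, 0, 0, 0);
        mfc_write_tag_mask(1u << 0);
        mfc_read_tag_status_all();
        tags[idx]  = line_ea;
        valid[idx] = 1;
    }
    return lines[idx] + (ea - line_ea);
}

Done in hardware and made coherent it stops being a programming model issue at all, which is the whole point.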
 