Predict: The Next Generation Console Tech

I suppose my ideal would be a scalable hardware architecture that allows the same code to run on both a console and a tablet, allowing a hardcore machine and a Lite version for portability, for the best of both worlds. Sell the tablet as an accessory (PlayStation Tablet) alongside the console (PS4), and you'd have a winning, but expensive, combination!

Well, that is an interesting idea - I don't mind the portable stuff, but the scalable hardware especially seems interesting.

Would it make sense to construct a console which can be hardware-upgraded, to some extent in the sense of a PC?
If you design it from the start so that after a few years you release an upgrade which enhances overall performance... and the upgradable stuff is something that will get cheap over time - say, for instance, after a few years you can buy a RAM upgrade?!

Then you can increase the life cycle of your console without putting out major development costs such as Kinect or Move. It would also probably push the console ahead of the competition tech-wise. And the risk is not so high, as the hardware development cost can be rather low and the game dev cost wouldn't explode either...
 
I'm surprised no one has revisited the old VR goggles approach. It's been twenty years since the Amiga-based VR systems. They didn't work worth a damn given the limitations of the technology, but surely much, much better could be done in the next generation.

Presumably, the blocking factor is the impracticality of being blinded while playing a game unless you're in a giant hamster wheel or the like. ;-)
 
I suppose the next generation (or the generation after that) could go Augmented Reality VR, in which the game elements are overlaid on the real world. Have the system do full 3D mapping of the room you're in, and then have the gremlins running over your desk, hiding behind your cat, and the like. :smile:
 
Well, that is an interesting idea - I don't mind the portable stuff, but the scalable hardware especially seems interesting.

Would it make sense to construct a console which can be hardware-upgraded, to some extent in the sense of a PC?
I wouldn't go with hardware upgradability. At least, not in terms of replaceable CPU and GPU parts. For economies of scale you want a tight, perfectly designed mobo. However, if the hardware design is upgradeable, you can do like Apple and release generational hardware. The problem is fragmenting the user base, where if you want to target 3rd-gen features you alienate 1st- and 2nd-gen owners; a major headache with PCs. But a closed-hardware, yet uniform, upgradeable design would solve a lot of that.

You'd basically just want static cores (don't change the core design, for 100% compatibility) and add more of them in future iterations, with improved performance coming from the number of cores. With the same workload distribution mechanisms in place in all devices, the code will work across all scales, and developers would just need to factor in scaling of their methods to add more stuff/effects/eye-candy for those with better hardware. I suppose a simple hardware polling system to inform the game/app how many cores are available would solve that.
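Something like that polling step could be as simple as the rough sketch below (plain C++ just to illustrate; std::thread::hardware_concurrency() stands in for whatever core-count query a console SDK would actually expose, and the budget split is a made-up rule):

```cpp
// Minimal sketch of the polling idea in plain C++11. Everything here is
// hypothetical: hardware_concurrency() stands in for whatever core-count
// query a console SDK would actually expose, and the "effects budget" split
// is an arbitrary example of scaling work to the device.
#include <algorithm>
#include <cstdio>
#include <thread>

int main() {
    // May return 0 if the count is unknown; fall back to a single core.
    unsigned cores = std::max(1u, std::thread::hardware_concurrency());

    // Hypothetical rule: keep a couple of cores for the base game loop and
    // spend whatever is left on optional eye-candy (particles, extra post).
    unsigned baseline = std::min(cores, 2u);
    unsigned eyeCandy = cores - baseline;

    std::printf("%u cores detected: %u for the core game, %u for extra effects\n",
                cores, baseline, eyeCandy);
    return 0;
}
```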

And if you do that, it makes sense to be able to sell people a mobo replacement so they can upgrade and keep everything else, instead of throwing away the whole old machine to get a whole new one that's fundamentally the same collection of USB ports and case and PSU and cooling and stuff. Then again, replacing old machines means passing the old one on to friends and family, which, as they all run the same code, means increasing the install base. Unlike conventional console generations, where passing down a console often means giving them your old library, with that old gen gaining no software growth.

So, my slightly revised, Billy Idol-friendly vision of the future is a core architecture (hell, let's go unified single-core solution for graphics and CPU, meaning the most flexible, scalable solution possible) that can be a few cores for a portable, lots of cores for a console, running the same code that scales according to the number of cores, meaning your games and apps are playable on all Core Architecture devices. There's a many-core home console implementation of the Core Architecture for bestest graphics to appease the hardcore. There's a multi-core CorePad tablet for portability. All games, save files, data and apps are shareable between tablet and home console. The tablet can be plugged into TVs and use console peripherals, working as a Console Lite on the move, with an inbuilt camera to provide motion interfacing, perhaps a depth camera. And as chip manufacturing improves, enabling more cores in the same space, maybe the innards of both the console and CorePad can be upgraded. Probably not actually. I like the idea of increasing the user base, even if the hand-me-down buyers only buy cheap apps and minigames! Developers still target the same hardware architecture so they don't have to differentiate between the existing 300 million users of Core Architecture and the new buyers of 18nm Core Architecture Gen 2 or Gen 3, but the upgraders get prettier stuff.

All it needs is someone to create a single-core, all-purpose, fully scalable architecture, and we can go into production. At all of $2000 a system it's bound to do well! :mrgreen:
 
All it needs is someone to create a single-core, all-purpose, fully scalable architecture, and we can go into production. At all of $2000 a system it's bound to do well! :mrgreen:

Take your pick, I'd choose Ontario, since GMA3150 is shit.
[Chart: AMD Ontario (Bobcat) vs Intel Pineview (Atom) comparison]
 
But the graphics transistors become a nuisance to dedicate to non-graphics work. An open architecture won't have idle transistors. I'd rather have an open, fully programmable processing model, to use however you choose. Much more efficient (as long as you can get respectable rendering from the thing).
 
But the graphics transistors become a nuisance to dedicate to non-graphics work. An open architecture won't have idle transistors. I'd rather have an open, fully programmable processing model, to use however you choose. Much more efficient (as long as you can get respectable rendering from the thing).

I don't buy the whole "much more efficient" ideology.
In lay terms: if it takes 10x the "general purpose" transistors to match the performance of a specific task that a "fixed function" block can do with a tenth of the transistors, then those "much more efficient" transistors are actually bloat.
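Put as bare arithmetic (all figures below are made up, just restating the 10x assumption), it looks something like this:

```cpp
// Bare-numbers restatement of the argument above. All figures are made up;
// the only point is the ratio: same throughput, 10x the transistors.
#include <cstdio>

int main() {
    const double throughput     = 1.0;     // same task, same performance (normalised)
    const double fixedFuncXtors = 10e6;    // hypothetical fixed-function block
    const double generalXtors   = 100e6;   // "10x the transistors" for the same job

    std::printf("fixed function : %.2e throughput per transistor\n", throughput / fixedFuncXtors);
    std::printf("general purpose: %.2e throughput per transistor\n", throughput / generalXtors);
    // For this workload the programmable transistors deliver a tenth of the
    // work per transistor, i.e. most of them are along for the ride.
    return 0;
}
```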

Case in point: hardware encoder for video.

Until the transistor (and therefore die size and power) advantages of going fixed function become negligible, this whole Larrabee notion is doomed from the start.

Case in point: a graphics chip that is 500mm^2 failing to compete with an architecture roughly half the size at accelerating 3D rendering.

One day maybe, but for now, Ontario, Llano, Sandy Bridge et al are the way forward.

After three years or so on the market these approaches may indeed need to be refreshed, but until then: no way, and nowhere near efficient enough.
 
I don't buy the whole "much more efficient" ideology.
Me neither. I don't want branch predictors and units for integer bit rotations bloating up my graphics chip. Anything I'd need that for already runs better on my CPU.

Let the GPU keep its domain advantage. The more extensive the GPU instruction set gets, the more parts of it will sit idle on average, and performance per watt and per area will go down.

With all the things you'd have to do to "evolve" a GPU into a barely acceptable CPU replacement (or vice versa), you'd blow so many transistors that you might as well keep separate blocks that are both exceptional at what they do.
 
I still hold the view that the problem is with the graphics paradigm locking itself into the hardware. Once the hardware is fully programmable, alternative ways will be found to structure the data and process it to render pixels. I'm reminded of the recent discussion with Laa-Yosh about the progress of fully raytraced offline renderers. It used to be that full raytracing was so much slower than rasterisation with its many hacks for shadow maps and lightmaps and such, that offline production meant using lots of specific, mixed techniques. As graphics have become more complex, the rendering-time savings on outputting an individual frame have been overtaken by the costs of fine-tuning all these disparate methods, and going with a straight raytracer is proving more economical.

In GPU terms, we're dividing the workload into specific tasks, but as things get more complex, a simpler solution that covers the whole graphics-building operation becomes more appealing. We took our first step that way with unified shaders, and no-one can argue the benefits of those! If processing were no object, a full raytracer would be the ideal solution and would work with fully programmable hardware such that you could turn it to any task.

Of course, reality gets in the way, and in terms of output per mm^2 of silicon, specialised GPU hardware has the edge. That'll probably remain true for a generation or two. However, I'd love to see what the best developers and smartest researchers could come up with using a CPU-only solution. Cell can handle vertex work and post-processing on a level with GPUs. I dare say if GPUs weren't that fast, we'd have some novel data representation and rasterising techniques that could give them a run for their money on less specialised processors.
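Just to illustrate how little fixed-function machinery a "straight raytracer" actually needs, here's a rough, purely hypothetical toy sketch in plain C++ (one ray per pixel, one hard-coded sphere, ASCII output) of the sort of workload that runs on any general-purpose core with no rasterisation hardware at all:

```cpp
// Toy software raytracer: one ray per pixel, one hard-coded sphere, one
// directional light, ASCII output. Purely illustrative (the scene and all
// names are made up); the point is that it's nothing but general-purpose
// maths that any CPU core can run, no rasterisation hardware involved.
#include <cmath>
#include <cstdio>

struct Vec { float x, y, z; };
static Vec   sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Distance along the ray to the sphere, or -1 if the ray misses it.
static float hitSphere(Vec origin, Vec dir, Vec centre, float radius) {
    Vec   oc   = sub(origin, centre);
    float a    = dot(dir, dir);
    float b    = 2.0f * dot(oc, dir);
    float c    = dot(oc, oc) - radius * radius;
    float disc = b * b - 4.0f * a * c;
    return disc < 0.0f ? -1.0f : (-b - std::sqrt(disc)) / (2.0f * a);
}

int main() {
    const int W = 64, H = 32;                 // tiny ASCII "framebuffer"
    const Vec centre{0.0f, 0.0f, -3.0f};      // sphere 3 units in front of the camera
    const Vec light{0.577f, 0.577f, -0.577f}; // normalised directional light

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // Camera at the origin, rays through an image plane at z = -1.
            Vec   dir{(x - W / 2) / float(H), (H / 2 - y) / float(H), -1.0f};
            float t = hitSphere({0.0f, 0.0f, 0.0f}, dir, centre, 1.0f);
            if (t < 0.0f) { std::putchar('.'); continue; }

            // Shade by the angle between the surface normal and the light.
            Vec   p{dir.x * t, dir.y * t, dir.z * t};
            Vec   n   = sub(p, centre);
            float len = std::sqrt(dot(n, n));
            float s   = dot({n.x / len, n.y / len, n.z / len}, light);
            std::putchar(s > 0.66f ? '#' : s > 0.33f ? '+' : s > 0.0f ? '-' : '.');
        }
        std::putchar('\n');
    }
    return 0;
}
```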
 
Me neither. I don't want branch predictors and units for integer bit rotations bloating up my graphics chip. Anything I'd need that for already runs better on my CPU.

Let the GPU keep its domain advantage. The more extensive the GPU instruction set gets, the more parts of it will sit idle on average, and performance per watt and per area will go down.

With all the things you'd have to do to "evolve" a GPU into a barely acceptable CPU replacement (or vice versa), you'd blow so many transistors that you might as well keep separate blocks that are both exceptional at what they do.

This.

I agree that eventually we will be at a place where a Cell-like processor will be able to handle all graphics duties as well as a dedicated chip, but that time has not yet come. Heck, it took until this generation for CPUs to handle all audio work!

As is, these companies will be looking ever closer at perf/watt and die size to keep costs down and reliability up.

Give it a couple more gens and we may be at that place ... or may be onto something such as OnLive, which makes the whole concept moot. ;)
 
So does anyone have any idea as to how Bobcat would scale in terms of performance over X number of cores, i.e. 4 cores, 8 cores, 16 cores, etc.? Would an 8- or 16-core version on 28nm even be worthwhile against other architectures?
 
So does anyone have any idea as to how Bobcat would scale in terms of performance over X number of cores, i.e. 4 cores, 8 cores, 16 cores, etc.? Would an 8- or 16-core version on 28nm even be worthwhile against other architectures?

Not sure how it scales in performance, but a 2x2 (2 Bobcat cores, 2 GPU cores) uses 18W. So an 8x8 would use 72W of power.
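That's just straight linear extrapolation from the 18W figure, something like this rough sketch (it assumes power scales linearly with core count and ignores the uncore, memory interface and clock/voltage differences, so treat it as a crude guess):

```cpp
// Linear extrapolation from the quoted 18W for a 2-core/2-GPU Bobcat part.
// Hypothetical and crude: assumes power scales linearly with core count and
// ignores uncore, memory interface and clock/voltage differences.
#include <cstdio>

int main() {
    const double wattsPer2x2 = 18.0;  // figure quoted above for a 2x2 part
    for (int n : {2, 4, 8, 16}) {     // number of CPU cores (and GPU "cores")
        std::printf("%2dx%-2d config: ~%5.1f W\n", n, n, (n / 2.0) * wattsPer2x2);
    }
    return 0;
}
```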

Who knows if the GPU cores will scale, though; do they use CrossFire, and if so, would CrossFire even scale to 8 GPUs?


I think a nice 4-6 core Bobcat with no built-in GPU, plus Xenos on 28nm, could make a good Xpad though.
 
<...>
In GPU terms, we're dividing the workload into specific tasks, but as things get more complex, a simpler solution that covers the whole graphics-building operation becomes more appealing. We took our first step that way with unified shaders, and no-one can argue the benefits of those!
Oh believe me, I can.
Sure, unified shaders are a great efficiency boost ... once you've decided that you want to do FP32 pixel processing. But why has the industry decided that again? Because they decided early on they wanted to do vertex processing in the pixel shader. What a crazy idea. The whole notion is born out of quirks in PC architecture, and the solution is fairly optimal only for a status quo of slow buses and byzantine APIs, but not necessarily for the tasks performed.

There's also the element of competition between GPU makers and CPU makers. Of course GPU makers wouldn't want to rely on CPU vertex processing, when they can sell their own vertex processing hardware for a good margin. Of course GPU makers want you to do "GPGPU" processing, because then you're buying their hardware.

A console can be designed in a more holistic fashion, with parts and buses that are specified in tandem, and, just as important, with software that is built with awareness of the one fixed set of system specs.

Picture an R420 (~160M transistors, FP24 pixel processing for those too young to remember) or somesuch, with the vertex processing removed, scaled to current high-end transistor counts on a 40nm process (~2B transistors). How wide could it be, 192 pixels per clock, or 256? 1k+ FLOPs per clock? Wouldn't that be fucking awesome?
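Back-of-envelope, the scaling works out something like the sketch below (the R420's ~16 pixel pipelines and ~160M transistors are the only real figures; the rest assumes pipes scale linearly with transistor budget, which ignores the extra interconnect, schedulers and memory controllers a wider chip would need, so the real number would land lower):

```cpp
// Back-of-envelope scaling of the question above. Only the R420 figures are
// real; the linear scaling assumption is hypothetical and optimistic.
#include <cstdio>

int main() {
    const double r420Transistors = 160e6;  // ~160M transistors
    const int    r420PixelPipes  = 16;     // 16 FP24 pixel pipelines
    const double budget          = 2e9;    // ~2B transistors at 40nm

    double scale = budget / r420Transistors;   // ~12.5x the transistor budget
    double pipes = scale * r420PixelPipes;     // ~200 pixels per clock, naively
    std::printf("%.1fx the transistors -> roughly %.0f pixel pipes per clock\n",
                scale, pipes);
    return 0;
}
```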
Shifty Geezer said:
Of course, reality gets in the way and in terms of output per mm^2 silicon, specialised GPU hardware has the edge. That'll probably remain true for a generation or two.
How many generations will there be I wonder. The fabs will probably run out of Si process advances within the next decade, and then different rules will have to apply again. It's always been the universal solution for the PC on many levels: just wait n years and what is now ridiculous bloat will be made viable by the miraculous train of chip miniaturization. But we're going to enter another era where transistors are finite resources, and will have to pull their weight in practice.
 
How many generations will there be I wonder. The fabs will probably run out of Si process advances within the next decade, and then different rules will have to apply again. It's always been the universal solution for the PC on many levels: just wait n years and what is now ridiculous bloat will be made viable by the miraculous train of chip miniaturization.
Yeah, production has to find new ways forward, but they're looking at it. Silicon-sandwich processors and optical processors and the like will get us somewhere.

Computing is interesting for other lessons. We started with mainframes and dumb terminals. Then the terminals got smarter to move workload. Suddenly we didn't need mainframes and everything turned to small CPUs in standalone machines. Then, to get better performance, we had custom ASICs for graphics and audio; there were more and more custom chips in something like the Amiga. And then PC performance took a leap, and where its 2D performance couldn't compete, not being tied to a graphics paradigm it was able to offer the new 3D games and leave the Amiga, with all its fancy custom processing, in the dust. Then, to get 3D faster, we created custom chips that grew around a paradigm. Now we have people looking at custom hardware like the Amiga's and seeing too many constrictions, wanting their processors to be more flexible, and we also have people looking at the mainframe idea again!

Had the Amiga got a hardware refresh with custom 3D hardware, it'd have outperformed the PC, but the flexibility of the PC meant it could do things locked-in hardware couldn't. This is the prime motivator for flexible hardware. At the moment we can only compare flexible hardware with custom GPU hardware at the same types of rendering we have now. We can't compare the performance of new, novel rendering techniques with hardware rasterisers, because those new, novel rendering techniques don't exist and won't exist until there's a hardware platform to run them! It's really only on faith that I believe such a platform would have all sorts of clever solutions not possible with GPUs, meaning it could compete.
 
They're now predicting graphene to be the "next big thing"



At least IBM has made some progress on this, I believe.
 