Predict: The Next Generation Console Tech

If Sony were to continue with an improved Cell and allow backwards compatibility, is it then reasonable to expect PS3 games to have better performance on the PS4 than when played on the PS3?

My 2 cents... maybe yes, if the PS4's Cell came with 16 SPUs, putting everything at 1080p and in 3D (the extra SPUs doing the job of reprocessing and upscaling).
 
And nothing really beats the latency of sending commands directly from CPU L2 cache to GPU. I really wish future fusion chips would support something like that also. Fast CPU<->GPU callbacks would offer a huge array of new options.
Do you use that feature of the 360? If so, can you give an example?
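In the meantime, purely to illustrate the general idea (this is a hypothetical sketch, not the 360's actual interface; all names and the layout are made up): the appeal is that the CPU simply writes command words into a ring buffer the GPU is watching, with no driver or kernel round trip in between.

Code:
#include <stdint.h>

/* Hypothetical memory-mapped command ring shared by CPU and GPU
   (made-up layout, not any real console's interface). */
#define RING_SIZE 1024u

typedef struct {
    volatile uint32_t write_ptr;      /* advanced by the CPU */
    volatile uint32_t read_ptr;       /* advanced by the GPU */
    uint32_t          cmds[RING_SIZE];
} cmd_ring_t;

/* Push one command word. The CPU-side cost is a cache-line write and a
   pointer bump -- no system call, no driver thread, no copy. A real
   implementation would also need a write barrier before publishing
   write_ptr, so the GPU never sees the pointer before the data. */
static void push_cmd(cmd_ring_t *ring, uint32_t cmd)
{
    uint32_t wp = ring->write_ptr;

    while (((wp + 1u) % RING_SIZE) == ring->read_ptr)
        ;  /* ring full: spin until the GPU consumes something */

    ring->cmds[wp] = cmd;
    ring->write_ptr = (wp + 1u) % RING_SIZE;
}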
 
Shifty, excellent post!

Let me dream a little more...
A Visualizer-style Cell GPU... it could maybe even solve PS3 backwards compatibility: use an ARM as the CPU for a shared "PS4 and NGP universe"*, with Cell as the GPU and SGX as the pixel engine... just an (extremely crazy) thought.

* (some kind of boot mode, like "mode 1" = the ARM CPU works with the SGX, and "mode 2" = the Cell-GPU Visualizer works with the SGX)

Ha ha... I don't dare dream in that direction, to avoid any disappointment. I think Sony will miss their target date again if the system is too complex. I like the idea of mixing SPUs and specialized GPU cores, though.
 
Given the length of this thread (in time and posts) I am just going to ask away rather than try to dig back through at this point.

Did anyone think of approaching this from the opposite direction? Everyone is trying to guess what will be done. How about we settle on a budget in power, dollars and mm² of die area, and see what people here want? Say 200 watts and $400. You want more RAM, you trade off X mm² of GPU or CPU for it. Want eDRAM, you lose dollars and CPU/GPU area, and gain more heat, etc., etc. There seems to be enough knowledge here to make some educated guesses as to what you could do with a given budget. Why don't we pick a rough date and node and see what people would consider acceptable tradeoffs?
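Purely as an illustration of the kind of bookkeeping I mean (every number here is made up for the sake of the exercise, not a prediction): out of 200 W you might earmark roughly 100 W for the GPU, 50 W for the CPU, 25 W for RAM and 25 W for the drive, fans and everything else; out of a ~$400 build cost, maybe $120 GPU, $80 CPU, $60 RAM, $60 optical/HDD and $80 for the board, PSU, case and controller. Want a couple more GB of RAM? Take it out of the GPU's dollar and watt columns, and so on.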
 
The main 'problem' with SGX's peak performance is that it's always in low-power devices. If it scales linearly, there's nothing to stop someone putting loads of cores into a larger, hotter chip. Given that SGX offers the best performance per watt and per degree of temperature, then all things being equal (which they're not), with the same power and thermal parameters as an nVidia or ATi part SGX should be competitive. However, there are features and all sorts to worry about. I wouldn't write SGX off entirely, though. It may not offer the best performance for a PS4, say, but the end result would be usable, and in a unified multi-device architecture I dare say the added value and development advantages would afford the platform more consumer interest than more raw graphics power would. Sony would have more to gain from this than MS; MS can use whatever hardware plus their DX software layer to enable cross-device compatibility.

I think Dave or someone else said that as chips scale up in size, more power is expended moving data around to the various units than is actually used to compute the programs themselves! I think if you scale up a lot of smaller units you'll run into a data-movement wall, as the quantity of information that needs to be shifted would scale at least linearly with the number of SGX units, especially as they would be employing tile-based rendering...
 
Given the length of this thread (in time and posts) I am just going to ask away rather than try to dig back through at this point.

Did anyone think of approaching this from the opposite direction? Everyone is trying to guess what will be done. How about we settle on a budget in power, dollars and mm² of die area, and see what people here want? Say 200 watts and $400. You want more RAM, you trade off X mm² of GPU or CPU for it. Want eDRAM, you lose dollars and CPU/GPU area, and gain more heat, etc., etc. There seems to be enough knowledge here to make some educated guesses as to what you could do with a given budget. Why don't we pick a rough date and node and see what people would consider acceptable tradeoffs?

Should be something on those points in the last 5 pages :)

But I agree, there is no point in making another super-expensive box. I know people here want the specs to be as high as possible, but when you think about it, going with lower specs might be better in the long run. Sony and MS both lost tons of money on the current consoles; that is why they want to keep this gen going as long as possible. If next time they launched a system with somewhat lower specs, but at a price point where they lose little money, break even or maybe even make some money on the hardware from the beginning, instead of aiming for a 7+ year lifecycle as the main console, we could have a new console every ~5 years, because nobody would have to earn back the couple of billion they threw away in the first few years.
 
The 360 is still a good, powerful, balanced console, and it was profitable after a year (or at least very soon).
The RROD was the source of the financial problem, but if it weren't for that it would have been a perfect box, so I think we must/can expect a console with the same budget and philosophy from Microsoft.

The PS3 was extremely expensive due to the high price/low yields of Cell and the extremely high cost of the Blu-ray laser diode, so I think we must not count it, because that will hardly happen again.

So, no Sony Wii or MS Wii :O
 
The next-gen consoles should be powerful; what is the point of new consoles if they are not going to look much better than the current gen?

I want games to look like Epic's Samaritan demo. If 3D (tri-gate) transistor tech does what Intel claims it does, I hope that by mid-2013 a console will come out that uses it and is at least as fast as a GTX 580 (if IBM, GF, TSMC, AMD and Nvidia also catch up with the tech).
 
I think if you scale up a lot of smaller units you'll run into a data-movement wall, as the quantity of information that needs to be shifted would scale at least linearly with the number of SGX units, especially as they would be employing tile-based rendering...
That's not how it works in our architecture (linear scaling of 'data' requirements with MP), and I'm not sure what TBDR has to do with it either. Can you tell me what you were thinking of specifically?
 
Err, looking at the way PowerVR progresses with each generation, I am very doubtful Series 6 will reach 1 TFLOPS even in an MP16 config @ 800 MHz. And the core is tiny; in 200 mm² you can likely fit more than 16.
You're very wrong about the flops, but very right about what can fit in that area ;)
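For anyone wanting to sanity-check estimates like that, the arithmetic is straightforward (the per-core throughput is the unknown here, so nothing below is a spec): an MP16 configuration at 800 MHz gives 16 × 0.8 GHz = 12.8 billion core-clocks per second in aggregate, so reaching 1 TFLOPS needs about 1000 GFLOPS / 12.8 ≈ 78 FLOPs per core per clock. Halve or double that per-core figure and the total moves to roughly 0.5 or 2 TFLOPS accordingly.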
 
The next-gen consoles should be powerful; what is the point of new consoles if they are not going to look much better than the current gen?

I want games to look like Epic's Samaritan demo. If 3D (tri-gate) transistor tech does what Intel claims it does, I hope that by mid-2013 a console will come out that uses it and is at least as fast as a GTX 580 (if IBM, GF, TSMC, AMD and Nvidia also catch up with the tech).
They won't before their 14nm process.
Nvidia has no foundry by the way.
 
Hey Rys, if you're here with your expert opinion - what do you think about the idea of SGX cores embedded in a GCPU architecture, specifically Cell with SPUs and SGX? Does that sound workable to you both as a hardware platform and a scalable architecture that'd fit different devices? What would be the negatives?
 
You're very wrong about the flops, but very right about what can fit in that area ;)

:) I hope I am very wrong too. I would really like to see TBDR return in a next-gen console; the advantages are all really suitable for a console. Let's hope Sony sees that after the NGP when they're designing the PS4.

Though tell me something: how does TBDR handle tessellation and displacement maps?
 
Hey Rys, if you're here with your expert opinion - what do you think about the idea of SGX cores embedded in a GCPU architecture, specifically Cell with SPUs and SGX? Does that sound workable to you both as a hardware platform and a scalable architecture that'd fit different devices? What would be the negatives?
I think it's a great idea, and there's nothing inherent to either Cell or an architecture like SGX that would stop you wanting to pair them on the same die other than memories and how they'd talk to each other efficiently. It'd be a very workable hardware platform (although it'd need a really good memory controller/arbiter and on-chip memories) and would scale just fine down (and up) from a fixed console-strength part.

Negatives? I'm not sure there are many, at least if you're not the hardware designer trying to integrate the two (along with everything else the chip would need). It'd be really nice from a game developer's perspective, and it should be really quite efficient (back of a napkin calcs for that, and assuming Rogue).

The hard part would be communication, and you'd really want Rogue to be able to issue commands to the SPUs.
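Purely as a sketch of what that could look like at the software level (everything here is hypothetical and made up for illustration, not how any real part works): a small job queue in shared on-chip memory that the GPU front end appends to and idle SPUs drain.

Code:
#include <stdint.h>

/* Hypothetical job descriptor in shared on-chip memory (all made up).
   The GPU front end appends entries; idle SPUs pull them off and DMA
   the referenced data into local store before running the kernel. */
typedef struct {
    uint32_t kernel_id;   /* which SPU program to run           */
    uint32_t input_ea;    /* effective address of the input     */
    uint32_t output_ea;   /* effective address for the results  */
    uint32_t size;        /* bytes to process                   */
} spu_job_t;

#define QUEUE_SIZE 256u

typedef struct {
    volatile uint32_t head;          /* written by the GPU  */
    volatile uint32_t tail;          /* written by the SPUs */
    spu_job_t         jobs[QUEUE_SIZE];
} spu_job_queue_t;

/* SPU side: take the next job if there is one. A real version would
   update 'tail' atomically so several SPUs can share one queue, and
   add the appropriate fences; both are omitted for brevity. */
static int pop_job(spu_job_queue_t *q, spu_job_t *out)
{
    uint32_t t = q->tail;

    if (t == q->head)
        return 0;                    /* nothing queued */

    *out = q->jobs[t % QUEUE_SIZE];
    q->tail = t + 1u;
    return 1;
}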
 
Though tell me something: how does TBDR handle tessellation and displacement maps?
Just fine :LOL: Did you have anything in mind there where you think a TBDR wouldn't do so well when manipulating or amplifying/deamplifying geometry?
 
The next-gen consoles should be powerful; what is the point of new consoles if they are not going to look much better than the current gen?

That's a bizarre question, when the Wii demonstrated that adding motion control proved to be enough to outclass its competitors in the marketplace, despite barely improving on the graphics of its predecessor at all! The Kinect accessory alone has moved millions of 360s, even though the graphics obviously didn't change.

Why people look to consoles selling for a few hundred bucks for technical innovation is beyond me. Their graphics are a means to an end, and that end is entertainment; people care about graphics quality only insofar as it improves the entertainment value. While the goalposts certainly move with time, the long-term trend can only be that the minutiae of graphics rendering become increasingly less relevant to the entertainment provided.

Technology fetishists will have to look elsewhere for their kicks.
 
That's not how it works in our architecture (linear scaling of 'data' requirements with MP), and I'm not sure what TBDR has to do with it either. Can you tell me what you were thinking of specifically?

I heard, I believe from Dave, that moving data around a modern ATI graphics chip is more expensive in terms of power consumption than the actual calculations themselves. I assumed that the same would apply to all graphics hardware, given they seem to follow many of the same basic principles.

So even if the data quantity increased linearly, maybe it's the size of the chip and the distance each bit needs to be sent that would cause the power use to scale up faster?
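To put rough numbers on that intuition (these are illustrative orders of magnitude, not measurements of any particular chip): if an arithmetic operation costs on the order of tens of picojoules, while moving its operands across several millimetres of on-chip wire costs a comparable amount, then on a big die the wire energy per operation can match or exceed the ALU energy, and going off-chip to DRAM costs far more again. Wire energy also grows with distance, so a physically larger chip pays more per bit moved even if the total bit count stays the same, which is exactly the scaling worry above.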

I always wanted to see if your graphics cores would be viable in a console... I was hoping there'd be some kind of Microsoft ARM console, given their hopes for some synergy with Windows 8...

P.S. What does Molly! mean?
 
That's not how it works in our architecture (linear scaling of 'data' requirements with MP), and I'm not sure what TBDR has to do with it either. Can you tell me what you were thinking of specifically?

Doesn't the GPU-dedicated RAM need to increase in bandwidth as you increase the number of cores? Not theoretically of course, but practically.


You couldn't get a "high-end" version of your GPUs to "infinitely" scale linearly with increasing the number of cores without increasing memory bandwidth, right?
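To put my worry in illustrative numbers (made up purely for the sake of argument): if one core at one clock is comfortable with, say, 1 GB/s of external bandwidth, a naive 16-core version at four times the clock would seem to want 16 × 4 = 64 GB/s, which is already a serious memory subsystem. The question is how much of that demand a TBDR keeps on-chip in the tile buffers rather than pushing out to RAM.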
 
I have to add, though, that one area where consoles can be interesting from a technical standpoint is how they can improve efficiency vs. more modular PC solutions, due to the device being conceived as a whole and having clearly defined targets (games are the applications, with a given target resolution at that).

Efficiency and intelligent compromise can certainly be interesting - simply "adding more" is less so.
 