Predict: Next gen console tech (9th iteration and 10th iteration edition) [2014 - 2017]

I still like Nvidia for a wild card possibility for next gen.

Assuming they still have this lead, and assuming they're willing to price competitively, sure. But I'd assume AMD would be more willing to take a hit than Nvidia, although obviously things can change in 5 years or so.
 
Will AMD still exist :O
The FY 2014 report is due any day now. 2014 Q1 was 17th April, Q2 was 17th July, Q3 was 16th October. Their stock has been tumbling since July and it's just dipped again because some are expecting bad news and are pre-emptively selling.
 
What's the old saying? Buy on rumors, sell on news.
I thought it was "Wall Street has no fucking idea what they're doing."

Every quarter before Apple releases a financial statement, I buy some Apple stock when it dips pre-announcement. 1-2 days after the announcement I sell it once it's recovered and pocket anywhere between £300 and £800. I've been doing this every 3 months for the last four years.

Please Wall Street, never change!
 
I thought it was "Wall Street has no fucking idea what they're doing."

Every quarter before Apple releases a financial statement, I buy some Apple stock when it dips pre-announcement. 1-2 days after the announcement I sell it once it's recovered and pocket anywhere between £300 and £800. I've been doing this every 3 months for the last four years.

Please Wall Street, never change!
That just proves they know exactly what they're doing :mrgreen:
 
Nvidia's 960 would have been a good GPU for this gen: 2.3 teraflops on a 128-bit bus, with Nvidia's bandwidth-optimizing tech making that possible.

I suppose that would have limited it to 4GB of RAM, but Samsung has double-density chips coming that would have enabled 8GB.

I still like Nvidia for a wild card possibility for next gen.
It uses too much power for a console.
 
Nvidia's 960 would have been a good GPU for this gen: 2.3 teraflops on a 128-bit bus, with Nvidia's bandwidth-optimizing tech making that possible.
Yes, and once the next Radeons are released those would have been too; same with the next GeForces, and so on.
The bandwidth-saving techniques you're referring to exist on AMD's side too: Tonga features the "same" thing (similar, at least; both are delta colour compression techniques).
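For rough context (back-of-the-envelope numbers, assuming the commonly listed 960 specs of 1024 shaders at ~1.1GHz with 7Gbps GDDR5): 1024 shaders x 2 ops x ~1.13GHz ≈ 2.3 TFLOPS, while 128 bits x 7Gbps / 8 = 112GB/s of raw bandwidth. A bus that narrow is exactly why the delta colour compression matters so much.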
 
Would it not be reasonable to expect the next Xbox and PS5 to be built on 10nm FinFET silicon, given the likely 2018 - 2020 timeframe?

http://www.extremetech.com/computin...-plans-massive-16b-fab-investment-report-says

Two interesting pieces of news regarding fabs and foundries today. First, TSMC is planning a $16B mega-fab installation at the Taichung science park in Central Taiwan. The planned investment would be even larger than the expected growth in fab costs through the 10nm node and suggests that TSMC is sending a message to its rivals at Samsung, GlobalFoundries, and to some extent, Intel — the company is willing to spend whatever it takes to regain its lost ground and grow its market share.

TSMC has projected sales of around $7 billion for the first quarter of 2015 and expects to spend between $11.5 to $12 billion on capital expenditures for 2015, as compared to $9.5 billion in 2014. Little information about the new plant is available at this time, but we can make some educated guesses. Given how long it takes to build a fab (typically 2-3 years from announcement to production), we can safely assume that this new facility would likely come online at 10nm or below. There’s still a small chance that TSMC might use the new facility to do some 450mm wafer production or to launch EUV, but the company hasn’t spoken to either plan. With 450mm wafers reportedly dead for now and EUV still mired in uncertainty, it’s difficult to predict whether either option will be available. If EUV does come online, it’ll likely be at a brand-new facility like this first, and installed as the default manufacturing option rather than retrofitted.

http://www.theregister.co.uk/2015/02/11/tsmc_record_revenues/

Chip giant TSMC, flush with record sales, plans $16bn fab build-out
The race to 10-nanometer process is on



 
If they aim for a 100-150W box again, I think they have to be on 10nm.

Going from 28nm to 14/16nm is effectively a traditional node shrink (28nm->20nm hasn't scaled well for power and performance, only for density), and I don't think that's enough room to offer a generational improvement. Couple a 14nm SoC with HBM and maybe you can get a 3-4x performance improvement over last gen (assuming a 100-150W console)? I don't know if that's good enough; maybe it is if they're willing to go to a 200-250W box for the first rev and then shrink to 10nm later.

I think the process nodes that follow 10nm will play into that decision as well. If they're starting out on 10nm, what options will they have for further shrinks, and at what cost? Will 7nm and 5nm nodes actually happen, and will they be economical?
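As a purely illustrative bit of maths (hypothetical numbers, not a prediction): the current 28nm consoles get roughly 1.8 TFLOPS out of a 100-150W box. If 14/16nm FinFET delivers around 2x performance per watt, as the foundry marketing suggests, the same power budget buys roughly 3.6 TFLOPS, i.e. only about a 2x jump; HBM, architectural gains, or a bigger power budget would have to make up the rest.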
 
I think Microsoft need to do almost exactly what they did last gen; separate CPU and GPU (both large chips thankyouverymuch), fast RAM, and a separate pool of super-fast memory slapped off chip (don't wanna waste that valuable CPU space.)

The CPU chip can have a bunch of fast desktop cores surrounded by a huge pool of mobile processors. I don't care for 'balance'. :p
 
I think the days of separate CPU and GPU, for anything other than high end and expensive machines, are long gone.
 
I think the days of separate CPU and GPU, for anything other than high end and expensive machines, are long gone.
Agreed; if anything, we can expect the "CPU" and "GPU" to work together even more tightly.
 
What about getting totally rid of the CPU? :runaway:

Could they use the GPU compute for all CPU tasks instead? Would that be doable and reasonably efficient?

Maybe the GPU is less efficient than a CPU for CPU tasks, but async compute (and other GPU unified-pipeline stuff) could automatically make the best use of all available resources. No more CPU-bottlenecked or GPU-bottlenecked games.

They would obviously keep an ARM core for background tasks (network, HDD and other I/O) and standby mode.
 
GPUs just aren't that good at serialized code, and not everything can be parallelized. But AMD is trying to paint a simpler future by already describing their APUs as having "12 compute units", of which 4 are CPU cores and 8 are GPU/GCN "cores".
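To put very rough numbers on the "not everything can be parallelized" part (textbook Amdahl's law with made-up figures): if 25% of a frame's work is inherently serial, then even with an unlimited number of parallel units the best possible speedup is 1 / 0.25 = 4x; cut the serial portion to 10% and you still top out at 10x. The serial part is always the wall.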
 
Could they use the GPU compute for all CPU tasks instead? Would that be doable and reasonably efficient?
No. GPUs are basically only good at running tight "inner loops" (lots of iterations over the same code). GPUs lack the ability to run serial code efficiently (GPU latency hiding breaks in that case) and they lack the ability to spill registers to the stack. This makes running complex code bases with lots of function calls impossible on a GPU. Even simple things such as implementing efficient recursive functions can be hard problems on GPUs (http://stackoverflow.com/questions/14309997/how-to-implement-deep-recursion-on-cuda). In order to be able to work alone, GPUs would need to be able to change the virtual page mappings, handle page faults, system interrupts and IO. None of this is supported by the current designs.
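For illustration, a minimal CUDA sketch of the contrast (made-up kernels, not from any real codebase):

__global__ void scale_array(float* data, float k, int n)
{
    // The kind of "tight inner loop" work GPUs are built for: thousands of
    // independent threads, so memory latency is hidden by swapping warps.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= k;
}

__global__ void walk_chain(const int* next, int start, int steps, int* out)
{
    // A serial dependency chain (walking a linked structure). Only one thread
    // can make progress, each load depends on the previous one, and there is
    // nothing left to hide latency with - the rest of the GPU sits idle.
    int node = start;
    for (int s = 0; s < steps; ++s)
        node = next[node];
    *out = node;
}

// The launch configuration tells the story:
// scale_array<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);   // n threads busy
// walk_chain<<<1, 1>>>(d_next, 0, steps, d_out);            // 1 thread busy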
 
No. GPUs are basically only good at running tight "inner loops" (lots of iterations over the same code). GPUs lack the ability to run serial code efficiently (GPU latency hiding breaks in that case) and they lack the ability to spill registers to the stack. This makes running complex code bases with lots of function calls impossible on a GPU. Even simple things such as implementing efficient recursive functions can be hard problems on GPUs (http://stackoverflow.com/questions/14309997/how-to-implement-deep-recursion-on-cuda). In order to be able to work alone, GPUs would need to be able to change the virtual page mappings, handle page faults, system interrupts and IO. None of this is supported by the current designs.

Thanks for the insight, Sebbbi. I have a question though. As GPU compute moves forward and improves beyond the current (and perhaps even near-future) capability, will GPU designs ever reach a point where they can efficiently cover the serial and complex code processing demands of current CPUs? Essentially, will GPUs and CPUs ever converge into a sort of hybrid design, and if so, how far away are we from that?

Or are the essences of each design so diametrically opposed that the future of GPUs and CPUs is just GPUs and CPUs more closely coupled in a tighter package?
 
Thanks for the insight, Sebbbi. I have a question though. As GPU compute moves forward and improves beyond the current (and perhaps even near-future) capability, will GPU designs ever reach a point where they can efficiently cover the serial and complex code processing demands of current CPUs? Essentially, will GPUs and CPUs ever converge into a sort of hybrid design, and if so, how far away are we from that?

Or are the essences of each design so diametrically opposed that the future of GPUs and CPUs is just GPUs and CPUs more closely coupled in a tighter package?

I am not a graphics developer, just a Java developer on a banking project, but I think CPUs are here to stay. They are the perfect complement to the GPU in game development.
 
Thanks for the insight, Sebbbi. I have a question though. As GPU compute moves forward and improves beyond the current (and perhaps even near-future) capability, will GPU designs ever reach a point where they can efficiently cover the serial and complex code processing demands of current CPUs
I'm not sure they could without diluting the parallel power. You've two discrete workloads, serial and parallel. GPUs are as good at what they do as they are because they focus on parallel strengths. If you add more silicon to the GPU cores to handle serial workloads, you'll decrease their parallel power by having fewer cores.

The idea of a single processor to rule them all is a highly parallel CPU with great serial strength per core that works on graphics workloads effectively by using complex algorithms. You'd need such a paradigm shift in graphics code for this to ever be feasible. As long as graphics is dependent on massive throughput and game logic is dependent on lots of conditions and function calls, it doesn't make sense to mix the two. You'd end up with a processor that's a jack of all trades, master of none, trumped in both serial and parallel workloads by the same amount of silicon spent on dedicated cores (Cell...).

The future of the GPU is to move away from 'graphics' and become a 'parallel processor', so we'd have an SPU and a PPU (serial processing unit and parallel processing unit), with the two working together at the greatest efficiency.
 