Predict: The Next Generation Console Tech

Status
Not open for further replies.
The console makers would need to get together and agree to go just as far ahead of the curve as each other, or find someone willing to pay the money to implement the changes.
I'm not sure about that; look at what DICE did with Frostbite 2 on the PS3. If the market share is there, some developers are willing to go through some pain. I actually think DICE says as much in one of their papers.
I think the issue is more the risk from a competitive pov: the risk that an early showing underperforms, which, looking at the money involved, I agree is a massive hurdle.
The direction the money is going seems to favor being able to make use of what we already have or will soon have, which leverages existing investments and expertise.
Indeed, especially as we are speaking about big money. But it is a case where market dynamics hold up progress. I mean, sebbbi, Andrew Lauritzen and I guess plenty of others are toying now with software rendering; possibly intellectual curiosity for the former, the same applies for the latter, and he is a researcher, so well :)
It seems that some pretty effective languages are available now. Actually, I wonder at this point if the issue is more hardware: there is no proper hardware. There is also no market if no console manufacturer is interested in jumping forward. In the software rendering thread people seem to agree that it won't be before 5/6 years that things could change, and it is only a "could", as market dynamics and the choices made by the big actors in the field will set the tide.
Without consoles (at least one) there is no market for such a device.

The thing is, the investment to make that happen would be massive. From what I get from the discussion going on in the aforementioned thread, you need sane single-thread performance, quite possibly 4 hardware threads, solid throughput from the SIMD units, quite some cache, and, icing on the cake, a lot of on-chip bandwidth to cope with the requirements of the various stages of 3D rendering. Quite an extensive list of requirements that calls for an engineering jewel/marvel.
ISA concerns are generally secondary to things like implementation and design, unless something is truly difficult to implement well, like x87 floating point instructions.
Well, I can't dispute your pov; I stated that based on Keldor's comments on the matter (same thread as above).
A single SIMD unit would have made Haswell worse at existing workloads. The inflexibility of a single vector pipeline could conceivably make it worse overall for most loads that get by with shorter vectors.
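A toy way to see the issue (purely illustrative; the port counts and widths below are assumptions, not Haswell's actual configuration): with equal total SIMD width, two narrower pipes can issue twice as many independent short-vector ops per cycle as one wide pipe, since a 128-bit op cannot be packed with another to fill a wider lane.

```python
import math

# Toy issue-rate model. One op occupies one issue port per cycle,
# regardless of the op's vector width, so short-vector workloads
# are limited by port count, not by total SIMD width.
def cycles_needed(num_ops, issue_ports):
    """Cycles to issue num_ops independent ops over issue_ports ports."""
    return math.ceil(num_ops / issue_ports)

ops = 8  # eight independent 128-bit operations
print(cycles_needed(ops, issue_ports=2))  # two 256-bit pipes -> 4 cycles
print(cycles_needed(ops, issue_ports=1))  # one 512-bit pipe  -> 8 cycles
```

Real scheduling is far more complex, but the asymmetry is the point: the wide single-pipe design only wins when the code actually uses full-width vectors.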
That is an interesting thought. I would bet that Intel is working on a Larrabee replacement; I wonder what they will come up with.

Speaking of consoles and what is possible now, I wonder, as some stated (always the same thread), if 'well rounded' throughput CPU cores (by which I mean CPU cores that don't give up on anything but peak SIMD performance à la Larrabee) backed up by a tiny IGP could have been doable.
Especially looking at what DICE did on the PS3, it may have lessened the pressure on the CPU.

Honestly I won't go further, as I can't contribute in any sensible way to the aforementioned topic, and you and others had a really interesting discussion on the matter. For some reason I think that ultimately CPUs are superior, and that if you have dedicated units (as with graphics cards now, or the video engines in GPUs or Intel processors, or sound cards, or whatever accelerators you may find in, for example, a PowerEN), those devices should bring great bang for the buck (both power and area efficient).
There is something that I don't like: graphics workloads get more complicated and GPUs try to deal with more general-purpose tasks, while in the meantime CPUs (whether they are widely available or not) have also improved their throughput and still have quite some room. To me it doesn't look "efficient": you have two types of resources (on which you spend quite some silicon, both burn power, etc.) that "conceptually" fight for the same workloads (whereas it should be easier to avoid workloads fighting for the same resources).
From a software pov it looks like a consistent headache to make both things work together: they have different strengths, load balancing should prove hard, you have to optimize code for two different architectures (part of which could be hidden, but it is still there on the shoulders of the driver teams), etc.
To me it looks quite like a dreadful situation at this point; it is not the same as, say, questioning the validity of having video processing units.
I think the bulk of the computations should be moved to the CPU cores (possibly still to be designed).

At the same time I don't really agree with Nick; I don't see the future (any time soon) consisting of plenty of massive cores (like Haswell and its successors). Though to me that doesn't conflict with the idea of having the bulk of the computations done on CPU cores.
I kind of have the reverse position: I would more easily question how many of those cores are needed in the personal realm. Looking at what a 360 achieves with a pretty slow CPU, and at the tasks run by the average user, I would think that if customers need many CPU cores, they don't need many "big cores". For example, if I look at how Flash is accelerated by the GPU, it looks like quite an effort on the software side; if they got this working on a GPU, the result would be greater on many "well rounded CPU cores".
I do agree with you when you answered Nick that we do not need 16+ (I think the number was 24) Haswell-type cores, though I'm not sure that rules out the need for more CPU cores, nor that GPUs should completely disappear anytime soon; they could focus on the stuff they are massively faster at.

Edit: for example, looking at that post makes me really wonder about the extent to which one should invest silicon in the GPU (not discard it altogether).
 
So.. From what I'm gathering here in terms of 'generational leap' the Durango is going to offer fairly mediocre performance? Or will the jump (from what is most commonly speculated) be greater than from Xbox ---> 360?

Also, why all of this love from MS for APU's? Why not just stick a regular chip from AMD in there? I mean apart from power benefits the performance has to suck by comparison..
 
I don't get it... why go to all this trouble to assist an abomination of a GPU instead of just getting a better GPU in the first place?

Because there is a limit to how good a traditional GPU can be in a console due to TDP constraints. You won't be able to get GTX 680 performance by squeezing more and more CUs into a console. You make a trade-off by sacrificing raw compute power for dedicated graphical power, thereby having a GPU that is less flexible but capable of producing games that look GTX 680-level good.

Just my two cents.
 
So.. From what I'm gathering here in terms of 'generational leap' the Durango is going to offer fairly mediocre performance? Or will the jump (from what is most commonly speculated) be greater than from Xbox ---> 360?

Also, why all of this love from MS for APU's? Why not just stick a regular chip from AMD in there? I mean apart from power benefits the performance has to suck by comparison..


APU performance is low because the GPU inside is low-power, not because it is an APU.

Actually, given the size of the image, I doubt it's all that low-res, and a large image means lots of lumens, which generally doesn't translate to cheap, nor particularly robust, though it may be feasible with some of the newer projector technologies.
The projector has to be mounted with a complete view of the front wall. That probably means a ceiling mount; how many people are in a position to do that?

I guess it can be an add-on.
 
So.. From what I'm gathering here in terms of 'generational leap' the Durango is going to offer fairly mediocre performance? Or will the jump (from what is most commonly speculated) be greater than from Xbox ---> 360?

Also, why all of this love from MS for APU's? Why not just stick a regular chip from AMD in there? I mean apart from power benefits the performance has to suck by comparison..

Greater, but with much more time in between. As we stand now, we have:
~x4 CPU-wise
~x16 RAM-wise
~x8 GPU-wise

over the Xbox 360. That doesn't take into account the overall efficiency and features gained in the last 8 years. So I guess that in the end we will get our ~x10 increment, even if the speculated specs are a lot behind a high-end PC (which, by the way, consumes 3-4x more).
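One rough way to collapse those per-component multipliers into a single "generation" number is the geometric mean of the individual factors (a sketch using the figures quoted above; the ~x10 in the post presumably also credits the efficiency and feature gains mentioned):

```python
from math import prod

# Rough per-component gains over the Xbox 360, as quoted above
gains = {"CPU": 4, "RAM": 16, "GPU": 8}

# Geometric mean balances the factors instead of letting one dominate
overall = prod(gains.values()) ** (1 / len(gains))
print(f"overall ~x{overall:.1f}")  # prints "overall ~x8.0"
```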
 
So.. From what I'm gathering here in terms of 'generational leap' the Durango is going to offer fairly mediocre performance? Or will the jump (from what is most commonly speculated) be greater than from Xbox ---> 360?

Also, why all of this love from MS for APU's? Why not just stick a regular chip from AMD in there? I mean apart from power benefits the performance has to suck by comparison..

Nobody knows; we're going off rumors and speculation, which is what this thread is about.

And MS has had a good experience with combining parts in their slim model; they reduced costs a lot with it and solved a lot of issues with heat dissipation etc., so going that way with their next console is a reasonable assumption.

An APU setup does not automatically mean it has to be weak or low-powered; that has just been the case with mainstream off-the-shelf APUs so far because of cost concerns. We have never encountered a custom-built APU.
 
What do you think about this bike bkilian? A worthy upgrade to my older model or should I change for better tires? ;)
8 metres of suspension travel, huh? That'll soak up them bumps! However, I'd rather have shorter travel but more response from the suspension. 400 mm of travel with a complete motion in 250 ms is way better for handling potholes than long, slow forks.
 
So.. From what I'm gathering here in terms of 'generational leap' the Durango is going to offer fairly mediocre performance? Or will the jump (from what is most commonly speculated) be greater than from Xbox ---> 360?

Also, why all of this love from MS for APU's? Why not just stick a regular chip from AMD in there? I mean apart from power benefits the performance has to suck by comparison..

It's cheaper from a lot of perspectives. You only have to produce, tape-out and yield one chip. It's likely you'll get more APUs per wafer than you'd get CPU + GPU per wafer. Your motherboard design simplifies. Your cooling solution simplifies. Your memory hierarchy simplifies. Your need to redesign for die shrinks goes from 2 chips to 1. You don't need to convert to a monolithic chip like they did with the 360.

8 metres of suspension travel, huh? That'll soak up them bumps! However, I'd rather have shorter travel but more response from the suspension. 400 mm of travel with a complete motion in 250 ms is way better for handling potholes than long, slow forks.

I'd prefer the suspension built right into the tire. Then you can make the whole tire be the shock absorber rather than relying on a few cm^3 to do the job. Plus it's fewer parts, so it's simpler in a way.
 
Not sure about the 1.0 GHz CUs; that's very hot for a console.

Doesn't Cape Verde XT also have 10 CUs @ 1 GHz with an 80 W TDP*? I guess 12 CUs @ 800 MHz would make more sense from a power-consumption point of view, though. Cape Verde is 123 mm^2, which paired with ~50 mm^2 for the CPU would leave a decent amount of space for SRAM in a 250 mm^2 die. By the way, if these specifications are accurate, the "secret sauce" had better be good or I will consider myself disappointed.

* http://en.wikipedia.org/wiki/Southern_Islands_(GPU_family)#Chipset_table
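For what it's worth, the area arithmetic in the post works out as a simple budget (all figures are the post's own rough estimates, not confirmed specs):

```python
# Back-of-the-envelope die-area budget, in mm^2
total_budget = 250   # assumed total die budget
gpu_area     = 123   # Cape Verde, per the post above
cpu_area     = 50    # estimated CPU block

sram_area = total_budget - gpu_area - cpu_area
print(f"left for SRAM and glue logic: {sram_area} mm^2")  # 77 mm^2
```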
 
Please stop. The old Sega died with the Dreamcast.

I don't think that has anything to do with hopes of Sega becoming a big player again.
If that's a possibility, I mean... sure, why not? It's not that big of a deal.
Sony and Namco were sharing technology between consoles and arcade cabinets too. Nothing special.

But you know, it's quite funny that in the link with the Orbis logo he posted as possible confirmation, the guy uses a spoiler tag and says "Probably some iOS game", but I doubt he saw it :LOL:
 
Forgive me if this was discussed before... I wonder why we have not heard rumors of a next-gen box with a Pitcairn of the 7970M class, since it has offered, since April last year, ~2.2 TFLOPS at only 75 watts. It may be an excellent option if customized, even though APUs at 1 to 1.5 TFLOPS are very efficient due to HSA, ESRAM, memory control, etc... I believe these APUs are very efficient but maybe unable* to reach 2.2 TFLOP levels, and Pitcairn may also have been improved for even better clocks (850 MHz to 1 GHz), or to run even cooler than 75 watts, counting on mature improvements of the 28nm process.


* My 2 cents here..
 
Please stop. The old Sega died with the Dreamcast.
It's not claiming a new home console, but an arcade board based on PS4, which we know happens (PS hardware in arcades). It'll be worth seeing if the dates are true and we learn anything. If so, we can look to the arcade for more info on Orbis.
 
Doesn't Cape Verde XT also have 10 CUs @ 1 GHz with an 80 W TDP*?

That 80 watts is for the entire card; I think the chip itself will be a bit less than that, probably between 50 and 70 watts. Just think of the power taken by the graphics memory and the power converters (PWMs).

@Heinrich4 - The reason why we're not talking about the 7970M is mainly because they get such good results through binning, something that's not possible with console chips.
 
That 80 watts is for the entire card; I think the chip itself will be a bit less than that, probably between 50 and 70 watts. Just think of the power taken by the graphics memory and the power converters (PWMs).

@Heinrich4 - The reason why we're not talking about the 7970M is mainly because they get such good results through binning, something that's not possible with console chips.

On top of that, 7970M was a 100W mobile card (this includes board and memory too).
 
That 80 watts is for the entire card; I think the chip itself will be a bit less than that, probably between 50 and 70 watts. Just think of the power taken by the graphics memory and the power converters (PWMs).

@Heinrich4 - The reason why we're not talking about the 7970M is mainly because they get such good results through binning, something that's not possible with console chips.

Forgive my lack of technical knowledge, but is there anything to stop AMD taking a 7870 desktop edition, reducing the clocks down to 7970M levels, adding a sprinkling of Sony customisation and, ta-da, 2+ TFLOPS?

Too simple?
 
(First of all, this is all rumors)
This is a continuation of the 384-bit rumor from the #1 AMD China guy; he had a new response a few hours ago, all about the GPU part.
He said there are 2 indicators of the Durango GPU. One is 384-bit, and he said he can't say the other one, because if he says it, everyone will know which GPU Durango is based on.
The other indicator is TDP.
 
Forgive my lack of technical knowledge, but is there anything to stop AMD taking a 7870 desktop edition, reducing the clocks down to 7970M levels, adding a sprinkling of Sony customisation and, ta-da, 2+ TFLOPS?

Too simple?


Die size and TDP could both hinder that plan. A 7870 binned for the 7970M is lower power than a normal 7870; they shave off 30 W by reducing clocks and binning.
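As a rough sanity check on that 30 W figure: dynamic CMOS power scales approximately with frequency times voltage squared, so a modest clock cut that also allows a voltage drop compounds quickly. The base power and ratios below are hypothetical, picked only for illustration:

```python
def dynamic_power(p_base, f_ratio, v_ratio):
    """Dynamic CMOS power scales roughly as P ~ f * V^2."""
    return p_base * f_ratio * v_ratio ** 2

# Hypothetical: a ~100 W chip clocked down ~15% with ~10% lower voltage
p_new = dynamic_power(100.0, f_ratio=0.85, v_ratio=0.90)
print(f"~{p_new:.0f} W")  # ~69 W, i.e. roughly 30 W saved
```

Binning on top of that selects the dies that hit the lower voltage reliably, which is why the mobile part can undercut the desktop one.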
 
(First of all, this is all rumors)
This is a continuation of the 384-bit rumor from the #1 AMD China guy; he had a new response a few hours ago, all about the GPU part.
He said there are 2 indicators of the Durango GPU. One is 384-bit, and he said he can't say the other one, because if he says it, everyone will know which GPU Durango is based on.
The other indicator is TDP.

There you have it, Durango clearly must be using GK110. Makes perfect sense.
 