Predict: The Next Generation Console Tech

Status
Not open for further replies.
Yes, but there is no hope in hell anyone has ever gotten anywhere near that on a real load.

The Jaguar cores are absurdly more efficient. When there was a lot of talk about Jaguar cores possibly being in the next console, I went and bought myself a Bobcat mini-laptop to practice on the CPU. The more code I write for it, the more impressed I am with it.

In throughput, it's somewhere pretty close to a modern Intel dual-core.

Thanks for explaining the relative efficiencies of Xenon and Jaguar.

I never posted that. Sheesh people, I'm now getting ripped for stuff I didn't even post.

Me too, but the 7770 @ 1 GHz might actually be feasible. It's 80 W TDP. The CPUs shouldn't add up to much, maybe 30 W. You could probably then fit the whole console in 150 W, 50 less than the 360, which sounds about like what I think MS would aim for (gotta be green and all that baloney, ya know).
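The power budget being described works out as simple arithmetic. A sketch, where all figures are the poster's rough estimates and the ~200 W launch-360 draw is implied by "50 less than 360":

```python
# Rough next-gen console power budget, per the estimates above.
GPU_TDP_W = 80          # HD 7770 @ 1 GHz board TDP
CPU_TDP_W = 30          # rough guess for the low-power CPU cores
BUDGET_W = 150          # target for the whole console
X360_LAUNCH_W = 200     # implied launch-360 draw (budget + 50)

# What's left over for RAM, I/O, optical drive, and PSU losses:
remaining_w = BUDGET_W - GPU_TDP_W - CPU_TDP_W
print(remaining_w)                  # 40
print(X360_LAUNCH_W - BUDGET_W)     # 50
```

The point is just that an 80 W GPU leaves realistic headroom for everything else inside a 150 W envelope.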

There are even passively cooled 7770s, granted it's one giant heatsink: http://www.tomshardware.com/news/sapphire-radeon-gpu-7770-heatsink,15927.html

I still think more CUs at a lower clock is more likely, but 1 GHz is probably doable as-is.

OK, well it could be a 7770 then; it's in the TDP and flops ballpark (and has 10 CUs @ 1 GHz like this new rumour). What's the provenance of the guy who posted that anyway?

Does anyone know if there were any previous rumours of note that mentioned the 7770?

Also, going back to the leaked devkit shot, could the GPU possibly be some card other than the HD 6870/6950 we originally thought? Possibly around the 1 to 1.6 TF mark? What does the backside of a 7770 look like?

 
This thing is crazy efficient. Without any optimization beyond what the compilers do, it gets near 1 IPC. Add the same level of effort we put into the consoles last time, and we are talking about something like 1.7 IPC. And that's x86 instructions that have memory read operands baked into them.

So while a Xenon typically ran at something like 20% of its capability (two threads at 0.2 IPC each on a 2-wide core), this thing gets to 85% capability with one thread.

By these numbers a Bobcat thread would be something like 4 times faster than a Xenon thread. And that's for raw flops, which are by far the strongest point of the Xenon and the weakest point of the Bobcat. Simple integer stuff that's always needed, like branching, is just ridiculously faster on Bobcat.
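Spelling out the arithmetic behind those percentages and the "4 times faster" claim, as a sketch: the IPC figures are the poster's estimates, and the clocks (3.2 GHz Xenon, a ~1.6 GHz Bobcat-class part) are my assumptions, not stated in the post.

```python
# Poster's sustained-IPC estimates; clocks are assumed for illustration.
XENON_IPC, XENON_GHZ, XENON_WIDTH = 0.2, 3.2, 2   # per hardware thread
BOBCAT_IPC, BOBCAT_GHZ, BOBCAT_WIDTH = 1.7, 1.6, 2

# Utilization: sustained IPC as a fraction of issue width.
xenon_util = 2 * XENON_IPC / XENON_WIDTH    # two threads at 0.2 each
bobcat_util = BOBCAT_IPC / BOBCAT_WIDTH     # single thread

# Per-thread instruction throughput ratio (instructions per second).
speedup = (BOBCAT_IPC * BOBCAT_GHZ) / (XENON_IPC * XENON_GHZ)

print(f"Xenon utilization:  {xenon_util:.0%}")    # 20%
print(f"Bobcat utilization: {bobcat_util:.0%}")   # 85%
print(f"Per-thread speedup: {speedup:.2f}x")      # 4.25x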

Two thoughts -

1) 1.7 typical IPC for the dual-issue Jaguar, even on highly optimized code, strikes me as ridiculously optimistic. I'm not sure what your experience has been with optimizing x86, but could you expand on how you think you can get such a big gain here? We're talking about IPC specifically, not overall program efficiency (so the code can need more instructions as long as it keeps them running, and they can be inferior instructions too). IPC is improved by: scheduling to avoid stalls, which the OoO hardware would mostly negate anyway; scheduling to better balance execution unit utilization, where there isn't much to gain at this issue width; and reducing L1 stalls and branch mispredicts, which you probably wouldn't influence much strictly at the ASM level.
2) You give the 0.2 IPC number for Xenon to make your comparison, but then say it'd be much worse for simple integer stuff. That's double-penalizing it: the 0.2 IPC figure was never specifically for the FP-heavy streaming code (raw FLOPs) that Xenon was best at.

This is purely hypothetical - I'm not trying to make a real-world case here at all - but if you happen to have an algorithm that can run with extremely deep software pipelining, is highly FMADD-dominated, and is extremely prefetch-friendly, I think you can on average sustain higher throughput on a Xenon core than a Jaguar one, despite the same peak FP (never mind Bobcat, which has half). The reason I say this is that you need two instructions for FADD + FMUL on Bobcat versus one FMADD on Xenon, and the FMADD can be paired with FP loads and stores, which are AFAIK designed to stream from L2. On a wider processor this wouldn't matter much, but when comparing two-wide versus two-wide it can make a difference. The huge number of registers can also help: although you don't really need them for scheduling purposes on Jaguar, some FP kernels can still make better use of more than 16 registers.
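The issue-slot accounting behind that argument can be sketched. This assumes the streaming load is a separate instruction on both cores; on x86 the load can often fold into the mul's memory operand, which narrows the gap, so treat this as the best case for the Xenon side:

```python
# Issue slots needed per multiply-add with one streaming load,
# comparing a 2-wide VMX core (Xenon) against a 2-wide SSE core.
ISSUE_WIDTH = 2  # both cores are 2-wide

xenon_slots = 2   # vmaddfp + lvx load, pairable in one cycle
bobcat_slots = 3  # mulps + addps + a separate movaps load

print(xenon_slots / ISSUE_WIDTH)    # 1.0 cycle per multiply-add
print(bobcat_slots / ISSUE_WIDTH)   # 1.5 cycles per multiply-add
```

At equal width and equal peak FP, the fused multiply-add frees an issue slot per iteration for the load or store, which is exactly where the hypothetical Xenon win comes from.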
 
I actually recently wrote an S3M player that ran on a machine I no longer have access to, just for fun. It played Second Reality and Axel F pretty well. I used it for testing audio output on each code release :)
Is that using 128 bit wide SIMD or 256 bit wide?

128-bit wide, which is what Jaguar has per core. Unless it's a custom Jaguar core to that sort of extent, which would be pretty impressive but probably unlikely.
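For context, the single-precision peak implied by that SIMD width can be sketched. The eight cores and 1.6 GHz clock are assumptions pulled from the surrounding rumours, not confirmed specs; the pipe layout assumes Jaguar's separate 128-bit FADD and FMUL pipes (no fused multiply-add):

```python
# Peak single-precision GFLOPS for an 8-core 128-bit-SIMD Jaguar CPU.
SIMD_BITS = 128
LANES = SIMD_BITS // 32        # 4 single-precision lanes per vector
FLOPS_PER_CYCLE = LANES * 2    # one FADD pipe + one FMUL pipe per core
CORES, GHZ = 8, 1.6            # assumed core count and clock

peak_gflops = CORES * FLOPS_PER_CYCLE * GHZ
print(peak_gflops)             # 102.4
```

So even eight of these cores top out around 0.1 TFLOPS; a hypothetical 256-bit custom variant would double that, which is why the question matters.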
 
He said the #1 AMD guy's info is true: 8-core CPU, 8 GB RAM, HD 8xxx, Win8-core custom OS, 640 GB HDD.
When someone else asked "8900 or 8800 series?", he said "8880" <- I think that's just a typo; he means the 8800 series.

Well, it doesn't look like a typo anymore. HD8880 is probably reserved for a shiny new GCN2 part released in Q2 2013, while HD8870 becomes an OEM card based on 7000 series...


AMD press release from today (January 7, 2013): "AMD Delivers Enhanced Gaming and Improved Application Performance with Latest Mobile and Desktop Graphics Technology"... links to an overview and specs of the... 8000 series OEM desktop GPUs ...
[which] appear to have the same specs as those of existing 7000 series desktop GPUs...
  • HD 8970 = HD 7970 GHz Edition
  • HD 8950 = HD 7950 with Boost
  • HD 8870 = HD 7870 GHz Edition
  • HD 8760 = HD 7770 GHz Edition
  • HD 8740 = HD 7750 (rev 2, 900 MHz)
 
No, he's wrong because he's contradicting what bkilian and lherre have said about 2 & 2.5 TF GPUs.

Bkilian did not say anything about the number of TFLOPS. He just said not to expect a monster GPU. IMHO a monster GPU has 3-4 TFLOPS, while a normal GPU has 2-2.5 TFLOPS.
After 8 years of the 360 I don't see why MS wouldn't be able to easily create a 2-2.5 TFLOPS console.
It would make no sense to create a console that's already old before entering the market, unless they want to do a big favor for Sony ;)
 
After 8 years of the 360 I don't see why MS wouldn't be able to easily create a 2-2.5 TFLOPS console.
It would make no sense to create a console that's already old before entering the market, unless they want to do a big favor for Sony ;)

Maybe they got too greedy and want to profit from day one.
 
It would make no sense to create a console that's already old before entering the market, unless they want to do a big favor for Sony ;)
Unless you want to enter at a much lower pricepoint, or you are adding cost in other systems (Kinect 2). As I've said many a time before, without knowing the business model and proposition, we cannot use any such assumptions to accurately determine hardware targets.
 
Bkilian did not say anything about the number of TFLOPS. He just said not to expect a monster GPU. IMHO a monster GPU has 3-4 TFLOPS, while a normal GPU has 2-2.5 TFLOPS.
After 8 years of the 360 I don't see why MS wouldn't be able to easily create a 2-2.5 TFLOPS console.
It would make no sense to create a console that's already old before entering the market, unless they want to do a big favor for Sony ;)

I don't know how MS thinks now, but they took that risk with the 360. They wanted to release earlier than Sony to beat them in the launch race.
What Sony chose for its PS3 specs was unknown to MS at the time. As a good business, it is very likely they considered the possibility that Sony might come out with a more powerful console later.
MS must have had a "phew" moment when that scenario did not come to fruition.
Instead, a good chunk of the PS3's cost budget went to a different disc-based medium and other bits and pieces rather than a tangible performance boost, which also made the retail price relatively uncompetitive and missed a proper launch window.
It was quite amazing how MS invested in campaigns and media to make those PS3 "extras/additions/advantages" look less relevant, while Sony was investing to make them appear relevant to convince people ;)

If they want to launch earlier again they might release a less powerful console and, who knows, perhaps focus instead on augmented/new gaming experiences to make up for the performance difference.

edit: Based on some rumors that appeared a few months ago, both are investing in augmented experiences. I am sure we are going to be really surprised in that area at the next E3 ;)
 
Maybe they got too greedy and want to profit from day one .
Well, about this...
http://semiaccurate.com/forums/showpost.php?p=174851&postcount=1060
There is a long interview with Sony CEO Hirai Kazuo (the former president of SCEI) on the future direction of Sony and SCEI.

He spells out two principles that will direct the path that Sony will take.

1. Cutting losses is much more important than holding onto market share. <= This would mean all future Sony hardware will be sold at a profit.

2. Gaikai streaming is at the center of Sony's content strategy, where Gaikai streaming will be made available to Bravias and Blu-Ray players. This is a shift away from "Dedicated gaming hardware" to "Casual streamed gaming everywhere on any device".

Based on Hirai's statements, I don't think Sony would invest heavily in the PS4 hardware, which would be nothing more than a temporary stopgap.
 
Unless you want to enter at a much lower pricepoint, or you are adding cost in other systems (Kinect 2). As I've said many a time before, without knowing the business model and proposition, we cannot use any such assumptions to accurately determine hardware targets.

In this case I see it as hard for MS. Kinect is hated by the vast majority of 360 users. If MS will not give these millions of users what they want,
they will go elsewhere (PC and PS4). We just have to wait for the presentation of the new Xbox.
 
Whoo! Although I always preferred the GUS. And I built my own resistor DAC soundcard when I was poor.

I still remember this with the GUS: http://www.youtube.com/watch?v=XtCW-axRJV8

I can imagine what three blocks they're talking about, and at least two of them will help with graphics, although one is more general-purpose. I have a special fondness for the third, since I spent many hours coming to terms with its idiosyncrasies.

You're safe because you're not within range for me to bite you :devilish: , I hate mystery so much :smile:
 
If MS is lumping a Kinect in with the Xbox experience, they won't need to be the most powerful or anywhere near Orbis, for that matter. If they are going for casuals, where the real money is, then they would only need an "Xbox 400" as long as they improved Kinect.

We could have 3 very different beasts: Wii U going for the gimmick crowd, the 360 successor for the casuals/gimmick crowd, and PS Orbis (which my phone autocorrects to "penis", lol) going for the hardcore.

What is the latest on Orbis? The last December kit didn't have a discrete GPU as far as I remember, opting for the A10 APU. Could this be why some are saying the Xbox might be more powerful? I imagine a PS Orbis with an APU plus a similar discrete GPU would be more powerful than the solution MS is rumoured to be going with.
 
Casuals =/= cheap; casuals =/= not powerful.

If you want to sell something to casuals, what you need is good marketing.

And no one is going after just one audience next gen. Nintendo, MS, Sony: they all want everyone, core and casual alike.

I think MS's and Sony's next consoles will be very close in power; the main difference between them will be the peripherals.
 