Predict: The Next Generation Console Tech

8 GB GDDR5 + 2 GB eDRAM seems ridiculous, even if a GTX 690 can have up to 8 GB of GDDR5. But the GPU rumours are coming true: the HD 7770 is actually a ~1.2 TF GPU, which matches the rumoured 1+ TF.
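For reference, that ~1.2-1.3 TF figure is just the usual peak-FLOPS arithmetic applied to the HD 7770's listed reference specs (640 GCN shaders at 1 GHz, two FLOPs per shader per clock for an FMA); a quick sketch:

```python
# Peak-FLOPS arithmetic for the HD 7770 (listed reference specs).
shaders = 640        # GCN stream processors
clock_ghz = 1.0      # 1 GHz reference clock
ops_per_clock = 2    # a fused multiply-add counts as two FLOPs

peak_tflops = shaders * ops_per_clock * clock_ghz / 1000
print(f"HD 7770 peak: {peak_tflops:.2f} TFLOPS")   # -> 1.28 TFLOPS
```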
 
All that RAM with only a 7770? Doubt it. Also, even 2 GB of eDRAM doesn't fit in any reasonable silicon budget.
 
Aigies recently stated that he heard IBM is completely out of the question for next gen. I think we will find out more in the next few months. I would love it if they made it future-proof for DirectX 12 whenever it comes out, but I'm not sure what kind of hardware is necessary for that.
 
Just one question: is x86 pretty much confirmed for both Durango/Orbis, or is there still a possibility that they might stay with a PowerPC-based design?

Nothing has been confirmed. All the rumors do suggest that AMD is supplying both the CPU and GPU for both consoles. Does that necessarily mean x86? Perhaps. But until we get some real leaks, I would take anything with a grain of salt.

The way AMD has been touting their APUs is that you could design one with custom components. In theory, someone like IBM (or MS themselves) could provide the cores, and the rest of the APU work and plumbing could be done by AMD. Since IBM and AMD have worked a lot together in the console space (Nintendo and 360), I wouldn't discount a custom job for at least one of them.
 
Credible? Is there really a beta?

That was going to be my question as well. :smile2:

I'm assuming a beta dev kit would consist of early engineering samples of the actual CPU and GPU? Maybe they need to do another stepping, but this is close enough to let devs play with?

I've been thinking: with all the rumors of devkits, developers still being completely oblivious to even rough estimates of the specifics makes me wonder whether anything has really started yet. A lot of companies have geared up their engines but are still asking to get on with the next generation already.

If this Xbox 3 beta devkit is true and it really has an 8000-class ATI GPU, then that's very good. Hopefully it's got plenty of power, but most of all generates a lot less heat than most of the GPUs out there. A custom 8-series would really move the console well away from the old heat concerns that started with the current gen.
 

Did anyone else notice that it says the CPU is a "power architecture-based processor includes IBM's latest technology in an energy-saving silicon package. the IBM Power7 processor has six cores, and four threads per core, for a total capacity of 32 simultaneous threads."

Here's some quick math: 6 cores x 4 threads per core = 24 simultaneous threads. Either he meant to say 8 cores, or he's a dumbass.

I think that post on TeamXbox is a fake. The first dev kit was rumored to have an Nvidia GTX 570 at 1.4 TFLOPS; this guy says the new kits have the HD 7700, which is 1.28 TFLOPS. No way would they take a step back. It looks more like someone is posing as superdae and posting fake info.
 

AMD and NVIDIA FLOPS aren't comparable
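Right; on paper the two cards land close together, but the peak numbers come from the same naive arithmetic and say nothing about how each architecture actually delivers those FLOPS. A sketch using the commonly cited public reference specs (the shader counts and clocks below are the retail-card figures, not anything from the dev-kit rumors):

```python
# Theoretical peak FLOPS for both rumored dev-kit GPUs (public reference
# specs). Paper peaks don't translate into comparable game performance
# across Fermi and GCN.
def peak_tflops(cores, shader_clock_ghz, ops_per_clock=2):
    return cores * ops_per_clock * shader_clock_ghz / 1000

gtx_570 = peak_tflops(480, 1.464)   # Fermi: 480 CUDA cores @ ~1464 MHz hot clock
hd_7770 = peak_tflops(640, 1.000)   # GCN:   640 shaders    @ 1000 MHz

print(f"GTX 570 : {gtx_570:.2f} TFLOPS")   # ~1.41
print(f"HD 7770 : {hd_7770:.2f} TFLOPS")   # 1.28
```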
 
superDAE says that he didn't post on http://forum.teamxbox.com/showthread...=681409&page=2

[attached screenshots]
 
[0017] The GPU 103, 104 receive input (e.g., data and/or instructions) resulting from the computations performed by the program 106 and further process the input to render the one or more images on a display 110. Each of the GPU 103, 104 may have a corresponding associated video RAM (VRAM) 107A, 107B. Each VRAM 107A, 107B allows the CPU 101 to process an image at the same time a GPU 103, 104 reads it out to a display controller 108 coupled to the display 110. By way of example, the VRAM 107A, 107B may be implemented in the form of dual ported RAM that allows multiple reads or writes to occur at the same time, or nearly the same time. Each VRAM 107A, 107B may contain both input (e.g., textures) and output (e.g., buffered frames). Each VRAM 107 may be implemented as a separate local hardware component of each GPU. Alternatively, each VRAM 107 may be virtualized as part of the main memory 102.

Interesting VRAM allocation description from the patent, which I expect to be used in the PS4!

http://appft.uspto.gov/netacgi/nph-...68".PGNR.&OS=DN/20120320068&RS=DN/20120320068
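The useful bit of [0017] is the overlap it describes: the CPU can be preparing one frame while the display controller reads another out of VRAM. A minimal double-buffering sketch of that idea (purely illustrative; none of the names below come from the patent or any PS4 API):

```python
# Toy model of the overlap described in [0017]: the CPU fills the back
# buffer with the next frame while the display controller scans out the
# front buffer. Illustrative only; not patent or console code.
class VideoRAM:
    def __init__(self):
        self.buffers = ["<empty>", "<empty>"]
        self.front = 0                       # buffer the display controller reads

    def write_back(self, frame):             # CPU/GPU writes the next frame here...
        self.buffers[1 - self.front] = frame

    def scan_out(self):                      # ...while the display reads this one
        return self.buffers[self.front]

    def flip(self):                          # swap once the new frame is complete
        self.front = 1 - self.front

vram = VideoRAM()
for n in range(3):
    vram.write_back(f"frame {n}")
    print("display shows:", vram.scan_out())   # still showing the previous frame
    vram.flip()
```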
 
The GPU in the APU could be the HD 8790M!
http://techreport.com/review/24086/a-first-look-at-amd-radeon-hd-8790m

It was tested with an i7-3770K rated at 77 W, and the total power consumption of the system at load was 94 W. So the HD 8790M is around 17 watts, or lower!

Is it really likely that the APU will have an 8-series GPU? And would you then assume an 8-series as the main GPU as well?

Surely it's more likely to be a relatively weak GPU in the APU, one that can decode 1080p Blu-rays and maybe run the OS, with a stronger GPU for games, especially if that patent is PS4 related.
 

No, not likely at all. Please read the Background of Invention:

BACKGROUND OF INVENTION

[0003] Many computing devices utilize high-performance graphics processors to present high quality graphics. High performance graphics processors consume a great deal of power (electricity), and subsequently generate a great deal of heat. In portable computing devices, the designers of such devices must trade off market demands for graphics performance with the power consumption capabilities of the device (performance vs. battery life). Some laptop computers are beginning to solve this problem by introducing two GPUs in one laptop--one a low-performance, low-power consumption GPU and the other a high-performance, high-power consumption GPU--and letting the user decide which GPU to use.

[0004] Often, the two GPUs are architecturally dissimilar. By architecturally dissimilar, it is meant that the graphical input formatted for one GPU will not work with the other GPU. Such architectural dissimilarity may be due to the two GPUs having different instruction sets or different display list formats that are architecture specific.

[0005] Unfortunately, architecturally dissimilar GPUs are not capable of cooperating with one another in a manner that allows seamless context switching between them. Therefore a problem arises in computing devices that use two or more architecturally dissimilar GPUs in that in order to switch from one GPU to another the user must stop what they are doing, select a different GPU, and then reboot the device. This is somewhat awkward even with a laptop computer and considerably more awkward with hand-held portable computing devices such as mobile internet access devices, cellular telephones, hand-held gaming devices, and the like.

[0006] It would be desirable to allow the context switching to be hidden from the user and performed automatically in the background. Unfortunately, no solution is presently available that allows for dynamic, real-time context switching between architecturally distinct GPUs. The closest prior art is the Apple MacBook Pro, from Apple Computer of Cupertino, Calif., which contains two architecturally distinct GPUs but does not allow dynamic context switches between them. Another prior art solution is the Scalable Link Interface (SLI) architecture developed by nVidia Corporation of Santa Clara, Calif. This architecture lets a user run one or more GPUs in parallel, but only for the purpose of increasing performance, not to reduce power consumption. Also, this solution requires the two GPUs to be synchronized when the system is enabled, again requiring some amount of user intervention.

[0007] It is within this context that embodiments of the current invention arise.

And the other kicker, it's almost 4 years old:

[0001] This application is a continuation of and claims the priority benefit of co-pending U.S. patent application Ser. No. 12/417,395, filed Apr. 2, 2009, the entire contents of which are incorporated herein by reference.
 
No. That patent is talking about switching between high- and low-power GPUs in the same device to save on power draw when doing light workloads.

The meat of any patent is present in its first claim:
1. A computer graphics apparatus, comprising: a) a central processing unit (CPU), wherein the CPU is configured to produce graphics input in a format having an architecture-neutral display list for a sequence of frames; b) a memory coupled to the central processing unit; c) first and second graphics processing units (GPU) coupled to the central processing unit, wherein the first GPU is architecturally dissimilar from the second GPU; and d) a just-in-time compiler coupled to the CPU and the first and second GPU configured to translate instructions in the architecture neutral display list into an architecture specific format for an active GPU of the first and second GPU, wherein the just-in-time compiler is configured to perform a context switch between the active GPU and the inactive GPU, wherein the active GPU becomes inactive and the inactive GPU becomes active to process a next frame of the sequence of frames, and turn off the one of the first and second GPU that is inactive after the context switch.
Pro tip: To understand any patent, jump straight to the first claim, and never take any journalist's interpretation of a patent as fact. ;)
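Taking claim 1 at face value, the flow is: the CPU emits an architecture-neutral display list, a just-in-time translator converts it into the active GPU's native format, and a context switch between frames flips which GPU is active and powers the other one down. A rough sketch of that control flow (my own illustration of the claim's logic, not Sony's implementation; all names are made up):

```python
# Illustrative control flow for claim 1: an architecture-neutral display
# list is translated just-in-time for whichever GPU is active, and a
# context switch between frames powers down the GPU that goes inactive.
# A sketch of the claim's logic only, not any real driver code.

NEUTRAL_DISPLAY_LIST = ["clear", "draw_mesh(hero)", "draw_mesh(level)", "present"]

class GPU:
    def __init__(self, name, isa):
        self.name, self.isa, self.powered = name, isa, False

    def translate(self, neutral_cmds):
        # stand-in for the JIT compiler: neutral commands -> GPU-specific format
        return [f"{self.isa}:{cmd}" for cmd in neutral_cmds]

    def submit(self, native_cmds):
        self.powered = True
        print(f"{self.name} executes {len(native_cmds)} native commands")

low_power  = GPU("integrated GPU", "isa_a")
high_power = GPU("discrete GPU", "isa_b")
active, inactive = low_power, high_power

def render_frame(display_list):
    active.submit(active.translate(display_list))

def context_switch():
    """Flip active/inactive between frames and turn the idle GPU off."""
    global active, inactive
    active, inactive = inactive, active
    inactive.powered = False          # the GPU that just went inactive is powered down

render_frame(NEUTRAL_DISPLAY_LIST)    # frame N on the low-power GPU
context_switch()                      # e.g. a game launches, so more grunt is needed
render_frame(NEUTRAL_DISPLAY_LIST)    # frame N+1 on the high-power GPU
```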
 

I was looking at the benchmarks on Tom's for the 8790M. AMD indicated that it should fit in the same power envelope as the 7690M, which was in the 20-25 W range. If that's true, then we're looking at 384 GCN shaders @ ~900 MHz in about 25 W as well.

Assuming linear scaling: 384 shaders in 25 W, 768 in 50 W, 1152 in 75 W, and 1536 in 100 W. Depending on the power budget, I think anything from 768 to 1536 is doable in a console.

If both console manufacturers were targeting 100 W for the CPU/GPU SoC, I could see something like this: 4 Jaguar cores + 1152 shaders, or 8 Jaguar cores + 768 shaders.
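Spelling that extrapolation out (with the obvious caveats that power rarely scales linearly with shader count, and that the per-core Jaguar wattage below is a number I'm assuming purely for illustration):

```python
# Back-of-the-envelope scaling from the 8790M data point: 384 GCN shaders
# in roughly a 25 W envelope. Linear scaling is optimistic, and the Jaguar
# per-core wattage is an assumed placeholder, not a leaked figure.
SHADERS_PER_WATT  = 384 / 25         # ~15.4 shaders per watt
GPU_CLOCK_GHZ     = 0.9              # ~900 MHz, as in the 8790M discussion
JAGUAR_W_PER_CORE = 2.5              # assumption for illustration only

def gpu_tflops(shaders, clock_ghz=GPU_CLOCK_GHZ):
    return shaders * 2 * clock_ghz / 1000        # FMA = 2 ops/clock

for budget_w in (25, 50, 75, 100):
    shaders = int(budget_w * SHADERS_PER_WATT)
    print(f"{budget_w:>3} W -> ~{shaders} shaders, ~{gpu_tflops(shaders):.2f} TFLOPS")

# The two 100 W SoC splits floated above:
for cores, shaders in ((4, 1152), (8, 768)):
    total_w = cores * JAGUAR_W_PER_CORE + shaders / SHADERS_PER_WATT
    print(f"{cores} Jaguar cores + {shaders} shaders -> ~{total_w:.0f} W, "
          f"~{gpu_tflops(shaders):.2f} TFLOPS")
```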
 