NVIDIA Maxwell Speculation Thread

Thanks. My experience is mainly with mobile SoCs, where the SoC customer usually has a pretty lengthy period with the samples to do all their testing and product development, so 12 months from tapeout to being in the hands of a consumer isn't unheard of.
 
Those specs are awfully similar to GK106... I don't think this SKU has a Maxwell chip.
And I've heard elsewhere that there are two cards coming, not one... The Maxwell chip might still be in a different SKU than the GTX 750 Ti.
If the specs in the GPU-Z screenshot are real, then I agree they are definitely consistent with GK106.

That being said, I find that the FLOPS/bandwidth ratio is quite high, which is a bit strange since they've been going lower in many recent releases (650 Ti BOOST vs. neighboring cards, 760 vs. 660 Ti). The memory speed seems weirdly low too.

It might be that there are two cards called "750 Ti," one with a Maxwell chip and one with a Kepler chip; the latter may be an OEM part.

EDIT: Found this, which claims Maxwell, on the AnandTech forum thread:

[GPU-Z screenshot: VCZ6l4P.png]


I still find the memory speed weird. I would have expected 6 Gbps, or even 7 Gbps, instead of the 5.4 Gbps in the GPU-Z screenshot.
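
For what it's worth, here's a rough sketch of the arithmetic behind the ratio I mean. The shader count, core clock and 128-bit bus below are my assumptions (picked to be GK106-like), not confirmed specs; only the 5.4 Gbps figure comes from the GPU-Z screenshot:

```python
# Rough FLOPS/bandwidth check for the rumoured "750 Ti" GPU-Z specs.
# Shader count, core clock and bus width are assumptions (GK106-like), not confirmed;
# only the 5.4 Gbps memory speed comes from the GPU-Z screenshot.

shaders = 960           # assumed GK106-like shader count
core_clock_ghz = 1.0    # assumed core clock in GHz
mem_speed_gbps = 5.4    # per-pin data rate from the GPU-Z screenshot
bus_width_bits = 128    # assumed memory bus width

gflops = 2 * shaders * core_clock_ghz            # 2 FLOPs (FMA) per shader per clock
bandwidth_gbs = mem_speed_gbps * bus_width_bits / 8

print(f"Peak SP throughput: {gflops:.0f} GFLOPS")            # ~1920 GFLOPS
print(f"Memory bandwidth:   {bandwidth_gbs:.1f} GB/s")       # 86.4 GB/s
print(f"FLOPS per byte:     {gflops / bandwidth_gbs:.1f}")   # ~22
```

Plugging those numbers in gives roughly 1.9 TFLOPS against ~86 GB/s, i.e. around 22 FLOPS per byte, versus something like 13 for a stock GTX 660, which is why the ratio looks high to me.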
 
The Denver cores will do nothing for speeding up texture compression (they may enable compatibility for some stuff, but they won't be that fast). One would prefer specialized hardware to reach the required speed for compressing/decompressing on the fly, not a few general-purpose (super)scalar cores. Otherwise it will be of quite limited use, as with the move engines in the XB1 (which aren't that fast at de-/compressing either).
 
Maxwell probably has some new lossless compression hardware that is much more efficient than in previous-gen GPUs. I also think ASTC hardware support is implemented the same way as in the mobile Kepler GPU SoC, so they won't have to go and implement it again for a mobile Maxwell GPU SoC.

The Chinese benchmarks aren't really reliable though, with Maxwell being faster than Kepler in the graphics tests yet getting a lower overall score. I'll wait and see when it launches in February; I really want to know what changes are in Maxwell.
 
And I've heard elsewhere that there are two cards coming, not one... The Maxwell chip might still be in a different SKU than the GTX 750 Ti.
It might be that there are two cards called "750 Ti," one with a Maxwell chip and one with a Kepler chip; the latter may be an OEM part.
Perhaps both cards are Maxwell? From Sweclockers: "Geforce GTX 750 Ti is joined by the GTX 750 - requires no additional power supply" (original).

(Google Translate) said:
Both newcomers are said to be equipped with 2 GB of GDDR5 memory and do not require any external power supply. This means they can get by with only the PCI Express connection, which provides a maximum of 75 W. Despite this, the Maxwell-based graphics cards are said to perform about 25 percent better than their predecessors.
If the translation and the GPU-Z specs linked earlier are correct then that would imply at least 28 SP GFLOPS/W theoretical peak for the 750 Ti. :oops: Any upcoming mobile variants could be quite awesome.
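
A minimal sanity check of that number, working backwards from the 75 W slot limit (the 960-shader figure is just an illustrative, GK106-class assumption, not a confirmed spec):

```python
# What does "at least 28 SP GFLOPS/W" at the 75 W PCIe slot limit imply?

tdp_w = 75                  # max power from the PCI Express slot alone
gflops_per_w = 28           # figure implied by the translation + GPU-Z specs

required_gflops = gflops_per_w * tdp_w
print(f"Required peak SP throughput: {required_gflops} GFLOPS")       # 2100 GFLOPS

# One purely illustrative shader/clock combination that would get there,
# assuming the usual 2 FLOPs (FMA) per shader per clock:
shaders = 960               # assumed GK106-class shader count
clock_ghz = required_gflops / (2 * shaders)
print(f"Clock needed with {shaders} shaders: {clock_ghz:.2f} GHz")    # ~1.09 GHz
```

So the claim amounts to roughly 2.1 TFLOPS at 75 W, which a GK106-class shader count would only reach at around 1.1 GHz.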
 
What if there were a large LLC, enough to make texture cache misses quite rare, and the ability to cause a texture cache miss to invoke some kind of handler function on (one of) the general purpose CPUs?

I'm thinking of treating (some portion of) the LLC more like RAM, and the RAM more like swap, sort of like what happens with Linux in-kernel memory compression. In this scenario, a general-purpose core would read compressed memory pages and decompress them into the LLC. It certainly wouldn't be fast at decompressing relative to dedicated HW, but in this scenario it doesn't need to be. The goal would be just to save memory bandwidth. The general-purpose nature of the cores would allow you to use lossless algorithms that might be difficult to implement in hardware, and you could change which algorithm gets used depending on the game (or more likely, the texture).
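
Purely to make that concrete, here's a toy sketch of the flow I have in mind, in Python. The class, the page size and the use of zlib as a stand-in for LZO (or whatever codec you'd actually pick) are all made up for illustration; I'm not claiming real hardware would look anything like this:

```python
import zlib  # stand-in for LZO or whatever lossless codec you'd pick per texture

PAGE_SIZE = 64 * 1024  # hypothetical page granularity

class CompressedTextureStore:
    """Toy model: DRAM holds compressed pages, the LLC holds decompressed ones."""

    def __init__(self, llc_capacity_pages):
        self.ram = {}                       # page_id -> compressed bytes ("swap-like" DRAM)
        self.llc = {}                       # page_id -> decompressed bytes (treated like RAM)
        self.llc_capacity = llc_capacity_pages

    def store(self, page_id, raw_bytes):
        # A general-purpose core compresses pages on the way out to DRAM,
        # trading CPU time for memory bandwidth later.
        self.ram[page_id] = zlib.compress(raw_bytes)

    def read(self, page_id):
        if page_id in self.llc:             # LLC hit: no DRAM traffic at all
            return self.llc[page_id]
        # Miss: the "handler" running on a general-purpose core fetches the
        # compressed page from DRAM and decompresses it into the LLC.
        data = zlib.decompress(self.ram[page_id])
        if len(self.llc) >= self.llc_capacity:
            self.llc.pop(next(iter(self.llc)))   # crude FIFO-style eviction
        self.llc[page_id] = data
        return data

# Usage: store a dummy "texture" page compressed, then read it twice.
store = CompressedTextureStore(llc_capacity_pages=4)
store.store(0, bytes(PAGE_SIZE))    # highly compressible all-zero page
page = store.read(0)                # miss: decompressed by the handler
page = store.read(0)                # hit: served straight from the LLC
```

The bandwidth saving only materialises when the compressed page sitting in DRAM is meaningfully smaller than the raw one, which is exactly where being able to pick the algorithm per game or per texture could help.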

Not being a graphics HW engineer or game developer, I have no idea how large the typical per-frame texture footprint is, or how much locality of reference there is inside the rendering process for a single frame. Would my scheme require too much LLC to be worth it? Is texturing a major consumer of bandwidth in a modern game? Do games mostly already use some form of texture compression, so that the gains of applying something simple enough to decompress quickly enough (LZO say) aren't worth it?
 
I doubt they taped out that long ago.

Nope. Developers usually don't get CPUs that far ahead, let alone GPUs.

At ATI the average time from tape out to production was ~7 months. Fabs take longer now, but if a GPU takes a year from tape out to being on the shelf you've screwed up and had too many bugs. At least for desktop parts. Mobile parts take longer than desktop parts to begin selling to end users.

Maybe G80 was a special case, but I remember some people having them something like a year before the 8800 GTX launched.
 
Maybe G80 was a special case, but I remember some people having them something like a year before the 8800 GTX launched.
G80 came back from the fab in early June 2006 and was released to the public in early November. I don't remember the tapeout date, but fab time was much shorter back then, around 1 month.
 
NVIDIA Maxwell SteamOS Machine with up to 16 Denver CPU Cores and 1 Million Draw Calls

So just what is the 16 Denver cores toting Maxwell beast capable of? My source told me one number, 1 Million draw calls in DirectX 11 and OpenGL 4.4. Just for reference, AMD claims that their upcoming low-level API Mantle will be able to issue up to 150,000 draw calls. Presumably NVIDIA's new hardware beast will be able to obliterate AMD's Mantle API, and this with no code changes required by game developers as it will all be done in hardware.

You ask yourself what game developer would need so many draw calls? This is the maximum number of draw calls that the 16 Denver cores enable, but they can be used for much more. NVIDIA is working on integrating the Denver CPU cores into their GameWorks code library that game developers can integrate freely into their games. They are porting the library to OpenGL and SteamOS.
 

Presumably NVIDIA's new hardware beast will be able to obliterate AMD's Mantle API, and this with no code changes required by game developers as it will all be done in hardware.

NVIDIA is working on integrating the Denver CPU cores into their GameWorks code library that game developers can integrate freely into their games.

It only took 2 sentences and a rhetorical question before he contradicted himself, that is, if you believe his "source."
 
How about a speculative estimate of how many clusters NV would have to sacrifice from the biggest chip of the family in order to fit 16 ARM cores in there?
 
Denver is a CPU, but in the Maxwell generation of GPUs it will be used as a controller.
It seems that in the Maxwell generation a small OS of its own will probably run on the GPU side, and thereby handle the draw calls.
No changes to existing games are required.

A newly developed GameWorks library will be able to make effective use of the new Maxwell-generation features.
 
It only took 2 sentences and a rhetorical question before he contradicted himself, that is, if you believe his "source."

I don't see any contradiction. :rolleyes:
In the first sentence he is talking about draw calls.
In the second sentence he is talking about specific game effects.
 
He/she also has moles:
..."I've learned from a friendly mole at Microsoft that they've silently gave the go ahead for AMD to release their low-level GPU access API Mantle for their Graphics Core Next (GCN) GPU architecture for PC... "
http://www.onlivespot.com/search?up...-max=2014-01-01T00:00:00-08:00&max-results=34

Are we in delusional territory now?

I don't get what is wrong with that sentence, other than it being redundant. It is not talking about using Mantle on the XBone, but on PC, which was already a given. Although I would consider the writing style a bit anti-AMD...
 
Denver is a CPU, but in the Maxwell generation of GPUs it will be used as a controller.
It seems that in the Maxwell generation a small OS of its own will probably run on the GPU side, and thereby handle the draw calls.
No changes to existing games are required.

A newly developed GameWorks library will be able to make effective use of the new Maxwell-generation features.

Is this from Google Translate? :p

Also, while Charlie said it is using a controller instead of a CPU, that does not make it true. He has been right about many things in the past, but also wrong about many others, especially when new architectures are introduced:
- Fermi: it was originally a compute-only design that was adapted to graphics when the original plan failed... riiiiight;
- Kepler: it's just Fermi on steroids... riiight; it has special hardware for PhysX that accelerates games... riiiight;

He only got that information right about Fermi because nVIDIA fucked up the implementation in the first version. And he hilariously said there was absolutely no GF110 ("no chip, just spin!") because Fermi was supposedly not fixable, just one week before it launched to great effect :p

In summary, take what he says with a lot of salt.
 