NVIDIA Maxwell Speculation Thread

I've already started to move to H.265 for my server, converting some of the larger x264 encodes to x265, resulting in about half the size. Since all the data is mirrored on my server, going from 25GB to 12GB nets me roughly 25GB of space. My main HTPC is fine with CPU decoding in Kodi (A6-3650), but of course my E-350 has a fit.

I'd much rather stay with an APU though, or an Intel iGPU decode.
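For what it's worth, here is a minimal sketch of the kind of batch re-encode I mean, assuming an ffmpeg build with libx265; the folder paths and CRF value are just placeholders, not a recommendation:

```python
# Rough sketch of an H.264 -> H.265 batch conversion with ffmpeg + libx265.
# Paths and the CRF value are hypothetical; tune them to your own library.
import subprocess
from pathlib import Path

SRC_DIR = Path("/srv/media/x264")   # hypothetical source folder
DST_DIR = Path("/srv/media/x265")   # hypothetical output folder
DST_DIR.mkdir(parents=True, exist_ok=True)

for src in SRC_DIR.glob("*.mkv"):
    dst = DST_DIR / src.name
    subprocess.run([
        "ffmpeg", "-i", str(src),
        "-c:v", "libx265", "-crf", "24",  # quality target; adjust to taste
        "-c:a", "copy",                   # keep the original audio untouched
        str(dst),
    ], check=True)
    # Because the server mirrors everything, every GB saved here is saved twice.
    saved_gb = (src.stat().st_size - dst.stat().st_size) / 1e9
    print(f"{src.name}: ~{saved_gb:.1f} GB saved per copy, ~{2 * saved_gb:.1f} GB mirrored")
```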
 
You're re-encoding an already lossily encoded file? Or do you work with the original data files?
 
You're re-encoding an already lossily encoded file? Or do you work with the original data files?
Depends, but if I have the original source then I re-encode from that instead. A lot of Blu-rays to process again.

For the 264->265 conversions I've found the quality to be fantastic considering the size difference.
 
You're re-encoding an already lossily encoded file? Or do you work with the original data files?

I believe "original" files, as in, say, the Blu-Ray of a movie, are already lossy encodings of a master video file to which mere mortals never have access. So one way or the other, you're encoding from a non-exact source.

This doesn't necessarily apply to personal videos, if your camera supports lossless recording. But that would probably be pointless with any recording equipment worth less than a few thousand dollars.
 
If you make your own videos, you might record with frame-by-frame (intra-only) lossy compression. I don't know exactly which cameras do this or what kind of storage they use, but there's intra-only H.264 for video stored as a series of still images, the historical MJPEG, and some MPEG intra formats in between.
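You can approximate that kind of frame-by-frame encoding with ffmpeg by forcing every frame to be a keyframe; a quick sketch with a placeholder filename:

```python
# Sketch: intra-only ("every frame is a keyframe") H.264 encode with ffmpeg.
# -g 1 sets the GOP length to 1, so no frame references any other frame.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "camera_clip.mov",   # placeholder input
    "-c:v", "libx264", "-g", "1", "-crf", "18",
    "-c:a", "copy",
    "camera_clip_intra.mkv",             # placeholder output
], check=True)
```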
 
I've already started to move to H.265 for my server, converting some of the larger x264 encodes to x265, resulting in about half the size. Since all the data is mirrored on my server, going from 25GB to 12GB nets me roughly 25GB of space. My main HTPC is fine with CPU decoding in Kodi (A6-3650), but of course my E-350 has a fit.

I'd much rather stay with an APU though, or an Intel iGPU decode.


Have you tried strongene's OpenCL decoder for AMD GPUs with your A6? I know the E-350 is just way too old to support it (DX10 era), but if this decoder works well, then a Beema/Mullins solution might be the cheapest and lowest-power x86 solution for H265 decoding.
 
Have you tried strongene's OpenCL decoder for AMD GPUs with your A6? I know the E-350 is just way too old to support it (DX10 era), but if this decoder works well, then a Beema/Mullins solution might be the cheapest and lowest-power x86 solution for H265 decoding.
Would that automatically work irrespective of the client used? I believe Kodi itself would need to support it in software.
 
Have you tried strongene's OpenCL decoder for AMD GPUs with your A6? I know the E-350 is just way too old to support it (DX10 era), but if this decoder works well, then a Beema/Mullins solution might be the cheapest and lowest-power x86 solution for H265 decoding.

It does have a DX11 GPU, Radeon HD 6000-series VLIW5 era. Pretty close to a Richland APU, which is Radeon HD 6000 VLIW4; architecturally, it's mostly identical to the A6-3650.
Of course the E-350 might be lacking in CPU, GPU or bandwidth for the task, but it would be interesting to try.
 
From Videocardz: "NVIDIA GeForce GTX 980 Ti performance benchmarks."

The 980 Ti reportedly has 2816 CCs (22 SMMs) and the same base core and memory clocks as the TITAN X.
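Spelling out the arithmetic behind those figures, assuming the rumor is accurate and the usual 128 CUDA cores per Maxwell SMM:

```python
# Maxwell GM200 core counts, using 128 CUDA cores per SMM.
CORES_PER_SMM = 128

full_gm200_smms = 24        # full die, as shipped on the TITAN X
reported_980ti_smms = 22    # reported cut-down config for the 980 Ti

print(full_gm200_smms * CORES_PER_SMM)      # 3072 cores (TITAN X)
print(reported_980ti_smms * CORES_PER_SMM)  # 2816 cores, matching the rumor
```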

[Chart: NVIDIA GeForce GTX 980 Ti and R9 300 (Hawaii) 3DMark Fire Strike performance]


Also, the following seems worrisome, if true.

From what I’ve been told, GeForce GTX 980 Ti should not be as fast as TITAN X. It is very close though (the difference is likely related to the cut-down chip). I asked Jensen if I’m right, and here is his response:

"Jensen: As far as I know something was disabled, but our engineers are still figuring out what exactly."
 
From Videocardz: "NVIDIA GeForce GTX 980 Ti performance benchmarks."

The 980 Ti reportedly has 2816 CCs (22 SMMs) and the same base core and memory clocks as the TITAN X.

Also, the following seems worrisome, if true.

From what I’ve been told, GeForce GTX 980 Ti should not be as fast as TITAN X. It is very close though (the difference is likely related to the cut-down chip). I asked Jensen if I’m right, and here is his response:

"Jensen: As far as I know something was disabled, but our engineers are still figuring out what exactly."

This has to be a GTX 970-related joke.

But I thought the GTX 980 Ti was supposed to be some sort of overclocked Titan with half the RAM—this is a bit underwhelming.
 
But I thought the GTX 980 Ti was supposed to be some sort of overclocked Titan with half the RAM—this is a bit underwhelming.
They gotta make room in the product line-up for a 980 Ti Black model, which will be the full dealie with higher clocks; this is their die salvage product. ;)
 
This has to be a GTX 970-related joke.

But I thought the GTX 980 Ti was supposed to be some sort of overclocked Titan with half the RAM—this is a bit underwhelming.

Hmm, I could be wrong, but I think the article was written in two phases: at first they only knew something had been disabled (the 2 SMMs), and then they learned what it was.
 
They gotta make room in the product line-up for a 980 Ti Black model, which will be the full dealie with higher clocks; this is their die salvage product. ;)

Hmm, I'm really not sure you'll see that one, unless AMD really spanks the 980 Ti.
 
Hmm, I'm really not sure you'll see that one, unless AMD really spanks the 980 Ti.
Yes, that's usually how Nvidia operates. Path of least resistance unless pushed by outside forces.

Then again, much the same can be said about other market leaders as well: Intel, Apple and so on. It's how publicly owned companies run in a capitalist economy; it's really only about maximizing shareholder returns, not providing the best possible product.
 
NVIDIA and every other well-managed company in the world does believe in providing the best possible product, but not necessarily at the best possible price. If there is no competition, the additional efficiency is still useful to increase profits, and there's nothing wrong with that. It simply means the public benefits from a strong competitive landscape, and it should be the job of the government to encourage this (within reasonable constraints).

Regarding the original topic - I am curious whether we will see a 14/16nm Maxwell before Pascal. NVIDIA has historically done this as a way to test the process (e.g. GF117). If so, I wonder what segment of the market that product would fit into; maybe a shrink of GM206 as it's quite big for a 128-bit GPU?
 