Alexko said:
People may not care about the number of so-called CUDA cores, but perf/W, number of supported displays, video decoding/encoding engines do matter.

You finally convinced me: because *all* pre-rebranding members of the 8xxx family had exactly the same features wrt video decoders, perf/W, number of ports etc. Right?
Here's a shocker:
Nobody who buys a GT620 or whatever cares about perf/W, except HTPC geeks who care about fan-less operation, but they will need to (and do) research this stuff in depth anyway, so it doesn't matter.
As for ports: I'm pretty sure a GTX580 has different ports than a GT520, yet it is silicon of the same family. So what kind of consistency are you talking about? The fact that a hypothetical Kepler-based GT610 may support Surround + an extra monitor? Now there's a huge population to intentionally deceive! Don't different board vendors have different port configurations anyway? That makes the argument even more irrelevant.
I couldn't tell you what kind of video decoders (and, a fortiori, encoders) any GPU has. Haven't they all supported some form of H264 since like forever? If not, I'm pretty sure the feature differences cross silicon family lines anyway, so this doesn't matter either.
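(For what it's worth, on current NVIDIA hardware you can actually query this instead of guessing: the Video Codec SDK exposes cuvidGetDecoderCaps, which reports per-codec decode support. A rough sketch, assuming the SDK headers and a working CUDA driver are installed; the codec list and build line are just my guess at a typical setup, and none of this applies to the old VP2-era chips in the edit further down.)

```c
/* Rough sketch: ask the driver which codecs the GPU's decoder handles.
 * Assumes the NVIDIA Video Codec SDK header (nvcuvid.h) and the CUDA
 * driver library are installed; build with something like:
 *   gcc decode_caps.c -lcuda -lnvcuvid
 * The codec list below is just an illustrative pick. */
#include <stdio.h>
#include <cuda.h>
#include "nvcuvid.h"

static void query(cudaVideoCodec codec, const char *name)
{
    CUVIDDECODECAPS caps = {0};
    caps.eCodecType      = codec;
    caps.eChromaFormat   = cudaVideoChromaFormat_420;  /* 4:2:0 content */
    caps.nBitDepthMinus8 = 0;                          /* 8-bit content */

    if (cuvidGetDecoderCaps(&caps) != CUDA_SUCCESS) {
        printf("%-6s: query failed\n", name);
        return;
    }
    printf("%-6s: %s (max %ux%u)\n", name,
           caps.bIsSupported ? "supported" : "NOT supported",
           caps.nMaxWidth, caps.nMaxHeight);
}

int main(void)
{
    CUdevice  dev;
    CUcontext ctx;

    /* A current CUDA context is required before querying decoder caps. */
    if (cuInit(0) != CUDA_SUCCESS ||
        cuDeviceGet(&dev, 0) != CUDA_SUCCESS ||
        cuCtxCreate(&ctx, 0, dev) != CUDA_SUCCESS) {
        fprintf(stderr, "no usable CUDA device\n");
        return 1;
    }

    query(cudaVideoCodec_H264, "H.264");
    query(cudaVideoCodec_VC1,  "VC-1");
    query(cudaVideoCodec_HEVC, "HEVC");

    cuCtxDestroy(ctx);
    return 0;
}
```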
Edit: from the GeForce 8 series wiki pages, on G84:
NVIDIA introduced 2nd-generation PureVideo with this series. As the first major update to PureVideo since the GeForce 6's launch, 2nd-gen PureVideo offered much improved hardware-decoding for H264 and VC-1 video.

Terrible, terrible, how different G8x chips had different video decoder features.