It's just spin that tries to break away from a circlejerk. The current circlejerk is "Eurogamer can do no wrong".
Eurogamer was wrong about the 2GHz CPU / 1GHz GPU clocks, but "eurogamer can do no wrong", so those were obviously just the silicon's limits, and the completely different final clocks had an excuse.
Then eurogamer was wrong about the 1600MHz memory and the max. 300MHz GPU in handheld mode, but "eurogamer can do no wrong", so those were obviously "just clock tweaks".
Tomorrow eurogamer comes up with something else, but eurogamer can do no wrong so it's just something perfectly excusable again.
All these developers must be having a blast, with ever-changing memory bandwidth and GPU and CPU clocks taking enormous jabs at whatever low-level optimizations they're trying to achieve, right up to a week from launch.
Any other outlet would have been scrutinized for these kinds of maneuvers, but eurogamer keeps moving the goalposts and somehow, for you people, that makes them more credible and not less.
I love Digital Foundry just as much as the next B3D'er, but this doesn't make them perfect.
If you want to go on believing there can be completely different hardware, X2, 512 cores, 800 GFs, Burst Processing, whatever, that's your prerogative, but there's no argument here against DF's credibility when the only thing 'wrong' about their reports is a usual clock tweak.
I already stated more than once that they're most probably right, but I just don't put 100% of my trust in anything. CPU clocks going from 2GHz in July to 1GHz in December is anything but a usual clock tweak; that's a 50% cut.
To be honest, this "wanting to believe" line is starting to look like a bit of an accusation.
This thread already has a resident eurogamer-fan troll who only posts in this thread, and half of his/her posts are about insulting anyone who dares question the Almighty Eurogamer Truth (complete with the typically trollish lack of punctuation and capitalization). That's one too many, IMO.
Some specs are subject to change, clocks most of all, which is why it came as no surprise when the Xbox One got its upclocks (and other platforms have had downclocks). Because that's just a setting.
Cutting memory bandwidth in portable mode from 25GB/s to 20GB/s (20% less) in shared GPU/CPU resources at the last minute is
just a setting?
Please do explain how this wouldn't be a clusterfuck for developers trying to squeeze as much as possible from both operating modes.
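For a rough sense of scale (back-of-envelope, assuming a 30fps target and ignoring compression and caching, which the real figures obviously depend on): 25GB/s works out to about 25/30 ≈ 0.83GB of shared CPU+GPU traffic per frame, while 20GB/s gives about 0.67GB. A renderer whose per-frame memory budget was tuned against the first number suddenly has to shed roughly a sixth of its traffic in portable mode.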