Any review of an APU by itself yet? Haven't found one myself.
> Any review of an APU by itself yet? Haven't found one myself.

Literally the first article that comes up when you look for an A10-7850K Mantle review:
http://hothardware.com/News/AMD-Mantle-vs-DirectX-Benchmarks-with-Battlefield-4-and-Star-Swarm/
> Not only that, but they haven't yet done the optimizations any sensible developer would do before shipping his game (i.e. guaranteeing basic playability, and using a form of anti-aliasing and/or motion blur that does not increase the already extreme batch count five-fold).

Supersampling should work through an .INI tweak (haven't had time to check it, though).
> Yeah, that's a little weird; in those pclab.pl benchmarks in DX 11.2 the 780 seems to stomp the 290X, which I haven't seen in benchmarks before.

That happens when you test a CPU-demanding scene/game (drivers are responsible for this; NVIDIA's DX drivers utilize many cores better but do poorly on dual-core CPUs).
The reason why older AMD cards are not supported by Mantle has nothing to do with GCN, even though people keep repeating this over and over across tech forums. AMD made it clear a few months ago that Mantle is NOT tied to GCN, and that support for older hardware, and even nVidia hardware, can be added in the future.
proof: http://mygaming.co.za/news/wp-content/uploads/2013/11/22-Mantle-on-other-architectures.jpg
About the Star Swarm demo... someone posted his GTX 780 scores here without motion blur, and it was about 80 fps. Well, my 290X without motion blur gets about 65 fps in DX... and 90 fps with Mantle. What does that say? That they did not optimise as much for AMD's DX path and focused on AMD's Mantle path?
Sigh, more useless single-player numbers. Once again the European (especially the German) hardware sites put all others to shame with the quality of their reviews and testing.
> bit-tech has been especially bad for the last couple of years, and yes, that article is useless through and through.

It's going to be SP numbers, so it's best to set your expectations accordingly.
I'm still holding out for Anand's real article on Mantle, though.
Enabled tile-based compute shader lighting optimization on Nvidia for improved GPU performance (already active on AMD GPUs)
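For anyone unfamiliar with the term in that patch note: tile-based lighting splits the screen into small tiles, culls the light list once per tile, and then shades each pixel with only the lights that survived for its tile. Here's a minimal CPU-side C++ sketch of the idea; in the game this runs as a compute shader with one thread group per tile, and the tile size, light representation, and scene below are made-up illustration values, not anything taken from Frostbite.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Simplified screen-space point light and screen tile (illustrative only).
struct Light { float x, y, radius; };
struct Tile  { float x0, y0, x1, y1; std::vector<int> lights; };

// Conservative circle-vs-rectangle test: does the light touch the tile?
static bool lightTouchesTile(const Light& l, const Tile& t) {
    float cx = std::fmax(t.x0, std::fmin(l.x, t.x1));
    float cy = std::fmax(t.y0, std::fmin(l.y, t.y1));
    float dx = l.x - cx, dy = l.y - cy;
    return dx * dx + dy * dy <= l.radius * l.radius;
}

int main() {
    const int width = 1280, height = 720, tileSize = 16;
    std::vector<Light> lights = { {100, 100, 50}, {640, 360, 200}, {1200, 700, 30} };

    // Build one tile per 16x16 block and cull the light list once per tile,
    // instead of testing every light at every pixel.
    std::vector<Tile> tiles;
    for (int ty = 0; ty < height; ty += tileSize)
        for (int tx = 0; tx < width; tx += tileSize) {
            Tile t{ (float)tx, (float)ty, (float)(tx + tileSize), (float)(ty + tileSize), {} };
            for (int i = 0; i < (int)lights.size(); ++i)
                if (lightTouchesTile(lights[i], t))
                    t.lights.push_back(i);   // per-tile light list
            tiles.push_back(t);
        }

    // Shading would then loop only over t.lights for each pixel in the tile.
    size_t touched = 0;
    for (const Tile& t : tiles) touched += t.lights.empty() ? 0 : 1;
    std::printf("%zu of %zu tiles have at least one light\n", touched, tiles.size());
    return 0;
}
```

The point of the optimization is that light culling is paid once per 16x16 tile instead of once per pixel, which is why enabling it helps regardless of GPU vendor.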
> Regardless, for such benchmarks you always have to define what states are changing between the 250k calls. If you don't change any, you can already get that in DX, of course.

Kepler and the 290X are able to draw more than one million separate (animating) objects (each using a different mesh) per frame at 60 fps using bog-standard DirectX. What state changes do you actually need between your draw calls when you are rendering to g-buffers and you use virtual texturing and virtual geometry?
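To make the "what state changes do you actually need" point concrete, here is a rough C++ sketch of a g-buffer pass where every per-object detail is just an offset into big shared buffers, so nothing gets rebound between draws. The InstanceData layout and the bind/draw helpers are stubs invented for illustration, not D3D or any engine's actual API.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical per-instance record: with virtual texturing / virtual geometry,
// everything a draw needs is just offsets into big shared buffers.
struct InstanceData {
    unsigned meshOffset;     // where this mesh's indices live in the shared buffer
    unsigned indexCount;
    unsigned transformIndex; // row in a shared transform buffer
    unsigned materialIndex;  // entry in a shared material/page table
};

// Stub "API" calls so the sketch is self-contained; real code would hit D3D/GL.
static void bindSharedState() { std::puts("bind g-buffer pipeline + shared buffers once"); }
static void drawInstance(const InstanceData& d) {
    std::printf("draw %u indices at offset %u (xform %u, material %u)\n",
                d.indexCount, d.meshOffset, d.transformIndex, d.materialIndex);
}

int main() {
    std::vector<InstanceData> scene = {
        {0, 3600, 0, 4}, {3600, 1200, 1, 4}, {4800, 900, 2, 7},
    };

    // One state setup for the whole g-buffer pass...
    bindSharedState();

    // ...then nothing changes between draws: no texture binds, no shader
    // switches, just different offsets per object. This is why the raw
    // draw-call count alone says little without knowing what state changes
    // sit between the calls.
    for (const InstanceData& d : scene)
        drawInstance(d);
    return 0;
}
```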
Even worse, DICE and AMD have been crippling performance on Nvidia hardware. Both are shameless companies.
It's going to be SP numbers, so it's best to set your expectations accordingly.
> Sigh, more useless single-player numbers. Once again the European (especially the German) hardware sites put all others to shame with the quality of their reviews and testing.

No one outside of DICE has a reliable way of measuring multiplayer reproducibly. I have yet to see any test methodologies or numbers that I would trust even with a pretty large error bar. People are either ignorant or have an agenda if they claim they can reliably benchmark BF4 multiplayer.
> That slide is from DICE's Johan Andersson (repi), it is NOT from AMD. repi is wrong about it not being tied to GCN; if it weren't tied to GCN, it would work on HD 5000/6000.

Pretty sure he was speaking more to cross-IHV portability than backwards compatibility there, but he can feel free to clarify.
> The reason why older AMD cards are not supported by Mantle has nothing to do with GCN

There are clearly things in Mantle completely unrelated to VLIW (I don't get why folks are so fixated on that with the assumption that nothing else changes between GPU architectures) that can't be supported pre-GCN. Wasn't there even confirmed stuff earlier in this thread? Read back, guys.
> Kepler and the 290X are able to draw more than one million separate (animating) objects (each using a different mesh) per frame at 60 fps using bog-standard DirectX. What state changes do you actually need between your draw calls when you are rendering to g-buffers and you use virtual texturing and virtual geometry?

I'm obviously on the same page here (see the other thread), but whether or not it's useful in your engine, it's still worth noting how much performance is left on the table for folks who want to write code in a more "traditional" CPU-fed way. Whether or not that remains common in the future is up for debate, but "free performance" is never bad really, nor is pursuing both potentially useful methods of rendering in the future.
> No one outside of DICE has a reliable way of measuring multiplayer reproducibly.

Why get hung up on that? Other sites went and played the game for a good number of minutes and showed their findings. I thought we were all trying to get away from pre-canned repeatable benchmarks which are known to be optimized by IHVs. What I want to see is a 30 minute session in a 40+ player populated map focusing on frametimes. Is that unreasonable?
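For what it's worth, "focusing on frametimes" usually means logging the time of every frame over the session and looking at the slow tail rather than the average. A minimal C++ sketch of that analysis, assuming a plain text log with one frametime in milliseconds per line (the file name and format are assumptions, not the output of any particular capture tool):

```cpp
#include <algorithm>
#include <cstdio>
#include <fstream>
#include <vector>

int main() {
    // Assumed input: one frametime in milliseconds per line, e.g. "16.7".
    std::ifstream in("frametimes.txt");
    std::vector<double> ms;
    for (double v; in >> v; ) ms.push_back(v);
    if (ms.empty()) { std::puts("no samples"); return 1; }

    std::vector<double> sorted = ms;
    std::sort(sorted.begin(), sorted.end());

    double total = 0;
    for (double v : ms) total += v;
    double avgFps = 1000.0 * ms.size() / total;

    // 99th-percentile frametime: the slow frames that averages hide.
    double p99 = sorted[(size_t)(0.99 * (sorted.size() - 1))];

    std::printf("frames: %zu  avg fps: %.1f  99th pct frametime: %.2f ms\n",
                ms.size(), avgFps, p99);
    return 0;
}
```

Average fps alone can look identical for a smooth run and a stuttery one; the 99th-percentile frametime is what separates them.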