No DX12 Software is Suitable for Benchmarking *spawn*

Crysis (2007) retro GPU/CPU test
April 20, 2020
Well, Crytek has officially announced a remaster of the original Crysis, so we, in turn, decided to test the original 2007 game. And although the game has serious performance problems on Windows 10, we successfully solved them and will share that experience with you...


https://gamegpu.com/action-/-fps-/-tps/crysis-2007-retro-test-gpu-cpu-2
 
Neon Noir Benchmarked: Crytek's Ray Tracing Benchmark Tool Tested
June 30, 2020
Crytek uses a software-based approach for ray tracing in CryEngine, meaning the RT cores inside of Turing GPUs go unused, as will the dedicated ray tracing hardware in AMD's new RDNA 2 architecture. CryEngine's ray tracing technology runs on compute shaders through DX11, meaning it'll work on most graphics cards on the market.
...
Crytek has simply pulled the veil back on software-based ray tracing technology inside of CryEngine here with its Neon Noir benchmark, but as we get closer to the reveal of AMD's next-gen RDNA 2 architecture and the next-gen Microsoft Xbox Series X and Sony PlayStation 5 consoles... ray tracing is going to become an even bigger topic for gamers.
https://www.tweaktown.com/articles/...-ray-tracing-benchmark-tool-tested/index.html
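
The interesting bit is that the intersection work is done as ordinary compute rather than on dedicated RT hardware. Purely as an illustration of what "software" ray tracing means at its core (nothing to do with CryEngine's actual implementation), here's the basic ray-sphere test that RT cores would otherwise accelerate, written in plain Python:

Code:
# Minimal ray-sphere intersection: the core test that hardware RT units accelerate,
# here done "in software". Purely illustrative; unrelated to CryEngine's code.
import math

def ray_sphere_hit(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t > 0.
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None        # distance along the ray to the hit point

# A ray from the origin pointing down -Z hits a unit sphere centered 5 units away.
print(ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # -> 4.0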
 
Curious what existing tools or new tools will be used for next-gen GPU testing.

TomsHardware - Tools of the trade
Sept. 3, 2020

PresentMon is a command-line interface for logging frametimes. This is the least user-friendly option and we don't recommend it (unless you really like text interfaces), and it's been supplanted by OCAT (Open Capture and Analytics Tool) and FrameView, both of which are based on PresentMon's core functionality.
...
OCAT was created by AMD engineers and is fully open source, while FrameView comes from Nvidia. There are minor differences in the interfaces and functionality, with the biggest being that FrameView logs power data.

We've tested graphics card power consumption using in-line hardware to measure precise loads, and the Nvidia power figures are accurate to within a few watts for Nvidia GPUs. AMD GPUs, however, report GPU-only power consumption, which can mean a difference of anywhere from 10W to as much as 100W, depending on the specific GPU (Vega being the worst offender). Otherwise, all three of these tools spit out the same general file format that gives frametimes, clock speeds, and a bunch of other details. We've standardized on using OCAT for our GPU benchmarking, but you can use FrameView or even PresentMon if you prefer.
...
We generally skip the Asus, Gigabyte, ASRock, Sapphire, etc. utilities and just use MSI Afterburner, or maybe EVGA Precision X1. Afterburner works with pretty much any GPU made in the past decade or more, while Precision X1 only works with Nvidia GPUs, which means we typically prefer Afterburner.
https://www.tomshardware.com/features/gpu-benchmarks
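
Since all of the PresentMon-derived tools write their logs as CSV, summarizing a run only takes a few lines. A minimal sketch, assuming the PresentMon-style "MsBetweenPresents" frametime column (OCAT and FrameView headers are similar but worth double-checking per version):

Code:
# Sketch: summarize a PresentMon/OCAT-style frametime CSV.
# Assumes a "MsBetweenPresents" column (PresentMon's frametime field, in ms);
# check your tool's header, as column names can differ between versions.
import csv
import statistics

def summarize(path):
    with open(path, newline="") as f:
        frametimes = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]
    avg_fps = 1000.0 / statistics.mean(frametimes)
    p99_ms = statistics.quantiles(frametimes, n=100)[98]  # 99th percentile frametime
    return avg_fps, p99_ms, 1000.0 / p99_ms               # last value ~ "1% low" FPS

if __name__ == "__main__":
    avg, p99, low = summarize("capture.csv")  # placeholder path to your log
    print(f"avg {avg:.1f} FPS, 99th pct {p99:.2f} ms, 1% low {low:.1f} FPS")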
 
The power data logging aspect of FrameView is rather notable in my opinion if it's used to present power data in conjunction with performance data. If we're really interested in real efficiency, power consumption and, to some extent, utilization behavior, that is the type of data needed.

Currently, with how power data is mostly tested and presented, it's really just showing the behavior of the power governor and dynamic clocking mechanisms of modern GPUs and CPUs.
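
A rough sketch of the kind of reduction I mean, assuming you have a power trace (FrameView/PCAT style, in watts) and a frametime trace from the same capture window; the inputs here are placeholder lists, not the tools' actual file format:

Code:
# Sketch: reduce a power trace plus a frametime trace to an efficiency figure.
# Inputs are placeholders: power samples in watts, frametimes in milliseconds;
# real FrameView/PCAT logs need to be parsed and aligned to the same run.
import statistics

def watts_per_frame(power_samples_w, frametimes_ms):
    avg_power_w = statistics.mean(power_samples_w)
    avg_fps = 1000.0 / statistics.mean(frametimes_ms)
    # Watts divided by frames-per-second is joules per frame, i.e. "watts per FPS".
    return avg_power_w / avg_fps

# Made-up numbers: ~190 W average at ~60 FPS works out to ~3.2 J (or "W") per frame.
print(watts_per_frame([188.0, 192.0, 190.0, 191.0], [16.6, 16.7, 16.7, 16.6]))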
 
How NVIDIA’s PCAT Is Changing The Way We Test Graphics Card Power
September 8, 2020
NVIDIA has developed their PCAT system, or Power Capture Analysis Tool, in order to capture direct power consumption from ALL graphics cards that plug into the PCIe slot, so that you can get a very clear barometer of actual power usage without relying on hacked-together methods.
...
The PCAT Power Profile Analyzer is the software tool provided to capture and monitor power readings across the PCI Express power profile. The breadth of this tool is exceptionally useful for us here on the site to really explore what we can monitor. The most useful feature to me is the ability to monitor power across all sources: the PCIe power cables (individually) and the PCIe slot itself.
...
We're focused on 4K performance per watt here for this discussion, so we're using DOOM Eternal and taking the GeForce RTX 2070 SUPER and the Radeon RX 5700 XT and pitting them against each other in the same scene. Mind you, this scene is static, so we're not looking at FPS performance per se, but rather how much power the cards pull, how much the total system pulls, and then how many watts it takes to generate one FPS when locked to 4K 60.
...
And there you have it: we're able to objectively identify that both cards use the same amount of power in this title when unleashed under full load, but when constrained to a target performance metric that both cards can reach, the GeForce RTX 2070 SUPER does it at 2.8 watts per frame while the Radeon RX 5700 XT does so at 3.2 watts per frame.
https://wccftech.com/how-nvidias-pcat-is-changing-the-way-we-test-graphics-card-power/
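
For context, those per-frame numbers are just average board power divided by the 60 FPS cap, so by my arithmetic (not figures from the article) they imply roughly 2.8 × 60 ≈ 168 W for the RTX 2070 SUPER versus 3.2 × 60 ≈ 192 W for the RX 5700 XT in that capped scene.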
 