Could you see if Arc can run this CFD benchmark, please -- https://github.com/ProjectPhysX/FluidX3D/releases
done. Why are you interested in knowing the result in this test, if that's not asking much? Just curious...
Intel rates their clocks (and thus TFLOPS) pretty conservatively; it's the average clock where they usually hang out, but depending on load the difference can be hundreds of MHz in either direction (not sure I've ever heard anyone report clocks going under in practice, though).
19.661 TFLOPS... I thought peak processing power was around 17 teraflops.
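That measured number lines up with simple peak-FLOPS arithmetic if the card boosts past its rated clock. A rough sketch below; the lane count assumes the full ACM-G10 die (4096 FP32 lanes), and the clock values are illustrative, not measured:

```python
# Hypothetical peak-FP32 arithmetic for the full ACM-G10 die (A770 / 770M):
# 32 Xe-cores x 16 vector engines x 8 FP32 lanes = 4096 lanes,
# each capable of 2 FLOPs per cycle (fused multiply-add).
LANES = 32 * 16 * 8  # 4096 FP32 lanes

def peak_tflops(clock_ghz: float, lanes: int = LANES) -> float:
    # lanes * 2 FLOPs/cycle * clock (GHz) gives GFLOPS; divide by 1000 for TFLOPS
    return lanes * 2 * clock_ghz / 1000.0

print(peak_tflops(2.1))  # ~17.2 TFLOPS at a conservative ~2.1 GHz rated clock
print(peak_tflops(2.4))  # ~19.66 TFLOPS at a 2.4 GHz boost clock
```

So a reported 19.661 TFLOPS is exactly what you'd expect if the GPU was holding roughly 2.4 GHz during the benchmark.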
> Cyan are you going to run FAH on it?

do you mean Supreme Commander Forged Alliance? Dunno what the H stands for. I remember playing the game 3-4 years ago, I have you in my friends list in Steam
Folding @ Home.
as for old games, darn, I gotta redo the ENTIRE message with all those games that don't work in theory.

I was hunting for a NUC for a home file and media server and ended up with this. Pricier than the smaller NUCs, but I was also keen to try Arc, so I thought why not; plus I can use it to beam games around the house via Steam. It has a 16GB 770M (full G10 die as used in the 770 LE, but TDP restricted). My experience, for those curious:
- default Windows 11 VGA driver is very choppy (had to use this before I got Wi-Fi working), with hitching and long pauses
- earlier Arc drivers do not include Arc Control. I had to update to the latest 'launch day' driver to get it
- no problems with video playback or Windows so 2D is fine
- Idles at ~32W according to Arc Control
- Tried Civ 6 first as it's a quick download. No problems in DX12 mode, haven't tried DX11 yet
- 770M boosts to 2000MHz @ 0.996V by default. Memory running at 16GHz (512GB/s effective)
Looks like lots of issues in old titles as per Cyan above. Really interested to see how it progresses over time.
Edit: the Arc overlay seems very confused between the Xe graphics included in the 12700H and the 770M. I've had to use HWiNFO to properly see clock and TDP data. The 770M is TDP limited to 120W in balanced mode.
Someone was saying the performance issue in games like CS:GO seems to be due to a lack of parallelism. The micro-level benchmarks done by Chips and Cheese reflect that; in that particular instance the A770 was on the level of an A380.
- Higher execution unit latency
- Low memory controller and cache performance, and bad at hiding them
- Difficulty scaling memory bandwidth with threads, yet it especially needs that to make up for the above-mentioned deficiencies.
So how much can be solved by software and how much by hardware? They also mention part of the problem is due to the "iGPU" mentality.
Some aspects are not just one generation behind, but two, three, or four. Some micro-level tests put it on par with AMD's TeraScale 2 GPUs!
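The "needs parallelism to hide latency" point can be made concrete with Little's law: sustained bandwidth times memory latency equals the bytes that must be in flight at every moment. A back-of-the-envelope sketch; the latency figure here is illustrative, not a measured A770 number:

```python
# Little's law applied to a GPU memory subsystem:
# a chip that cannot keep this many bytes outstanding (too few threads in
# flight, or a high-latency cache hierarchy) never reaches paper bandwidth.
def bytes_in_flight(bw_gb_s: float, latency_ns: float) -> float:
    # GB/s * ns conveniently cancels to plain bytes
    return bw_gb_s * latency_ns

print(bytes_in_flight(512.0, 300.0))  # 153600.0 bytes must be outstanding
```

So higher EU/cache latency directly multiplies how much concurrency the scheduler must sustain, which is exactly where the micro-benchmarks say Arc struggles.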
> - 770M boosts to 2000MHz @ 0.996V by default. Memory running at 16GHz (512GB/s effective)

I know it might sound nitpicky, but no memory works anywhere near those clocks. You mean 16 GT/s or Gbps.
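The corrected units still give the same headline figure: effective bandwidth is the per-pin transfer rate times the bus width. A quick sketch, assuming the 256-bit bus of the full G10 die:

```python
def bandwidth_gb_s(gt_per_s: float, bus_bits: int) -> float:
    # each transfer moves bus_bits / 8 bytes across the whole bus
    return gt_per_s * bus_bits / 8

print(bandwidth_gb_s(16, 256))  # 512.0 GB/s, matching the quoted number
```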
Indeed, this CFD simulator uses a lower-precision data format for memory storage, i.e. during load/store operations. This might explain the underutilization bottlenecks on some GPU architectures related to the memory subsystem. Here is a 1080 Ti, gaining quite an advantage with FP16C packing:

Am I right in assuming no NVidia GPUs benefit from fp16?
I would be happy if there is at least a benefit from reduced LDS usage or maybe less register pressure, if anyone can tell.
Edit: Reading about the project reveals fp16 is only about memory compression; all arithmetic is done in fp32.
But Intel has twice the fp16 throughput of fp32, so I guess it's similar to AMD, while NV lacks the ALU advantage.
Edit 2: I see Turing had double fp16 rate, but neither Ampere nor Ada does. Too bad. Seems they concluded it's not worth it.
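The "fp16 for storage, fp32 for arithmetic" idea can be sketched in a few lines. This is my own NumPy illustration, not FluidX3D code, and the relaxation step is a generic LBM-style placeholder:

```python
import numpy as np

# Store in float16 to halve memory traffic; compute in float32 so the
# arithmetic keeps full precision -- mirroring the "fp16 is only memory
# compression" behaviour described above.
def relax_fp16_storage(f16: np.ndarray, feq: np.ndarray, omega: float) -> np.ndarray:
    f32 = f16.astype(np.float32)       # unpack to fp32 on load
    out = f32 + omega * (feq - f32)    # all math done in fp32
    return out.astype(np.float16)      # pack back to fp16 on store

f = np.ones(1024, dtype=np.float16)
print(f.nbytes)  # 2048 bytes, versus 4096 for the same array in fp32
```

This is why it helps bandwidth-bound GPUs regardless of their fp16 ALU rate: the loads and stores shrink even when the math units see only fp32.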
More tests with OLD GAMES, or those of a certain age, and I'll leave it there.
Call of Juarez, DX10, in-game (smooth but framepacing not good; 4K, maxed out).
Call of Juarez Gunslinger. 4K maxed out. Good framerate but framepacing issues.
Metro Last Light Redux. 4K maxed out. SSAA x 0.5 -that is, a little more than 4K native internally-. Smooth as silk. Auto HDR.
GAMES RUNNING WELL (resolution set to 4K, 60fps, maxed out settings)
Supreme Commander Forged Alliance. Superb.
thanks for the explanation, didn't know that, as I thought 0.5 SSAA would be equal to a 50% increase in pixels, given it's usually an up-sampling effect. Knowing that, Metro 2033 Redux and Metro Last Light Redux run perfectly fine at native 4K 60fps.

Great reports btw, very interested to see these.
Pretty sure they both have framepacing issues on Nvidia too, btw; just general behavior with these games. Need RivaTuner/forced vsync to remedy it.
Actually no, it's an oddly named setting - 0.5 means half res, not "native + SSAA". So you're more likely rendering at 1080p internally. Turn SSAA off entirely to just render at native res.
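Assuming the factor scales each axis (my reading of the "half res" description), the internal resolution works out like this:

```python
def internal_res(width: int, height: int, ssaa_factor: float) -> tuple:
    # assumption: the SSAA factor applies per axis,
    # so 0.5 means a quarter of the total pixels
    return (int(width * ssaa_factor), int(height * ssaa_factor))

print(internal_res(3840, 2160, 0.5))  # (1920, 1080): 1080p internal at "4K + SSAA x0.5"
print(internal_res(3840, 2160, 2.0))  # (7680, 4320): true supersampling at factor 2
```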