To me it is a split between them, but the pricing of the new 1060 makes it less desirable than the 580 (where at least there is some chance of getting 'moderately' priced custom 580s relative to the latest custom 9GHz 1060).
Benchmarks are a quirky business, especially when chasing best performance and testing uncapped (no frame limiting).
HardwareCanucks, which also uses PresentMon (and FCAT), shows Titanfall 2 marginally faster on the custom 1060 than on the custom 580, whereas Steven W. at Techspot has it 10% faster on the 580.
Both sites used TSAA and the highest settings.
And HardwareCanucks shows Gears of War 4 pretty even between the custom 580 and custom 1060, yet Steven W. has the 1060 over 10% faster.
The difference could come down to methodology (capture-window duration and number of runs) or to different maps and chapters being tested.
I assume the problem with longer-duration tests is a higher likelihood of inconsistency, losing comparability between the various GPU review runs a site does. But the problem is that games are far from consistent: dips or peaks can last a minute, and a short capture risks recording those rather than the actual overall performance.
It is a pain, but these days it seems to me the capture window needs to be around 3 minutes and, given how much what's on screen varies over a window that size, repeated probably 5 times minimum.
This is a classic example of how a too-short capture with uncapped frames can catch either a low or a high point, and it also shows the headache of reducing benchmark measurements down to a single fps score.
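To make that concrete, here is a minimal sketch in Python. All the numbers are invented for illustration (synthetic frame times with a 20-second heavy-scene "dip" in the middle of a 3-minute run); it just shows how a 30-second window can report a very different fps depending on where it happens to land:

```python
import random

random.seed(0)

# Hypothetical illustration (synthetic numbers, not from either review):
# per-frame times in ms for a ~3-minute run, with a heavy scene ("dip")
# lasting ~20 seconds in the middle.
frame_times = []
t = 0.0
while t < 180.0:
    base_ms = 12.0 if 80.0 <= t <= 100.0 else 8.0  # ~83 fps in the dip, ~125 fps elsewhere
    ft = max(random.gauss(base_ms, 0.8), 1.0)
    frame_times.append(ft)
    t += ft / 1000.0

def avg_fps(times_ms):
    """Average fps over a list of frame times in milliseconds."""
    return 1000.0 * len(times_ms) / sum(times_ms)

def window_fps(start_s, length_s):
    """Average fps over one slice of the run, selected by elapsed time."""
    elapsed_ms, window = 0.0, []
    for ft in frame_times:
        if start_s <= elapsed_ms / 1000.0 < start_s + length_s:
            window.append(ft)
        elapsed_ms += ft
    return avg_fps(window)

full = avg_fps(frame_times)          # the whole 3-minute run
short_dip = window_fps(80.0, 30.0)   # 30 s window that lands on the dip
short_calm = window_fps(10.0, 30.0)  # 30 s window that misses it

print(f"full run: {full:.0f} fps, window on dip: {short_dip:.0f} fps, "
      f"window off dip: {short_calm:.0f} fps")
```

The two 30-second windows bracket the full-run average from below and above, which is exactly the "caught at the low or high point" problem; averaging several long runs shrinks that spread.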
It suggests to me one needs at least 2 reliable sites for correlating/validating results between manufacturers if reducing the numbers down to some kind of absolute figure; more sites is better, as there is less chance they all use the same sequence/maps/etc. for their runs.
Cheers