Intel doesn't have the edge; Nvidia does. Ryzen has been shown to dominate Atom and Skylake parts at very low power draw (sub-20 W), and there is no low-draw Kaby Lake.
Pascal is so many miles ahead that I think it will take AMD more time to catch it. The reason Vega is less efficient may also be the reason it is so good at coin mining, though, so fixing that may be against AMD's interests.
It's just a matter of time before Ryzen starts coming out in mobile form though.
The reason they "appear" power efficient is that Nvidia chopped a great deal out of Pascal to make it sip power (and who knows what the meters really read; Nvidia uses fancy digital power circuitry, and it would not be the first time they played BS games). But of course it uses less power: there is not as much to power. They chose to chop a large chunk of hardware away to open up the power budget and clock the remaining transistors higher.
Clock for clock, density for density, Polaris and Vega are by far the more complete package, BUT it takes more power to drive it all. If AMD could clock Polaris or Vega up to similar speeds they would royally own Pascal, big time, but they cannot.
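To put rough numbers on why "just clock it up" doesn't work: dynamic power scales roughly with C*V^2*f, and near the top of the curve you have to raise voltage to gain frequency, so power grows much faster than clock speed. A minimal sketch below; the 180 W @ 1.25 GHz baseline and the voltage-tracks-frequency assumption are illustrative, not measurements from any real card.

// Rough scaling sketch: dynamic power ~ C * V^2 * f.  Near the limit,
// voltage must rise with frequency, so power grows roughly with f^3.
// All numbers below are hypothetical, for illustration only.
#include <cstdio>

int main() {
    const double base_f = 1.25;   // GHz, hypothetical baseline clock
    const double base_v = 1.00;   // V,   hypothetical baseline voltage
    const double base_w = 180.0;  // W,   hypothetical baseline board power
    for (double f = 1.25; f <= 1.76; f += 0.25) {
        double v = base_v * (f / base_f);                // crude: V scales with f
        double w = base_w * (v / base_v) * (v / base_v)  // V^2 term
                          * (f / base_f);                // f term
        printf("%.2f GHz -> ~%.0f W\n", f, w);
    }
    return 0;
}

Under this crude model, a 40% clock bump (1.25 to 1.75 GHz) nearly triples projected power, which is roughly the wall a wide chip like Vega runs into.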
At least Polaris/Vega are not built as cheaply as possible while still demanding a "branding tax", like Nvidia does with their products: they cheap out on VRM design, use lower-quality capacitors, and are that much more prone to a shorter life (lower thermal threshold).
Anyways, I would really love to see prices stabilize some. AMD and Nvidia both need to stomp on their customers (Asus/MSI etc.) so that their customers (you and me) get cards closer to the price they should be... putting an extra $50-$120 on top of an already expensive product is not good for anyone except the caviar-eating execs ^.^
Vega 56 already outperforms GTX 1070.
Also bear in mind that when Vega 56 is undervolted on the core and overclocked on the memory, its performance gets close to or exceeds the GTX 1080 (with lower power draw than the 1080).
AMD overvolts their GPUs to increase yields... which is why undervolting fixes power efficiency and performance (it also removes thermal throttling).
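For anyone on Linux who wants to try this without third-party tools: the amdgpu driver exposes the clock/voltage table through sysfs. A minimal sketch, assuming a kernel new enough to expose pp_od_clk_voltage for card0, run as root; the 1590 MHz @ 1100 mV and 950 MHz HBM2 numbers are illustrative assumptions, every chip needs its own stable values, and the driver may additionally want power_dpm_force_performance_level set to manual.

// Minimal sketch: undervolt the top core P-state and overclock the memory
// P-state on a Vega 56 via the amdgpu sysfs interface (Linux, run as root).
// The values below are illustrative assumptions, not recommendations.
#include <fstream>
#include <iostream>

int main() {
    const char* path = "/sys/class/drm/card0/device/pp_od_clk_voltage";
    std::ofstream od(path);
    if (!od) {
        std::cerr << "cannot open " << path << " (root? amdgpu loaded?)\n";
        return 1;
    }
    od << "s 7 1590 1100" << std::endl;  // sclk state 7: 1590 MHz @ 1100 mV (undervolt)
    od << "m 3 950 950" << std::endl;    // mclk state 3: HBM2 at 950 MHz (overclock)
    od << "c" << std::endl;              // commit the modified table
    return od ? 0 : 1;
}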
But AMD is also making their GPUs on a manufacturing process that's not suited to high clock speeds. Nvidia has access to Samsung's process, which allowed them to make Pascal (which is nothing more than an overclocked Maxwell).
Finally, per Raja Koduri's statement, Vega's architecture apparently is not optimized for games, and there are several features in Vega that developers need to use before we see more performance.
Nvidia uses TSMC, NOT Samsung. GF (GlobalFoundries) uses/shares the IBM/Samsung 14 nm design, whereas TSMC uses its own 16 nm process. The main reason the Nvidia 1000 series is "so fast" is that they optimized the design, cutting out all the "extras" to focus purely on gaming tasks rather than the advanced stuff found in hashing or a "true" DX12 feature set, where Polaris/Vega are capable of far, far more than the GTX 1000 series.
If Nvidia were to use more or less the same transistor density and extras to deliver everything DX12 offers, without relying on software tricks/hacks to "seem" as fast as they appear, their apparent efficiency would go into the toilet and their raw clock-speed advantage would drop like a stone. You can force transistors to run at a higher frequency if the design is "lean" (read: optimized for clock speed alone), but if it is "fat" enough to do more, it simply cannot clock as high.
Polaris/Vega may not be able to clock as high, but they deliver a lot of performance at the clocks they CAN hit. Much like Ryzen, which has a clock-speed deficit versus various Core i models yet competes on near-even footing at lower clocks, they might suffer some in apps/games "optimized" for high clocks and high IPC, the kind the GTX 1000 series is built for.
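To put numbers on "much performance for the clocks they CAN hit": peak FP32 throughput is just 2 ops per FMA x shader count x clock, and on paper the wide-but-slower chip wins big. A quick sketch using the public boost-clock specs; how much of that width a game actually keeps busy is the whole argument:

// Back-of-the-envelope peak FP32 throughput: 2 (FMA) * shaders * clock.
// Shader counts and boost clocks are the public reference specs.
#include <cstdio>

int main() {
    struct Gpu { const char* name; int shaders; double boost_ghz; };
    const Gpu gpus[] = {
        {"Vega 64  (wide, lower clock)", 4096, 1.546},
        {"GTX 1080 (lean, higher clock)", 2560, 1.733},
    };
    for (const Gpu& g : gpus) {
        double tflops = 2.0 * g.shaders * g.boost_ghz / 1000.0;
        printf("%-31s %5.2f TFLOPS FP32\n", g.name, tflops);
    }
    return 0;
}

That is roughly 12.7 TFLOPS against 8.9 TFLOPS on paper, yet the two trade blows in games, which is exactly the "fat but underfed" point.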
Anyways, long story short: Vega was meant to be a workhorse, just like pretty much every Radeon ever released, whereas GTX cards have for years been moving more and more toward pure gaming grunt with less "fat" in the design.
Also, to the guy below me: supposedly it is NOT Volta that was rumored to use GDDR6 (Volta was supposed to use either HBM2 or HBM3 once AMD's "first access" is gone); more likely it will be faster GDDR5X for Nvidia, with AMD using GDDR6 first.
Historically, AMD adopted the newer memory standards and many of the newer features of the various DX/OpenGL versions first, and Nvidia came in AFTER others proved them useful, so Nvidia could then force multi-millions of dollars down the throats of folks like MSFT to optimize/tweak for their specific needs, at the cost of gamers/devs.
Tessellation is a prime example of that. Had MSFT NOT basically forced AMD to build according to Nvidia's whims, with Nvidia crying "unfair advantage" after AMD had spent many years and many millions of dollars implementing it generation after generation, AMD would likely have stomped Nvidia to the curb. Instead, Nvidia was allowed to "trick" the software into making them appear much better at it, even if the density of the final image was subpar, just faster than AMD's. Same with PhysX, before Nvidia took the ability to use it away from Radeons (because it made many higher-tier Nvidia cards look like crap in comparison).
Radeons tend to be more raw horsepower/grunt, whereas GTX cards have tended (for many years now) to be the "tuner" type in comparison.
See, that is not 100% true. The ONLY card Nvidia currently has that 100% supports DX12 is the 980 Ti; the rest are some support here, some support there. DX11.1 is essentially a subset of DX12: to be fully compliant at the DX12 feature levels (12_0, 12_1) you also have to support 11.1. I was looking at this yesterday: Nvidia chose not to support DX11.1, so many of the DX12 features WILL NOT be anything but software-driven, not hardware. Something like the advanced tiling used for tessellation will be usable on many Nvidia cards, but the fact of the matter is that the MOST IMPORTANT DX12 features are currently available, and driven in hardware, only on ALL GCN-based products and basically just the 980 Ti. Nvidia basically decided to pick and choose what they will and will not support, so for them to even remotely claim FULL DX12 support is an outright lie, whereas AMD CAN say it without lying at all. AMD chose not to use some of the basically unneeded things in DX12, but beyond that, all GCN cards are DX11.1 compliant and so also get the lion's share of what DX12 brings to the table. And that's fact: not software-driven, but HARDWARE, which is bar none always better.
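If you want to check what a card actually reports instead of arguing over box quotes, D3D12 will tell you which option tiers the driver exposes. A minimal Windows sketch (compile against d3d12.lib); which tiers count as "full DX12" is the argument above, the query itself is neutral:

// Minimal sketch: query the D3D12 option tiers the installed driver reports.
// Requesting feature level 11_1 at device creation ties into the DX11.1
// point above: creation fails outright if the hardware lacks it.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

int main() {
    Microsoft::WRL::ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_1,
                                 IID_PPV_ARGS(&device)))) {
        printf("no D3D12 device at feature level 11_1\n");
        return 1;
    }
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &opts, sizeof(opts)))) {
        printf("ResourceBindingTier:           %d\n", (int)opts.ResourceBindingTier);
        printf("TiledResourcesTier:            %d\n", (int)opts.TiledResourcesTier);
        printf("ConservativeRasterizationTier: %d\n", (int)opts.ConservativeRasterizationTier);
        printf("ROVs supported:                %s\n", opts.ROVsSupported ? "yes" : "no");
    }
    return 0;
}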
guess you should go tell them they are lying......
shouldn't be that surprising really when it's Nvidia and Epic that MS has been working with for nearly every DX12 demo.