I had to tune my upright piano yesterday and was delighted to discover pianoscope 3.0 is out!
I shared my experience with pianoscope 3.0 on PianoTell.
The two features I was interested in (the AI tuning styles and the multi-mic support) didn't really pan out for me:
All in all, my piano sounds nice after tuning it with V3 (despite using the wrong tuning style), but it always did, so I'm very much neutral to cold on this change. I lean a bit negative because I have no idea what AI means exactly in this context or how it's implemented. Is it deterministic or non-deterministic? Is it farmed out to the cloud? Will the results be repeatable?
I had no trouble tuning the piano while the TV was playing using the multi-mic feature. The app correctly prioritized the piano sound most of the time. However, my son's singing still throws it off. He sounds nothing like a piano, but he's always been able to throw off pianoscope since day one... and the multi-mic feature didn't help at all with that. Sadly, overall, this is a huge regression from my contact mic. I do have the option of getting multiple contact mics and seeing if the multi-mic feature can better take advantage of that setup.
I'm super happy to see so much investment in pianoscope! I never dreamed I'd be able to tune my own pianos and have them sound so good.