I had to write the scripts to implement this concept myself, and it wasn't a quick and easy task. It would have gone a lot more smoothly if I had been able to abstract away some of those queries with a tool like this.
We use index mappings at TaskRabbit, and it does make things a lot easier. It helps with 0-downtime mapping changes (as you point out), and it also lets us combine large indexes. We segment large data collections by time (e.g. events_year_month), which we roll up under an alias. This lets us delete/add large collections without downtime as well.
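That monthly rollup can be sketched as a single atomic `_aliases` call that swaps an expired month out and a new month in under the same alias; the index and alias names below are hypothetical, following the events_year_month pattern above.

```python
import json

def rollup_actions(alias, add_index, remove_index):
    """Build an atomic _aliases actions payload: add one monthly
    index and drop another under the same alias in a single request,
    so readers of the alias never see a gap or a partial state."""
    return {
        "actions": [
            {"remove": {"index": remove_index, "alias": alias}},
            {"add": {"index": add_index, "alias": alias}},
        ]
    }

# POST this body to /_aliases; both changes apply atomically.
body = rollup_actions("events", "events_2015_06", "events_2014_06")
print(json.dumps(body, indent=2))
```

Because both actions land in one request, queries against the alias keep working throughout the swap.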
What's cool about the elasticsearch API (and therefore the elasticdump tool) is that, thanks to aliases, you can read/write to and from an alias just like you can an index.
Snapshot/restore doesn't allow applying new mappings.
Also, I use our similar custom tool* for dev purposes, where you might want to kill the copy after a while and just work with a subset of the data... or explicitly apply a filter to copy only a subset of the data.
On top of that, you can apply transformations to the data if you use scroll and bulk, but not with dump/restore. On the flip side, dump/restore allows incremental transfer, which is a bit of work if you're implementing your own copy script.
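A minimal sketch of that scroll-and-bulk copy with a per-document transformation, assuming the elasticsearch-py client (`helpers.scan` / `helpers.bulk`); the `migrated` flag is a made-up example transformation, and the index names are hypothetical.

```python
def transform(doc, target_index):
    """Reshape one scrolled hit into a bulk index action for the
    target index, applying an arbitrary transformation on the way."""
    src = dict(doc["_source"])
    src["migrated"] = True  # hypothetical example transformation
    return {
        "_index": target_index,
        "_id": doc["_id"],
        "_source": src,
    }

# Against a live cluster this would be roughly:
#   from elasticsearch import Elasticsearch, helpers
#   es = Elasticsearch()
#   actions = (transform(d, "events_v2")
#              for d in helpers.scan(es, index="events_v1",
#                                    query={"query": {"match_all": {}}}))
#   helpers.bulk(es, actions)

hit = {"_id": "1", "_source": {"user": "a"}}
print(transform(hit, "events_v2"))
```

The `query` passed to `scan` is also where you'd put the filter mentioned above to copy only a subset of the data.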
We have a set of rake tasks that do all that and a bit more (e.g. reindexing into a new index and flipping the associated alias), but they'd need some cleanup and some extraction before public use.
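The reindex-and-flip flow those tasks cover can be sketched as an ordered plan of HTTP requests (names and mapping here are hypothetical): create the new index with the new mapping, copy documents over (e.g. via scroll + bulk as above), then atomically repoint the alias.

```python
def reindex_plan(alias, old_index, new_index, new_mappings):
    """Ordered (method, path, body) requests for a zero-downtime
    reindex: create the new index with the new mappings, then
    (after copying documents old -> new, e.g. via scroll + bulk)
    flip the alias to the new index in one atomic _aliases call."""
    return [
        ("PUT", "/" + new_index, {"mappings": new_mappings}),
        ("POST", "/_aliases", {"actions": [
            {"remove": {"index": old_index, "alias": alias}},
            {"add": {"index": new_index, "alias": alias}},
        ]}),
    ]

plan = reindex_plan("events", "events_v1", "events_v2",
                    {"properties": {"user": {"type": "keyword"}}})
for method, path, _body in plan:
    print(method, path)
```

Since clients only ever talk to the alias, the flip is the only moment the move becomes visible, and it's atomic.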
Replicating this could be a good way for me to exercise the (okay) index and (very poor at the moment) mapping support in https://github.com/bitemyapp/bloodhound