Beyond radio comms, I think a telepathic "Hey [Siri/Alexa/etc]..." could have some startling social implications.
It might even be the killer app to make elective brain implants mainstream. (Of course if this could be done without hazardous and expensive implants, all the better.)
That's something I've been thinking about a lot recently. Not so much accidental orders, but rather who these software assistants will be owned and controlled by, in a foreseeable future where this sort of tech creates tighter couplings between software assistants and our own minds.
If the software assistants are sufficiently useful and tightly coupled with the human mind, I think it quite likely that the line between self and software might get blurry for some users. The ability to think a question and hear a correct answer as a voice in your head is the sort of profoundly powerful user experience that I think might plausibly alter the assumptions people make about what it means to be themselves.
If these software assistants become a part of users' own minds in their own perception of themselves, what responsibilities do the owners/operators of those systems have to their users?
I guess we'll cross that bridge when we get there, but the relative immaturity of FOSS software assistants is starting to unnerve me. In 2040, when Amazon starts selling "god in a box" to the general public, a two-way telepathic connection to a state-of-the-art quasi-AGI living in the cloud, will there be a viable FOSS alternative?
It's already blurry; humans are already used to using chunks of the environment as parts of their state of mind. I have personal experience with this: as I've been dragged from my previous highly-customized Linux desktop into a more software-conformity-centric world, it's had pretty distinctly constraining effects on the way I'm able to use my mind, making various forms of easy external fluidity close to impossible. I've tried to fight against it, but the environment evolves so that people expect you to have the executive function of all their devices combined, and the only way you're allowed to keep up is by having either the same or similar devices, or orders of magnitude more resources.
Toddlers are, today, growing up with iPads, YouTube, and Alexa/Siri as just a natural part of the world. As far as I've observed (admittedly not much), educators and parents are far behind. I would speculate this is both because grasping the indirection-of-agency that this sort of technology creates is a heavy abstract task of the kind that doesn't seem to filter through those parts of civilization well, and because the technology can change too quickly for any attempt to pin it down to stick. And pinning it down in too static a fashion could have its own horrible effects.
“FOSS” in the original sense is largely a distraction here (even though it is still an important idea), given that we've wound up in the “programming is specialized” world. The dynamic characteristics involve how agency flows through systems, and in the presence of highly distributed and often SaaSS (Service as a Software Substitute, as the FSF describes it) systems, being able to alter the source code isn't a solid defense even at “skilled programmer” speeds: going against a rushing current just means you get torn apart as soon as you touch the world. I think we need a new word for what you probably meant but which I don't know how to articulate well.
Now, think about children having these augmentations activated from birth. Their "self" would be essentially enmeshed with the external service. Scary.
Edit: strange that I didn't think of it directly, but that is the Borg from Star Trek.
In a way, we're already there, no? We listen to what Yelp reviewers tell us. We stop trying as hard to memorize facts as we offload our cognitive functions to Google search.
I guess it's the tighter coupling, compared to what we have now, that makes the idea repulsive.
Also, as far as a FOSS alternative goes, I wouldn't count on it. It's not so much the code; in time it'd be the huge data-crunching that counts, something only big corporations are able to do.
Even if we just had reliable FOSS voice recognition without the rest, I think hackers could create powerful user experiences. But alas, even that seems to be asking too much. There are some FOSS efforts to implement state-of-the-art solutions with lots of training data (Mozilla has been working on this, from what I understand), but last I checked nothing was really ready yet, and the stuff Mozilla is working on needs really beefy server hardware to run, which I think unfortunately disqualifies it as a viable competitor to the commercial offerings (which also use expensive hardware, but don't require end users to know anything about it).
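For illustration, here's roughly what one-shot transcription with Mozilla's DeepSpeech Python bindings looks like today (just a sketch; the model/scorer filenames and the WAV path are placeholders for whatever release you'd download):

    import wave
    import numpy as np
    from deepspeech import Model

    # Load a released acoustic model and, optionally, the language-model scorer
    ds = Model('deepspeech-models.pbmm')
    ds.enableExternalScorer('deepspeech-models.scorer')

    # DeepSpeech expects 16 kHz, 16-bit mono PCM audio
    with wave.open('utterance.wav', 'rb') as w:
        audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

    print(ds.stt(audio))

Even assuming the models get good enough, that's batch transcription of a recorded clip, not the always-on, low-latency wake-word experience the commercial assistants ship.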
Surely that won't be the only option, and you'll be able to run, say, Arch Linux on your implant. Just be sure to read the news before a full system upgrade, lest you end up in a coma and need a hard reboot from a live USB image.
I applaud them for trying hard to make the operation as approachable as LASIK, but puncturing a hole through my skull, no matter how tiny the hole is, and sewing tiny threads to my white matter is a no-no for me.
But then again, the thought of a laser cutting your cornea was probably incomprehensible too at its inception.
At a very high level, BMI and direct brain-to-brain communication will make it possible to solve incredibly complex problems and create an exponential acceleration in human evolution.