One thing a lot of people haven’t cottoned onto about the release of Apple’s AirPods is that this isn’t just the end of headphone cords, but the beginning of mass-produced implants.

It sounds like the stuff of science fiction, but so does having the entire world’s collective information stored on a brick that slips into your pocket – and tracks your footsteps at the same time. Humans are quick to adapt to ideas that seemed ludicrous mere months earlier, which is why you will see scores of people hunched over, staring and swiping at their phones on public transport, and think nothing of it. Let’s not even get into Pokémon Go.

Things are going to get a lot more ridiculous in the music world soon, with the introduction of voice-controlled headphones – technology bearing the ominous name ‘His Master’s Voice’.

It’s being developed by a company called Speak Music, who have already figured out how to use these voice commands with Spotify, iHeartRadio and various other streaming services. They have teamed up with Monster, who have developed two tiers of headphones: Elements (roughly US$199) and Clarity (US$90) – both expected to ship in the first half of 2017.

“There are literally thousands of different voice commands,” Mark Anderson, CEO of Speak Music, explains to Mashable. “Play any song, any artist, any playlist, any radio station, play songs by mood or activity. It’s really empowering the next generation of headphones to make them personal music assistants.”

What he means is that you will be wearing a robot on your head.

“Since most people are listening to music through headphones, that really needs to be the command-and-control center of your music experience,” Anderson continues. “The idea is to untether yourself from this small screen. Because it’s very difficult to find what you’re looking for and stop what you’re doing to find the music you want to play.”

Again: a robot on your head.

It seems fraught with problems. While Siri has proven that voice commands can be a useful tool, she/it has also shown how unreliable they can be. Aside from varying accents, speech impediments and all the other variables, there is the fact that users will need to know the exact titles of songs and artists, something that can be approximated and then checked when searching a playlist on a screen. The program will also need to differentiate between the hundreds of songs named, for example, ‘Cry’, ‘Blue’ or ‘Shine’ in order to choose the correct one. Will it read back a list of options, pick the most popular of those songs, or simply implode like Lingu when faced with an insurmountable command?

How sensitive is the microphone? When headphones are used in a gym (a notoriously noisy place, and one in which this technology would seem ideal), will it catch every snippet of conversation and start programming songs? Or will it require the user to repeatedly yell “Boombastic” in a crowded train carriage like a lunatic? Couple this software with cordless earphones, and you can imagine the chaos and confusion that will ensue in public places.

No doubt these worries will become redundant as the technology is perfected and the wrinkles are ironed out. And, of course, you cannot halt the inevitable march of progress – but you can certainly complain about it. Let’s hope whatever robot you’re complaining to understands your accent.
