Unfortunately, Android has an Achilles’ heel — actually, more like Achilles’ entire leg. To issue spoken commands, you have to tap the microphone icon on the Google search bar. And it’s only on the home screen or the Google Now screen (swipe up from the bottom). So you can’t speak commands when your phone is locked, or when you’re in another app.
On the iPhone, you hold down the Home button or the clicker on your earbuds cord, so the voice command feature works when the phone is asleep or in any app.
In other words, to use an Android phone’s speech features, you frequently have to pick it up, and you always have to look at it, which defeats much of the purpose. The exception: Motorola’s new phones, like the Moto X, can be set to listen all the time.
I think Mr. Pogue wrote a thorough and well-presented comparison of Apple's Siri and Google's Android speech recognition. This is a very important feature for any "smart" device, both now and going forward. Both systems are relatively new, and I am sure we will see vast improvements around the corner.
A key point is human-computer interaction, and right now I think Apple is doing it right.
UPDATE: Pogue has just released a 90-second video clip demonstrating the comparison. Very informative.