Hot on the heels of my article about voice, touch and usability, I found a short video published today on Mashable about a system that tailors voice recognition to the speaker's inferred emotion. That is, if you're angry, the system could interpret what you say with that in mind.
At the moment it's aimed at call centre systems, but it sounds to me like it could be the start of a much more nuanced approach to voice control for computers.
Here's the link to the academic research behind the excitement: http://www.uc3m.es/portal/page/portal/actualidad_cientifica/noticias/computer_system_emotional