The Quiet Future of Thinking Out Loud

Apple just spent $1.5 billion acquiring Q.ai, an Israeli startup I'd never heard of. On the surface, it's an audio AI company. But buried in its patent filings is something that caught my attention: technology that can detect speech from facial micromovements alone. No sound required. Just the tiny movements of your face when you mouth words.
It got me thinking about a real problem I've been wrestling with. I'm convinced that dictation is the future of knowledge work: I can be messy, think out loud, and be specific in ways typing never lets me be. Large language models handle that conversational mess just fine. But there's friction. Social friction. I feel awkward dictating in public. There's something uncomfortable about hearing my own voice, or having others hear me thinking out loud. So I don't do it enough. I type instead, which pushes me toward lazy, shorthand versions of my thoughts.
But what if that could change? What if I could get all the benefits of dictation — the mess, the specificity, the thinking out loud — without the social cost? What if I could mouth the words without saying them?
It's not a natural skill yet, but I think it could become one. In the near term, before we get to Elon's brain chips, silent dictation might be something I learn: a new muscle for how I work. And the fact that Apple is already betting over a billion dollars on the technology tells me they're thinking about it too.