Stop calling your AI product an “assistant”
The words we use to describe AI might be holding it back.
We need to stop calling AI products assistants.
They’re tools.
And that distinction isn’t just semantics. It’s a mindset shift that changes how we build, adopt, and trust AI.
Over the last few years, we’ve anthropomorphized AI. We’ve given it human names and personas — assistants, agents, copilots. But why?
AI is confusing to most people, especially those who don’t live and breathe tech. And I think there’s some deep psychology behind this framing. When we tell people that AI is “human-like,” we’re trying to make it relatable, but we end up making it scarier and less accessible instead.
Humans don’t want to be replaced. They want to use tools. Especially at work.
And tools have always amplified us. From the first flint blades to the wheel, to computers and smartphones, they’ve extended human capacity. They don’t replace it.
That’s what AI should be too. Not an “assistant” or an “agent.” A very complex tool that happens to understand human language.
Once you adopt that mindset, it changes how you design for AI at work.
I’ve been thinking about this a lot, because this perspective shapes what we’ve built so far at Granola:
- One core job, done exceptionally well. AI that helps you write clearer, more useful notes.
- Person in the middle. You jot, you edit, you think. AI accelerates — it doesn’t decide for you.
- Control by default. Editable, enhanced notes — not locked summaries you can’t use.
A lot of people tell us that Granola is the first AI tool they actually get.
They’ve been told to “adopt AI into their workflow” without ever really knowing what that means. Then they try Granola, and it clicks. And I think that might be because we treat AI as what it truly is: a tool to help you get a job done, not an assistant.
Language matters
When you call something an assistant, you subtly tell people it’s separate from them — that it might make decisions, or even act autonomously.
When you call it a tool, you remind them they’re in control.
That framing affects how people use it, how much they trust it, and even how they feel about it.
And maybe that’s the key to helping more people finally get AI: stop pretending it’s a person, and start showing that it’s a powerful tool built for them.