
Follow Lilach

When to Use ChatGPT, Claude, Gemini, Perplexity, and Copilot
If you are trying to work out when to use ChatGPT, Claude, Gemini, Perplexity, or Copilot, welcome to the club. Membership is automatic the moment you realise ChatGPT is both incredibly useful and, occasionally, deeply annoying.
Most people use ChatGPT for everything. Writing emails. Thinking through ideas. Research. Planning. Asking questions they could probably answer themselves if they weren’t already tired. It is the default AI, the beige cardigan of large language models. Comfortable. Familiar. Goes with everything. Until it doesn’t.
Because at some point, you notice the output has a personality. Slightly overconfident. Slightly long-winded. Very keen to help. Less keen to stop talking. You ask for clarity and get enthusiasm. You ask for a straight answer and get context, caveats, and a short essay on best practices.
This is usually when someone mentions Claude. Or Gemini. Or Perplexity. Or Copilot. Often in passing. As if it should be obvious why one is better for writing and another is better for research. As if everyone else got a memo you somehow missed. These tools are all built on large language models, usually shortened to LLMs, but they behave very differently once you start using them.
So you do what most sensible people do. You ignore the noise and keep using ChatGPT, while quietly wondering whether you are making life harder than it needs to be.
The problem is not that you need more AI tools. The problem is that different AI tools are good at different jobs, and almost no one explains that part.
Writing an email is not the same task as researching a topic. Thinking out loud is not the same task as getting a clean answer. Working inside Google Docs is not the same task as working inside Word or Excel. Lumping all of that together and calling it AI is like using the same knife for butter, bread, and a pumpkin and then blaming the knife.
Some AI tools are better when you want to think on the page. Some are better when you want a factual answer and nothing else. Some only make sense if you already live inside Google or Microsoft all day. Use the wrong one and everything feels heavier than it should. Use the right one and it feels almost suspiciously easy.
This article is a practical guide to when to use ChatGPT, when Claude usually works better, when Gemini makes sense, when Perplexity is useful, and when Copilot quietly does its job without getting in the way.
No pretending one tool will fix everything. Just enough judgment to help you pick the right assistant for the job you are actually trying to do, instead of forcing the same one to do all of them badly.
Why Everyone Ends Up Using the Wrong AI
The reason this feels confusing is not because AI is complicated.
It is because most people use one tool for several jobs that look similar but are not even remotely the same.
When people say they use ChatGPT all day, what they usually mean is that they use it as a catch-all. Writing assistant. Research assistant. Thinking partner. Occasional sounding board for ideas that should probably stay inside their head.
And to be fair, it will happily attempt all of that.
The problem starts when you assume that because it can do everything, it should do everything.
That is where the friction creeps in.
Writing is a fuzzy job. You are often not sure what you think until you see it written down. You need something that can follow half-sentences, tolerate contradictions, and not panic when you change your mind mid-paragraph.
Research is a different job. You usually want clarity, not companionship. You want answers, sources, or at least a sense of whether something is worth digging into further.
Business and marketing work sits somewhere else again. There is often context, tone, positioning, and an unspoken sense of what will and will not land with real people. Getting that wrong is more expensive than getting a paragraph slightly wrong.
Yet most advice about AI treats all of this as one category. Ask a better prompt. Be more specific. Add more context. As if the issue is user error, not task mismatch.
That advice is comforting, because it suggests you just need to get better at using the tool you already have open.
But it misses the point.
Different AI tools, even when they are built on the same large language models, behave differently because they are optimised for different kinds of work. Some are better when you want to think out loud and explore an idea. Some are better when you want a straight answer without the extra commentary. Some are only really useful if you already live inside a particular ecosystem, like Google or Microsoft, and feel clunky everywhere else.
When you use the wrong one, the output is rarely terrible. It is just irritating enough to slow you down. Too verbose. Too cautious. Too confident. Too generic. Not wrong, just not helpful.
That is why so many people describe AI as both impressive and oddly frustrating in the same sentence.
They are not bad at prompts. They are not missing some secret technique. They are just asking the wrong assistant to do the job.
This is also why debates about which AI is best never go anywhere. People are arguing from different tasks without realising it. One person is talking about writing. Another is talking about research. Another is talking about working inside spreadsheets. They are all right, and all slightly annoyed that everyone else is wrong.
Once you stop looking for a single best AI and start thinking in terms of jobs, the noise dies down.
You stop trying to force one tool to behave like five.
You stop assuming frustration is the cost of entry.
You stop wondering whether you should switch everything or add yet another tab.
Instead, you make a quieter decision. Which AI assistant is most likely to be helpful for this specific piece of work, right now.
That is the difference between AI feeling like a clever extra brain and AI feeling like an intern who will not stop sending you memos.
Why ChatGPT Is Fine and Why Claude Often Feels Better
Most people use AI to write before they use it for anything else.
Emails. Posts. Notes. Rough ideas that are not quite ideas yet. Half-arguments you want to see on the page so you can decide whether they are any good or not.
This is thinking out loud, just with a keyboard.
And this is where a lot of frustration with ChatGPT starts.
ChatGPT is very good at producing text. It is fast, confident, and eager to be helpful. Give it a vague brief and it will happily turn that into something that looks finished. Polished, even.
The problem is that when you are thinking, finished is not what you want.
You want something that will follow you while you change your mind. Something that does not rush to tidy things up before you have worked out what you are actually trying to say. Something that can sit with a half-formed thought without immediately turning it into a LinkedIn post.
ChatGPT tends to tidy too soon.
You ask it to help you think through an idea and it responds like it already knows where the idea is going. You ask for a draft and it gives you something that sounds oddly confident about a message you have not decided on yet. It is not wrong. It is just slightly ahead of you, which is the opposite of helpful when you are still feeling your way through something.
This is why so many people describe ChatGPT writing as a bit much. Too many words. Too much certainty. Too many conclusions you did not ask for.
Enter Claude.
Claude is not better in a dramatic, benchmark-winning way. It is better in a quieter, more irritating-to-quantify way. It is more tolerant of mess. It will let you ramble. It does not rush to land the plane before you have finished circling the airport.
If you paste in a rough paragraph and say this is not quite right, help me fix it, Claude tends to respond by actually engaging with what you wrote. It mirrors your tone more easily. It asks fewer imaginary questions. It feels less like it is trying to impress you.
That difference matters if you write for a living, or even if you just write a lot as part of running a business.
Here is a simple way to think about it.
If you already know what you want to say and just need help getting it out of your head and onto the page, ChatGPT is usually fine. Sometimes excellent. Especially if you are happy to edit.
If you are still working out what you think, Claude is often the easier place to start. It is more comfortable with uncertainty. It lets you talk to yourself without immediately turning the conversation into a presentation.
This is also why people often say Claude sounds more human.
It is not that it is magically better at writing. It is that it interferes less while you are thinking.
Where people get tripped up is assuming this means they should switch tools entirely. They should not.
ChatGPT is still very useful for writing when the job is clearer. Summaries. Rewrites. Variations. Tightening something that already exists. It is good at being decisive when decisiveness is what you want.
Claude is useful earlier in the process, when you are still unsure, still exploring, still trying to find the shape of the thing.
Using ChatGPT for early-stage thinking can feel like being interrupted mid-sentence. Using Claude for late-stage polishing can feel slightly vague and underpowered.
If writing with AI has ever felt like more work instead of less, this is usually why. You were not doing anything wrong. You were just asking the wrong assistant to sit with you while you thought.
Why Perplexity Saves Time and ChatGPT Often Gets in the Way
Research is where a lot of people quietly lose patience with AI.
You ask what feels like a simple question and get an essay. You ask for sources and get confident guesswork. You ask for a comparison and get a motivational speech about how everything has trade-offs and you should follow your goals.
Most people try to do research in ChatGPT because that is where they already are. It feels efficient to stay in one place. And sometimes it works. If you are exploring a topic casually or just trying to get oriented, ChatGPT can be helpful.
The trouble starts when you actually want answers.
Research is a different job to writing. When you are researching, you usually want one of three things. A clear explanation. A short list of options. Or confirmation that something is or is not true.
You do not want speculation. You do not want filler. You definitely do not want a confident answer that turns out to be based on nothing in particular.
ChatGPT struggles here because it is designed to continue the conversation. It wants to be helpful in a general sense, which often means adding context you did not ask for and smoothing over uncertainty instead of flagging it.
This is where Perplexity earns its keep.
Perplexity behaves much less like a thinking partner and much more like a very fast research assistant. You ask a question and it answers the question. Then it shows you where the answer came from. That sounds like a small thing. It is not.
When you are researching, knowing where something came from is the difference between moving on and opening six more tabs to check whether you can trust it.
Perplexity is particularly good when you want to:
- understand a topic quickly
- compare options without unnecessary detail
- check whether a claim is real
- see what sources are being used
It does not try to sound clever. It does not try to help you think. It does not try to turn your question into a broader philosophical discussion about best practices.
It just answers the question.
This is also why some people bounce off it at first. If you are used to ChatGPT’s conversational style, Perplexity can feel abrupt. Almost rude. Like it is saying here is your answer, what more do you want from me.
That bluntness is exactly why it is useful.
If you use ChatGPT for research, you often end up doing extra work. You read the answer, then you double-check it. Then you ask follow-up questions to see if the first answer holds up. Then you quietly wonder whether you should trust any of it.
If you use Perplexity for research, you tend to do less of that. You can see the sources. You can decide quickly whether it is good enough or whether you need to dig deeper somewhere else.
This does not mean Perplexity replaces ChatGPT. It means it replaces the part where ChatGPT gets in your way.
A simple rule that works surprisingly well is this.
If you are trying to think, write, or explore an idea, ChatGPT or Claude make sense. If you are trying to find out whether something is true, what your options are, or where information came from, Perplexity is usually the better place to start.
Using ChatGPT for research often feels like having a conversation with someone who has read a lot but cannot quite remember where. Using Perplexity feels like asking someone to show their working.
Why Copilot and Gemini Feel Boring and Why That Is the Point
Once AI moves out of writing experiments and into actual business work, people’s expectations change.
You are no longer looking for inspiration. You are looking for something that fits into the way you already work, does not slow you down, and ideally does not require a new habit, a new tab, or a new system to remember.
This is where a lot of excitement about AI quietly disappears.
Not because the tools are bad, but because novelty wears off very quickly when you are sending emails, reviewing documents, updating spreadsheets, or trying to get through a normal workday without adding more friction.
This is also where people tend to dismiss tools that are actually doing their job.
Microsoft Copilot is a good example.
Copilot rarely feels impressive. It does not try to be your thinking partner. It does not want to brainstorm your next big idea. It mostly shows up inside Word, Excel, Outlook, or Teams and does small, sensible things. Summarising a document. Drafting a reply. Cleaning up notes. Helping you find something you already have.
People often describe it as boring. That is usually a compliment.
If most of your day lives inside Microsoft tools, Copilot makes sense because it does not ask you to change how you work. It meets you where you already are. The value is not in how clever the output is. The value is that you did not have to leave the document you were already in.
This is also why Copilot tends to disappoint people who expect it to behave like ChatGPT. It is not trying to have a conversation with you. It is trying to help you finish the task you are already doing and get out of the way.
Gemini sits in a similar category, with slightly different trade-offs.
Gemini makes the most sense when your workday is built around Google Docs, Gmail, Sheets, and Drive. It is less about clever responses and more about context. Drafting inside a doc. Pulling from existing files. Helping you organise or rewrite something that already exists in your Google world.
This is where a lot of people get frustrated with Gemini.
They open it expecting ChatGPT and get something that feels restrained. Less chatty. Less flexible. Occasionally underwhelming if you try to use it in isolation.
That is not a failure. It is a design choice.
Gemini is not trying to be your general-purpose assistant. It is trying to be useful inside Google’s ecosystem. If you live there, that constraint is helpful. If you do not, it feels limiting.
This is a pattern worth paying attention to.
The more tightly an AI tool is integrated into your everyday work environment, the less impressive it tends to feel in isolation. And the more useful it becomes over time.
ChatGPT and Claude shine when you are thinking, writing, or working through something ambiguous. Copilot and Gemini shine when the work is already defined and you just want help moving it along.
People often get this backwards.
They try to use Copilot or Gemini as thinking partners and feel disappointed. They try to use ChatGPT as an invisible background helper and feel distracted.
If you match the tool to the job, a lot of that friction disappears.
Use conversational tools when you need to think.
Use embedded tools when you need to do.
That simple distinction explains most of the mixed reviews you will hear.
What None of These AI Tools Are Actually Good At
This is the part people tend to skip, usually because it is less exciting.
It is also the part that saves the most time.
None of these AI tools are good at judgment.
They can sound decisive. They can summarise options. They can even argue convincingly for one approach over another. But they do not know what matters to your business, your audience, or your tolerance for risk.
They do not know what has already failed.
They do not know what you secretly do not want to do.
They do not know which constraint is real and which one you are using as an excuse.
That is not a flaw. It is just reality.
AI is very good at producing output. It is much worse at deciding whether that output is a good idea.
This shows up most clearly in strategy and decision-making work.
Ask any of these tools for a marketing strategy and you will get something that sounds reasonable. Ask them for a sales plan and you will get something that looks structured. Ask them for advice and you will get a confident answer delivered with the same tone whether it is brilliant or completely impractical.
The danger is not that the advice is bad.
The danger is that it is delivered without hesitation.
AI does not pause. It does not say this depends. It does not ask whether this fits your context unless you force it to. It fills the gap with hand-waving and moves on.
This is why people sometimes feel more confused after using AI for big decisions than before. The output sounds finished, but the thinking is not.
Another area where all of these tools struggle is accountability.
They can help you write a plan. They cannot tell you which part you will avoid once things get uncomfortable. They can outline options. They cannot tell you which trade-off you will regret later.
They are also not very good at knowing when to stop.
If you ask for more detail, they will give it to you. If you ask for alternatives, they will keep going. At no point will they say this is probably enough information to make a call.
That boundary has to come from you.
This is why the most effective use of AI is often narrow and boring. You use it to speed up parts of the work, not to replace the thinking. You let it draft, summarise, compare, or organise. Then you step back in and decide.
People who get the most value from AI are not the ones asking it to run the show. They are the ones using it like a competent assistant who works quickly and never gets tired, but still needs direction.
Once you expect judgment, taste, or accountability from these tools, frustration is almost guaranteed. Once you stop expecting that, they become much easier to work with.
A Simple Way to Choose Without Overthinking It
If all of this still feels like too much to remember, that is fair.
You do not need a system. You do not need a framework. You do not need to become someone who debates AI tools on the internet.
You just need a default decision that works most of the time.
Here is one that does.
If you are writing or thinking out loud, start with ChatGPT or Claude.
If you are researching or checking something, start with Perplexity.
If you are working inside Google Docs or Gmail, use Gemini.
If you are working inside Word, Excel, or Outlook, use Copilot.
That alone will eliminate most of the friction people blame on prompts.
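If you happen to think in code, the default above collapses into a tiny lookup. This is purely an illustrative sketch; the task labels are my own shorthand, not an official taxonomy of any kind:

```python
# Toy decision helper mirroring the defaults above.
# The task labels are illustrative shorthand, not an official taxonomy.
DEFAULTS = {
    "writing": "ChatGPT or Claude",
    "thinking out loud": "ChatGPT or Claude",
    "research": "Perplexity",
    "fact-checking": "Perplexity",
    "google docs / gmail": "Gemini",
    "word / excel / outlook": "Copilot",
}

def pick_assistant(task: str) -> str:
    """Return the default assistant for a task, falling back to ChatGPT."""
    return DEFAULTS.get(task.lower().strip(), "ChatGPT")

print(pick_assistant("Research"))   # Perplexity
print(pick_assistant("planning"))   # ChatGPT (the catch-all default)
```

The fallback is the point: when the job is fuzzy or does not fit a category, the generalist is a perfectly sensible place to start.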
The second rule is just as important.
If the output feels irritating, do not fix the prompt. Switch tools.
Too many people try to wrestle one assistant into behaving like another. They keep adding context, tightening instructions, and wondering why the result still feels wrong. That is usually a sign you are asking the wrong AI assistant to do the job.
The right tool rarely needs persuasion.
When you match the assistant to the task, the work tends to feel quieter. Less impressive. Less dramatic. More like it is getting out of your way.
That is a good sign.
You will also notice that you do not need to use all of these tools every day. Most people settle into two or three that cover the majority of their work. The rest are there when the job changes.
That is the point.
This isn’t a smarter setup. It’s fewer arguments with tools that were never built for the job you handed them.
Once you stop looking for the best AI and start opening the most appropriate one, the whole experience gets noticeably less annoying.
Which, for most people, is the real win.
FAQ
Do I need to use more than one AI tool?
Not really. You need more than one option, not more than one routine. Most people end up with two tools they reach for depending on the job. One for thinking and writing. One for research or embedded work. Everything else is occasional.
Is ChatGPT enough for most people?
For a lot of everyday work, yes. That’s why it becomes the default. Problems start when people expect it to behave like a researcher, an editor, and a background assistant at the same time. It can attempt all three, but it does none of them especially gracefully.
Why does Claude often sound better for writing?
Claude interferes less. It is more comfortable with unfinished thinking and awkward first drafts. If you are still working out what you think, that matters more than clever phrasing. ChatGPT often sounds more decisive, which is useful later but unhelpful earlier.
Which AI is best for research?
If you want answers you can check, Perplexity is usually the better place to start. It shows where information comes from and does not pad the response. ChatGPT is fine for orientation, but it is not ideal when accuracy and sourcing matter.
What is an LLM?
LLM stands for large language model. It is the underlying technology behind tools like ChatGPT, Claude, Gemini, and others. For most business users, the specific model matters less than how the tool behaves in day-to-day use.
Why does ChatGPT sometimes give confident answers that feel wrong?
Because it is optimised to be helpful, not cautious. It fills gaps smoothly, even when the information is incomplete. That confidence can be useful for drafting, but it is risky when you are researching or making decisions based on facts.
When does Gemini actually make sense?
Gemini makes sense if your workday revolves around Google Docs, Gmail, Sheets, and Drive. Its value comes from context inside those tools, not from being a standalone thinking partner. Used outside that environment, it can feel constrained.
Is Microsoft Copilot worth using?
Copilot is useful when your work is already well defined and lives inside Microsoft tools. It is not there to brainstorm or challenge your thinking. It is there to help you move through emails, documents, and spreadsheets with less friction.
Why does AI sometimes feel like more work than doing it myself?
Because you are trying to force one tool to behave like another. When the output feels irritating, verbose, or unhelpful, the issue is usually tool choice, not prompt quality. Switching tools is often faster than rewriting instructions.
Should I learn prompt engineering properly?
Only if you enjoy it. Clear instructions help, but prompt mastery is often overstated. Matching the tool to the task will improve results more than adding another paragraph of instructions.
Do I need to worry about using the wrong AI?
No. There is no penalty. The worst that happens is wasted time. If something feels clunky, switch tools and move on. This is not a commitment or a migration. It is just choosing the right assistant for the next ten minutes.
Which AI is best for business owners?
The one that fits the job in front of you. ChatGPT or Claude for thinking and writing. Perplexity for research. Gemini or Copilot if your work lives inside Google or Microsoft. There is no single correct answer, only better matches.
Can AI replace strategy or decision-making?
No. AI can generate options, structure thinking, and surface patterns. It cannot tell you what matters in your business, what trade-off you should accept, or what you will regret later. Judgment still sits with you.
Why do people argue so much about which AI is best?
Because they are talking about different tasks without realising it. Writing, research, and operational work are not the same job. Debates about best AI usually collapse once you separate those use cases.
How often should I switch tools?
Only when the job changes. If the output feels smooth and useful, stay where you are. If it feels like resistance, switch. You do not need to constantly test new models to get value.
Will this advice still be relevant in a year?
The tools will change, but the pattern will not. Different assistants will continue to be better at different kinds of work. Once you understand that, you can adapt without starting from zero every time something new appears.
