Every week, someone on LinkedIn discovers they’re a prompt engineering genius because they started a ChatGPT conversation with “You are a world-class copywriter” and the output sounded fancier than usual. They screenshot it, write “game-changing mega prompt” in bold, and collect a few thousand reposts from people who’ve never once tested whether it actually made a difference. Maybe you’ve copied one of these yourself. It feels like it should work. Give the AI a role and surely it performs better. Logical, right?
Wrong. Two separate research teams have now put this to the test under controlled conditions, and the results are unambiguous: telling an AI to act as a domain expert does not improve accuracy, and in some cases makes the output actively worse. The most popular AI prompting tip on the internet is, according to the research, dead weight.
If you’ve been looking for AI prompting tips that actually deliver results, this research confirms what to stop doing immediately.
The Research: What “Act Like an Expert” Actually Does to AI Output
In December 2025, Wharton’s Generative AI Labs (GAIL) published “Playing Pretend: Expert Personas Don’t Improve Factual Accuracy”. The team ran domain-specific expert personas across six different AI models on graduate-level questions spanning physics, chemistry, biology, engineering and law, with rigorous methodology and large sample sizes designed to settle the question definitively.
No expert persona reliably improved accuracy on any model. Even when the assigned expertise matched the question domain perfectly, the results were statistically indistinguishable from just asking the question with no persona at all. All that careful role-setting, and the AI performed exactly the same as if you’d typed the question cold.
Low-knowledge personas actively degraded performance. To round out the tests, the researchers also tried telling a model it was a “toddler” or a “layperson”, and accuracy dropped measurably. Domain-mismatched personas caused Gemini 2.5 Flash to refuse to answer questions entirely, not because the model was confused, but because the role instruction conflicted with the task.
A second study, “Expert Personas Improve LLM Alignment but Damage Accuracy”, published in March 2026, went further and found the underlying mechanism. On the MMLU benchmark, expert personas actively underperformed the base model: 68.0% accuracy with expert personas versus 71.6% without. The researchers found that persona prefixes activate the model’s instruction-following mode instead of focussing on factual recall, meaning the model gets so busy pretending it’s an “expert” that it diverts processing capacity away from actually retrieving knowledge.
All that prompt does is make the model a better actor. It’s trying so hard to appear like an expert that it won’t focus on, or flag, its own inaccuracies, because an expert is supposed to already know. The real risk comes when you’re using AI for tasks you couldn’t do yourself even with more time: that’s exactly when intuition about what the model is doing and why matters most, and exactly when you don’t have it.
“We stopped using persona prompts for factual work months ago and switched to laying out the specific steps an expert in that field would follow,” says Jack Headford, Operations Lead at Distl. “The output was inconsistent and we couldn’t figure out why until we started testing with and without the persona prefix. If you want accurate information from AI, drop the roleplay, understand what your expert would actually do, and then do that.”
Where Persona Prompting Actually Works (And Where It Doesn’t)
Before you throw out persona prompts entirely, there is a distinction the research draws clearly. Personas are useless for factual accuracy, which is the thing everyone assumes they help with, but they do have a measurable effect on alignment tasks.
On MT-Bench, a test designed to measure writing quality, reasoning structure and tone, expert personas improved performance in five of eight categories, including Writing, Roleplay, Reasoning and Extraction. If you tell a model to “write like a senior copywriter with 15 years of experience,” you’re shaping how it structures and presents information. You’re influencing style, not knowledge. That’s a genuinely useful effect in the right context.
Personas work for “how to say it” tasks: drafting in a particular voice, structuring a document to match a brief, matching a brand’s tone across multiple outputs. They fail completely for “what is true” tasks: researching your industry, summarising data accurately, fact-checking a claim, or answering technical questions where the answer needs to be correct.
If you’re using AI to draft social media captions in your brand voice, a persona prompt is a reasonable approach. If you’re using the same technique to produce content marketing briefs or research competitor strategies, you’re getting output that sounds more authoritative while being less accurate. Confident errors are the ones that slip through review and end up published, which makes a bad persona prompt worse than no prompt at all.
Telling a model to “be” someone doesn’t add a single fact to its training data. The AI sounds like it knows more when it’s roleplaying, but it doesn’t, and that false confidence means you’re less likely to double-check the output before hitting publish.
The Bigger Problem: Why “Simple AI Hacks” Keep Failing
Persona prompting is one symptom of a broader problem: the idea that AI is a shortcut machine and the right combination of words unlocks dramatically better results. LinkedIn is saturated with “copy this prompt, get 10x output” posts, and there are entire courses charging thousands of dollars built around the premise that prompt engineering is the skill separating amateurs from professionals.
We get why the idea is appealing. AI tools are powerful and intimidating, and a magic formula feels like a shortcut past the learning curve. But the learning curve exists because AI has specific strengths and specific limitations, and the only way to get real value from it is to understand both well enough to know which tasks to hand it and which to keep for yourself.
For SMEs spending real time and real money integrating AI into their operations, the cost of following bad advice goes beyond worse output. A business owner who copies a persona prompt template and publishes the results without review has a worse problem than a business owner who doesn’t use AI at all, because at least the second one knows they need help. The first one is publishing work they haven’t verified, with a tool they haven’t tested, based on advice nobody has proven and everyone wants to believe.
We’ve written before about the difference between content that’s genuinely helpful and content that just fills a page. AI makes it extraordinarily easy to produce the second kind, and persona prompts make it sound like the first. Google’s systems are getting better at spotting that gap every quarter, and your audience can already tell.
What Actually Works: Better AI Prompting Tips for Real Output
If persona prompting doesn’t improve accuracy, what does? The answer won’t fit in a LinkedIn screenshot, but everything here is backed by how these models actually process information rather than what gets engagement on social media.
Break an expert’s workflow into as many individual steps as you can, rather than assigning the persona. Instead of “You are a world-class marketing expert,” try “Analyse these three competitor landing pages and identify what conversion elements they share.” Giving the model a concrete task produces useful output. Giving it an identity produces confident-sounding fluff, because the model responds to what you ask it to do, not who you tell it to be.
Provide context, not character. Feed the AI the information it needs to do the job well: your brand guidelines, your audience data, your product details, your existing content strategy. If you had to choose between handing a new team member a detailed brief or just telling them they’re an expert, you’d hand them the brief every time.
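If you work with a model through its API rather than a chat window, that brief translates directly into the prompt. Here’s a minimal sketch of what it might look like, assuming the OpenAI Python SDK; the model name, brand voice and audience details are placeholders invented for illustration, not recommendations.

```python
# A minimal sketch of "brief, not persona" prompting, using the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "brief": concrete context the model can actually use (placeholder details).
brand_guidelines = "Plain Australian English, no jargon, short sentences, warm but direct."
audience = "Owners of small trades businesses (5-20 staff) who are time-poor and sceptical of hype."
task = "Draft three LinkedIn post openers announcing our new after-hours booking feature."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; swap in whatever you actually run
    messages=[
        # No persona. Just the brief: voice, audience, and a concrete task.
        {"role": "system", "content": f"Brand voice: {brand_guidelines}\nAudience: {audience}"},
        {"role": "user", "content": task},
    ],
)
print(response.choices[0].message.content)
```

The point isn’t the specific library. It’s that everything the model needs sits in the brief, and nothing in the prompt asks it to pretend to be anyone.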
Use AI for structure, not for facts. AI is excellent at organising your thinking, drafting from your notes, generating variations and spotting patterns across large sets of information. It is unreliable for factual claims, statistics and industry-specific knowledge, and that unreliability isn’t a flaw you can prompt your way around. If your AI output includes specific data points, verify every single one before it goes anywhere near a client or a publish button.
Match the tool to the task. Persona prompts for tone and style, direct questions for information retrieval, chain-of-thought prompting for complex reasoning. There’s no single technique that works best for everything, which is precisely why the “one weird trick” approach keeps producing disappointing results for the people who actually measure their output quality.
Test your own workflows. The Wharton study tested specific models on specific benchmarks, and your use case is different. If you’re deciding between two prompting approaches, run both on the same task and compare the results against each other. Your own data beats LinkedIn advice every time.
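Making that comparison repeatable takes only a few lines. The sketch below runs the same factual task with and without a persona prefix so you can put the answers side by side; it assumes the OpenAI Python SDK, and the model name and question are placeholders. For factual tasks, score the answers against a source you trust, not against each other.

```python
# A rough sketch of the "test it yourself" advice: same task, with and without a persona.
from openai import OpenAI

client = OpenAI()

# One factual question, two prompts: identical except for the persona prefix.
question = "What is the boiling point of water at 2,000 metres above sea level, in degrees Celsius?"
variants = {
    "no_persona": question,
    "expert_persona": "You are a world-class physical chemist. " + question,
}

for label, prompt in variants.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the comparison as repeatable as possible
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content.strip(), "\n")
```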
The Bottom Line for Australian Businesses Using AI
AI adoption among Australian businesses is accelerating, and the advice ecosystem around it is struggling to keep up. Most of what’s circulating about how to prompt AI comes from people selling courses and templates rather than people testing their claims against published research. The persona prompting studies are one of the first cases where a wildly popular technique has been properly examined and found to be worthless for its intended purpose, and we expect more of these myth-busting results as the field matures.
If you’re running a business with limited time and budget, focus on using AI where it saves genuine time: drafting, structuring, brainstorming, and pattern recognition. Don’t hand it tasks that require expertise, fact-checking or strategic thinking and assume the output is ready to go. And next time someone shares a “magic prompt” on LinkedIn, ask them for the data just for fun.
“The businesses getting real value from AI aren’t the ones with the cleverest prompts,” says Jack Headford. “They’re the ones who stopped looking for shortcuts and started understanding the tool. That’s not as sexy as a viral LinkedIn post, but it’s the difference between AI that actually helps your business and AI that just makes you feel productive.”
If you’re working through how AI fits into your digital strategy, or you want someone who’ll tell you straight what’s working and what’s a waste of your time, let’s have the conversation.