Welcome back!

This week’s edition is a special one, and we’re confident you’ll never use AI the same way again after reading it.

Today, we’ll be covering the hidden art of getting anything you want from AI models.

Everyone wants to maximize their AI productivity, yet almost no one knows the most important step: Prompt Engineering.

This is a full masterclass in prompt engineering. But we're not covering the basics. We're diving deep into the top strategies most people aren't using and the ones that yield mind-blowing results.

We’ll outline seven specific prompting techniques and guide you through how to use each one to maximize your AI productivity.

You won’t find many of these strategies on YouTube or X, and they work across any LLM/AI platform.

For each prompting strategy, we’ve included a cheat sheet - be sure to save each one somewhere safe.

#6 and #1 are complete game-changers - if you only implement a couple of techniques from this whole list, make it those two.

Be sure to stick around until the end for the Midweek Edge (our curated section dedicated to weekly AI research) and our Looking Ahead section, where we cover the top emerging AI trends (built to keep you ahead of the curve).

#7: Chain-of-Thought Prompting

99% of LLM prompts are just users asking for an answer. Something like “Give me financial advice based on X.”

While this style of prompting certainly works, the best move is to get your AI to reason in sequences and actually explain what it’s thinking before it gives any output.

This is exactly what CoT prompting does, forcing the model to reason through a problem before it even responds.

This allows you to intervene on any part of the LLM's logic that doesn’t make sense.

At the end of your prompts, add phrases like “reason step-by-step, and fully explain each phase of your thinking back to me.”
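For instance, a full prompt in this style might read (a hypothetical example - adapt the task to your own):

```text
I'm deciding between flat monthly pricing and usage-based pricing for my
SaaS product.

Reason step-by-step, and fully explain each phase of your thinking back
to me before giving a final recommendation.
```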

CoT prompting cheat sheet:

#6: Socratic Prompting

This is the single most powerful prompting upgrade you can make right now.

Socratic prompting can be broken down into two parts:

[explain goal] + [ask the LLM to gather all the context it needs to complete goal]

For example: “I’m working on building a Claude Skill that builds SOPs exactly how I need them. Based on this, what context do you need to accurately execute this goal?”

Instead of bogging down your AI’s context window, you hand it exactly what it needs.

Socratic prompts to use now:
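A couple of hypothetical prompts following the [goal] + [context request] formula above:

```text
I want to launch a paid newsletter for freelance designers. What context
do you need from me to plan this accurately?

I'm writing a cold-email sequence for B2B SaaS founders. Before you
draft anything, list every piece of context you need to do this well.
```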

#5: Interview Prompting

Stop writing long prompts.

Instead, just let your AI do the work for you.

Interview prompting is simple.

You tell the LLM to ask you a set number of questions before it does anything.

This could range from 10 to 50+ questions, depending on the task's complexity.

Your AI interviews you, you answer, and by the end, it has exactly what it needs to execute properly.

It’s simple yet highly effective, and it works especially well in dictation mode (voice prompting).
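A minimal interview prompt might read like this (the question count is illustrative - scale it to the task):

```text
I want you to build a 90-day content strategy for my YouTube channel.
Before you produce anything, interview me: ask me 15 questions, one at a
time, and wait for my answer before asking the next. Only start the
strategy once you have everything you need.
```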

#4: Drip-Feed Context Management

Ok, now we’re getting to the more advanced prompting strategies.

As you may know, dumping everything into one prompt is one of the most common AI mistakes.

AI models have a “context window” (basically, how much text they can process), and the more you cram in upfront, the more the signal gets diluted.

Drip-feed prompting is the solution to this problem, allowing you to provide AI with many inputs without ruining your outputs.

Start broad with your first prompt, then narrow down as the chat progresses.

Example:
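Say, for a product launch (a hypothetical sequence):

```text
Prompt 1: "I'm planning a product launch. Here's the high-level goal..."
Prompt 2: "Here's the target audience and positioning..."
Prompt 3: "Here's the launch timeline. Draft the email sequence for week 1."
```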

Essentially, you’re working through a task step by step as the model processes each input.

Drip-feeding context also makes it super simple to transfer chats to new LLMs, as each piece of context is broken up, helping the model remember all the steps you worked through.

#3: Role + Task + Constraint

This is the simplest structural upgrade you can make to any prompt, and the absolute bare bones of what every single AI prompt should contain.

Role: give the LLM an identity. "You are a direct response copywriter."

Task: tell it exactly what to produce. "Write a 5-email welcome sequence for my newsletter."

Constraint: set the rules. "No fluff, no filler, each email under 150 words."

Put them together, and your model has a complete brief instead of a vague request.
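Assembled from the three pieces above, the brief reads:

```text
You are a direct response copywriter. Write a 5-email welcome sequence
for my newsletter. No fluff, no filler, each email under 150 words.
```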

Of course, you can add many other elements to this, like [goal], [example outputs], etc., but we wanted to keep the structure to just three pieces.

#2: JSON Tags

If you’ve been experimenting with AI long enough, you may remember how viral JSON prompting went last year - and for good reason.

It’s a massive upgrade to how AI models read your text inputs, and it can even cut down on token usage (saving you money).

Instead of writing your prompt as a paragraph, you structure it as key-value pairs.

Example:

{
  "role": "direct response copywriter",
  "task": "write a 5-email welcome sequence",
  "audience": "AI-curious professionals",
  "tone": "sharp, no fluff",
  "length": "under 150 words per email"
}

AI models read this format like a brief, without all the English-language fluff.

Building these prompts may seem complicated and time-consuming, but all you really have to do is have a dedicated AI chat that builds JSON prompts for you.

Just say something like, “You are my dedicated JSON prompting chat; turn any text I send into a usable JSON tag prompt.”
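If you’d rather script the conversion than keep a dedicated chat, a minimal Python sketch does the same job (the function name and fields here are our own illustration, not a standard API):

```python
import json

def build_json_prompt(fields: dict) -> str:
    # Serialize a plain dict of prompt fields into a JSON-formatted prompt.
    return json.dumps(fields, indent=2)

prompt = build_json_prompt({
    "role": "direct response copywriter",
    "task": "write a 5-email welcome sequence",
    "tone": "sharp, no fluff",
})
print(prompt)
```

Paste the resulting string straight into your chat as the prompt.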

#1: XML Tags

Ok, this one is a complete game-changer, and we’re excited to share it with you.

What most people don’t know is that many LLMs (like Claude) are trained on “XML tags.”

XML tags are similar to JSON tags, except they use angle brackets (<>).

Example:

<role>Expert strategist</role>
<goal>Help me grow my business</goal>
<context>I run an online consulting business</context>
<format>Bullet points with a summary at the end</format>
<constraints>[your constraints]</constraints>

XML tags are the actual structure that LLMs are used to seeing, and when we prompt them in this style, they’re able to produce significantly better responses.

As with JSON prompts, we recommend using a dedicated chat for building XML prompts - this makes the process super easy.
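The same scripting shortcut works for XML - a hypothetical Python helper (again, our own illustration) that wraps each field in a matching tag:

```python
def build_xml_prompt(fields: dict) -> str:
    # Wrap each prompt field in a matching XML tag, one tag per line.
    return "\n".join(f"<{tag}>{value}</{tag}>" for tag, value in fields.items())

prompt = build_xml_prompt({
    "role": "Expert strategist",
    "goal": "Help me grow my business",
    "format": "Bullet points with a summary at the end",
})
print(prompt)
```

Because every value sits between an opening and closing tag, this style holds up well even when a field contains whole paragraphs of pasted content.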

Some advice: Instead of using JSON/XML tags for every prompt, just use them for complex tasks and when you need the best AI responses possible.

Use JSON tags when:

  • You're running the same prompt repeatedly and need consistency

  • You want to save and reuse prompts as templates

  • The task is complex with lots of distinct inputs

Use XML when:

  • Your prompt has long-form content inside the tags (paragraphs, examples, documents)

  • You're mixing instructions with content (e.g., tagging a piece of text you want Claude to edit)

  • You're pasting in external content like articles, transcripts, or briefs

There you have it, our top seven prompting strategies that will change how you use AI.

Feel free to test them now!

Now, let’s dive into the Midweek Edge - our manually curated list of only the most important AI updates to ensure you stay up to speed.

🚨 Meta Launches Muse Spark 🚨

Meta just released Muse Spark, the first major AI model from its Superintelligence Labs in over a year.

It’s multimodal, built for practical tasks like shopping and trip planning, and available on Instagram, WhatsApp, Facebook, and Ray-Ban smart glasses.

Try it here:

🚨 Perplexity Computer x Plaid 🚨

Perplexity Computer (their AI agent) now connects with Plaid, letting you link bank accounts, credit cards, loans, and more.

You can now build visualization models on your financial data, track spending, vibe-code budget tracking tools, and more.

Watch the full demo & test it here:

https://x.com/perplexity_ai/status/2042256932397019368?s=20

🚨 Connect Google Home to Your OpenClaw 🚨

A new project just dropped that allows you to connect your Claw agents to Google Home.

Control your OpenClaw with Google Mini, voice control, and more.

Run it locally here:

https://x.com/justLV/status/2043729786116452743?s=20

🚨 Qwen Code v0.14.0 is Live 🚨

Qwen Code v0.14.0 is an open-source AI agent that runs in your terminal and has Remote Control access.

Test it here:

Looking Ahead - The top AI trends, leaks, and news we’re closely monitoring

👀 Anthropic is aiming to “kill” Lovable 👀

New leaks have just surfaced showing that Anthropic is building an in-house version of Lovable.

Soon, you’ll be able to easily build and ship full-stack apps with Claude through a Lovable-like UI/UX.

Expect to see this new feature in the coming days.

👀 Elon Musk versus OpenAI Trial Starts April 27 👀

The $134B fraud trial kicks off.

Musk alleges OpenAI "assiduously manipulated" and "deceived" him into donating $38M.

It’s an AI story we’ll be watching closely - and one you may want to take note of, too.

👀 Grok 5 from xAI Expected Q2 2026 👀

Grok 5 is rumored to have 6 trillion parameters, roughly double Grok 4’s count.

Expect this model to arrive in the next few months.

👀 Google Announces I/O 2026 Dates 👀

Google confirmed its developer conference for May 19-20, with a heavy focus on Gemini 4, new Veo 4 tools, and more.

This will be a massive day in AI - mark your calendar and watch the keynote for free online.

Final Thoughts

If you made it this far, thank you for reading, and we hope you found this week’s edition valuable.

If you did, please share this with someone who you think would benefit from our publication.💙

Our promise to you: Every Wednesday, at 7 am EST, we’ll cut through the AI noise and send you human-curated AI content, tool guides, workflows, and more to make sure you stay ahead of the AI curve - straight to your inbox and 100% free.

Recent & upcoming content (a sneak peek into what we’re cooking):

YouTube

“I tested 100+ AI Tools, These 7 Will Make You Dangerous” - live now!

https://youtu.be/VkR3UsDLcfI?si=7GLq1pe0z3AuNEby

X (Twitter)

“How to Connect Claude to TradingView” - article coming to Miles’ main in the coming days:

https://x.com/milesdeutscher?s=20

“9 Claude Skills That Will Change Your Life” (resources included) - article live on AI Edge @ Wednesday 12 pm EST:

https://x.com/aiedge_

Instagram community

  • 7 AI Tools PDF playbook - a full cheat sheet from Miles’ latest YouTube video

For all the AI prompts and assets mentioned on our YouTube, feel free to grab them by joining Miles’ personal Instagram community here:
