
AI best practices: If at first you don’t succeed, try, try again

The AI prompt screen, as reimagined by Google Gemini.

[Editor’s Note: This is the third in a series by Oren Etzioni about AI usage and best practices. See also “AI Coach or AI Ghostwriter? The Choice Is Yours” and “How to Read With AI.”]

A friend asked ChatGPT to weigh in on a thorny issue and got back a flat, one-note response. I suggested he try a different approach: ask for 15 different ideas, scan them, pick the two that feel most promising, and ask ChatGPT to refine those. He came back happy. ChatGPT didn’t get smarter; the prompting got better.

This is my favorite move: ask the AI for more options, dig deeper into what looks promising, and most importantly, if at first you don’t succeed, try, try again!

The following is practical advice on how to use AI as a power tool rather than a slot machine. For a throwaway request, a quick prompt will do; but if you’re serious about the answer, invest in the ask.

Anthropic’s guide to prompting Claude contains a helpful frame: treat the model like a brilliant but very new employee on their first day. They are capable, but they have no context. They will do exactly what you ask, so you have to ask for exactly what you want.

The Anthropic team’s golden rule: show your prompt to a colleague with no context and ask whether they can follow it. If the answer is no, the model can’t either. This principle yields a few practices that raise output quality immediately, before any advanced techniques come into play.

One caveat from me, however: don’t think of the model as a person. It isn’t one. The “smart new hire” frame is a useful starting point, but it is a metaphor, not reality. A real new hire asks follow-up questions, remembers what you said yesterday, and notices when an instruction makes no sense. Claude doesn’t do any of that automatically. Use the metaphor as a reminder to clarify and provide context, but drop it the moment you start expecting human judgment that isn’t there.

Here is the playbook, organized as a list for easy reference and periodic updates.

Be specific about the format, length, audience, and limitations.

Vague instructions produce vague results. Take the time to say what you really want.

  • Before: Write about marketing trends.
  • After: Analyze the three most important B2B SaaS trends from the past six months. For each, name one example company and give a one-sentence call on whether the trend will accelerate or plateau. Write it as a 400-word brief for a non-technical board.

Much of prompt quality comes down to constraints. Vague instructions produce safe, hedged, encyclopedic answers because the model has no signal about what to prioritize or what to cut. Specific instructions produce opinionated, useful responses because constraints eliminate the safe-but-useless options. Asking for “three” instead of “some” forces a ranking. Asking for “accelerate or plateau” forces a call. Asking for a “brief for the board” determines what gets cut. Each constraint you add is a decision the model can no longer dodge.

Give a few examples.

This is a high-leverage move that is chronically underused. Models pick up patterns from examples faster than from descriptions.

  • Before: Turn these meeting notes into action items.
  • After: Turn these meeting notes into action items. Match this format: Example 1: Note: “Sarah will look into the pricing question and get back to us next week.” Action item: Sarah → research pricing options → due next Friday. Example 2: Note: “We agreed to push the launch.” Action item: Team → review implementation timeline → due before Monday’s stand-up. Now do the same for these notes: [paste]
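For those comfortable with a little scripting, the few-shot pattern above can be captured as a small helper that assembles the worked examples and the new notes into one prompt. This is a minimal sketch; the function name and the example pairs are illustrative, not from any particular library.

```python
# Assemble a few-shot prompt: paired (note, action item) examples teach
# the model the exact output format before it sees the new notes.
EXAMPLES = [
    ("Sarah will look into the pricing question and get back to us next week.",
     "Sarah → research pricing options → due next Friday"),
    ("We agreed to push the launch.",
     "Team → review implementation timeline → due before Monday's stand-up"),
]

def few_shot_prompt(notes: str) -> str:
    lines = ["Turn these meeting notes into action items. Match this format:"]
    for i, (note, action) in enumerate(EXAMPLES, start=1):
        lines.append(f'Example {i}: Note: "{note}" Action item: {action}.')
    lines.append(f"Now do the same for these notes:\n{notes}")
    return "\n".join(lines)
```

When the output format matters, a third or fourth example is usually cheap insurance.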

Tell the model what to do, not what not to do.

Negative instructions are easier for the model to violate than positive ones. Reframing in the affirmative gets you cleaner results.

  • Before: Don’t be too formal. Don’t use jargon. Don’t make it boring.
  • After: Write in a warm, conversational tone, the way a smart colleague would explain this over coffee. Use plain English and short sentences.

Match the style of your prompt to the style of the output you want.

This surprises some people. If your input is full of bullets and bold text, the model will return bullets and bold text. If you want flowing prose, write flowing prose.

These practices sound obvious. But working together, they take a prompt from the level my friend started at, where ChatGPT seemed useless, to the level where the AI earns its keep. The techniques that follow build on this foundation, but they will not rescue a prompt that fails at the basics.

Beyond the basics, here’s a set of advanced practices drawn from guidance by OpenAI, Google, working developers, and people who build production AI systems for a living. These are less clever tricks than workflow habits.

Treat your first prompt as a test run.

Your first prompt is a draft. Experienced practitioners create small sets of test cases (inputs they care about), run their prompts across them, and refine until the output is consistently good. Several open-source tools exist to formalize this loop.

  • Before: Write a prompt. Try it on one example. Looks good. Ship it.
  • After: Write a prompt. Choose five inputs, including awkward edge cases. Run the prompt on all five. Where it fails, change one thing in the prompt and test again. Keep the version that works across most cases.
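The test-run loop is simple enough to sketch in a few lines of Python. Here `fake_model` and the keyword checker are stand-ins (real use would call an actual model API and apply a task-specific quality check); the structure of the loop is the point.

```python
# Evaluate one prompt variant against a small fixed set of test inputs.
# run_model is a placeholder for a real API call; checker encodes what
# "good output" means for your task (here: a trivial prefix check).

def evaluate(prompt_template, test_cases, run_model, checker):
    """Return the fraction of test cases whose output passes the check."""
    passed = 0
    for case in test_cases:
        output = run_model(prompt_template.format(input=case))
        if checker(output):
            passed += 1
    return passed / len(test_cases)

# Stub model and checker, for demonstration only.
def fake_model(prompt):
    return "Summary: " + prompt[-40:]

cases = ["quarterly revenue memo", "incident report", ""]  # "" = edge case
score = evaluate("Summarize: {input}", cases, fake_model,
                 lambda out: out.startswith("Summary:"))
```

Tools like Promptfoo formalize exactly this pattern; the value is in keeping the test cases fixed while you vary one thing at a time in the prompt.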

Define what “done” means.

OpenAI’s own GPT-5 guidance emphasizes telling the model what counts as a finished answer. Otherwise, the model decides for itself, usually by stopping at the first plausible response.

  • Before: Help me fix this Python error.
  • After: Help me fix this Python error. You’re done when: (1) you’ve identified the root cause, (2) you’ve proposed a fix with the modified code, and (3) you’ve explained why the original failed. If you are not confident in any of those three, say so clearly rather than guessing.

Dial effort to match the task.

Modern reasoning models have effort or thinking dials. Use low effort for extraction and triage; high effort for synthesis and strategy. Most users leave the dial at its default and shortchange hard problems.

  • Before: Summarize this 80-page report.
  • After: Set thinking effort to high. Read the entire report. Identify the three most important findings, two weak claims, and one question I should ask the authors. Cite page numbers.
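If you drive models through an API rather than a chat window, the effort dial is typically a request parameter. The sketch below mirrors the shape of OpenAI’s Responses API (`reasoning={"effort": ...}`), but field names vary by provider and the model name is a placeholder, so treat the exact keys as assumptions to check against your provider’s documentation.

```python
# Build a request dict with an explicit "thinking effort" setting.
# The field names mirror OpenAI's Responses API but are illustrative;
# Claude and Gemini expose similar dials under different names.

def build_request(task: str, effort: str = "medium") -> dict:
    assert effort in ("low", "medium", "high")
    return {
        "model": "gpt-5",                 # placeholder model name
        "reasoning": {"effort": effort},  # the dial most users never touch
        "input": task,
    }

req = build_request(
    "Read the entire report. Identify the three most important findings, "
    "two weak claims, and one question I should ask the authors. "
    "Cite page numbers.",
    effort="high",
)
```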

Paste current or proprietary context directly.

Avoid jargon and abbreviations the model may not know (say “Project Management Office,” not “PMO”). Models cannot access your internal documents. Paste the material in.

  • Before: How should I position my related work section against the prior proposals on agent management?
  • After: Below is the current draft of my related work section, along with PDFs of the three papers I position against (attached). Based on these sources alone, find points of overlap I have not yet acknowledged and any claims in my draft that the cited papers do not support.

Build a personal prompt library.

This is the professional’s power move. Prompts that worked yesterday will likely work tomorrow, so stop rewriting them from scratch. Save the prompts that consistently produce the best results, sorted by task type. Treat them as living documents, not one-off efforts.

  • Before: Open a new conversation. Type the outline, constraints, examples, and question from memory. Inevitably forget two of them.
  • After: Open your prompt library. Copy the “draft memo to my boss” template. Drop in today’s topic and source material. Run.
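A prompt library doesn’t need special tooling; even a dictionary of templates works. Here is a minimal sketch using Python’s standard `string.Template`, with made-up template names and slots:

```python
# A tiny personal prompt library: named templates with slots filled at
# run time. Template names and fields are illustrative placeholders.
from string import Template

LIBRARY = {
    "memo_to_boss": Template(
        "Draft a 300-word memo to my manager about $topic. "
        "Audience: non-technical. Source material:\n$source"
    ),
    "meeting_actions": Template(
        "Turn these meeting notes into action items "
        "(owner → task → due date):\n$notes"
    ),
}

def render(name: str, **slots) -> str:
    return LIBRARY[name].substitute(**slots)

prompt = render("memo_to_boss", topic="Q3 pricing", source="(pasted notes)")
```

The dictionary doubles as the “living document”: when a prompt works well, promote it into the library instead of losing it to chat history.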

Here are some key don’ts:

Don’t tell thinking models to “think step by step.”

Reasoning models like OpenAI’s o-series and GPT-5 already do that internally. Adding the instruction can do more harm than good. Save it for non-reasoning models.

Don’t rely on “don’t” or “never” commands for everything.

Models, Gemini in particular, can over-focus on negative instructions and get distracted from the core task. Prefer the positive frame: tell the model what to do.

Don’t trust polished prose as proof of correctness.

Hallucinations are most dangerous when they are well written. As I discussed in “How to read with AI,” verify AI output carefully.

Do not use aggressive emphasis (“MUST… NEVER…”).

Today’s models respond well to plain instructions. Aggressive wording can elicit guarded output and outright refusals. Use ordinary language.

Do not include undefined acronyms in your prompt.

They measurably degrade output quality. For research on how small prompt changes affect results, see the recent Brittlebench paper.

Don’t change three things at once when repeating.

If the prompt doesn’t work, change one thing, test, and then change the next. Otherwise you don’t know what helped.

Do not assume the same prompt works across models.

Different model families want different prompts. The same instruction can help one and hurt another. The temperature and effort settings that work for GPT are not the ones that work for Claude or Gemini.

Do not treat the first answer as the last.

Failure to iterate is the most common failure mode in everyday AI use. Here’s a trick for making the AI better at multi-step tasks: after each attempt, have the AI write a short critique of what went wrong and carry that note forward into the next attempt. No advanced machinery is involved, just a model that “talks to itself” in plain English. On the next attempt, it reads its previous critique and corrects course. This loop can produce significant gains over a single prompt.
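That critique-and-retry loop can be sketched as a short function. Both model calls are stubbed here (a real version would send the task plus the accumulated notes to an actual model); what matters is that each attempt sees the critiques of the ones before it.

```python
# Critique-and-retry: after each attempt, a self-critique is appended to
# "notes", which the next attempt reads. Stops early once the critique
# comes back empty, i.e. the model finds nothing left to fix.

def refine(task, attempt_fn, critique_fn, rounds=3):
    notes = ""   # accumulated self-critique, carried between attempts
    answer = None
    for _ in range(rounds):
        answer = attempt_fn(task, notes)
        critique = critique_fn(task, answer)
        if not critique:          # empty critique: good enough, stop
            break
        notes += "\n" + critique  # remember what went wrong
    return answer

# Stubs: this "model" succeeds once it has seen one critique.
def fake_attempt(task, notes):
    return "good answer" if "be concise" in notes else "rambling answer"

def fake_critique(task, answer):
    return "" if answer == "good answer" else "be concise"

result = refine("summarize the memo", fake_attempt, fake_critique)
```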

The people who benefit most from AI aren’t the ones with the best prompt templates. They are the ones who treat the model as a powerful tool to improve their thinking. You don’t need to be completely clear about what you want at the start. A good conversation can take you there, surfacing options and questions you would have missed on your own. What the model can’t do is recognize the right answer when it comes up. That part is up to you.

To learn more

Vendor documentation:

  • Anthropic’s prompt engineering best practices.
  • OpenAI’s prompt engineering guide.
  • OpenAI’s GPT-5 prompting guide.
  • Google’s Gemini prompting strategies.
  • Google Vertex AI prompt design.

Operational resources:

  • Promptfoo (test-driven prompt development).
  • Encourage (preset, test-driven prompts).
  • PromptHub (success criteria and evals).
  • GitHub repositories on prompt engineering practice.

Editor’s Note: GeekWire publishes guest comments to encourage informed discussion and highlight diversity of opinion on issues that shape technology and the startup community. If you’re interested in submitting a guest column, email us at [email protected]. Submissions are reviewed by our editorial team for relevance and editorial standards.
