
Lesson 11: Prompting Claude Code Effectively

Day 11 of 50 · ~7 min read · Phase 2: Core Workflows


The Opening Question

You give Claude a prompt. Fifteen seconds later, it's written elegant code, perfectly formatted, exactly what you wanted.

A colleague gives Claude a similar prompt. The result misses the mark, requires three revisions, and wastes an hour of their time.

What changed? The task was the same. The difference is how you asked.

Here's the real question: Why do some prompts produce magic and others produce garbage? And is it really about how smart Claude is, or something about how you frame the problem?


Discovery

Let's reason through what makes a prompt effective for an agent.

Question 1: What does Claude need to succeed on a task?

Think about the last time you gave detailed instructions to someone (a junior developer, a colleague, a friend). What made those instructions clear?

It wasn't just the task. It was the context around it.

Consider two prompts:

Prompt A: "Add error handling to the login function."

Prompt B: "The login function currently just calls authenticate() and returns the result. If authenticate() throws, the whole app crashes. Add try-catch error handling, log the error to our standard logger, and return a 401 status with a message like 'Invalid credentials' if authentication fails. Use the patterns in src/logger.js to match our style."

Same core task. But Prompt B is dramatically more likely to produce the right code on the first try.

Why? Because Prompt B gives Claude:

  • What the current behavior is
  • What the problem is
  • What the desired outcome looks like
  • A hint about style (the pointer to src/logger.js)
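
For illustration, the outcome Prompt B describes might look roughly like this. Everything here is a stand-in: the logger and authenticate functions are invented (a real project would import its own from src/logger.js), and the Express-style req/res shape is an assumption.

```javascript
// Sketch of the code Prompt B is asking for. logger and authenticate are
// stand-ins; in a real project they would be imported (e.g. from src/logger.js).
const logger = {
  error: (msg, meta) => console.error(msg, meta), // assumed logger API
};

async function authenticate(username, password) {
  // Stand-in: succeeds only for one known user, throws otherwise.
  if (username === "alice" && password === "secret") return { token: "abc" };
  throw new Error("bad credentials");
}

async function login(req, res) {
  try {
    const result = await authenticate(req.body.username, req.body.password);
    return res.status(200).json(result);
  } catch (err) {
    // The two behaviors Prompt B spelled out: log via the standard logger,
    // then return a 401 instead of letting the error crash the app.
    logger.error("authentication failed", { error: err.message });
    return res.status(401).json({ message: "Invalid credentials" });
  }
}
```

Notice that every line in the catch block traces back to a sentence in Prompt B. Prompt A would have left all of those decisions to a guess.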

Pause and think: What information is Claude missing if you just say "add error handling"?

Claude has to guess: Which errors matter? How should we handle them? What's the style of error handling in this codebase? Should we log? Should we retry? The blank space you leave gets filled with Claude's best guess, which might not match your project.

Question 2: How does specificity affect what Claude produces?

Here's a hard truth: vagueness isn't freedom. It's ambiguity.

When you say "refactor the authentication module," Claude could:

  • Rename variables for clarity
  • Extract helper functions
  • Restructure the code for readability
  • Change the entire architecture
  • All of the above

Each one consumes context and time. And you might hate the result.

When you say "refactor the authentication module to extract the password validation logic into a separate function," Claude has a clear target. It can see what success looks like.

But here's the key: specificity doesn't mean micromanagement. You're not saying "on line 47, do this." You're saying "the outcome should look like this."

Pause and think: What's the difference between being specific about the goal versus the steps?
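
The extraction prompt above has a concrete before and after. This sketch is entirely invented for illustration; only the shape of the refactor matters:

```javascript
// Before (hypothetical): password validation inlined in authenticate.
function authenticateBefore(user, password) {
  if (typeof password !== "string" || password.length < 8) {
    throw new Error("invalid password");
  }
  return { user };
}

// After: the validation logic lives in its own function. This is the
// specific, checkable outcome the prompt asked for, with no opinion
// about how the rest of the module should change.
function isValidPassword(password) {
  return typeof password === "string" && password.length >= 8;
}

function authenticate(user, password) {
  if (!isValidPassword(password)) throw new Error("invalid password");
  return { user };
}
```

The prompt named the outcome (a separate function), not the keystrokes. Claude still decides the function name, placement, and signature.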

Question 3: How should you reference files and functions by name?

Claude Code has access to tools to search and read your codebase. But it has to know what to search for.

If you say "the authentication logic," Claude might find several files. If you say "in the auth.js file, the authenticate function," Claude knows exactly where to look.

This isn't about being verbose. It's about removing ambiguity.

Compare:

  • "Make the API faster" — vague, Claude guesses at the bottleneck
  • "Add caching to the /users/:id endpoint in api/routes/users.js" — specific, Claude knows where to look and what to do

Naming things specifically does something crucial: it lets Claude load the right context. Instead of reading 10 files wondering where the relevant code is, it reads 1 file with confidence.

Pause and think: If Claude has to guess which file you mean, how does that affect the context window?

Every wrong guess loads unnecessary information. Being specific saves context.

Question 4: How do you iterate when the first attempt misses the mark?

Even perfect prompts sometimes produce outputs you don't like. That's not failure. That's iteration.

When Claude produces code you don't want, here's what you do:

Don't just say: "No, that's wrong. Try again."

Do say: "That's not quite right because [reason]. I want it to [desired outcome] instead. Here's an example of what I mean: [example or reference file]."

You're giving Claude new information to adjust its approach. This is how you train Claude Code to match your project's style and your expectations.

Here's a real example:

You: "The error handling should include retry logic."

Claude: [produces code with exponential backoff]

You: "I see what you did, but that's too complex for our use case. We just need to retry once on network errors. Check the utils/http.js file for how we handle retries elsewhere."

Claude now has:

  1. Your actual preference (simple, one retry)
  2. Where to find the pattern (utils/http.js)
  3. Why the first approach was wrong (too complex)

This dramatically improves the next attempt.

The pattern: Rejection + reason + reference = learning.
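
The simpler behavior that feedback asked for, one retry on network errors only, might look like this sketch. The error-code check is an assumption about how transient failures get classified; in the scenario above, utils/http.js would be the real source of that pattern.

```javascript
// Stand-in for however the project classifies transient network failures
// (the lesson's scenario points at utils/http.js for the real pattern).
function isNetworkError(err) {
  return Boolean(err) && (err.code === "ECONNRESET" || err.code === "ETIMEDOUT");
}

// Retry exactly once, and only on network errors: the "simple, one retry"
// preference from the feedback, instead of exponential backoff.
async function fetchWithOneRetry(doRequest) {
  try {
    return await doRequest();
  } catch (err) {
    if (!isNetworkError(err)) throw err; // non-network errors propagate immediately
    return await doRequest(); // single retry; a second failure propagates
  }
}
```

Compare this with an exponential-backoff implementation: far less code, and it matches the stated preference exactly. That is what "rejection + reason + reference" buys you.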


The Insight

Here's what effective prompting really is:

Prompting Claude Code isn't about being clever or writing prose. It's about being clear about what you want and why, referencing specific files and functions so Claude can load the right context, and iterating with specificity when the first attempt misses. The better your prompts, the less context Claude wastes, the fewer iterations you need, and the better the code you get.

The mental model: Think of prompting as briefing a colleague before a task. You wouldn't tell a junior developer "make it better." You'd say "this performance is a problem because users complain about load times. Here's what I want: the dashboard should load in under 2 seconds. Here's the current flow [points to code]. Let's focus on the database queries first." Clear brief = better work.


Try It

You're going to practice prompting with increasing specificity.

  1. Start a conversation in a small project: run claude

  2. Give a vague prompt:

    • "Clean up the code in the main file"
    • "Improve the error handling"
    • "Make this function better"
  3. Watch what Claude produces. Does it do what you expected? Probably not perfectly.

  4. Now reject it and iterate: Use the pattern from Question 4 above.

    • Name the specific issue: "You added logging, but we don't use console.log in this project"
    • Point to a reference: "Look at src/logger.js to see how we do logging here"
    • Clarify the outcome: "I want the error logged but not printed to console"
  5. Try again with a specific prompt from the start:

    • "In utils/validation.js, the validateEmail function should reject emails with + characters because those don't work with our mail service. Add validation for that."
  6. Compare the results. Notice how the specific prompt produces better results on the first try, and iteration with references teaches Claude your style.
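
A sketch of what that specific prompt might produce. The existing validation logic here is assumed; the one line that matters is the new + check:

```javascript
// Hypothetical validateEmail from utils/validation.js. The basic shape
// check stands in for whatever validation already exists; the new rule
// is rejecting "+" because the mail service can't deliver to those.
function validateEmail(email) {
  const basicShape = /^[^\s@]+@[^\s@]+\.[^\s@]+$/; // assumed existing check
  if (typeof email !== "string" || !basicShape.test(email)) return false;
  if (email.includes("+")) return false; // new rule from the prompt
  return true;
}
```

Because the prompt named the file, the function, the rule, and the reason, there is almost nothing left for Claude to guess.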

This is the difference between prompting Claude like a chatbot versus working with Claude Code like a pair-programming partner.


Key Concepts Introduced

| Concept | Definition |
| --- | --- |
| Specificity | Clarity about the goal, desired outcome, and constraints — not micromanagement of steps |
| Context in prompts | Explaining the "why" behind the task so Claude understands the full picture |
| File/function references | Using exact names and paths so Claude loads the right code to read |
| Iteration | Rejecting early outputs and guiding Claude toward your vision with specific feedback |
| Reference patterns | Pointing Claude to existing code that shows your project's conventions and style |

Bridge to Lesson 12

So far, you've been thinking about Claude Code working on individual tasks — a function here, a file there. But real projects are messy. Changes ripple across multiple files. A refactor in one place breaks something in another.

Next lesson's question: How do you orchestrate changes across an entire project?

We'll explore multi-file operations — how Claude handles cascading changes, refactoring at scale, and the agentic loop when the work spans many interconnected files.

