Best Practices and Limits

This final lesson is about using Cursor sustainably: getting high-quality output consistently, avoiding the common pitfalls, and knowing when AI helps and when it gets in the way.

The most common mistakes

Accepting without reading

This is the number one mistake. Cursor is fast and the suggestions look plausible. It is tempting to press Tab or Accept on everything. Do not.

AI makes specific classes of mistakes:

  • Hallucinated APIs: It invents a method or parameter that does not exist in your version of the library.
  • Wrong edge case handling: The happy path is perfect; the empty input, the null value, the concurrent request breaks.
  • Stale knowledge: The model's training data has a cutoff. Suggestions involving API changes after that date (a new Next.js App Router convention, a Pydantic v2 change, a breaking change in a library) may be wrong.
  • Deleted code: In Composer, the AI sometimes removes code it thinks is unused. It is sometimes wrong about this.

The habit: After any AI suggestion larger than a single line, read it before you accept it. For Composer diffs, scroll through the entire diff. This takes 30–60 seconds and saves debugging time.

Not running tests

Always run your test suite after accepting a Composer change. If you do not have tests, at minimum manually test the changed behavior. AI is not infallible. Tests are the safety net.

Vague prompts

"Fix this" or "make it better" produces vague results. The more specific you are, the better the output:

Vague: "Fix the bug"
Specific: "The function crashes when items is an empty array. Add a guard that returns an empty array in that case."

Vague: "Make this cleaner"
Specific: "Extract the three validation checks into separate functions. Name them validateEmail, validatePhone, validateAge."

Vague: "Add error handling"
Specific: "Add try/catch around the fetch call. On network error, throw a custom NetworkError. On non-2xx response, throw ApiError with the status code."
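For instance, the first specific prompt above might produce a fix like this (the function name `processItems` and its body are hypothetical, just to show the guard):

```typescript
// Hypothetical function for the "empty array" fix above.
// The guard returns an empty array instead of crashing on empty input.
function processItems(items: number[]): number[] {
  if (items.length === 0) {
    return []; // guard: empty input yields empty output
  }
  return items.map((n) => n * 2);
}
```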

Overusing AI for logic

For simple, well-defined tasks (boilerplate, CRUD, test cases, formatters), let AI do the work. For complex algorithmic logic, state machines, security-critical code, or anything where you need to reason through every step — do it yourself first, then use AI to clean it up or test it.

AI is a great first drafter. It is a poor architect.

The review checklist

Before accepting any non-trivial suggestion, go through this mentally:

  • Does it do what I asked? Read the code, not just the comment or summary.
  • Does it handle edge cases? Empty input, null values, network failure, concurrent access.
  • Does it follow the project's patterns? Compare to similar existing code.
  • Does it introduce new dependencies? Check imports. An import { something } from "new-library" means a new dependency you may not want.
  • Does it touch code I did not ask it to change? Sometimes Composer "helpfully" refactors things you did not ask it to. Check for unexpected changes.
  • Will the tests still pass? If you have a test suite, run it.

Prompting patterns that consistently work

Pattern: Specification prompt

Describe inputs, outputs, and rules completely before asking for code:

code
Write a TypeScript function:
- Input: an array of { id: number, amount: number, status: string }
- Output: the sum of `amount` for items where status === "completed"
- Return 0 if the array is empty or has no completed items
- Do not use reduce; use a simple for loop for readability
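A plausible answer to that specification, sketched here with my own type and function names, looks like this:

```typescript
interface LineItem {
  id: number;
  amount: number;
  status: string;
}

// Sums `amount` over completed items using a plain for loop,
// as the specification prompt requires.
function sumCompleted(items: LineItem[]): number {
  let total = 0;
  for (const item of items) {
    if (item.status === "completed") {
      total += item.amount;
    }
  }
  return total; // 0 when the array is empty or has no completed items
}
```

Because the prompt pinned down every rule, there is almost nothing left for the model to guess at, which is the point of the pattern.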

Pattern: Reference existing code

Attach an existing file or function and ask the AI to follow it:

code
@authMiddleware.ts @validateUserMiddleware.ts

Write a new middleware that validates the request has a valid `X-API-Key` header. Follow the exact same structure, error handling, and TypeScript patterns as the existing middleware files.

Pattern: Incremental build

For complex features, build step by step instead of asking for everything at once:

  1. "Add the database model for Order."
  2. "Add a repository with findById, findByUserId, create, and delete methods."
  3. "Add a service layer that uses the repository."
  4. "Add the route handlers."
  5. "Add tests for the service."

Each step is small enough to verify before moving on.
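As an illustration, steps 1 and 2 might produce something like the following; the field names and the in-memory implementation are assumptions, not actual Cursor output:

```typescript
// Step 1: the Order model (fields are illustrative).
interface Order {
  id: number;
  userId: number;
  total: number;
}

// Step 2: the repository, small enough to verify before adding the service layer.
interface OrderRepository {
  findById(id: number): Promise<Order | null>;
  findByUserId(userId: number): Promise<Order[]>;
  create(order: Omit<Order, "id">): Promise<Order>;
  delete(id: number): Promise<void>;
}

// An in-memory implementation, handy later for step 5's tests.
class InMemoryOrderRepository implements OrderRepository {
  private orders: Order[] = [];
  private nextId = 1;

  async findById(id: number): Promise<Order | null> {
    return this.orders.find((o) => o.id === id) ?? null;
  }
  async findByUserId(userId: number): Promise<Order[]> {
    return this.orders.filter((o) => o.userId === userId);
  }
  async create(order: Omit<Order, "id">): Promise<Order> {
    const created = { ...order, id: this.nextId++ };
    this.orders.push(created);
    return created;
  }
  async delete(id: number): Promise<void> {
    this.orders = this.orders.filter((o) => o.id !== id);
  }
}
```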

Pattern: Rubber duck debugging

Paste the broken code and your current understanding of the problem:

code
This function is supposed to deduplicate a list of users by email. It returns the right count but the wrong users — it keeps the last occurrence instead of the first. Here is my current understanding: the reduce builds a map keyed by email, and later entries overwrite earlier ones. Is that right? How should I fix it to keep the first occurrence?

Explaining the bug often helps the AI pinpoint the fix, and the explanation process itself often reveals the answer to you.
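In this case the fix, sketched here with an assumed `User` shape, is to make the first occurrence win by only inserting a key the first time it is seen:

```typescript
type User = { email: string; name: string };

// The buggy version let later entries overwrite earlier ones in the map.
// This version only sets a key the first time it appears, so the first
// occurrence of each email is kept.
function dedupeByEmail(users: User[]): User[] {
  const seen = new Map<string, User>();
  for (const user of users) {
    if (!seen.has(user.email)) {
      seen.set(user.email, user); // keep the first occurrence
    }
  }
  return [...seen.values()];
}
```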

When to put the AI down

Not every task benefits from AI assistance. Sometimes it slows you down:

  • You know exactly what to write. If you could type the function faster than you could explain it to the AI, just type it.
  • The context is so project-specific that the AI cannot help without reading half your codebase. Sometimes it is faster to write it yourself.
  • The task requires deep reasoning you cannot shortcut. Designing a database schema, choosing an architectural pattern, debugging a race condition — these benefit from your own thinking first. Use AI to sanity-check your decision, not to make it for you.

Privacy and security

A few rules to work by:

  • Never put secrets in prompts or attached files. API keys, database connection strings, private keys — if these are in the file you attach, the AI (and the cloud service) will see them. Use .cursorignore to exclude .env files.
  • Do not paste customer data. Real user names, emails, or any PII should not go into prompts or attached code.
  • Check your company's policy. Some employers prohibit sending proprietary code to third-party AI services. If in doubt, ask.
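For the first rule, a minimal .cursorignore (it uses .gitignore-style patterns) might look like this; the entries are illustrative, so adjust them to your project:

```
# Keep secrets out of AI context
.env
.env.*
*.pem
secrets/
```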

Staying current

Cursor, and the models it uses, improve rapidly. Every few months there are new features: better models, new context tools, smarter diff application. Spend fifteen minutes every month or two reading the Cursor changelog. Try the new features on a small task. The skills you build in this course — prompting well, reviewing carefully, using context intentionally — stay relevant regardless of which version you are using.

You have completed the Cursor track. Finish this lesson to earn your certificate.