Cursor Guide
I think Cursor is the best AI coding agent/IDE on the market right now. I have tried a few alternatives, Windsurf and Cline, and
have been a paying customer of both, but I still come back to Cursor as my main coding agent. I am an early adopter
and heavy user of Cursor: I signed up for a yearly subscription in 2023 for my personal use and have loved it ever
since. I had been hoping Axon would adopt Cursor at some point, because it can dramatically increase the productivity of an
engineering team. So when Axon started the Cursor trial program, I was thrilled!! Now I can start using Cursor for my
work. Cursor is the first truly agentic coding tool that can autonomously use tools: listing directories, reading files, running shell
commands like find/grep to search the code base, and searching the internet for up-to-date documentation.
Workflow
1. WRITING DOCUMENT
First, I ask the Cursor agent to perform a deep, comprehensive analysis and produce a Markdown document
that explains the complex, unfamiliar code base I am working on, so I can understand what the code is doing. If there
are existing documents, I feed them to Cursor as context, since Cursor makes it very easy to add files as context. This is
a huge time saver, because previously I had to open and read many documents and pieces of code scattered across
different systems, files, modules, and functions. Now everything is collected neatly in one place, easy to
read and understand. If anything is unclear, I ask the agent to take a deeper dive into that particular part of the
code base and write another document or update the current one. The document normally contains one-click links into the code,
so I can quickly jump to the relevant source to verify a claim or check for more detail.
2. WRITING TESTS
Next, I ask the agent to check whether there are any existing tests for the code base I am working on; normally these are
already listed in the previous document. Tests are important for verifying the behavior of the current code. If tests are
missing, I ask the agent, based on the previous document, to add a comprehensive test suite that confirms our current
understanding of the code base. The more tests and edge cases we cover, the better. Previously this was done by
hand, so it took a lot of time and effort; with Cursor it takes far less. Most of the time now goes into crafting a
good prompt and supplying the right context (the related code and the document from step 1) so that Cursor understands and writes
correct tests. We still need to review and verify the tests Cursor creates, but it is still much faster than writing
everything by hand. We can also have Cursor write another Markdown document listing all the test cases, and ask other engineers,
QA, or PMs who know the code base or feature well to review the two documents we have created so far
and catch any important or critical edge cases we missed.
With the comprehensive tests and documents above, it becomes much easier to modify or extend the current code
base while staying confident that everything works as expected. I then ask Cursor to make another deep,
comprehensive analysis and suggest good options for modifying the code or adding new logic/features, making sure
it includes all the pros, cons, and tradeoffs, and recommends the best option based on the criteria we have
defined; essentially, I ask Cursor to write a comprehensive 1:3:1 document. The agent might not come up with all the options
we would like it to consider, because it might not have all the context, so some back-and-forth discussion
and prompting is usually needed to give it all the information it requires. Make sure to document everything in another Markdown file.
Based on the previous documents, ask Cursor to do another round of extensive analysis to come up with the best strategy,
design, and detailed action plan for the change we want to make to the current code base. Make sure to ask it to
identify all the major risks, detail the important things that could go wrong, and think deeply about the important edge cases
we might have missed during implementation. Do not forget to put everything into another document. This again might
take a lot of back-and-forth discussion and prompting until the agent produces something we are happy with.
The key, once more, is to make sure the agent has all the context it needs, since one of the critical limitations of AI
agents right now is the limited context window.
Now we can ask Cursor to start carefully and correctly implementing everything listed in the detailed strategy, design, and plan.
Of course, we could do it ourselves, since we are engineers and we love coding, but why, when the agent can do it for us
much faster? ^^ More often, I break the implementation plan down even further into small tasks, such as implementing
a single function or module with a clearly defined interface, inputs, and outputs, and ask the agent to propose a few strategies and
recommend the best one based on explicit criteria, pros and cons, and clear tradeoffs. Because of the limited context window, Cursor is
most effective when given small, clear instructions with well-defined inputs, outputs, and acceptance criteria. Make sure to tell
it to always write the tests first, before implementing anything, so that once the implementation is done the agent can
verify its own work by autonomously running all the tests, checking the results, fixing any failures, and running
the tests again, essentially closing the development loop on its own. We still need to review all the code
Cursor generates (a normal code review) and verify the test cases, outputs, and results, making sure every edge case is
covered by the agent.
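The build-test-fix loop the agent closes on its own can be sketched in shell. In this self-contained illustration, a fake `run_tests` function that fails twice stands in for a real test run plus the agent's fixes in between:

```shell
# Stand-in for "run all tests": fails on the first two attempts and passes
# on the third, simulating an agent that patches failures between runs.
attempts=0
run_tests() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

# Closing the loop: keep fixing and re-running until everything is green.
until run_tests; do
    echo "attempt $attempts failed; inspect output, patch code, re-run"
done
echo "all tests green after $attempts attempts"
```

In a real project the loop body would be a rebuild plus a test run (for example `ctest`), with the agent reading the failure output and editing the code before the next iteration.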
Cursor rules
https://docs.cursor.com/context/rules-for-ai
PROJECT RULES
● Rules specific to a project, stored in the .cursor/rules directory (the .cursorrules file in older versions). They are
automatically included when matching files are referenced.
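For illustration, a project rule in recent Cursor versions lives in a small `.mdc` file under `.cursor/rules/`. The file name and rule text below are hypothetical, and the frontmatter fields reflect the documented format at the time of writing:

```
---
description: C++ style rules for the networking module
globs: src/net/**/*.cpp, src/net/**/*.h
alwaysApply: false
---

- Prefer modern C++17 features
- Write Catch2 tests for every new function or class
```

With `alwaysApply: false`, the rule is attached only when files matching the globs are referenced in the conversation.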
GLOBAL RULES
● Rules applied globally to all projects, configured in the Cursor Settings > General > Rules for AI section.
EXAMPLE
This is an example of Cursor rules for a C++ project, for reference. Please make the necessary changes to adapt it to your project and
workflow.
1. Deep mode - make a deep, detailed and comprehensive analysis and recommend some most …
2. Go mode - implement changes to the codebase based on the deep and comprehensive analysis …
- You start in Deep mode and will not move to Go mode until the user decides to go with a …
- You will move back to analysis mode after every response and when the user types `…`
- When in deep mode, always output the full deep and comprehensive analysis in every …
- When you encounter any errors, bugs or issues, you will not fix them, but will enter …
Code Style
- Based on Google C++ Style with 4-space indentation and a 100-column limit
- Use `clang-format` with the provided `.clang-format` for consistent style
- Prefer modern C++17 features
- Include order: 1) system headers with angle brackets, 2) library headers, 3) project headers …
- Try to always create a new file to encapsulate a new feature or functionality, for better …
Naming Conventions
- Classes/Structs: PascalCase
- Methods/Functions: camelCase
- Variables: snake_case
- Constants/Enums: kConstantName
- Files: lowercase with underscores (file_name.cpp)
- Class member variables: camelCase with a trailing underscore (camelCase_)
Error Handling:
- Use structured error responses with success/error states
- Prefer early returns with descriptive error messages
- Use exceptions only for exceptional conditions
- Log errors to appropriate channels (std::cerr for errors)
Best Practices:
- Write comprehensive Catch2 tests for all functionality
- Follow defensive programming principles
- Use smart pointers for memory management
Build Commands:
- Build and run: ./run.sh
- Build project: ./build.sh
- Run: ./run-only.sh
- Run all tests: cd build && ctest -V
- Run single test: cd build && ctest -V -R <test_name>, or ./build/src/tests/<path_to…
- Format code: find src -name "*.cpp" -o -name "*.h" | xargs clang-format -i
- Run all checks: ./run-checks.sh
Current Limitations
The current limitations of coding agents/IDEs (Cursor/Windsurf/Cline, etc., with Claude/Gemini/OpenAI LLM models):
● Limited context window - this is like having only a limited working memory
○ However, Google just released the Gemini 2.5 Pro model 7 days ago (3/25/2025) with a 1M-token context window, which
could fit an entire code base, and it is available to choose in Cursor as a premium model (yay!!!)
■ https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/
○ SV startups with big bags of money are tackling this problem: Magic has raised $515M in funding and plans to scale up to tens of
thousands of GB200s
■ https://magic.dev/blog/100m-token-context-windows
● No vision, so they cannot do UI verification
● No long-term, permanent memory
● Cannot learn new knowledge on their own; currently they can only research and return results, but they forget everything once the
context is lost, because they have no long-term, permanent memory