Introduction to Cursor - AI Code Editor
Cursor is said to be the best AI-powered code editor, or at least that is what some people who have already used the tool claim, praising its native integration of generative AI. On the other hand, the tool also faces significant criticism, particularly from advanced programmers who argue that it is useless in the hands of a professional. Instead of wondering which side is right, it is much better to simply download the editor and run it on your own project. Since Cursor is built on top of Visual Studio Code, the interface is already familiar to at least some of us. I have already completed the configuration, and it is not the topic of this material; however, if you work with Visual Studio Code on a daily basis, there is an option in the settings to import your profile configuration and extensions. Note that this concerns the settings of Visual Studio Code itself.
As for the settings specific to Cursor, we will return to them later. Meanwhile, I would like us to take a look at the project I am developing here, which we will modify with Cursor's help. The goal of this project is to enable personalized interaction with large language models such as GPT-4. An additional complication is that I am building it with two tools I was not familiar with until now. The first is Hono, a relatively new JavaScript framework. I also set out to learn Drizzle, an ORM that I will be using to interact with the SQLite database.
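For orientation, a minimal Hono application looks roughly like this (based on its documentation; under Bun, exporting the app as the default is enough to serve it):

```ts
import { Hono } from 'hono';

const app = new Hono();

// A single route returning plain text.
app.get('/', (c) => c.text('Hello Hono!'));

// Bun's runtime picks up the default export and starts the server.
export default app;
```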
There is a certain problem associated with both of them when it comes to large language models. I don't know if you are aware, but a model's knowledge is frozen in time and does not encompass all possible information. This means we can ask a question about, for example, Drizzle ORM and get a correct answer, but it does not follow that a question about specific functionality will also be answered correctly. For example, I went to the documentation, to the section on integrating SQLite and Bun, and took a snippet of code that we will now ask the model about. Although the model generates a response that seemingly looks correct, it is unfortunately wrong, or rather outdated; either way, we end up with incorrectly generated code. Here we see a comparison of exactly this fragment with the one in the documentation: everything seems similar, but the imports do not match. It is particularly important to pay attention to this, because in some situations the models will generate code that is not up to date, while in others they will suggest functionality that never existed. We must keep this in mind and remember not to switch off our own thinking when working with large language models.
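For reference, the documentation snippet in question pairs Drizzle with Bun's built-in SQLite driver, roughly like this (current at the time of writing); outdated model answers tend to reach for a different driver import, such as better-sqlite3, which is exactly the kind of mismatch to watch for:

```ts
import { drizzle } from 'drizzle-orm/bun-sqlite';
import { Database } from 'bun:sqlite';

// Bun's native SQLite driver, wrapped by Drizzle.
const sqlite = new Database('sqlite.db');
export const db = drizzle(sqlite);
```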
Interestingly, this does not mean that large language models are useless in this particular case. If we return to Cursor and start a chat with Command+L, we can see an option to search the internet while generating responses; the model will then use not only its base knowledge but also the results of internet searches. The support available here does not end there, though: Cursor also allows us to add extra context in the form of documentation for selected tools. There is both a library of default documentation and the option to connect your own. In my case, I simply imported the documentation for Drizzle ORM and hono.dev, so I can now reference them by calling them in this manner. We get confirmation that the documentation has been added to the context, and then we can ask our question. We see that Cursor searches the internet, then also searches the documentation, and based on all this data it generates a response.
If we scroll down and compare this response with the relevant section of the documentation, we can see that the response is exactly what we were looking for. This does not mean, of course, that we have solved every problem this way and that the answers generated in Cursor will always be correct. It is simply an example of how significant the context we add to the conversation is for the quality of the model's responses. We must also be aware that this context is not infinite: although it keeps getting larger in the latest large language models, Cursor still limits it by default, selecting only the most important fragments of the context we provide, whether as search results or, as in this case, documentation.
In addition to external data sources, we also have the option to include individual files or directories, and it is even possible to ask a question about a specific pull request or commit. Ultimately, we can also discuss the entire source code of our project, but we must remember the aforementioned context limitation: the questions we ask will be used to search the project for the most relevant sections, so the quality of the responses will depend on the fragments found, and their discovery will in turn depend on the question we ask. One can imagine we are dealing with an advanced search engine, capable of searching not only by keywords but also of deepening its search, for example by assessing which of the found fragments are actually relevant from the perspective of our conversation. In other words, the functionality available here lets us easily provide information about our project and the tools we are working with, making the model's responses much more personalized than in, say, GPT-4 or other similar chatbots. It does not, of course, guarantee 100% accuracy in code generation, but it undoubtedly helps a lot.
Now let's return to the functionality of our project. I will just make sure that the server I configured here is running; it seems we can send HTTP requests to it, which I will do using the Alice application, my interface for interacting with large language models. I have set the address of our server and the appropriate endpoint, which connects this graphical interface to our server. I just need to switch to the appropriate mode, and if I send a message I will shortly receive a response from the model. We can confirm this by making a small change here to display the content of the message.
the message however instead of writing console.log here and as you can see
similarly to GitHub co-pilot I receive a suggestion here instead I will use the
option that is suggested here namely I will press the command K button a window has
been displayed here that allows me to enter a simple command the result of which
will be generated code specifically I can request to display the content of the
last message after a moment the code has been generated and I can now accept it by
pressing command y or command enter after saving the changes we can send one more
message and we will see that this time the content of my message has indeed
appeared here this means that without a doubt our connection is configured
correctly here I will now remove this code by pressing command Z and we can
immediately see that I can not only undo the addition of the code but also
potentially go back to the instruction that I typed here it turns out that I have
the option to either go back to that first step or simply comment on the change
made now however we will focus on pushing all the changes we have made here and
Now, however, we will commit all the changes we have made so far and concentrate on our main task: making sure that the interaction we have here, meaning the content of the messages sent in this place, is saved in our database. If we take a look at the contents of the database, we will see that I already have tables configured in which I want to store the message content. The structure of this table, or tables, is of course also defined on the code side, for example at this point. We therefore have general information here that may be useful for implementing this functionality and for providing the appropriate context to the model, thereby increasing the chances that the generated content will be correct.
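For readers following along, such a table definition in Drizzle looks roughly like this; the table and column names below are illustrative, not the project's actual schema:

```ts
import { sqliteTable, text, integer } from 'drizzle-orm/sqlite-core';

// Hypothetical sketch of a messages table; names are illustrative.
export const messages = sqliteTable('messages', {
  id: integer('id').primaryKey({ autoIncrement: true }),
  threadUuid: text('thread_uuid').notNull(), // groups messages into conversations
  role: text('role').notNull(),              // 'user' or 'assistant'
  content: text('content').notNull(),
  createdAt: text('created_at').notNull(),
});
```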
However, contrary to what it may seem, I will not directly ask about implementing this functionality now, but will rather pose a question about the src directory. This question will not require searching the web, but it will take into account the problem we need to solve: the ability to save messages exchanged between the user and a large language model while maintaining the current project structure. I would like us to plan the implementation together and decide what to start with. For such tasks I always use the Claude 3.5 Sonnet model, because at the time of recording this material it is the best available model for, among other things, tasks related to code.
Now we see how Cursor searches the contents of our files and then suggests changes. The first point states that we already have a message structure, which is true. Next, we receive a suggestion to create a service with methods responsible for creating new messages and retrieving threads. I will just add that this particular suggestion does not come directly from the model itself but from the fact that I had actually already created a file with that name, although it was empty. This does not change the fact that the suggestion is entirely correct and is an element of what we are aiming for. Next we have a suggestion regarding the controller itself, indicating that this is where we will be saving the messages. Although no changes have been made here yet, they are indeed located right here: we see that the model suggested creating a thread identifier and then adding both messages. In this way, at first glance, we achieve our main goal of saving interactions in the database.
Unfortunately, this does not mean that the logic we have here is correct. Firstly, the fact that we generate a thread identifier at this point means we would only ever save two messages per conversation. The model misunderstood: what we actually want is the ability to pass a conversation identifier in, and to generate one only when it is not provided. The second problem is that the response from the model will not always be available in full, because we have the option of streaming, in which case the outcome will not yet exist at this stage. Furthermore, we will also want the array of messages provided here to contain not just the last message but the complete set of messages in the given thread. In other words, we have a few problems: although the code written here is correct in itself, it does not align with the logic we expect. This is a perfect example of how a large language model will not be able to do the work for us unless it receives precise requirements.
Moving forward, we also have suggestions regarding service updates, and interestingly, the model pointed out the issue concerning streamed responses. Unfortunately, it suggested a rather uninteresting solution: simply waiting for the streaming to finish, meaning for the entire response to be generated, and only then sending it in reply. This, of course, defeats the whole point of streaming, since we want the response returned in parts so we do not have to wait for it to be generated in full. So we need to take all of this into account in the implementation itself.
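To be concrete about the streaming requirement: the pattern we are aiming for is to keep forwarding chunks to the client as they arrive and persist the full text only once the stream completes. A minimal sketch, with hypothetical names:

```ts
// Forward chunks immediately; persist the assembled text only when the stream ends.
async function* streamAndStore(
  chunks: AsyncIterable<string>,
  save: (fullText: string) => Promise<void>,
): AsyncGenerator<string> {
  let full = '';
  for await (const chunk of chunks) {
    full += chunk;
    yield chunk; // the HTTP response keeps streaming without waiting
  }
  await save(full); // e.g. write the assistant's message to the database
}
```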
For this purpose, however, we will not work in the chat but move on to a functionality called Composer. Composer is a floating window that, similar to the chat, allows us to refer to the existing context of the project and hold conversations; the difference is that when we enlarge it, we can paste in instructions or descriptions of the changes we want to make. I have described our main goal here, the ability to save message content, followed by a few points outlining how I want it done: I want to avoid modifying too much of what we already have, to be able to group messages within threads, to stream responses as well as send them in their entirety, and finally to have the logic divided into services. At the very bottom I also list the files I want modified, or used as a reference to determine the overall style of the code. After sending this query, we see that the code is being generated by the model, and soon we will see the result.
It seems the files are ready. In the messages service file, a class and methods have been created responsible for saving messages and retrieving threads; it looks fine. Next we have our chat endpoint, where we save the user's message and, in the case of a non-streamed response, the content generated by the assistant. We also have the ability to retrieve messages from an existing thread, as well as the option to pass an identifier using a header. We also see the modified response file, which now includes the logic responsible for saving streamed messages, and finally the path configuration. Everything looks fine, although I would like to make a few changes.
First of all, I accept the changes, and then I describe what I would like changed further. Unfortunately, we have a small problem with the size of this text field, but you can see that I am asking for the thread identifier to be passed in the request object rather than in the header. Next, so that this message service does not have to be passed around in this place, it should be created similarly to the LLM service; let's say that for the needs of this application this is an appropriate solution. Finally, I want the thread identifier to be returned both in the response object and in the response header. I accept the change and once again wait for the changes to be implemented.
We can already see that the thread identifier is now retrieved from the request object and, if it is not available, generated here. This is completely correct and in line with our expectations. The rest of the logic in this file remains unchanged, while we also see a change concerning the returned header containing the conversation identifier. If we want to make sure that no additional lines have been changed, we can of course switch the view and compare the individual modified lines. Meanwhile, we can switch to this file, where we see that the message service has been created as I requested and the remaining parameters have been removed. Finally, we also have the saving of the assistant's response here, so it seems that everything is in order.
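To summarize the shape the endpoint has taken at this point, here is a sketch; the service objects, route, property names, and header are my reconstruction of what is described above, not the exact generated code:

```ts
import { Hono } from 'hono';

// Hypothetical stand-ins for the real messages and LLM services.
const messageService = {
  create: async (_m: { threadUuid: string; role: string; content: string }) => {},
  findByThread: async (_uuid: string) =>
    [] as { role: string; content: string }[],
};
const llmService = { complete: async (_msgs: unknown[]) => 'answer' };

const app = new Hono();

app.post('/chat', async (c) => {
  const { message, threadUuid } = await c.req.json<{
    message: string;
    threadUuid?: string;
  }>();
  // Reuse the identifier from the request; generate one only as a fallback.
  const uuid = threadUuid ?? crypto.randomUUID();

  await messageService.create({ threadUuid: uuid, role: 'user', content: message });
  const history = await messageService.findByThread(uuid);
  const answer = await llmService.complete(history); // non-streamed case

  await messageService.create({ threadUuid: uuid, role: 'assistant', content: answer });
  c.header('x-thread-uuid', uuid); // identifier also returned in a header
  return c.json({ threadUuid: uuid, answer });
});

export default app;
```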
I can now close the Composer and start the server. However, we see a small error related to importing a file, so I will quickly fix it, and then we see more errors that slipped past us, specifically concerning the path configuration file that I overlooked in our context. We therefore return to the Composer and ask it to fix this error. I will resize the window again to see exactly what is being modified. At this stage it seems the error has been fixed and the server has indeed started, so now we will test the endpoints using Insomnia.
Here we have a request to the server, which I will slightly modify. At the very beginning it will contain only a message to our assistant, and it indeed looks like a response has been generated, so we should now look at the database to confirm that the content has been saved correctly. We have a user named Adam and an assistant named Alice, and we also have a thread identifier that we can use to continue the conversation. I just need to provide this identifier using the thread uuid parameter, and we can immediately see that we are in the same thread, since sending a new welcome message generated the response "welcome back". This means we now have additional messages linked by the same thread identifier and correctly recorded in our database.
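For illustration, continuing a conversation from a client then looks roughly like this; the port, path, property names, and header are assumptions matching the sketch above, not the project's exact API:

```ts
// Hypothetical request shape: pass the thread uuid to continue an existing thread.
const res = await fetch('http://localhost:3000/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    message: 'Welcome!',
    threadUuid: 'uuid-from-a-previous-response', // omit to start a new thread
  }),
});

console.log(res.headers.get('x-thread-uuid')); // identifier echoed in the header
console.log(await res.json());                 // and in the response body
```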
We could possibly rename the thread uuid here for consistency with the form used in the rest of the API. I propose we do this with the help of a large language model and the Composer function: I create a new thread and request changes in our controller and response files. We will enlarge the window again to see what changes have been made, and we can compare the names of these properties and confirm that everything is correct. Once again I close these changes, check that the server is working correctly, and then make one more query. Here we have our identifier, which we can pass along, and sending another message causes it to be linked to the existing thread; we have confirmation of this here, as the identifier differs from the previous thread, which shows everything works correctly. We can also check the streaming option, and here everything also looks fine. This means we have implemented the full functionality of saving message content in our application, and, interestingly, we did it without writing a single line of code ourselves.
At the same time, it is clear that none of this happened without human involvement: I constantly controlled the process and made the key decisions, while the language model generated the code in question. Normally I would have to spend a lot of time implementing this functionality, considering that I am dealing with frameworks I am only just getting to know. This does not change the fact that I need to improve my skills with these tools to work with them more easily in the future, but the entry barrier, which undoubtedly exists, is incomparably lower than it would be without large language models. It can therefore be said that using Cursor allowed me to implement this functionality much faster, and that is simply true, because recording this material took me several times longer than implementing the functionality itself.
With that done, I can do two things. The first is simply to take a break or move on to the next features; the second could be, for example, fixing some bugs or focusing on code optimization, perhaps discussing with the model the best possible practices for this specific project. Unlike search results on the internet, in this case I will receive personalized, tailored suggestions that I can do whatever I deem appropriate with. And since we are already talking about spare time for solving problems: we have a type incompatibility in TypeScript here, namely the objects we retrieve from the database do not match the type expected by the Vercel AI SDK, which means we need to perform some kind of mapping.
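A minimal sketch of such a mapping, assuming the database rows carry role and content columns (the row shape here is hypothetical):

```ts
// Hypothetical row shape coming back from the database.
type DbMessage = { threadUuid: string; role: string; content: string };

// The narrower shape the AI SDK expects for chat messages.
type ChatMessage = { role: 'user' | 'assistant' | 'system'; content: string };

// Keep only the fields the SDK knows about and narrow the role type.
const toChatMessages = (rows: DbMessage[]): ChatMessage[] =>
  rows.map((row) => ({ role: row.role as ChatMessage['role'], content: row.content }));
```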
We can solve this problem in several ways. The first involves solving it manually, and at this point we could really conclude the topic; I show this because, despite the fact that Cursor has all these functionalities, it does not mean we always have to use them. However, I would like to show a few approaches we can apply. First, we can use the "AI fix in chat" option, in which case the error content is transferred to the chat and a response is generated; in fact, the solution we get here is also an appropriate mapping of our messages, which resolves the issue. I would point out that, with this specific shape of objects, if we want to expand the interaction with the model we will also need to change these types. In any case, this is the second solution we could apply. The next step is to continue the refactoring, for example with the help of Composer: I just highlight this piece of code and then write a request, for example, to move the mapping to the message service. The changes I am referring to are then made, and I can accept them and continue working on my application.
So now let's put it all together. Cursor is primarily a quite good code editor based on the very popular Visual Studio Code. Its basic functionality includes code autocompletion and syntax suggestions, similar to what we know from tools like GitHub Copilot or Supermaven. Next, we have inline prompts, which allow for editing selected parts of the code; we should choose this option only when the scope of the work is indeed very small. We also have a chat, whose main purpose is rather to discuss the introduced functionality or to have a general conversation about the project. For example, we could ask about recently implemented changes: specifically, I imported the current diff here, which is the list of changes I have made since the last commit, and we can ask about potential bugs in connection with it. I can already see a general list of suggestions that we could expand on and discuss further, for example regarding optimization of the database connection, but we will not be doing that now; it seems to me the concept of the chat is clear. Finally, we also have Composer, which we will actually use when we want to make changes across multiple files.
Of course, all of these interactions with the model are also stored as history, allowing us to easily return to individual conversations. Ultimately, though, what matters most across all of this functionality is what we know about the operation of the large language models themselves. The aforementioned limits of base knowledge, the limitations of the context window, and the ability to maintain attention all translate into an overall understanding of what we can expect from such a model. After all, we have seen examples where insufficient context resulted in, for instance, incorrectly generated code containing references to non-existent imports. Similarly, there may be situations where Composer makes changes we do not actually want, or that contain business-logic errors our linter will not catch.
Moreover, we ultimately take responsibility for the generated code. For example, despite the fact that the LLM helped me a great deal with the last functionality, as you can see it left some unused imports behind. That is a simple example of an oversight, but we must remember that this rapid pace of implementing changes is in many situations only an apparent benefit: if we generate a lot of code that initially meets our needs, at some point we could get lost in it and need more time to understand what has been generated, making the benefit of tools like Cursor minimal or even nonexistent.
As for the tool itself, I would like us to go through the settings. Here we can manage our account; as you can see, a paid plan is required, which gives access to premium models and fast generation, a quota that is quite expensive and also runs out quickly. We have the option to upgrade the plan and to set limits, which also gives access to the various types of models listed here. In practice, however, as I mentioned, in almost every case we will want to use the best possible model, which at the time of recording this material is Claude 3.5 Sonnet.
Returning to the settings, we have Rules for AI defined here. This is nothing other than the so-called system prompt: an instruction added to the chat whose content influences the model's behavior. I have simply defined a few rules I would like the model to follow. Of course, this does not mean the model will always adhere to them, because, as I mentioned, large language models have a limited ability to maintain attention, especially over extensive contexts. Either way, it is worth building such an instruction for yourself, or for example drawing inspiration from those available on the cursor.directory website, where we see example instructions tailored to specific tools and technologies. We can choose a few of them and use them either here or within the .cursorrules file; all we need to do is place this file in the main directory of the project and save our instructions in it.
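For example, a minimal .cursorrules file could look like this; the contents are entirely illustrative:

```
You are assisting on a TypeScript project built with Hono and Drizzle ORM.
- Prefer small, focused functions and early returns.
- Do not add new dependencies without asking first.
- Follow the existing naming and file conventions in src/.
```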
Interestingly, we could just as well save any other text file here, although such a file will not be automatically loaded by Cursor. What I mean is that we can save some information in it and then, during the interaction with the model, refer back to its content; as you can see, the model indeed made use of that content, even quoting a fragment of it. This means that within such a file we can include various kinds of rules, or for example an action plan similar to the one I used earlier. I also don't know if it is clearly visible enough, but the ability to work with such text files can be taken to a slightly higher level. For example, here I defined a task list and a section where the data is supposed to appear (a sketch of such a file follows below). I can now run Composer and write a request to complete the tasks, and we see that in this situation Composer correctly completed all the tasks on the list and filled in the data section.
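The shape of such a file is entirely up to us; the one I used looked something like this (contents illustrative):

```
## Tasks
- [ ] Add a created_at column to the messages table
- [ ] Return the message count per thread

## Data
(filled in by Composer after the tasks are completed)
```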
Such an approach can be applied to the entire project, allowing not only for task execution but also for providing additional information, such as giving feedback or asking the model to pose additional questions that we can then answer. Ultimately, something like this could of course be linked directly to the Issues tab on GitHub or, for example, to our own task list. In any case, this shows that one can think outside the box and use Cursor not only for faster coding or for introducing individual features, but also for designing an entire style of working that is completely tailored to us and our needs.
Before we go any further, I would like to take a step back to a previous conversation I had in the background. This is an example where we use both the ability to converse with the entire project codebase and the capability of large language models to generate structured responses. Specifically, I asked it to visualize how data flows within the query executed in the specified controller. Additionally, in the codebase settings I increased the number of search snippets to 400 and activated an additional step related to reasoning, or contemplating the results. As a result, the files were returned here and their content was then evaluated; finally, we had the thinking stage, where the model considered how to generate the map. We obtained a graph, which I then pasted into the Mermaid tool, and as you can see this rendered a diagram that precisely represents how data flows in our application, or at least within this specific endpoint. Something like this can be useful during the project exploration phase, but it is important to remember that the model may overlook something or simply make a mistake. I experienced this even here, because I had to ask an additional question in which I pointed out a potential error and requested an update to the graph. The second response was already correct, and at least I no longer see any mistakes in it. Of course, I suggest looking at this not only through the lens of this specific example but through the lens of the concept we applied: converting a regular model response into an automatically generated graph.
Still on the topic of the chat itself, I would like to point out that we can work not only with text here but also with images. All we need to do is paste a screenshot, and we can then ask questions related to it. I described a practical example of such a situation in one of my threads on X: I was implementing a fix in the interface involving visual changes and the appropriate state management, and at a certain stage I received responses that were only partially correct, as not all requirements were met. In addition to the message describing the problem, I also attached a screenshot illustrating it. Again, it should be noted that the ability of large language models to interpret images is also limited; we can read about this in this publication, which contains examples discussing issues such as detecting intersecting lines. These are scenarios we can easily encounter when, for example, coding interfaces, and unfortunately large language models will not be able to help us precisely in this context. That does not mean we should not try: in many situations, conveying an image can be very helpful and enrich our instruction.
I suggest we go back to the settings, because there is a mode here that we will obviously want to activate. However, we must remember to respect the decisions of an employer or, for example, clients regarding the use of large language models. This means that before using tools such as Cursor, we should consult either a supervisor or our client about the privacy policy and the way data is processed.
In the Models tab we have a list of available models we can connect to, and we can also add additional models here, although as you can see I am mostly interested in working with just one. To work with these models it is necessary to connect API keys; in my case these are the OpenAI and Anthropic keys, but there are a few additional options. The principle is obvious: a connected key allows us to use the given model. Unfortunately, at least at the time of recording this material, if we want to use the Composer option we cannot use our own API keys but must rely on the plan available in the Cursor account settings.
The last tab contains the settings that adjust the behavior of Cursor itself. As you can see, these are quite simple settings, each described in detail; I also assume that this tab will change over time and new options will appear in it, so we will not go through each setting in detail now, as they are easy to test on your own. I would just like all the settings I have activated to be visible, which of course does not mean it has to be the same in your case. Among the settings here, I would only pause on the indexing of the project files: it may happen that Cursor starts generating responses using older versions of files or, for example, excluding new files, and at that point we may want to resynchronize the directories. As part of indexing, we also have the option to ignore selected files; in any case, it is worth remembering this and placing a .cursorignore file in the project, listing for example the node_modules directory or the .env file containing our API keys.
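.cursorignore uses the same pattern syntax as .gitignore; a minimal example matching the files mentioned above:

```
node_modules/
.env
```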
I think that covers the configuration and the options available here, so I encourage you to do two things now. The first is to install Cursor, adjust its settings to your needs, and then implement a very simple project from scratch; it is simply about getting familiar with Cursor's mechanics, the functionality available, and the keyboard shortcuts. All of this will let us use the tool more efficiently, or simply make the decision that it is not for us. The second thing, which I encourage even more, is to learn about how large language models function, along with their current capabilities and limitations. This knowledge translates directly into the quality of the generated responses, and thus into the value that Cursor will provide us.
Finally, I would like to add that Cursor is no longer the only code editor developing functionality related to generative AI. An alternative still in the early stages of development is the Zed editor, as well as the tools created by JetBrains; unfortunately, despite my great affection and respect for that company, they remain behind in terms of implementing generative AI tools in their products. Perhaps this will change soon, but for now the best solution in this regard remains Cursor. As for the topic of Cursor, that would be all. Once again, I encourage you to install the tool and answer for yourself whether it is meant for you or not. I hope that the examples and material I have shown will help you start working with this tool, explore its capabilities more effectively, and be aware of its limitations, as well as, above all, of the responsibility that rests on us as developers for the code we create in our projects. I personally keep my fingers crossed that generative AI tools will assist us all in our work, make programming even more enjoyable, and make the final value of the software we design even greater.