Issues and Improvements on Prompts
(It was common for the Assessor, Classifier and Review microservices to achieve more
than 95% code coverage.)
Issue: Dependency management – Some dependencies were missing from the generated
pom.xml, such as the OpenFeign and spring-boot-starter-websocket dependencies.
Solution: Explicitly specify the dependencies that must be included, such as the OpenFeign
dependency needed for Feign client calls. Also verify version compatibility, i.e. that the versions
of the dependencies are compatible with each other and with the Spring Boot version you are using.
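As an illustration, a minimal sketch of the kind of dependency entries that had to be added by hand. The artifact IDs are the standard Spring ones; the versions are assumed to be managed by the Spring Boot parent and the Spring Cloud BOM, so none are pinned here.

```xml
<!-- Sketch of the entries that were missing from the generated pom.xml.
     Versions are assumed to come from the Spring Boot parent / Spring Cloud BOM. -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-websocket</artifactId>
</dependency>
```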
Issue: Exception handling – The output always generated generic exception handler
methods that were not specific to any method or class.
Solution:
o Provide clear, detailed prompts to ChatGPT specifying the need for comprehensive
exception handling.
For example: Generate a Spring Boot REST controller that handles CRUD operations for
the `RequestDetails` entity. Ensure that all exceptions, such as
`ResourceNotFoundException`, `ValidationException`, and any other potential runtime
exceptions, are properly handled and return appropriate HTTP status codes and error
messages.
o Ask for specific exception handling methods and classes. For instance: Generate a
`GlobalExceptionHandler` class in Spring Boot that handles
`ResourceNotFoundException`, `ValidationException`, and generic exceptions, returning
appropriate HTTP status codes and error messages in JSON format.
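A minimal sketch of the `GlobalExceptionHandler` class such a prompt should produce is shown below. `ResourceNotFoundException` and `ValidationException` are assumed to be custom exceptions defined elsewhere in the project, and the error payload is a simple JSON map.

```java
import java.util.Map;

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

// Sketch of a global handler; ResourceNotFoundException and ValidationException
// are assumed to be custom exceptions defined in the project.
@RestControllerAdvice
public class GlobalExceptionHandler {

    @ExceptionHandler(ResourceNotFoundException.class)
    public ResponseEntity<Map<String, String>> handleNotFound(ResourceNotFoundException ex) {
        return ResponseEntity.status(HttpStatus.NOT_FOUND)
                .body(Map.of("error", ex.getMessage()));
    }

    @ExceptionHandler(ValidationException.class)
    public ResponseEntity<Map<String, String>> handleValidation(ValidationException ex) {
        return ResponseEntity.status(HttpStatus.BAD_REQUEST)
                .body(Map.of("error", ex.getMessage()));
    }

    @ExceptionHandler(Exception.class)
    public ResponseEntity<Map<String, String>> handleGeneric(Exception ex) {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .body(Map.of("error", "Unexpected error: " + ex.getMessage()));
    }
}
```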
Issue: Wrong business logic implemented – The model could not interpret the prompt correctly
and gave jumbled output. For example, for handling the approve and reject buttons, the business
logic was given in a single-line prompt that did explain the whole logic, but the model missed
updating a column in the approve request and instead applied that logic to the reject request in
the output.
Solution: Provide clear context before asking the model to generate output, and give step-by-step
instructions for the business logic so that the model gets a clear picture of the reasoning behind
it. Provide the input case by case, covering all the conditions: if this happens then implement
the "abc" logic, else implement the "xyz" logic.
Also provide the context for any abbreviations and the reasoning behind any logic so that the
model can generate accurate output.
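To make the case-by-case instruction concrete, here is a hypothetical sketch of how the approve/reject handling should come out once each condition is spelled out separately. The service, repository, entity and column names are illustrative, not the actual project names.

```java
import org.springframework.stereotype.Service;

// Hypothetical sketch: each branch updates its own column, which is exactly the
// distinction the prompt needs to state explicitly, case by case.
@Service
public class ReviewService {

    private final RequestDetailsRepository requestDetailsRepository;

    public ReviewService(RequestDetailsRepository requestDetailsRepository) {
        this.requestDetailsRepository = requestDetailsRepository;
    }

    public void handleReviewAction(Long requestId, String action) {
        RequestDetails request = requestDetailsRepository.findById(requestId)
                .orElseThrow(() -> new ResourceNotFoundException("Request not found: " + requestId));

        if ("APPROVE".equalsIgnoreCase(action)) {
            // "abc" logic: approval must update the approval column
            request.setApproved(true);
            request.setStatus("APPROVED");
        } else if ("REJECT".equalsIgnoreCase(action)) {
            // "xyz" logic: rejection updates a different column
            request.setStatus("REJECTED");
            request.setRejectionReason("Rejected by reviewer");
        }
        requestDetailsRepository.save(request);
    }
}
```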
Issue: DIY comments – The output does not generate code for every method/helper method;
instead it writes "do it yourself" (DIY) comments, for example //Implement the JSON parsing
logic, //Implement mail sending logic, //Implement external API Feign call logic, etc.
Solution: We need to specify explicitly that even these small helper methods should be
implemented. For example, we asked the model to generate the service class implementing the
"xyz" logic taking the request message received from the Assessor service; instead we should
first ask it to write the JSON parsing logic for the request message, extracting the "A" and "B"
entities from the JSON, and only then ask it to implement the "xyz" logic using these entities.
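For example, a sketch of the small JSON-parsing helper we should ask for explicitly instead of accepting a //Implement the JSON parsing logic placeholder. Jackson is assumed as the JSON library, and the field and class names are illustrative.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical helper: parse the request message received from the Assessor
// service and extract the "A" and "B" entities before implementing the "xyz" logic.
public class RequestMessageParser {

    private final ObjectMapper objectMapper = new ObjectMapper();

    public ParsedRequest parse(String requestMessage) throws Exception {
        JsonNode root = objectMapper.readTree(requestMessage);
        String entityA = root.path("entityA").asText();   // illustrative field names
        String entityB = root.path("entityB").asText();
        return new ParsedRequest(entityA, entityB);
    }

    // Simple holder for the two extracted entities.
    public record ParsedRequest(String entityA, String entityB) {}
}
```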
Issue: JUnit test cases – The model does not write test cases for all the classes/methods or
does not provide 100% code coverage.
Solution: Provide the class and method names (controller class, service class, etc.; where the
method name in the output is not yet known, we can refer to "the method which implements the
"xyz" logic") so that test cases are generated for all of them. Also, explicitly ask for 100%
code coverage when generating the test cases.
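As an illustration, a sketch of the kind of test we should ask for by naming the class and method explicitly. JUnit 5 and Mockito are assumed, and the service, repository and method names continue the hypothetical ReviewService sketch above rather than the actual generated code.

```java
import java.util.Optional;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

// Hypothetical test: class and method names are placeholders for the generated ones.
@ExtendWith(MockitoExtension.class)
class ReviewServiceTest {

    @Mock
    private RequestDetailsRepository requestDetailsRepository;

    @InjectMocks
    private ReviewService reviewService;

    @Test
    void approveActionUpdatesStatusAndSaves() {
        RequestDetails request = new RequestDetails();
        when(requestDetailsRepository.findById(1L)).thenReturn(Optional.of(request));

        reviewService.handleReviewAction(1L, "APPROVE");

        assertEquals("APPROVED", request.getStatus());
        verify(requestDetailsRepository).save(any(RequestDetails.class));
    }
}
```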
Issue: Missing fields in the Java model – If we do not specify the whole data model (the entity
name, the fields, and the data types of the fields), the output becomes inconsistent and the
logic is also affected. For example, the whole model was not provided for the RequestDetails
entity, so the LLM itself invented three fields and implemented the whole logic of the nano pipe
on the basis of those three fields.
Solution: We should provide the exact names of the entities, the fields/columns each entity has,
and, most importantly, the data types of those fields; otherwise the whole code will be inaccurate
and will give unexpected results. It is therefore important to specify these details accurately to
ensure data and logic consistency.
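For example, a sketch of how the `RequestDetails` entity could be spelled out in the prompt so that the model does not have to guess field names or types. The fields listed here are illustrative placeholders, not the actual data model, and jakarta.persistence is assumed (it would be javax.persistence on older Spring Boot versions).

```java
import java.time.LocalDateTime;

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;

// Hypothetical data model: every field name and type is stated explicitly so the
// generated logic stays consistent with it.
@Entity
public class RequestDetails {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String projectTitle;      // illustrative fields; replace with the
    private String description;       // real columns of the RequestDetails table
    private String category;
    private String status;
    private boolean approved;
    private String rejectionReason;
    private LocalDateTime createdAt;

    // getters and setters omitted for brevity
}
```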
Finally, here are some general suggestions we should follow while writing prompts:
o Provide a clear description of the whole project, including what a nano pipe is and what a
workstream is, with brief details of the project flow and the logical reasoning behind each
part of the project, so that the model has all the context it needs to understand the project
and build the code for the components.
Afterwards, we can provide a detailed workflow/description of the component for which we
need to generate the code.
o We should break complex tasks into small, manageable tasks. For example, we broke the
classifier workstream into two services: the Classifier service (manage the request, classify it
into a project title/description category, search the vector DB, and send mail to the user with
the review screen URL) and the Review service (manage the review UI screen, fetch the
nearest-neighbour details, and handle the approve and reject button functionality).
This is necessary so that the model is not bombarded with too much information at once;
hence, request one part at a time.
o We can specify any particular design pattern that the code, or even a smaller part of it,
needs to follow. For example: Use the Repository pattern for data access and ensure that all
database operations in the service methods are transactional, using annotations like
@Transactional (see the sketch after this list).
o We can also offer context or examples of similar code to guide the model in the right
direction and get the expected output from it.
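As an illustration of the design-pattern suggestion above, a minimal sketch of a Spring Data repository plus a transactional service method. The class, method and column names are the same illustrative placeholders used in the earlier sketches, not the actual project code.

```java
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Repository pattern: all data access for RequestDetails goes through a
// Spring Data interface instead of hand-written queries in the service.
interface RequestDetailsRepository extends JpaRepository<RequestDetails, Long> {
}

@Service
class ClassifierService {

    private final RequestDetailsRepository requestDetailsRepository;

    ClassifierService(RequestDetailsRepository requestDetailsRepository) {
        this.requestDetailsRepository = requestDetailsRepository;
    }

    // All database operations inside this method run in one transaction and
    // roll back together if anything fails.
    @Transactional
    public RequestDetails saveClassification(Long requestId, String category) {
        RequestDetails request = requestDetailsRepository.findById(requestId)
                .orElseThrow(() -> new ResourceNotFoundException("Request not found: " + requestId));
        request.setCategory(category);   // illustrative column
        return requestDetailsRepository.save(request);
    }
}
```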