
Document, Test & Review your code with Amazon Q Developer

Published at: 12/31/2024
Categories: reinvent2024, amazonqdeveloper, development, codeassistant
Author: welcloud-io

This year, at re:Invent 2024, there were a few announcements about Amazon Q Developer. Among them, three new agents looked particularly interesting to me.

The first agent generates your documentation (/doc), the second generates your unit tests (/test), and the third reviews your code for vulnerabilities (/review).

They look interesting because, even though they are crucial in software projects, these tasks are often overlooked. And I guess you would agree that writing documentation can be tedious, writing unit tests can be discouraging, and reviewing vulnerabilities can require a lot of expertise...

So, in this blog post, I evaluated those agents.

For that, I used this simple feedback application, which records feedback and acknowledges its reception by sending an email. Here is its architecture:

Image description

The code I used is right there: https://github.com/welcloud-io/wio-doc-test-review-with-amazon-q-developer. Everything I demo here can be reproduced from it.
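If you prefer code to diagrams, here is a minimal sketch of what such a stack could look like in Python CDK. The construct names, the API layer, and the email service are my own assumptions based on the description above; the actual stack lives in the repository linked above.

```python
# Hypothetical sketch of the feedback stack (names and services are assumptions).
from aws_cdk import (
    Stack,
    aws_apigateway as apigw,
    aws_dynamodb as dynamodb,
    aws_lambda as _lambda,
)
from constructs import Construct


class FeedbackStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Table that stores the submitted feedback
        table = dynamodb.Table(
            self, "FeedbackTable",
            partition_key=dynamodb.Attribute(
                name="id", type=dynamodb.AttributeType.STRING
            ),
        )

        # Function that records the feedback and sends the acknowledgement email
        handler = _lambda.Function(
            self, "SendFeedbackFunction",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="send_feedback.lambda_handler",
            code=_lambda.Code.from_asset("lambda"),
            environment={"TABLE_NAME": table.table_name},
        )
        table.grant_read_write_data(handler)

        # Endpoint that receives the feedback
        apigw.LambdaRestApi(self, "FeedbackApi", handler=handler)
```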

N.B. This is generative AI, so results may be slightly different on your side.

/doc to document your application

I started by generating a README.md file, which did not exist yet in my folder.

I first choose the /doc agent in the Amazon Q chat in my IDE (in the terminal, you can also see the project tree I want to document).

Image description

Then I click on the "Create a Readme" button.

Image description

Q inspects my project and, once it is done, I accept the generated README.md, which is then created in my folder.

Image description

The result is not bad, and when I verify what has been generated, it seems correct. I don't really see any corrections to apply; at most they would be very small ones, like adding more arguments to the CDK commands. Compared to the time I saved by not writing this documentation on my own, the effort to fix them is negligible.

Image description

However, I am a bit disappointed with the diagrams and I would like to add more of them.

Let's start a new documentation task:

Image description

This time, I don't create a new README, but update the existing one...

Image description

...with a specific change

Image description

And I ask Q

add sequence diagram

Image description

And it naturally adds a Mermaid sequence diagram after the right section (the Data Flow section).

Image description

I accept this update and preview the diagram. Once again, the result is not bad at all; to me, it is even better than the original Data Flow description.

Image description
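To give you an idea of the output, a sequence diagram of this kind could look like the Mermaid snippet below. The participants are assumptions based on the feedback application described earlier; the diagram generated for your own project will of course differ.

```mermaid
sequenceDiagram
    participant User
    participant API as API Gateway
    participant Lambda as send_feedback Lambda
    participant DB as DynamoDB table
    participant Email as Email service

    User->>API: Submit feedback
    API->>Lambda: Invoke with feedback payload
    Lambda->>DB: Store feedback item
    Lambda->>Email: Send acknowledgement email
    Email-->>User: "Feedback received"
```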

I think I could add more, but what I understand so far is that this agent has to be used iteratively: the documentation will not be generated in one go.

/test to test your code

Now I want to add unit tests to my code, so I will use the new /test agent.

For that, I must open the file that contains the code I want to test, in my case 'send_feedback.py'.

Image description
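To make the next steps easier to follow, here is a minimal, hypothetical sketch of what a handler like 'send_feedback.py' could contain. The function, field, and environment variable names are assumptions, not the actual code from the repository.

```python
# send_feedback.py - hypothetical sketch; names and fields are assumptions
import os

import boto3

dynamodb = boto3.resource("dynamodb")
ses = boto3.client("ses")


def lambda_handler(event, context):
    table = dynamodb.Table(os.environ["TABLE_NAME"])

    # Record the feedback (note the 'feedback' field, it matters later in this post)
    table.put_item(Item={
        "id": event["id"],
        "email": event["email"],
        "feedback": event["feedback"],
    })

    # Acknowledge reception by email
    ses.send_email(
        Source=os.environ["SENDER_EMAIL"],
        Destination={"ToAddresses": [event["email"]]},
        Message={
            "Subject": {"Data": "Feedback received"},
            "Body": {"Text": {"Data": "Thank you for your feedback!"}},
        },
    )
    return {"statusCode": 200}
```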

It proposes to specify a function, but since I have only one...

Image description

...I directly press enter.

Image description

After a few seconds, I can see the test suite, which will be placed in a newly generated test file.

Image description

I accept the tests...and I run them.

Very quickly, after just a few modifications, they are all running (7 tests, 130 lines of code). Again, the time spent fixing things is negligible compared to the time I would have needed to write these tests on my own.

Image description
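For illustration, a test for the hypothetical handler above could look roughly like the sketch below (pytest with mocked AWS clients); the suite Q generated for my project is larger and may be organized differently.

```python
# test_send_feedback.py - minimal sketch of one unit test (same assumptions as above)
import os
from unittest.mock import MagicMock, patch

# A default region lets the handler module create its boto3 clients at import time
os.environ.setdefault("AWS_DEFAULT_REGION", "eu-west-1")

import send_feedback


@patch.dict("os.environ", {"TABLE_NAME": "feedback", "SENDER_EMAIL": "noreply@example.com"})
def test_feedback_is_stored_and_acknowledged():
    event = {"id": "1", "email": "user@example.com", "feedback": "Great session!"}
    fake_table = MagicMock()

    with patch.object(send_feedback.dynamodb, "Table", return_value=fake_table), \
         patch.object(send_feedback.ses, "send_email") as fake_send:
        send_feedback.lambda_handler(event, None)

    # The item written to DynamoDB must contain the feedback field
    stored_item = fake_table.put_item.call_args.kwargs["Item"]
    assert stored_item["feedback"] == "Great session!"

    # An acknowledgement email must be sent to the submitter
    assert fake_send.call_args.kwargs["Destination"]["ToAddresses"] == ["user@example.com"]
```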

Now, I will intentionally introduce an error and see if it is covered (and therefore detected) by my new test suite. So, I select the feedback field in the instruction that updates my table...

Image description

...and remove it from the code

Image description
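In terms of the hypothetical handler sketched earlier, the change amounts to dropping the feedback field from the item written to the table:

```python
# Before: the feedback is stored together with the other fields
table.put_item(Item={"id": event["id"], "email": event["email"], "feedback": event["feedback"]})

# After the intentional mistake: the 'feedback' field is no longer written,
# so any test asserting on the stored feedback value should now fail
table.put_item(Item={"id": event["id"], "email": event["email"]})
```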

I run my test suite again, and here is the result: the missing field has been detected.

Image description

So now I should be able to refactor my code safely. Pretty interesting!

N.B. I tried to use the /test agent on my CDK code, but the result was not as good as when I generated the tests with my own prompt in the chat.

/review to detect code issues

Then, before committing, I want to review my code, so I will use the /review agent.

Image description

I will choose to review the workspace, so that all my files are scanned.

Image description

I quickly get the list of issues found in my code. There is no critical issue, but there is one issue with High severity in my CDK code.

It detects that there is no encryption configured on my DynamoDB table, and spots the lines of code involved.

Image description

By clicking on the search icon of the issue, I can view the details of this weakness (CWE).

Image description

I can also ignore this issue, or ask Q to fix it.

I want to fix it. I click on the "Generate Fix" button, take a look at the suggested code, and accept it.

Image description
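For reference, this kind of fix typically amounts to the following change in a Python CDK table definition (a sketch, reusing the hypothetical stack from earlier; the actual suggested code may differ, for instance by using a customer-managed KMS key):

```python
from aws_cdk import aws_dynamodb as dynamodb

# Before the fix, the table had no 'encryption' argument (flagged by /review).
# After the fix, encryption at rest is enabled with an AWS managed key:
table = dynamodb.Table(
    self, "FeedbackTable",
    partition_key=dynamodb.Attribute(name="id", type=dynamodb.AttributeType.STRING),
    encryption=dynamodb.TableEncryption.AWS_MANAGED,
)
```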

My code is updated, and the issue disappears from my list :)

Image description

I'm now more confident about deploying my code.

As a next step, I can rerun the tests previously generated by the /test agent, or update my test suite with new tests. I can also use the /doc agent to update my README file with what was added since the last generation. That could be part of a workflow that increases the quality of my project :)

Conclusion

Personally, I am not really looking for more agents with Amazon Q Developer. I believe we can manage a lot of things with what exists already. I don't want to feel overwhelmed, not knowing what to choose.

However, I think that specialized, well-defined agents like /doc, /test, and /review can greatly increase the quality of your work and save time.

They can be part of a software design workflow that you will create or re-invent :)
