Using AI to write API documentation
by Maciej Kołodziej | 20/10/2025

Writing customer-facing API documentation is boring and hard. Tools like OpenAPI (Swagger) help, but it is not enough just to document the endpoints and methods, at least not for complex integrations. You also need to explain the core concepts of the system, think about data volumes, decide whether to recommend polling or a callback strategy, and so on.
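The polling-versus-callback decision is a good example of guidance that an endpoint list alone can't convey. As a minimal sketch (the `fetch_status` callable is hypothetical, standing in for a real call to something like a job-status endpoint), documentation that recommends polling might illustrate it like this:

```python
import time

def poll_until_done(fetch_status, interval=1.0, max_interval=30.0, timeout=300.0):
    """Poll a status endpoint with exponential backoff until the job finishes.

    `fetch_status` is a placeholder for a real API call (e.g. GET /jobs/{id})
    that returns a status string.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("completed", "failed"):
            return status
        time.sleep(interval)
        interval = min(interval * 2, max_interval)  # back off to reduce load
    raise TimeoutError("job did not finish before the polling timeout")
```

A callback (webhook) strategy avoids this loop entirely, but shifts complexity onto the consumer, who must expose an endpoint; that trade-off is precisely what good documentation should spell out.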
I needed to write the customer documentation for a complex API and I was wondering if AI could help me. Of course, the naïve approach would be to just point the AI at my code (complete with OpenAPI annotations) and have it write some documentation. Indeed, that was exactly what I did on my first attempt - and... it wasn't enough.
So I set about it in a different way.
Choosing the right tool
The first thing I realised was that I needed a tool that could look at all my code and related files and reference them, rather than working from an isolated prompt. For me, that tool was Cursor, an AI development environment: it has access to all the code and existing documentation. Other development tools, such as VS Code with Copilot or Windsurf, will do the same thing.
In a business context, you could look at something like NotebookLM to get the same kind of experience. More and more AI tools offer this kind of "project mode", where the AI works with a large set of relevant documents and helps you iterate on an output document.
Taking the time to craft a good prompt
Next, I spent a good amount of time crafting a detailed prompt. I took care to explain the context in which the API would be used and outlined the different architectural patterns that might be relevant. I also instructed the LLM to ask clarifying questions.
See also
The importance of context and prompting can’t be overstated — Nathan Ball explains it brilliantly in his RISEN framework for better AI prompts.
Choosing the right format
I wasn't sure what format to use for the document. Word would suit 'business' readers, but the most important audience was the software engineers who would use the document to integrate with our system. For developers, Markdown is much more natural (it is used natively by GitHub), and both my experience and the common opinion in the AI community suggest that LLMs are very good at reading and producing Markdown. That made the decision easy, and it also let me use Cursor, which made using the code as context straightforward. There are also tools, such as Pandoc, that can transform Markdown files into other formats when necessary.
Interacting and iterating
A tool like Cursor is explicitly designed to iterate on an output file rather than relying on the chat's "memory". This was helpful here, because the documentation was a big document. It also meant I could make some modifications to the output manually and ask the AI to make others.
In essence, I was using the LLM as a "writing partner" rather than expecting it to just produce the output on its own.
This approach has some additional benefits. It works the way your mind works: as 'we' discussed what needed to be modified, I kept discovering things I had forgotten to add to the initial context and, more importantly, things the reader of the documentation would not know. After each discovery I asked the AI to add another section explaining that specific subject. I pointed it at the code and the infrastructure-as-code so it could analyse what is really in the system, and I filled the remaining gaps with my own descriptions. This worked well, and whenever some information didn't look correct, I just had to point it out and the paragraph was rewritten in seconds.
Review
Once the document was filled with all the details, the time came for review and refinement. The AI did a great job here, analysing the sequence of information and checking whether a user reading the document from top to bottom would encounter things in the right order. Cursor did all of the reordering. I also asked it to add diagrams to help readers understand the system better; to be honest, they weren't perfect, but still Good Enough™️. While we were editing, the AI kept the table of contents in sync: reordering content without breaking it is not so easy when working in Markdown, but the LLM made it painless.
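To illustrate the kind of bookkeeping the AI was handling for me, here is a small Python sketch (not part of my actual workflow) that regenerates a table of contents from ATX-style headings; the GitHub-flavoured anchor rules it assumes match most renderers, but are an assumption:

```python
import re

def build_toc(markdown_text, max_level=3):
    """Generate a Markdown table of contents from ATX headings.

    Anchors follow the common GitHub convention: lowercased,
    punctuation stripped, spaces replaced with hyphens.
    """
    lines = []
    in_code = False
    for line in markdown_text.splitlines():
        if line.lstrip().startswith("```"):
            in_code = not in_code          # skip headings inside code fences
            continue
        if in_code:
            continue
        m = re.match(r"^(#{1,6})\s+(.*)$", line)
        if not m or len(m.group(1)) > max_level:
            continue
        level = len(m.group(1))
        title = m.group(2).strip()
        anchor = re.sub(r"[^\w\s-]", "", title).strip().lower().replace(" ", "-")
        lines.append("  " * (level - 1) + f"- [{title}](#{anchor})")
    return "\n".join(lines)
```

Every time a section moves, every entry and anchor has to be checked again, which is exactly why having the LLM maintain the table of contents during reordering was such a relief.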
When the documentation looked finished, I started a new session (chat) with a different model and asked it first to analyse the whole document and explain how it understood it. That helped me find another gap. Another approach was to analyse the text again, compare it to the code, and check whether the two 'meanings' matched. Finally, to make sure the document was understandable to its audience, I asked the AI to take on various roles (Software Engineer, Project Manager, Business Analyst) and assess how understandable the document would be to people in each of those roles. This resulted in a Glossary explaining common terms and jargon that might not be obvious to non-engineers.
What will the audience do with the document?
I also thought about the question above. I know perfectly well that no one likes to read long, boring docs! So it is likely that the document will mainly be read by other AI tools such as ChatGPT, Claude, NotebookLM, Copilot or Cursor. I just want to stress that nowadays we also need to think about how AI tools will understand the documentation we produce, because very likely AI will do a big part of the business analysis and software implementation based on it.
The last step of the process was to ask the model to prepare a plan for implementing an integration with the API we had documented, using a new(!) chat session with a fresh context window. Reviewing that plan assured me that the docs described all of the critical concepts and requirements correctly.
See also
Finding AI Opportunities: The “AI Applicability” Sweet Spot – How to identify the intersection between what AI can do and what your business actually needs.
The final result
I am very happy with the result. It certainly was not just a matter of pointing the AI at the system and getting it to write the documentation for me. But by working iteratively with the LLM as a writing partner, I believe I achieved a much better result, in substantially less time, than I could have on my own. The process also taught me a lot about prompting and how important it is to give the AI enough context: the model starts from scratch in every new session, so each session requires you to onboard your 'employee' again.