In today’s fast-paced software development landscape, innovative solutions and best practices often remain hidden in scattered notes, rushed commits, and ad-hoc troubleshooting sessions.
For a long time I struggled with capturing the full breadth of my problem-solving process, from the initial brainstorming to the final, polished solution. Over time, I discovered that by harnessing the power of Large Language Models (LLMs) at every stage of development, I could not only build robust systems but also transform my raw ideas into a beautiful, comprehensive document.
This document becomes a valuable learning resource, accelerating learning for you, your team, your organization, or the wider community.
This blog post is a dive into the process I developed. More than just a guide on how to use LLMs, it’s a call to action for you to build your own process, document your learnings, and publish them. By doing so, you’ll create a repository of insights that can be shared, improved upon, and iterated continuously. In the words of Harper, from his LLM Codegen Workflow:
“This is working well NOW, it will probably not work in 2 weeks, or it will work twice as well. ¯\_(ツ)_/¯”
That kind of iterative, evolving process is exactly what we’re aiming for.
The process is built on five core phases: Researching, Deciding, Building, Iterating, and Documenting.
To be clear, none of the above steps are unique; people like Harper have been doing them for a while now. My contribution here is to encourage people to finish the process with a Documentation step that crystallizes what they learned into little handbooks everyone can benefit from.
The iterative nature of this workflow means that your documentation becomes a living document. Every project you complete and every problem you solve feeds back into the cycle, enriching the knowledge base.
Below is a high-level diagram of this continuous process:
```mermaid
flowchart TD
    A[Researching] --> B[Deciding]
    B --> C[Building]
    C --> D[Iterating]
    D --> E[Documenting]
    E --> F[Shared Learning Resource]
    F --> A
```
Diagram: An iterative cycle where each phase reinforces and informs the next, culminating in a resource that benefits your entire community.
The journey begins with research. At the start of every project, I capture all initial thoughts and ideas—even if they seem vague or unstructured. Using an LLM as a research assistant allows me to ask targeted questions and receive concise, synthesized answers. Instead of manually scouring countless web pages, you can simply ask:
Example Prompt:
“What are the key differences between OAuth 2.0 and OpenID Connect for securing APIs? List pros, cons, and typical use cases.”
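If you prefer to script these queries rather than type them into a chat window, a minimal sketch using the OpenAI Python SDK looks something like this (the SDK choice and model name are just illustrative; any chat-capable model and client will do):

```python
# Minimal research-assistant sketch. Assumes the OpenAI Python SDK is
# installed (`pip install openai`) and OPENAI_API_KEY is set; the model
# name is illustrative, so swap in whatever you have access to.
from openai import OpenAI

client = OpenAI()

question = (
    "What are the key differences between OAuth 2.0 and OpenID Connect "
    "for securing APIs? List pros, cons, and typical use cases."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a concise research assistant."},
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```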
With your research in hand, the next step is to make informed decisions. Use the LLM to weigh options, evaluate trade-offs, and draft a high-level implementation plan. For example, if you’re deciding between WebSockets and HTTP polling for real-time updates, prompt the LLM to compare the options based on your requirements.
Example Prompt:
“Compare WebSockets and HTTP polling for a high-traffic chat application in terms of latency, scalability, and implementation complexity.”
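One way to keep that comparison grounded in your actual constraints is to fold the requirements directly into the prompt. A rough sketch, where the requirements and numbers are made up purely for illustration:

```python
# Build a decision prompt from explicit project requirements so the LLM
# compares options against your constraints, not generic ones. The
# requirements below are illustrative placeholders.
requirements = [
    "peak load of ~50k concurrent users",
    "sub-200ms message delivery latency",
    "small team, limited ops experience",
]

options = ["WebSockets", "HTTP polling"]

prompt = (
    f"Compare {' and '.join(options)} for a high-traffic chat application "
    "in terms of latency, scalability, and implementation complexity.\n"
    "Evaluate each option against these requirements:\n"
    + "\n".join(f"- {r}" for r in requirements)
    + "\nFinish with a recommendation and a short implementation plan."
)

print(prompt)  # paste into your LLM of choice, or send it via an API client
```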
The output from this phase becomes your design blueprint—a document that informs all subsequent work.
This is where the magic happens. Modern AI tools have revolutionized coding. While GitHub Copilot integrated into VSCode is fantastic, the ecosystem now includes specialized code editors like Cursor and Cline, innovative site designers like Vercel’s V0, and iterative development platforms like Claude Code. There’s even advanced tooling like Aider that integrates multiple models for a richer coding experience.
This post was written in Q1 2025, so depending on when you end up reading this, there will probably be 10 new products competing with each of the tools listed above, plus a bunch more tooling I can’t even conceptualize right now.
No code works perfectly on the first try. Iteration is the heart of effective development. After building your solution, use LLMs to help debug and optimize. When you encounter errors or performance issues, prompt the LLM with the problem details and relevant code snippets.
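If you want to script part of that hand-off, one option is to run the test suite and, on failure, bundle the output together with the relevant file into a debugging prompt. A rough sketch, assuming pytest is in use and with a placeholder file path:

```python
# Sketch of the iterate loop: run the tests, and on failure assemble a
# debugging prompt from the failure output and the relevant source file.
# Assumes pytest is installed; "app/chat.py" is a placeholder path.
import pathlib
import subprocess

result = subprocess.run(
    ["pytest", "-x", "-q"], capture_output=True, text=True
)

if result.returncode != 0:
    snippet = pathlib.Path("app/chat.py").read_text()
    debug_prompt = (
        "My tests are failing with the output below. "
        "Explain the likely cause and suggest a fix.\n\n"
        f"--- test output ---\n{result.stdout}\n{result.stderr}\n"
        f"--- relevant code (app/chat.py) ---\n{snippet}"
    )
    print(debug_prompt)  # paste into your LLM, or send it via an API client
else:
    print("Tests pass; move on to deploying and documenting.")
```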
```mermaid
flowchart TD
    A[Write Code] --> B[Test Code]
    B --> C{Do tests pass?}
    C -- YES --> D[Deploy/Document]
    C -- NO --> E[Consult LLM for Debugging]
    E --> A
```
Diagram: The cycle of writing, testing, and debugging code with AI guidance.
This loop of writing, testing, and refining ensures that your final solution is robust and efficient.
This is where everything crystallizes and helps you move forward.
The final phase is to compile everything—research, design decisions, code, and debugging insights—into a polished, comprehensive document. This isn’t just documentation; it’s a narrative of your entire problem-solving journey, a resource that others can learn from and build upon.
It is my personal belief that any documentation is better than no documentation, but really good documentation goes beyond explaining how a system works. It starts with explaining the problem that was being solved. Ideally it should also cover which options were considered, why the winning approach was selected, and why the others were rejected.
Excellent documentation walks you through the entire process, ending with the resulting solution and how it works. Extra points if you also tell me about similar projects, deeper resources on the concepts covered, and other pointers in that vein.
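To make that concrete, here is a rough sketch of a skeleton you could start from when you sit down to write; the filename and section names below are my own suggestions, not a prescribed format:

```python
# Write out a documentation skeleton that mirrors the structure described
# above: problem first, options considered, decision, solution, pointers.
# The filename and section names are suggestions, not a prescribed format.
import pathlib

SKELETON = """\
# <Project / Feature Name>

## The Problem
What were we actually trying to solve, and for whom?

## Options Considered
For each option: what it is, pros, cons, and why it was or wasn't chosen.

## The Decision
The winning approach and the reasoning behind it.

## How the Solution Works
Architecture, key code paths, and anything surprising we learned.

## Further Reading
Similar projects, deeper resources on the concepts above, related posts.
"""

pathlib.Path("DECISIONS.md").write_text(SKELETON)
print("Wrote DECISIONS.md -- now fill it in (with an LLM's help).")
```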
```mermaid
flowchart TD
    A[Draft Documentation] --> B[LLM Review & Suggestions]
    B --> C[Developer Edits & Refinement]
    C --> D[Final, Polished Document]
```
Diagram: An iterative process where AI-generated drafts are refined by human oversight to produce the final documentation.
This final document becomes a case study—a rich resource that captures your reasoning, the trade-offs you considered, and the final solution. It accelerates learning for anyone who reads it, turning your journey into an asset for the entire community.
One additional thing you can do, if you have access to models with “Deep Research,” is dump in your final blog post and have the LLM find associated resources, blog posts, and interesting related topics, then update the post to include pointers to those places.
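A rough sketch of what that hand-off could look like (the filename is a placeholder, and how you submit the prompt depends on the tool you are using):

```python
# Bundle the finished post into a "deep research" style prompt asking for
# related resources. "post.md" is a placeholder; submit the prompt through
# whichever research-capable model or tool you have access to.
import pathlib

post = pathlib.Path("post.md").read_text()

research_prompt = (
    "Here is a blog post I just finished. Find associated resources, "
    "related blog posts, and interesting adjacent topics, and suggest "
    "where in the post to add pointers to them.\n\n" + post
)

print(research_prompt)
```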
I wasn’t the only one experimenting with these methods. My friend Harper has been building small products using LLMs and has shared his process in a detailed blog post, “My LLM Codegen Workflow (ATM)”. As he puts it:
“I have been building so many small products using LLMs. It has been fun, and useful. However, there are pitfalls that can waste so much time. A while back a friend asked me how I was using LLMs to write software. I thought ‘oh boy. how much time do you have!’ and thus this post.”
Harper’s workflow echoes the iterative, evolving nature of the process described here. He notes,
“This is working well NOW, it will probably not work in 2 weeks, or it will work twice as well. ¯\_(ツ)_/¯”
These quotes remind us that this process is dynamic—it evolves as the tools improve and as we learn more. I encourage you to read his post for further inspiration and to see how others are applying these techniques.
The true power of this process lies in its ability to transform a messy, unstructured journey into a beautiful, structured resource that accelerates learning. By using LLMs to research, decide, build, iterate, and document, you create a comprehensive narrative that not only helps you understand your own solutions but also serves as a valuable guide for others.
I challenge you to adopt this LLM-powered workflow in your own projects: use LLMs to research your options, make deliberate decisions, build, iterate, and then document what you learned and publish it.
By doing so, you not only enhance your productivity but also contribute to a growing community dedicated to learning and innovation.
By transforming your development journey into a comprehensive, well-documented resource, you not only accelerate your own learning but also empower others to innovate faster. Embrace this iterative, AI-powered workflow, share your insights, and watch your community grow stronger together.