From Brain Dump to Blog Post: How to Leverage LLMs to Document What You Learn


In today’s fast-paced software development landscape, innovative solutions and best practices often remain hidden in scattered notes, rushed commits, and ad-hoc troubleshooting sessions.

I used to struggle with capturing the full breadth of my problem-solving process—from the initial brainstorming to the final, polished solution. Over time, I discovered that by harnessing the power of Large Language Models (LLMs) at every stage of development, I could not only build robust systems but also transform my raw ideas into a polished, comprehensive document.

This document becomes a valuable learning resource, accelerating knowledge for you, your team or organization, or the wider community.

This blog post is a dive into the process I developed. More than just a guide on how to use LLMs, it’s a call to action for you to build your own process, document your learnings, and publish them. By doing so, you’ll create a repository of insights that can be shared, improved upon, and iterated continuously. In the words of Harper, from his LLM Codegen Workflow:

“This is working well NOW, it will probably not work in 2 weeks, or it will work twice as well. ¯\_(ツ)_/¯”

That kind of iterative, evolving process is exactly what we’re aiming for.

Overview of the LLM-Powered Workflow

The process is built on five core phases:

  1. Researching: Gathering and synthesizing data.
  2. Deciding: Evaluating alternatives and planning your approach.
  3. Building: Writing code with the assistance of cutting-edge AI tools.
  4. Iterating: Testing, debugging, and refining your solution.
  5. Documenting: Compiling all insights into a clear, structured document.

To be clear, none of the above steps is unique. People like Harper have been doing them for a while now. My contribution here is to encourage you to finish the process with a Documentation step that crystallizes what you learned into little handbooks for everyone to benefit from.

The iterative nature of this workflow means that your documentation becomes a living document. Every project you complete and every problem you solve feeds back into the cycle, enriching the knowledge base.

Below is a high-level diagram of this continuous process:

flowchart TD
    A[Researching] --> B[Deciding]
    B --> C[Building]
    C --> D[Iterating]
    D --> E[Documenting]
    E --> F[Shared Learning Resource]
    F --> A

Diagram: An iterative cycle where each phase reinforces and informs the next, culminating in a resource that benefits your entire community.

1. Researching with LLMs

The journey begins with research. At the start of every project, I capture all initial thoughts and ideas—even if they seem vague or unstructured. Using an LLM as a research assistant allows me to ask targeted questions and receive concise, synthesized answers. Instead of manually scouring countless web pages, you can simply ask:

Example Prompt:

“What are the key differences between OAuth 2.0 and OpenID Connect for securing APIs? List pros, cons, and typical use cases.”

Best Practices for Research

  • Be Specific: Focus your queries to get precise information.
  • Iterate with Follow-Up Questions: Drill down to clarify and expand on initial responses.
  • Verify Critical Information: Use the LLM’s output as a starting point and verify details against official documentation.
  • Summarize Findings: Once you’ve gathered enough insights, ask the LLM to summarize your research into a coherent document. This summary becomes the backbone for later phases.
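As a minimal sketch of the "Summarize Findings" step, here is a small Python helper that assembles raw research notes into a single summarization prompt you can paste into any LLM. The function name and prompt wording are my own illustrative choices, not part of any particular tool:

```python
def build_summary_prompt(topic: str, notes: list[str]) -> str:
    """Assemble raw research notes into one summarization prompt."""
    bullet_list = "\n".join(f"- {note.strip()}" for note in notes)
    return (
        f"Summarize the following research notes on {topic} into a "
        "coherent document with sections for pros, cons, and open questions:\n"
        f"{bullet_list}"
    )

prompt = build_summary_prompt(
    "OAuth 2.0 vs OpenID Connect",
    ["OAuth 2.0 handles authorization", "OIDC adds an identity layer on top"],
)
print(prompt)
```

Keeping the notes as plain strings means the same helper works whether you captured them in a scratch file, a commit message, or a chat transcript.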

2. Deciding: Planning and Designing Your Solution

With your research in hand, the next step is to make informed decisions. Use the LLM to weigh options, evaluate trade-offs, and draft a high-level implementation plan. For example, if you’re deciding between WebSockets and HTTP polling for real-time updates, prompt the LLM to compare the options based on your requirements.

Example Prompt:

“Compare WebSockets and HTTP polling for a high-traffic chat application in terms of latency, scalability, and implementation complexity.”

Best Practices for Deciding

  • Provide Detailed Context: Outline your project requirements and constraints.
  • Request Structured Outputs: Ask for bullet lists or tables to compare options clearly.
  • Explore Alternatives: Don’t settle on the first answer—ask for additional approaches.
  • Draft a Blueprint: Generate a high-level plan that will guide your coding efforts.

The output from this phase becomes your design blueprint—a document that informs all subsequent work.
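One way to make the trade-off evaluation concrete is a simple weighted scoring matrix. The criteria, weights, and scores below are illustrative placeholders, not measured benchmarks—the point is the structure, which you can ask the LLM to fill in and then sanity-check yourself:

```python
# Weighted decision matrix. Scores (1-5) are illustrative placeholders,
# not measured results -- replace them with your own evaluation.
criteria_weights = {"latency": 0.5, "scalability": 0.3, "complexity": 0.2}

options = {
    "WebSockets":   {"latency": 5, "scalability": 4, "complexity": 2},
    "HTTP polling": {"latency": 2, "scalability": 3, "complexity": 5},
}

def weighted_score(scores: dict[str, int]) -> float:
    """Combine per-criterion scores using the global weights."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

ranked = sorted(options, key=lambda o: weighted_score(options[o]), reverse=True)
for name in ranked:
    print(f"{name}: {weighted_score(options[name]):.2f}")
```

Writing the matrix down (rather than keeping it in your head) also gives you the "why we rejected the alternatives" section of the final document for free.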

3. Building with AI-Powered Coding Assistants

This is where the magic happens. Modern AI tools have revolutionized coding. While GitHub Copilot integrated into VSCode is fantastic, the ecosystem now includes specialized code editors like Cursor and Cline, innovative site designers like Vercel’s V0, and iterative development platforms like Claude Code. There’s even advanced tooling like Aider that integrates multiple models for a richer coding experience.

This post was written in Q1 2025, so depending on when you end up reading it, there will probably be 10 new products competing with each of the tools listed above, plus a bunch more tooling I can’t even conceptualize right now.

How to Leverage AI in Coding

  • Break Down Tasks: Instead of asking for an entire application, request small, manageable code snippets. Keep your projects small, and compose your projects of stand-alone modules which you can work on in isolation.
  • Provide Context: Supply relevant code or project details so the LLM can generate accurate output.
  • Iterate and Refine: Use AI-generated code as a draft. Test it, review it, and then ask follow-up questions.
  • Explore Specialized Tools: Experiment with different platforms to find the ones that best fit your workflow.
  • Have Robust Rules: Make your linters strict, and if your language has optional type checking (as Python does), use it. Prefer TypeScript over JavaScript.
  • Have Comprehensive Tests: Testing is more important than ever, so cover all eventualities. Luckily, LLMs are really good at writing tests. You still have to watch them to keep them from cheating, but since tests are mostly repetitive, it’s a task they handle well.
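To illustrate the last two points together, here is a tiny typed function with tests covering the happy path, the remainder case, and the error case. The function itself (`chunk`) is a made-up example, not from any real project:

```python
def chunk(items: list[int], size: int) -> list[list[int]]:
    """Split a list into chunks of at most `size` elements."""
    if size < 1:
        raise ValueError("size must be >= 1")
    return [items[i : i + size] for i in range(0, len(items), size)]

# Tests: happy path, remainder chunk, empty input, and invalid size.
assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
assert chunk([], 3) == []
try:
    chunk([1], 0)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for size < 1")
```

Tests at this level of coverage are exactly the kind of repetitive work you can delegate to an LLM, as long as you review that each assertion actually checks what it claims to.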

4. Iterating: Testing, Debugging, and Refining Your Solution

No code works perfectly on the first try. Iteration is the heart of effective development. After building your solution, use LLMs to help debug and optimize. When you encounter errors or performance issues, prompt the LLM with the problem details and relevant code snippets.

The Iteration Loop

flowchart TD
    A[Write Code] --> B[Test Code]
    B --> C{Do tests pass?}
    C -- YES --> D[Deploy/Document]
    C -- NO --> E[Consult LLM for Debugging]
    E --> A

Diagram: The cycle of writing, testing, and debugging code with AI guidance.

Best Practices for Iteration

  • Isolate Issues: Tackle one error, function, or bottleneck at a time.
  • Provide Context: Include relevant snippets and error logs in your prompts.
  • Ask for Explanations: Request not just fixes but also reasoning behind suggestions.
  • Retest After Changes: Verify that each fix resolves the issue without introducing new problems.

This loop of writing, testing, and refining ensures that your final solution is robust and efficient.

5. Documenting: Creating a Comprehensive Learning Resource

This is where everything crystallizes and helps you move forward.

The final phase is to compile everything—research, design decisions, code, and debugging insights—into a polished, comprehensive document. This isn’t just documentation; it’s a narrative of your entire problem-solving journey, a resource that others can learn from and build upon.

It is my personal belief that any documentation is better than no documentation, but really good documentation goes beyond explaining how a system works: it starts by explaining the problem being solved. Ideally it should also cover which options were considered, why the winning approach was selected, and why the others were rejected.

Excellent documentation walks you through the entire process, ending at the resultant solution and how it works. Extra points if it also points to similar projects, deeper resources on the concepts covered, and other references in that vein.

The Documentation Process

flowchart TD
    A[Draft Documentation] --> B[LLM Review & Suggestions]
    B --> C[Developer Edits & Refinement]
    C --> D[Final, Polished Document]

Diagram: An iterative process where AI-generated drafts are refined by human oversight to produce the final documentation.

Best Practices for Documentation

  • Generate Incrementally: Document each phase as you complete it.
  • Use AI to Summarize: Let the LLM transform your raw notes into readable, structured text.
  • Review and Edit Thoroughly: Ensure technical accuracy and clarity.
  • Share Widely: Publish your document on your blog, internal wiki, or community forum, and invite feedback.
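The "Generate Incrementally" practice can be sketched as a trivial assembler that stitches per-phase notes into one Markdown document. The function and section layout are my own illustrative choices:

```python
def assemble_doc(title: str, phases: dict[str, str]) -> str:
    """Stitch per-phase notes into one Markdown document."""
    parts = [f"# {title}"]
    for phase, notes in phases.items():
        parts.append(f"## {phase}\n{notes.strip()}")
    return "\n\n".join(parts)

doc = assemble_doc(
    "Real-Time Chat: Design Notes",
    {
        "Researching": "Compared WebSockets and HTTP polling.",
        "Deciding": "Chose WebSockets for lower latency at scale.",
    },
)
print(doc)
```

Because each phase contributes its own section, you can hand any single section to the LLM for polishing without touching the rest of the document.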

This final document becomes a case study—a rich resource that captures your reasoning, the trade-offs you considered, and the final solution. It accelerates learning for anyone who reads it, turning your journey into an asset for the entire community.

If you have access to models with “Deep Research,” one additional step is to feed in your final blog post and have the LLM find associated resources, related blog posts, and interesting adjacent topics, then update the post to include pointers to those places.

Learning from Harper’s LLM Codegen Workflow

I wasn’t the only one experimenting with these methods. My friend Harper has been building small products using LLMs and has shared his process in a detailed blog post, “My LLM Codegen Workflow (ATM)”. As he puts it:

“I have been building so many small products using LLMs. It has been fun, and useful. However, there are pitfalls that can waste so much time. A while back a friend asked me how I was using LLMs to write software. I thought ‘oh boy. how much time do you have!’ and thus this post.”

Harper’s workflow echoes the iterative, evolving nature of the process described here. He notes,

“This is working well NOW, it will probably not work in 2 weeks, or it will work twice as well. ¯\_(ツ)_/¯”

These quotes remind us that this process is dynamic—it evolves as the tools improve and as we learn more. I encourage you to read his post for further inspiration and to see how others are applying these techniques.

Conclusion & Call to Action

The true power of this process lies in its ability to transform a messy, unstructured journey into a beautiful, structured resource that accelerates learning. By using LLMs to research, decide, build, iterate, and document, you create a comprehensive narrative that not only helps you understand your own solutions but also serves as a valuable guide for others.

I challenge you to adopt this LLM-powered workflow in your own projects:

  • Experiment: Integrate LLMs into every phase of your development process.
  • Document: Turn your raw outputs into a polished blog post or technical document.
  • Share: Publish your work, share your insights, and invite feedback.
  • Iterate: Continuously improve your process and document your improvements.

By doing so, you not only enhance your productivity but also contribute to a growing community dedicated to learning and innovation.

By transforming your development journey into a comprehensive, well-documented resource, you not only accelerate your own learning but also empower others to innovate faster. Embrace this iterative, AI-powered workflow, share your insights, and watch your community grow stronger together.