Build a RAG agent with LangChain and Ollama
Fortune Ndlovu
I started where a lot of us do: a LangChain RAG walkthrough. You chunk some text, embed it, retrieve the top-k chunks, and wire an LLM to answer questions. It clicks quickly, which is exactly why it's easy to walk away thinking you've "done RAG." What bothered me was that the demo corpus is usually tiny and artificial. I write on DEV.to about things like NLP routing and CNN image classification. If I can't point a system at my own posts and get answers I can verify, I'm not building anything close to useful.
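That chunk → embed → retrieve-top-k loop can be sketched without any framework at all. The version below is a toy: the hash-based `embed` stands in for a real embedding model (in the actual pipeline it would be something like `OllamaEmbeddings` plus a vector store), and `chunk`, `embed`, and `retrieve` are hypothetical helper names, not LangChain APIs.

```python
import hashlib
import math
import re

def chunk(text, size=40, overlap=10):
    """Split text into overlapping word-window chunks."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def embed(text, dim=256):
    """Toy bag-of-words hash embedding, L2-normalized.
    A real pipeline would call an embedding model here instead."""
    vec = [0.0] * dim
    for word in re.findall(r"\w+", text.lower()):
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query, chunks, k=2):
    """Return the top-k chunks by cosine similarity to the query."""
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, embed(c))), c) for c in chunks]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored[:k]]
```

The last step, feeding the retrieved chunks to an LLM as context, is the only part this sketch leaves out; everything before the model call is just text splitting and nearest-neighbor search.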
