I built this over the holidays to automate my yearly family expense audit. I wanted to see if I could get 100% mathematical accuracy using local LLMs without my data ever leaving my machine.
The Tech Stack:
Models: Granite 3.3 (8b for reasoning, 2b for vendor normalization). I found these to be a great balance of speed and consistency on local hardware compared to other small models I tried.
Interface: FastMCP. I realized early on that LLMs shouldn't do the math themselves; they should hand it off to tools. With MCP, the LLM generates SQL queries that SQLite executes, instead of "hallucinating" totals (a rough sketch follows this list).
Backend: FastAPI/Python.
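To make the "tools do the math" idea concrete, here is a minimal sketch of what an MCP server like this could look like. This is not the repo's actual code: it assumes FastMCP's decorator-style tool registration, a local SQLite file called expenses.db, and Granite models served through Ollama under the granite3.3:2b tag; the tool names are illustrative.

```python
# Hypothetical sketch of the MCP layer: one tool lets the LLM run SQL
# (so SQLite does the arithmetic), another calls the small local model
# for vendor normalization. Paths, tool names, and model tag are assumptions.
import sqlite3
import requests
from fastmcp import FastMCP

DB_PATH = "expenses.db"                              # assumed database location
OLLAMA_URL = "http://localhost:11434/api/generate"   # assumes models are served via Ollama

mcp = FastMCP("expense-audit")

@mcp.tool()
def run_query(sql: str) -> list[dict]:
    """Execute a read-only SQL query against the expense database.
    The LLM writes the query; SQLite computes the totals exactly."""
    conn = sqlite3.connect(f"file:{DB_PATH}?mode=ro", uri=True)  # read-only connection
    conn.row_factory = sqlite3.Row
    try:
        return [dict(row) for row in conn.execute(sql).fetchall()]
    finally:
        conn.close()

@mcp.tool()
def normalize_vendor(raw_name: str) -> str:
    """Map a messy statement descriptor (e.g. 'AMZN MKTP US*1Z3')
    to a canonical vendor name using the 2b model."""
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": "granite3.3:2b",  # assumed Ollama tag for the 2b model
            "prompt": f"Return only the canonical vendor name for: {raw_name}",
            "stream": False,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

if __name__ == "__main__":
    mcp.run()
```

Opening the database read-only keeps the query tool safe even if the model writes something unexpected: the worst a bad query can do is fail, not mutate the ledger.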
Lessons Learned: The project taught me that "AI-assisted development" requires a lot of "Developer-led oversight." I had moments where the AI forgot its own API types or suggested overly complex fixes for simple CORS issues. It's a great co-pilot, but you still need to know how the plumbing works.
Repo: [Your GitHub Link] I also wrote a deeper dive on the "Writer vs. Editor" shift here: [Link to Medium Article V2]
Happy to answer any questions about the setup or the Granite performance!
>I wanted to see if I could get 100% mathematical accuracy using local LLMs
So did you?
>[Your GitHub Link] I also wrote a deeper dive on the "Writer vs. Editor" shift here: [Link to Medium Article V2]
Generated comments are against the guidelines. In any case, I suggest you give your comments and expense filings a quick manual review before submitting! :-)