Ideating personal context data stores for better AI experiences
Using AI to brainstorm and seed personal context data stores that make LLM interactions dramatically more useful and personalised.
Large language models can be made dramatically more powerful by the addition of surprisingly small amounts of contextual data, and I mean genuinely small: a few kilobytes of text about your preferences, your work, and your circumstances. The gap between what a generic AI assistant produces and what a contextualised one produces is so large that once you've experienced the difference, using an LLM without personal context starts to feel like driving with the handbrake on. But building that context store in the first place is a chicken-and-egg problem: you need to know what context would be valuable before you can capture it, and you can't know what's valuable until you've experimented with having it. My Personal Context Store Ideation project uses AI itself to help solve this bootstrapping problem.
Using AI to figure out what AI should know about you
The concept is to support personal RAG implementations like those already available in frontends such as Open WebUI. I've been developing data pipelines that let users build knowledge repositories locally and push them into vector stores for use in LLM workflows, and this project focuses on the crucial first step: figuring out what to put in there. I built a Hugging Face assistant that provides randomised suggestions for vaults of personal context knowledge (think "food preferences," "communication style," "career history," "health considerations," "technical stack") along with specific files that should be populated within each vault and descriptions of what those files should contain. The idea is that you can grab a coffee, fire up a speech-to-text tool, and work through the suggestions to populate your context stores in a single focused session.
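To make the "vault of files" idea concrete, here is a minimal sketch of how suggestions like the assistant's output could be turned into a local scaffold ready for dictation. The vault names, filenames, and the scaffold_vaults helper are illustrative assumptions, not code from the repo:

```python
from pathlib import Path

# Hypothetical vault suggestions in the spirit of the assistant's output:
# each vault name maps to files, each with a one-line prompt describing
# what the user should capture there.
VAULT_SUGGESTIONS = {
    "food-preferences": {
        "beer.md": "Styles, breweries, and bitterness levels you enjoy or avoid.",
        "coffee.md": "Roast level, brewing methods, milk and sugar habits.",
    },
    "communication-style": {
        "tone.md": "Preferred register: formal, casual, terse, detailed.",
    },
}

def scaffold_vaults(root: Path, suggestions: dict[str, dict[str, str]]) -> list[Path]:
    """Create one folder per vault and a stub markdown file per suggestion,
    seeded with the description as a prompt, ready to be filled in by
    voice dictation and later pushed into a vector store."""
    created = []
    for vault, files in suggestions.items():
        vault_dir = root / vault
        vault_dir.mkdir(parents=True, exist_ok=True)
        for filename, description in files.items():
            stub = vault_dir / filename
            stub.write_text(f"<!-- {description} -->\n\n", encoding="utf-8")
            created.append(stub)
    return created
```

A session with the assistant then becomes: pick a suggested vault, run the scaffold, and dictate into the stubs one at a time.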
The beer test that proved the concept
The repo includes a practical demo using Open WebUI that I found surprisingly convincing. I created a small collection of context files about my food and drink preferences: nothing elaborate, just a few paragraphs about what I like and don't like in food, beer, wine, and coffee. I uploaded them as a knowledge store, connected them to a personalised assistant, and tested it by having the assistant recommend a beer from a random bar menu I found online. It recommended Fiddlehead IPA, which is almost exactly what I would have ordered. That trivial example demonstrates something powerful: with a few minutes of voice-dictated context about your preferences, an AI assistant can make genuinely personalised recommendations that reflect your actual taste rather than generic popularity. Scale that up to career advice, health decisions, purchase research, or any domain where personal context matters, and the value proposition becomes enormous. The repository is open source on GitHub with starter ideas and example data stores.