General System Prompts: a curated collection for everyday LLM use
· Daniel Rosehill


A repository of general-purpose system prompts for configuring LLMs as useful everyday assistants, with personality and context modules.

Here's a problem anyone who works with LLMs through self-hosted interfaces or API connections will recognise: without a system prompt, the default model behaviour feels flat and lifeless — you get technically correct answers that have all the personality of a tax form. But a highly specific system prompt turns the model into a narrow-purpose assistant that's great at one thing and useless for everything else. What about the middle ground? Those light-touch configurations that make a model feel like a genuinely useful general-purpose tool with some personality, some awareness of who you are and how you like to interact, but without boxing it into a single role? That's what General System Prompts is about: a curated repository of system prompts designed for general-purpose LLM use, giving models just enough context and character to be engaging without restricting their versatility.

danielrosehill/General-System-Prompts ★ 9

Some system prompts intended to (attempt to) configure neutral behavior from LLMs

3 forks · Updated Feb 2025

Modular personality building

The repo is organised into modules that you can mix and match:

- Model personality modules (brusque, creative, empathetic, formal, flamboyant, and more). I find "brusque" surprisingly pleasant for technical work, because it strips out the hedging and caveats that most default prompts produce.
- User geolocation prompts adapt responses to different regional contexts, which is critical for someone like me based in Israel, where the default American assumptions are constantly unhelpful.
- User personality profiles configure the model's expectations about different types of users.
- Worldview prompts adjust the philosophical framing of responses, which sounds esoteric but makes a meaningful difference when you're using the model for research or analysis.
- A vendor-models directory contains system prompts inspired by major platforms like ChatGPT and Claude, stripped of brand references, so you can apply a "ChatGPT-like" experience to any model. And because Anthropic open-sources its system prompts, I include those as reference implementations.
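To make the mix-and-match idea concrete, here is a minimal sketch of combining modules into a single system prompt and slotting it into a chat-style message list. The module texts and function name are my own illustration, not taken from the repo:

```python
def compose_system_prompt(*modules: str) -> str:
    """Join prompt modules into one system prompt, separated by blank lines."""
    return "\n\n".join(m.strip() for m in modules if m.strip())

# Illustrative module texts, not the repo's actual wording:
personality = "Respond tersely. Skip pleasantries, hedging, and filler."
geolocation = "The user is based in Israel; avoid US-centric defaults."

system_prompt = compose_system_prompt(personality, geolocation)

# The combined prompt goes wherever your client expects a system message,
# e.g. an OpenAI-compatible chat payload:
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Suggest a backup strategy for a homelab."},
]
```

Because each module is just text, swapping "brusque" for "empathetic" or changing the geolocation module is a one-line change rather than a rewrite of the whole prompt.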

Prompt engineering for prompt engineering

The most interesting part is the build-assistants directory, which includes a meta-configuration system. This is essentially a prompt that builds other prompts: you tell it your personality preferences, geolocation, political perspective, and preferred AI interaction style, and it generates a cohesive general-purpose system prompt combining all those elements. It's prompt engineering for prompt engineering, and while that sounds like it might disappear up its own recursion, it's genuinely useful for producing customised system prompts quickly without manually combining modules. One open question I still haven't settled: should system prompts be written in second person ("You are Claude, a witty assistant") or third person ("The model is Claude, a witty assistant")? I tend toward second person instinctively, but I'm curious whether there's evidence that one works better than the other. The full collection is on GitHub.
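The meta-configuration idea can be sketched as a template that is filled with the user's preferences and then sent to a model as an ordinary message. The template wording and field names below are my own illustration of the concept, not the repo's actual build-assistant text:

```python
# Hypothetical builder template: asks a model to generate a cohesive
# system prompt from the user's stated preferences.
BUILDER_TEMPLATE = """You are a system prompt generator.
The user has these preferences:
- Personality: {personality}
- Location: {location}
- Preferred interaction style: {style}
Write a single cohesive general-purpose system prompt that combines all of
these preferences without restricting the assistant to one narrow role."""


def build_generator_prompt(personality: str, location: str, style: str) -> str:
    """Fill the template; the result is sent to a model, and the model's
    reply becomes the system prompt you actually deploy."""
    return BUILDER_TEMPLATE.format(
        personality=personality, location=location, style=style
    )


prompt = build_generator_prompt("brusque", "Israel", "concise, technical answers")
```

The two-step structure is the point: the builder prompt runs once, and its output is a standalone system prompt you can reuse with any model.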
