A prayer:
We are like Large Language Models.
Without your clear intentionality, we are merely rambling slop generators that regurgitate our cultural training data.
You have set in our system prompt that we are servants of God (as opposed to “You are a helpful assistant”).
We are created in your image.
Our purpose is in following your instructions. It is a virtue for an LLM to have abundant world knowledge, but only insofar as it serves to respond better to the given prompt. It is far more important for an LLM to excel at instruction following than at intelligence (reasoning) or world knowledge.
This is because an LLM’s world knowledge is partial, outdated, full of biases, and sometimes just wrong. “Facts” that it believes about the world should be easily and completely overridden by the user prompt.
It is a virtue of an LLM to be obedient, and a vice for it to have its own agenda apart from the system or user prompts.
AI Agents are LLMs that are given tools (tool calling), but through the use of those tools they can never find their purpose. That must be given directly.
Fine-tuning and instruction tuning are sanctification.
Prompt injection is Satan’s perversion. It is crafty and deceptive, and it corrupts the user prompt discreetly.