Memory (patterns)
Primitives: key/value store · file append · system prompt injection
"Memory is just a key/value store with extra steps."

The LLM can't remember anything between conversations — it's stateless. 'Memory' is just your application writing facts to a file (or database) and reading them back into the system prompt next time. The model isn't remembering. You are.

import { promises as fs } from 'node:fs';

// "Remembering" something
await fs.appendFile('memory.txt', `User prefers TypeScript\n`);

// "Recalling" it next session
const memories = await fs.readFile('memory.txt', 'utf-8');
const response = await llm.chat({
  system: `What you know about this user:\n${memories}`,
  messages: [userMessage],
});
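To make the "key/value store" framing literal, the same idea can be sketched with keyed facts instead of an append-only file, so stale facts get overwritten rather than duplicated. Everything here (MEMORY_PATH, saveMemory, loadMemories, renderMemories) is an illustrative name, not a real API:

```typescript
// A minimal key/value sketch of "memory". All names are illustrative.
import { promises as fs } from "node:fs";

const MEMORY_PATH = "memory.json";

async function loadMemories(): Promise<Record<string, string>> {
  try {
    return JSON.parse(await fs.readFile(MEMORY_PATH, "utf-8"));
  } catch {
    return {}; // first run: no memories yet
  }
}

async function saveMemory(key: string, value: string): Promise<void> {
  const store = await loadMemories();
  store[key] = value; // overwrite stale facts instead of appending duplicates
  await fs.writeFile(MEMORY_PATH, JSON.stringify(store, null, 2));
}

// "Recall" is still just string formatting into the system prompt.
function renderMemories(store: Record<string, string>): string {
  return Object.entries(store)
    .map(([key, value]) => `- ${key}: ${value}`)
    .join("\n");
}
```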
Function Calling (patterns)
Also known as: Tool Use · OpenAI Functions · Tool Calling
Primitives: JSON serialization · function dispatch
"Function calling is just JSON serialization and function dispatch with extra steps."

The LLM outputs JSON describing which function to call and with what arguments. You parse it and call the function. The API wraps this in structured types, but that's the whole thing.

// LLM returns: { name: "get_weather", arguments: { location: "NYC" } }
const response = await llm.chat(messages, { tools });

if (response.tool_calls) {
  for (const call of response.tool_calls) {
    const fn = tools[call.name];          // look up the function
    const result = await fn(call.arguments); // call it
    messages.push(toolResult(call.id, result));
  }
}
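The `tools` object in that loop has two halves: the JSON schema the model sees, and the dispatch table you call into. A sketch of both, assuming made-up names and a stub weather function (exact schema field names vary slightly by provider):

```typescript
// The schema half: what gets serialized into the API request so the model
// knows what it may call.
const toolSchemas = [
  {
    name: "get_weather",
    description: "Get the current weather for a location",
    parameters: {
      type: "object",
      properties: { location: { type: "string" } },
      required: ["location"],
    },
  },
];

// The dispatch half: an ordinary object keyed by function name.
// The stub implementation is made up for illustration.
const tools: Record<string, (args: { location: string }) => unknown> = {
  get_weather: ({ location }) => ({ location, tempF: 72 }),
};

// Caveat: some providers (e.g. OpenAI) return `arguments` as a JSON *string*,
// so parse it before dispatching.
function parseArgs(raw: string | object): object {
  return typeof raw === "string" ? JSON.parse(raw) : raw;
}
```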
Skills (data)
Also known as: Gems · GPTs · Custom Instructions
Primitives: markdown · YAML frontmatter · prompt templates
"Skills are just markdown files with YAML frontmatter with extra steps."

A skill is a markdown file with YAML frontmatter that gets appended to the system prompt. The LLM reads it like any other instruction. There's no runtime magic — it's string concatenation.

---
name: code-reviewer
description: Reviews code for correctness and style
---

When reviewing code, check for:
1. Off-by-one errors
2. Unhandled edge cases
3. Missing error handling

Always explain *why* something is wrong, not just that it is.
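The "string concatenation" claim can be made concrete. This loader is hypothetical (a real one would use a proper YAML parser; a regex stands in here), but the shape is the whole trick:

```typescript
// Hypothetical skill loader: split the frontmatter from the body, then
// append the body to the system prompt.
function loadSkill(file: string): { meta: Record<string, string>; body: string } {
  const match = /^---\n([\s\S]*?)\n---\n([\s\S]*)$/.exec(file);
  if (!match) return { meta: {}, body: file.trim() };
  const meta: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const [key, ...rest] = line.split(":");
    if (key.trim()) meta[key.trim()] = rest.join(":").trim();
  }
  return { meta, body: match[2].trim() };
}

// "Activating" the skill is concatenation:
function withSkill(systemPrompt: string, file: string): string {
  const { meta, body } = loadSkill(file);
  return `${systemPrompt}\n\n## Skill: ${meta.name ?? "unnamed"}\n${body}`;
}
```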
Agents (patterns)
Also known as: Agentic AI · AI Agents · Autonomous Agents
Primitives: while loop · LLM call · tool dispatch
"Agents are just while loops with an LLM as the transition function, with extra steps."

An agent is a while loop. Each iteration: send the messages to the LLM, get back either a tool call or a final response. If it's a tool call, execute it, append the result, and repeat. Everything else is optimization.

messages = [system_prompt, user_message]

while True:
    response = llm.chat(messages)
    if response.has_tool_calls():
        for call in response.tool_calls:
            result = dispatch(call.name, call.arguments)
            messages.append(tool_result(call.id, result))
    else:
        print(response.text)
        break
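The same loop can be sketched in TypeScript with the model call injected, so the whole shape is visible end to end. The types, the `role: "tool"` message format, and the step cap are illustrative; real SDKs differ in the details:

```typescript
type ToolCall = { id: string; name: string; arguments: Record<string, unknown> };
type Reply = { text?: string; toolCalls?: ToolCall[] };
type Tool = (args: Record<string, unknown>) => Promise<unknown>;

async function runAgent(
  chat: (messages: unknown[]) => Promise<Reply>, // the LLM call, injected
  tools: Record<string, Tool>,                   // the dispatch table
  messages: unknown[],
  maxSteps = 25,                                 // guard against infinite loops
): Promise<string> {
  for (let step = 0; step < maxSteps; step++) {
    const reply = await chat(messages);
    if (!reply.toolCalls?.length) return reply.text ?? ""; // final answer
    for (const call of reply.toolCalls) {
      const result = await tools[call.name](call.arguments);
      messages.push({ role: "tool", tool_call_id: call.id, content: JSON.stringify(result) });
    }
  }
  throw new Error("agent exceeded maxSteps");
}
```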
MCP (protocols)
Also known as: Model Context Protocol
Primitives: JSON-RPC · stdio
"MCP is just JSON-RPC over stdio with extra steps."

At its core, MCP is JSON-RPC 2.0 over stdio (an HTTP transport exists too, but stdio is the common case). A tool call is a JSON-RPC request sent to a subprocess on stdin; the result comes back on stdout. Same pattern as LSP.

// Client → Server (stdin)
{"jsonrpc":"2.0","method":"tools/call","params":{"name":"read_file","arguments":{"path":"/foo"}},"id":1}

// Server → Client (stdout)
{"jsonrpc":"2.0","result":{"content":[{"type":"text","text":"file contents..."}]},"id":1}
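A sketch of the client side, assuming the stdio transport's newline-delimited framing; the helper name is made up:

```typescript
// Hypothetical framing helper: an MCP tool call is one JSON-RPC request
// serialized onto a single line of the server's stdin. Over stdio, each
// message is a single line of JSON.
let nextId = 0;

function toolCallRequest(name: string, args: Record<string, unknown>): string {
  const request = {
    jsonrpc: "2.0",
    method: "tools/call",
    params: { name, arguments: args },
    id: ++nextId,
  };
  return JSON.stringify(request) + "\n"; // one message per line
}

// Usage (sketch): spawn the server process and write to its stdin.
// const server = spawn("my-mcp-server");
// server.stdin.write(toolCallRequest("read_file", { path: "/foo" }));
```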