What are the principles we can use to build LLM-powered software that is actually good enough to put in the hands of production customers?
[ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning
A discovery and compression tool for your Python codebase. Creates a knowledge graph for an LLM context window, efficiently outlining your project (see the outlining sketch after this list) | Code structure visualization | LLM Context Window Efficiency | Static analysis for AI | Large Language Model tooling #LLM #AI #Python #CodeAnalysis #ContextWindow #DeveloperTools
A local-first memory layer for AI (Cursor, Zed, Claude). Persistent architectural context via semantic search.
Documentation snippets for LLM context injection
Transform and optimize your markdown documentation for Large Language Models (LLMs) and RAG systems. Generate llms.txt automatically.
Building Agents with LLM structured generation (BAML), MCP Tools, and 12-Factor Agents principles
A lightweight tool to optimize your C# project for LLM context windows by using a knowledge graph | Code structure visualization | Static analysis for AI | Large Language Model tooling | .NET ecosystem support #LLM #AI #CSharp #DotNet #CodeAnalysis #ContextWindow #DeveloperTools
[ICLR 2025] Official code repository for "TULIP: Token-length Upgraded CLIP"
A discovery and compression tool for your Java codebase. Creates a knowledge graph for an LLM context window, efficiently outlining your project #LLM #AI #Java #CodeAnalysis #ContextWindow #DeveloperTools #StaticAnalysis #CodeVisualization
Smart code context extractor for AI assistants
A reliability layer for LLM context. Deterministic deduplication that removes redundancy before it reaches your model (see the dedup sketch after this list).
High-performance repository context generator for LLMs - Transform codebases into optimized formats for Claude, GPT-4/5, Gemini, and other LLMs
🚀 Intelligent Claude Code status line with multi-provider AI support, real-time token counting, and universal model compatibility. Supports Claude (Sonnet 4: 1M, 3.5: 200K), OpenAI (GPT-4.1: 1M, 4o: 128K), Gemini (1.5 Pro: 2M, 2.x: 1M), and xAI Grok (3: 1M, 4: 256K) with verified 2025 context limits.
Token Oriented Object Notation (TOON) for Linked Data
Information on LLM models, context window token limits, output token limits, pricing, and more.
Turns your local codebase into a secure, token-optimized context prompt for LLMs like ChatGPT and Claude.
Three-layer memory architecture for long-term AI learning with Claude
Context-optimized MCP server for web scraping. Reduces LLM token usage by 70-90% through server-side CSS filtering and HTML-to-markdown conversion (see the filtering sketch after this list).
A visualization website for comparing LLMs' long-context comprehension based on the FictionLiveBench benchmark.
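
The Python, Java, and C# outliners above share one core idea: compress a codebase into a structural outline so the model spends tokens on signatures rather than bodies. A minimal sketch of that idea for Python, using only the standard-library ast module; the actual tools build a richer knowledge graph, and outline_file is a hypothetical helper name, not any of these projects' APIs:

```python
import ast

def outline_file(path: str) -> list[str]:
    """Compress one Python file to a list of class/function signatures.

    This is the gist of 'outlining a project for an LLM context window':
    the model sees structure, not bodies, so far fewer tokens are spent.
    """
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read())
    lines: list[str] = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"def {node.name}({args}): ...")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"class {node.name}: ...")
    return lines
```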
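
The "deterministic deduplication" layer above can be approximated with order-stable exact-match hashing. A minimal sketch, assuming whitespace-normalized SHA-256 keys; the tool's actual matching rules are not documented here and may be fuzzier:

```python
import hashlib

def dedup_chunks(chunks: list[str]) -> list[str]:
    """Drop duplicate context chunks deterministically, keeping first occurrence."""
    seen: set[str] = set()
    unique: list[str] = []
    for chunk in chunks:
        # Normalize whitespace so trivially reformatted copies collapse to one key.
        key = hashlib.sha256(" ".join(chunk.split()).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(chunk)
    return unique
```

Because the same input always yields the same output, the LLM's context stays reproducible across runs, which is the "reliability" part of the pitch.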
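
The web-scraping MCP server's 70-90% token reduction comes from dropping page chrome before the text ever reaches the model. A minimal sketch of server-side CSS filtering with BeautifulSoup; the NOISE_SELECTORS list and the keep_selector default are assumptions, and the real server emits markdown rather than plain text:

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Hypothetical noise list; tune per site. Everything matched is removed.
NOISE_SELECTORS = ["script", "style", "nav", "header", "footer", "aside"]

def filter_page(html: str, keep_selector: str = "main") -> str:
    """Strip boilerplate via CSS selectors, then flatten to text for the LLM."""
    soup = BeautifulSoup(html, "html.parser")
    for selector in NOISE_SELECTORS:
        for node in soup.select(selector):
            node.decompose()
    target = soup.select_one(keep_selector) or soup  # fall back to whole page
    text = target.get_text("\n")
    # Collapse blank lines; most of the token savings come from what was removed above.
    return "\n".join(line.strip() for line in text.splitlines() if line.strip())
```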