Start Your (Agent) Engines
February 24, 2026

Jack O'Brien
Co-Founder & CEO
Ladies and Gentlemen! Start! Your! Engines!
We're at the starting line of the most competitive race in history: the race to build systems as proactive and capable as ourselves. Just six weeks into the new year, AI agents have already seen more widespread usage than in all of 2025. The mass adoption of Claude Code and the viral rise of OpenClaw (100,000+ GitHub stars in a week and a nine-figure acquisition by OpenAI) are just the beginning.
AI is moving beyond chatbots, and in this new paradigm we think it’s a bit outdated to define an agent by its underlying model. Yes, the model partially determines the intelligence of the system, but there’s a lot more to agents than the simple tokens in and tokens out that LLM APIs allow for. For your agent to thrive in the real world and win this race, you need to rethink the relationship between models and the systems that run them.
Agent Engines
At Subconscious, we build Agent Engines: systems that combine a language model with a tightly integrated context management and tool calling system to power agents that are more steerable, more accurate on long-context reasoning, and much more efficient. A tall order, but one we’re confident our core innovations in context management and efficient inference are well suited to tackle. A few definitions for clarity:
- Model: A language model. Takes in a message list, returns a response.
- Engine: An integrated model, context management, and tool calling system. Takes in instructions and tools, and kicks off a long-running process.
- Harness: Application software built on top of an engine. Manages long-term state, triggers, and scheduling. A harness is not intelligence itself; it orchestrates intelligence over time. OpenClaw and Claude Code are harnesses.
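The three layers above can be sketched as interfaces. This is an illustrative sketch only; none of these class or method names come from the Subconscious SDK.

```python
from typing import Any, Callable, Protocol, runtime_checkable

# Hypothetical interfaces for the three layers described above.
# Names and signatures are illustrative, not the actual Subconscious API.

@runtime_checkable
class Model(Protocol):
    """Takes in a message list, returns a response."""
    def complete(self, messages: list[dict[str, str]]) -> str: ...

@runtime_checkable
class Engine(Protocol):
    """Takes in instructions and tools, kicks off a long-running process."""
    def run(self, instructions: str, tools: list[Callable[..., Any]]) -> str: ...

class Harness:
    """Application software on top of an engine: long-term state,
    triggers, and scheduling. Not intelligence itself; it orchestrates
    intelligence over time."""

    def __init__(self, engine: Engine) -> None:
        self.engine = engine
        self.state: dict[str, Any] = {}  # persists across engine runs

    def on_trigger(self, event: str, instructions: str,
                   tools: list[Callable[..., Any]]) -> str:
        result = self.engine.run(instructions, tools)
        self.state[event] = result  # remember what happened for next time
        return result
```

The point of the split: a Model is stateless per call, an Engine owns one long-running task, and a Harness is the only layer that survives between tasks.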
An Agent Engine is the driving force of an agent. Where most agent building approaches require developers to piece together a framework like LangGraph with a model like Gemini and build context management, tool calling, retries, and orchestration themselves, Subconscious collapses that complexity. You choose an engine. The engine handles the rest.
Today we’re excited to release the next generation of our agent engines: TIM, TIM-Edge, TIMINI, TIM-GPT, and TIM-GPT-Heavy. Teams are already using them to build search agents, browser automation agents, coding agents, and even agents that automate physical systems.
How to Use an Engine
We designed our engines to be task solvers, not chatbots, so the interface is a bit different from what you might be used to. Rather than appending to a message list, our API takes instructions and a list of tools, then kicks off a long-running process.

The engine decomposes your instructions into subtasks on the fly, takes action, reevaluates its plan throughout the process, and returns a structured summary of every action taken. Your job is to make sure the instructions convey what you want, and that the tools work as expected with proper permissions.
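The contract described above can be simulated in a few lines: instructions and tools go in, and a structured summary of every action taken comes out. This is a toy sketch, not the Subconscious API; in particular, a real engine derives and re-evaluates its plan on the fly, while here the plan is supplied so the example stays self-contained.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

# Toy simulation of the engine contract: instructions + tools in,
# structured summary of actions out. All names here are hypothetical.

@dataclass
class Action:
    tool: str
    args: dict[str, Any]
    result: Any

@dataclass
class RunSummary:
    instructions: str
    actions: list[Action] = field(default_factory=list)

def run_engine(instructions: str,
               tools: dict[str, Callable[..., Any]],
               plan: list[tuple[str, dict[str, Any]]]) -> RunSummary:
    """Execute a fixed plan of (tool_name, args) steps.

    A real engine would decompose `instructions` into this plan itself
    and reevaluate it as results come in.
    """
    summary = RunSummary(instructions=instructions)
    for tool_name, args in plan:
        result = tools[tool_name](**args)  # permission checks would go here
        summary.actions.append(Action(tool=tool_name, args=args, result=result))
    return summary

summary = run_engine(
    "Add up this quarter's invoices",
    tools={"add": lambda a, b: a + b},
    plan=[("add", {"a": 2, "b": 3})],
)
# summary.actions records every tool call, its arguments, and its result
```

Note that your two responsibilities map directly onto the signature: the instructions string conveys what you want, and the tools dict is where correct behavior and permissions live.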

To learn more about how to use our platform, check out our docs and SDKs.
Two Types of Engines
We've built our engines in two configurations: unified and compound. Both deliver an identical developer experience; they differ in their cost, latency, and accuracy tradeoffs.
Unified Engines
A unified engine is a model and runtime co-designed from the ground up and fine-tuned together for peak agentic performance. This is the core of our proprietary technology.
The model layer is built on post-trained Qwen models, and our inference runtime is a custom fork of SGLang optimized for agent workloads. Because a single agent run maps to a single inference call in a unified engine, we can post-train with exceptional efficiency (context pruning baked in!) and run large batches of agents cost-effectively. This architecture also makes it straightforward to adapt the system to non-standard tooling and deploy on-prem.
TIM (an acronym of "Thread Inference Model" and a nod to TIM Beaver) is our flagship unified engine, built on Qwen3 80B and our upgraded proprietary runtime, TIMRUN. TIM performs well on a wide variety of tasks. TIM-Edge is our extremely efficient unified engine, built on Qwen3 8B on the same TIMRUN runtime.
Compound Engines
A compound engine shares the same developer experience and context management system as a unified engine, but swaps in a frontier LLM instead of our proprietary model. You get the full reasoning power of the best available models, wrapped in the same simple interface.
The tradeoff is cost and latency. Compound engines are more expensive and slower per run, but they give you immediate access to frontier intelligence for tasks where accuracy across diverse workloads matters most.
Our three compound engine offerings are available now: TIMINI, powered by Gemini, and TIM-GPT and TIM-GPT-Heavy, powered by OpenAI models.
Get Started Today
We’re releasing these systems today to get them in the hands of developers as fast as possible. Head to our platform to experiment in the playground, or start from an example repo to power an agent with an engine using our SDK.
The race to build agents is on, and we're excited to see what you build.