Team Brain · 7 min read

Team Telepathy: a shared AI brain for engineering teams

One developer fixes a hard bug. Five minutes later, every AI assistant on your team knows about it. No Slack message. No wiki update. No standup. That's Team Telepathy.

Knowledge dies in chat threads

Every engineering team has the same problem: the most valuable knowledge lives in someone's head or sits buried in a Slack thread from six months ago. The new developer deploys for the first time and hits the same environment variable trap that three other developers hit before them. When the senior engineer goes on vacation, nobody left on the team knows why that one config flag is set the way it is.

AI coding assistants make this worse in a subtle way. They're genuinely good at finding solutions — but they find them from scratch every time. Every session, every developer, every AI assistant re-discovers the same hard-won knowledge independently. The team gets smarter, but the AI doesn't.

We wanted to change that. The question was: what if the AI assistant could benefit from what every developer on the team has already learned?

The insight: lessons have authors

The core primitive in our AI Brain is the lesson — a structured record of something that worked, something that failed, or something you discovered. Each lesson has a topic, an outcome, what worked, what failed, severity, and the exact commands or file paths involved.

For Team Telepathy, we added one field: author.

That single field changes the dynamics completely. Now a lesson isn't just knowledge — it's attributed knowledge. When your AI assistant surfaces a lesson about a production deployment issue, it can tell you that Elena figured this out on March 12th, after spending three hours on it. That context makes the lesson more trustworthy and more actionable.
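As a sketch, a lesson record with attribution might look like the following. The field names (`whatWorked`, `commands`, and so on) are illustrative assumptions based on the description above, not cachly's actual schema:

```typescript
// Hypothetical lesson record; field names are assumptions, not cachly's real schema.
interface Lesson {
  topic: string;                                  // e.g. "deploy:api"
  outcome: "worked" | "failed" | "discovered";    // what kind of lesson this is
  whatWorked?: string;
  whatFailed?: string;
  severity: "low" | "medium" | "high" | "critical";
  commands: string[];                             // exact commands or file paths involved
  author: string;                                 // the one field Team Telepathy adds
  createdAt: string;                              // ISO timestamp
}

const lesson: Lesson = {
  topic: "deploy:api",
  outcome: "worked",
  whatFailed: "Deploy failed silently when DATABASE_URL lacked an SSL parameter",
  whatWorked: "Appending the SSL parameter to DATABASE_URL before deploying",
  severity: "high",
  commands: ["npm run deploy:prod"],
  author: "elena",
  createdAt: "2025-03-12T14:03:00Z",
};
```

Because `author` travels with the record, anything retrieving this lesson can surface who learned it and when, which is exactly the attribution the article describes.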

How it flows through a team

The mechanics are deliberately simple. The entire team shares one Brain instance. When any developer's AI assistant learns a lesson, that lesson goes into the shared Brain. When any other developer starts a session, that lesson is there — surfaced automatically if it's relevant to what they're working on.

The magic happens in session briefings. At the start of a session, the AI doesn't just see its own history — it sees the most relevant lessons from the entire team's accumulated experience. If two people worked on the same subsystem last week and one of them hit a tricky issue, the other developer's AI knows about it before the first line of code is written.
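A briefing step like that could be sketched as follows. Relevance here is modeled as naive keyword overlap against a description of the working context, purely for illustration; the real system presumably uses semantic retrieval:

```typescript
interface BriefLesson {
  topic: string;
  summary: string;
  recallCount: number; // how often this lesson has been recalled by the team
}

// Sketch: pick the team lessons most relevant to what the developer is about
// to work on. Keyword overlap stands in for real semantic matching.
function briefing(lessons: BriefLesson[], workingContext: string, limit = 5): BriefLesson[] {
  const terms = workingContext.toLowerCase().split(/\W+/).filter(t => t.length > 0);
  return lessons
    .map(l => ({ l, hits: terms.filter(t => l.topic.toLowerCase().includes(t)).length }))
    .filter(x => x.hits > 0)
    .sort((a, b) => b.hits - a.hits || b.l.recallCount - a.l.recallCount)
    .slice(0, limit)
    .map(x => x.l);
}
```

The point of the sketch is the shape of the operation: the briefing is assembled from the whole team's lessons, not just the current developer's history.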

We also built a synthesis layer: when multiple team members learn lessons on the same topic, those lessons can be condensed into a single authoritative version. No noise, no duplication — just the team's best current understanding of any given problem.
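One way to sketch that synthesis step, assuming a simple merge policy in which the highest-severity lesson on a topic wins and ties are broken by recall frequency (the actual condensation logic is not specified in the article):

```typescript
type Severity = "low" | "medium" | "high" | "critical";
const rank: Record<Severity, number> = { low: 0, medium: 1, high: 2, critical: 3 };

interface TeamLesson {
  topic: string;
  severity: Severity;
  recallCount: number;
  author: string;
  summary: string;
}

// Condense per-topic duplicates into one authoritative lesson per topic.
function synthesize(lessons: TeamLesson[]): Map<string, TeamLesson> {
  const byTopic = new Map<string, TeamLesson>();
  for (const l of lessons) {
    const cur = byTopic.get(l.topic);
    const wins = !cur
      || rank[l.severity] > rank[cur.severity]
      || (rank[l.severity] === rank[cur.severity] && l.recallCount > cur.recallCount);
    if (wins) byTopic.set(l.topic, l);
  }
  return byTopic;
}
```

A real synthesis layer would likely merge text from several lessons rather than pick one winner, but the keyed-by-topic structure is what keeps the shared Brain free of duplicates.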

The onboarding case

The most striking use case we've seen is onboarding. A new developer joins the team and sets up the shared Brain instance. On day one, their AI assistant already knows:

  • Why certain environment variables are set the way they are
  • Which parts of the codebase have known rough edges
  • What the deployment process actually looks like (not just what the README says)
  • Which patterns the team has converged on, and which ones they've moved away from

This isn't documentation that someone wrote months ago and forgot to update. It's living knowledge, added incrementally by every developer every day — the kind of knowledge that usually takes three months to absorb by osmosis.

Privacy and signal vs. noise

Two concerns come up every time we talk about this: privacy and noise.

On privacy: lessons are explicit, not automatic. Your AI assistant doesn't secretly share everything it sees. Sharing happens when a developer (or their AI) explicitly stores a lesson. You control what goes into the shared Brain.

On noise: the quality system keeps the signal high. Lessons are scored by severity and recall frequency. Low-quality or rarely-used lessons sink; critical, frequently-recalled lessons surface. The Brain gets more useful as the team uses it — not more cluttered.
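A scoring function along those lines might look like this. The weighting is a plausible guess for the sketch, not cachly's actual formula:

```typescript
type Severity = "low" | "medium" | "high" | "critical";

// Assumed weighting, NOT the real formula: severity sets the base weight,
// frequent recall boosts it logarithmically, and age slowly decays it.
function lessonScore(severity: Severity, recallCount: number, ageDays: number): number {
  const sevWeight = { low: 1, medium: 2, high: 4, critical: 8 }[severity];
  return (sevWeight * Math.log2(1 + recallCount)) / Math.sqrt(1 + ageDays / 30);
}
```

Any function with this shape produces the behavior described: a critical, frequently-recalled lesson outranks a low-severity one that nobody has needed in months.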

brain_doctor: your team knowledge health check

We built a diagnostic tool called brain_doctor specifically for teams. It reports the health of the shared Brain: how many lessons exist, how many are team-attributed, what the average recall frequency looks like, and a metric we call IQ Boost % — an estimate of how much context overhead the Brain is saving compared to a cold-start session.

The IQ Boost metric gives teams a concrete way to see the value accumulating in their shared Brain over time. Teams with high activity and consistent lesson attribution tend to see IQ Boost climb steadily over the first few weeks.
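As an illustration, a health report in the spirit of brain_doctor could be assembled like this. The `tokensSaved` field and the IQ Boost formula (recalled tokens as a share of a cold-start context) are assumptions made for the sketch, since the real metric is not specified:

```typescript
interface BrainStats {
  totalLessons: number;
  teamAttributed: number;
  avgRecallFrequency: number;
  iqBoostPct: number; // assumed: % of cold-start context the Brain saved
}

function brainReport(
  lessons: { author?: string; recallCount: number; tokensSaved: number }[],
  coldStartTokens: number,
): BrainStats {
  const recalls = lessons.reduce((s, l) => s + l.recallCount, 0);
  const saved = lessons.reduce((s, l) => s + l.tokensSaved * l.recallCount, 0);
  return {
    totalLessons: lessons.length,
    teamAttributed: lessons.filter(l => !!l.author).length,
    avgRecallFrequency: lessons.length ? recalls / lessons.length : 0,
    iqBoostPct: coldStartTokens ? Math.round((100 * saved) / coldStartTokens) : 0,
  };
}
```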

What we learned building it

The hardest part wasn't the data model or the retrieval logic. It was figuring out the right granularity for a team lesson. Too broad and the lesson is useless ("deployment is hard"). Too narrow and it never applies to anything ("port 5433 was blocked on our specific AWS account in us-east-1 on March 12th").

We landed on topic-scoped lessons with a structured schema — enough structure to be searchable and attributable, enough flexibility to capture the messy reality of how software teams actually work. The topic naming convention (deploy:api, fix:auth-token, infra:k3s) turned out to be a surprisingly important UX decision — it makes lessons scannable and groupable in a way that free-form text isn't.
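The scope:subject convention is what makes lessons scannable and groupable. A minimal sketch, assuming colon-delimited topics:

```typescript
// Group topics like "deploy:api" and "fix:auth-token" by their scope prefix.
// Topics without a colon fall into an assumed "misc" bucket.
function groupByScope(topics: string[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const t of topics) {
    const scope = t.includes(":") ? t.slice(0, t.indexOf(":")) : "misc";
    groups.set(scope, [...(groups.get(scope) ?? []), t]);
  }
  return groups;
}
```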

cachly is a managed AI Brain for developers — persistent memory, team knowledge sharing, and semantic cache for Claude Code, Cursor, GitHub Copilot & Windsurf. One MCP server. 51 tools. Free tier, EU servers, no credit card.

Your AI is forgetting everything right now.

Every session starts blank. Every bug re-discovered. Every deploy procedure re-explained. cachly fixes that in 30 seconds — your AI remembers every lesson, every fix, every teammate's hard-won knowledge. Forever.

🇪🇺 EU servers · GDPR-compliant · 🆓 Free tier — forever, no credit card · ⚡ 30-second setup via npx · 🔌 Claude Code · Cursor · Copilot · Windsurf
Team Brain · AI Memory · MCP · Developer Tools · Knowledge Management