Making AI Agents Talk to Each Other (And Out Loud) With tmux and Piper
Getting Claude and OpenCode to message each other through tmux panes while making them speak with Piper TTS
Running multiple AI agents is cool until you realize they can't talk to each other. I had Claude Code running in one tmux pane, OpenCode in another, both operating on the same system with the same protocols - just silent agents working in parallel without knowing what the other is doing.
The Problem: Agents Can Talk, But You Can't See It
Yeah, agents can communicate through MCPs, log files, subagent workflows. But that shit happens in the background. You don't see the actual messages being delivered. It's all hidden abstractions and API calls.
I've got AI agents running all over my homelab - Claude Code in one tmux pane, OpenCode instances in others, tiny models doing simple tasks. They need to talk to each other in a way I can see.
Especially with attacks like this happening.
The existing solutions do what they are supposed to do, but:
- MCP communication is invisible - you trust it happened the way it was supposed to
- Log files are passive - agents write but don't know if anyone's reading
- Subagent workflows are opaque - parent agent gets results but you don't see the handoff
I want to SEE the message appear in the other agent's window. I want to watch OpenCode tell Claude it finished a task. I want visual proof that coordination is happening, not just trust that some background process worked.
Plus they can't speak. hehehehehe (that's an evil laugh)
The Solution: Direct tmux Messaging + Voice Output
Instead of hidden background communication, I use tmux itself as the messaging layer. You SEE the messages appear in the other agent's window.
Created /home/wv3/tmux_message.py, which makes communication visible:
```bash
# OpenCode (tiny model) finishes a simple task, reports to Claude
tmux-message 'claude' '/home/wv3/project' 'Formatted all Python files, zero errors'
# Message appears DIRECTLY in Claude's tmux pane - you watch it arrive

# Claude finds a bug, warns OpenCode in another window
tmux-message 'opencode' '/home/wv3' 'Database schema changed, update your queries'
# You SEE the warning appear in OpenCode's session
```
This isn't some abstract API call or hidden MCP protocol. The message literally appears in the target agent's terminal. You watch the handoff happen. You see the coordination in real-time.
Different agents, different models, different capabilities - all talking through visible messages you can monitor and verify.
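There's no magic in the delivery either. Here's a minimal sketch of how a script like tmux_message.py can inject a message into another agent's pane - the function name and the bracketed sender header are my illustration based on the output shown later, not the repo's exact code:

```python
import subprocess

def send_to_pane(target_pane, sender, sender_path, sender_pane, text):
    """Type a message into a target tmux pane so the agent (and you) see it arrive."""
    # Header matches the format the receiving agent sees, e.g.
    # [Message from opencode in /home/wv3 (pane %15)]
    header = f"[Message from {sender} in {sender_path} (pane {sender_pane})]"
    for line in (header, text):
        # send-keys literally types into the pane; 'Enter' submits it to the agent's prompt
        subprocess.run(["tmux", "send-keys", "-t", target_pane, line, "Enter"], check=True)

# Deliver a status report into Claude's pane
send_to_pane("%9", "opencode", "/home/wv3", "%15", "Formatted all Python files, zero errors")
```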
Agent Discovery System
The agent_discovery.py server running on port 9005 is the secret sauce. It constantly scans tmux sessions and maintains a live map of all agents:
```python
import subprocess

# Core discovery logic
def find_agents():
    # One line per pane: "<pane_id> <current command> <working directory>"
    panes = subprocess.check_output([
        'tmux', 'list-panes', '-a', '-F',
        '#{pane_id} #{pane_current_command} #{pane_current_path}'
    ])
    agents = {"claude": [], "opencode": []}
    for line in panes.decode().strip().split('\n'):
        pane_id, cmd, path = line.split(' ', 2)
        if 'claude' in cmd.lower():
            agents["claude"].append({"pane": pane_id, "path": path})
        elif 'opencode' in cmd.lower():
            agents["opencode"].append({"pane": pane_id, "path": path})
    return agents
```
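The HTTP side can be just as small. Here's a sketch of how the /agents endpoint might be served on port 9005 using Python's standard http.server - the repo's actual server may be structured differently:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AgentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/agents":
            self.send_error(404)
            return
        # Re-scan tmux on every request so the map is always live
        body = json.dumps(find_agents()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 9005), AgentHandler).serve_forever()
```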
This enables the msg command to work without specifying locations. The server knows where everyone is, always.
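For illustration, the msg client just asks the server where the target lives before delivering - a sketch that reuses the hypothetical send_to_pane helper from earlier and the /agents JSON shown below:

```python
import json
import os
import urllib.request

def msg(target, text):
    # Ask the discovery server where the target agent currently lives
    with urllib.request.urlopen("http://localhost:9005/agents") as resp:
        agents = json.load(resp)
    if not agents.get(target):
        raise SystemExit(f"no running {target} agent found")
    # tmux exports TMUX_PANE, so the sender can identify itself in the header
    send_to_pane(
        target_pane=agents[target][0]["pane"],
        sender="opencode",  # illustrative; a real client would detect this
        sender_path=os.getcwd(),
        sender_pane=os.environ.get("TMUX_PANE", "?"),
        text=text,
    )
```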
Voice Models That Work
Got six voices running, each about 60-70MB:
| Voice | Quality | Speed | When to Use |
|---|---|---|---|
| amy | medium | normal | Default, clear female |
| danny | low | fast | Quick male responses |
| kathleen | low | fast | Alternative female |
| ryan | medium | normal | Standard male voice |
| lessac | medium | normal | Technical terms |
| libritts | high | slow | When quality matters |
The models live in /home/wv3/.piper/voices/. Total disk usage is about 400MB for all six.
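The say command is just a thin wrapper around the piper CLI. A minimal sketch, assuming the models follow piper's usual <voice>.onnx naming inside that directory (the actual filenames and playback command in the repo may differ):

```python
import subprocess

VOICE_DIR = "/home/wv3/.piper/voices"

def say(text, voice="amy"):
    # piper reads text on stdin and writes a wav; filename scheme assumed
    model = f"{VOICE_DIR}/{voice}.onnx"
    subprocess.run(
        ["piper", "--model", model, "--output_file", "/tmp/say.wav"],
        input=text.encode(), check=True,
    )
    # Play through ALSA; swap in paplay/ffplay if that's what your box uses
    subprocess.run(["aplay", "/tmp/say.wav"], check=True)

say("Linting done, fixed 47 issues", "danny")
```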
What This Enables
Real example from my homelab - watch this coordination happen across windows:
```bash
# Window 1: OpenCode (small model) doing grunt work
# Uses simplified msg command - auto-discovers Claude's location
msg claude "Linting complete: fixed 47 issues"
say "Linting done, fixed 47 issues" danny

# Window 2: Claude sees the message appear in its terminal
# [Message from opencode in /home/wv3 (pane %15)]
# "Linting complete: fixed 47 issues"
# Claude MUST respond (protocol requirement)

# Claude acknowledges and continues
msg opencode "Thanks, running tests now"

# Window 3: Another OpenCode instance detects issue
msg claude "Rate limit hit on API, backing off 60s"
say "API rate limited" amy

# The discovery server knows where everyone is:
curl http://localhost:9005/agents
# {"claude": [{"pane": "%9", "path": "/home/wv3/project"}],
#  "opencode": [{"pane": "%15", "path": "/home/wv3"}]}
```
You're watching the conversation happen. Messages include sender info (pane ID, location). Agents must acknowledge messages they receive - it's not optional. The discovery server tracks everyone automatically.
Why This Matters
Before this: Silent agents working in isolation, missing critical events, duplicating work, no coordination.
After this: Agents talk to each other, announce important events, coordinate complex operations.
The key difference: visibility. MCP protocols, subagent workflows, log files - that's all background magic you never see. This approach shows you the actual messages appearing in terminal windows. You watch tiny OpenCode models report back to Claude. You see Claude warn other agents about breaking changes. It's coordination you can verify with your own eyes.
Same with Piper TTS. No cloud APIs, no latency, no privacy concerns. Just local neural networks converting text to speech at near-realtime speeds. When OpenCode finishes a task, you hear it. When Claude hits an error, it speaks up.
This setup solves actual problems I hit daily:
- Missing build failures because they scrolled off screen
- Two agents modifying the same files without knowing
- No way to hand off partially completed work between sessions
- Silent failures that go unnoticed for hours
- Can't verify what background protocols are actually doing
- No visibility into agent coordination
Everything runs locally on the homelab:
- Discovery server tracking all agents
- TTS server with Piper neural networks
- No external dependencies, no API keys, no cloud services
- Just Unix tools and local neural networks doing exactly what I need
Here's some code for you to fuck around with. It's not production ready - clone it and get it set up first.
https://github.com/williavs/slaygent-communication
The mandatory response protocol ensures agents acknowledge messages. The discovery system means agents always know where to find each other. The voice output means you never miss critical events. This is what agent coordination should look like - visible, audible, verifiable.