Internal · first proposal · for discussion · Marco Avila · 2026-05-11 · v1

The shift

Knowledge teams have stopped tolerating tool fragmentation.

For a decade we taught teams that the answer was more apps. Slack for chat. Notion for docs. Linear for tasks. Otter for meetings. A separate AI assistant bolted on top of each one. The cost was hidden in the context-switching tax — and it added up.

That tolerance just snapped in late 2025. Two things broke at once. OpenAI's Realtime API made sub-second voice AI deployable (Zunou shipped Voice Agent in 18 languages on top of it). And MIT showed that context-window stuffing collapses to ~0.04% accuracy on relational reasoning at scale — meaning the "bolt AI onto each silo" architecture has a hard ceiling. Zunou independently built an MIT-aligned tool-based selective-retrieval architecture (136+ production tools, targeted tool calls instead of context-stuffing, ~100× token reduction) before that paper was published.

When context lives in five places, the AI is useless. When context lives in one place — and the architecture is designed for it — the AI becomes a chief of staff. Teams that figure this out compound. Teams that don't fall behind on every cycle — every meeting, every decision, every follow-up.

The losing path

Stay fragmented · bolt AI on each tool

Notion AI in Notion. Slack AI in Slack. Copilot in M365. Each one smart in its silo. None of them can connect the meeting decision in Slack to the action item that should live in Linear. The team still does that work manually. The AI is a feature, not a force.

The winning path

One workspace · context compounds

Chat, tasks, meetings, decisions — one surface. The AI sees everything. Today's meeting becomes tomorrow's prep brief automatically. The team's collective context becomes a competitive advantage. Switching to anything else means losing months of trained context.

Where we're going

Tokyo's working teams stop thinking in apps.
They think in Zunou.

By the end of year 1: small Tokyo teams open Zunou in the morning the way they opened Slack five years ago. Their meeting prep happens automatically. Decisions surface where they need to be acted on. Their AI knows everyone they work with — because everyone they work with is in Zunou too.

This isn't sold; it's experienced. When one Tokyo founder tells another "everyone I know is on Zunou" — that's the moment. From there it spreads passively.

What I'm proposing we discuss

This is my first GTM proposal at Zunou. I'm not asking you to ratify a finished plan — I'm asking you to react to a specific shape, push back where the inputs feel wrong, and shape the next version with me. The strategy below is genuinely good, but it gets twice as good with your judgment in it.

Specifically I want us to align on: the launch shape (pilot-first vs synchronized), the pilot community (TAI is my recommendation, others on the attack list), the year-1 framing (PMF first, lock ARR after month-3 paid signal — see §09 + §09.5), and what we discover in pilot that should change the rest of this plan.

6,500 — Tokyo individuals we can plausibly reach, across 13 prioritized communities (§06)
One pilot first — TAI · 4,000+ AI builders · then learn, then double, then scale
~6 months — to stop-or-go decision · pre-committed PMF stage-gate (§11)

Unfamiliar with KGI, NSM, PMF, MCP, Ringi, Keigo, or APPI? Quick definitions below, or jump to the full glossary.
  • KGI — 重要目標達成指標, Key Goal Indicator: the Japanese-standard top-level board commitment; the single number the company is held to.
  • NSM — North Star Metric: the single product metric that proxies for healthy growth. Tracked weekly.
  • PMF — Product-Market Fit: the moment the product visibly pulls users in rather than being pushed at them.
  • MCP — Model Context Protocol: open standard for AI systems to read/write external tools and data (Slack, Notion, Linear, etc.). Industry default since early 2026.
  • Ringi — 稟議: Japanese consensus-based written approval process; documents circulate bottom-up through hierarchy.
  • Keigo — 敬語: Japanese honorific speech; required for any AI output that becomes external-facing.
  • APPI — Act on the Protection of Personal Information: Japan's primary data-privacy law; 2025–26 enforcement-focused regime.


TL;DR · 90-second read (full read ~25 min · skip to §12 for decisions only)

Marco's first GTM proposal · the shape, for the team to react to and shape.

The shape · in one paragraph

Pilot-first community-led launch in Tokyo. TAI pilot launches end July / early Aug 2026 (Ilya-introduced), then layer in AI Tinkerers + Venture Café + one more. Hybrid venue sponsorship — flat fee + per-user-join bonus — that pays for actual conversion, not just seats (§06.4). Push Event Spaces — the white-label Spaces platform (Event · Community · Managed types) — as the Zoom-playbook viral surface alongside it. Pre-committed stage-gate Feb 2027: ≥4 of 6 PMF criteria → fuel; ≤2 → pivot; 3 → extend.
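The gate rule is mechanical enough to write down as code. A trivial sketch, useful mainly to show that every count from 0 to 6 maps to exactly one outcome — no ambiguity at the decision meeting:

```python
def stage_gate(criteria_met: int) -> str:
    """Pre-committed PMF stage-gate rule from this proposal:
    >=4 of 6 criteria met -> fuel, <=2 -> pivot, exactly 3 -> extend."""
    if criteria_met >= 4:
        return "fuel"
    if criteria_met <= 2:
        return "pivot"
    return "extend"
```

The value of pre-committing is exactly this: the Feb 2027 meeting argues about which criteria were met, not about what the outcome should be.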

The architecture (MIT-aligned tool-based selective retrieval, 136+ tools, Voice Agent on Realtime API, autonomous Relays) means the product is already deeper than what zunou.ai shows publicly — the launch problem is distribution + density, not product readiness. The page below is the show-your-work version. This card is for the skim.

The horizon · 3 milestone windows
Months 1–3 · Activation
Does the product produce dense behavior?
Magic-number completion · cohort retention · qualitative "would be very disappointed" signal.
Months 3–6 · Paid validation
Do people pay? At what tier?
First paying logos · Pro $19 vs Business $39 preference · willingness-to-pay · early churn.
Months 6–12 · Compound growth
Do parallel tracks compound?
+communities · +events · +content/PR · MoM acceleration. ARR number locks here, conditional on PMF + paid signal earned in 1–6.
This GTM is about PMF, not ARR. Crazy ARR comes after the signal is real — we won't fake-forecast our way there.
Pilot community — TAI · 4,000+ AI builders · open membership · weekly events
Reachable pop. — 6,500 Tokyo individuals across 13 prioritized communities (§06)
Stage-gate — ~6 mo · pre-committed fuel / pivot / extend decision (§11)
What I'd like us to discuss
  • The launch shape — pilot-first as recommended (§06), or synchronized 4-community (the original §05 plan)?
  • The year-1 ambition framing — ¥30M+ ARR is the target we're working toward; the beachhead is just months 1–2 of PMF learning; parallel tracks compound from there. Lock the specific number after month-3 paid signal.
  • The hybrid venue-sponsorship model — flat fee + per-user-join bonus (§06.4) — as the leading edge of community partnerships.
  • Which 3 capabilities the TAI pilot demo leads with — and which we hold back.
What I'm uncertain about / want pushback on
  • The 5/1/3-in-14-days magic number is a hypothesis. Real activation surfaces (Daily Debrief, Relays, Spaces joins) might be the right substitutes.
  • The 10% personal-conversion rate from Event-Spaces is the investor doc's working number. The Tokyo-specific actual is TBD.
  • Whether parallel tracks (Spaces events · content / PR · more communities) compound or run linearly is the biggest unknown — the year-1 ambition assumes compounding.
  • Phase 0 capacity is plausible-not-committed — we should talk about what's realistic for the team to absorb.

This is a discussion document, not a finished plan. Everything below is the show-your-work version — derivations, sensitivity tables, community attack list, organizer pain points, references. The point of writing this much is so the team can disagree with specific lines, not the whole shape. Vote on §12 wherever you have a clear take; leave the rest blank — we'll talk through those.


00 · Background

What Zunou actually is — and what we're not yet selling.

Internally Zunou frames itself as "the AI operating system for leadership productivity" — a multi-surface platform with three production clients (Nova mobile · Dashboard web · Scout legacy) on top of a real backend with 136+ AI tools, voice agent, autonomous delegation (Relays), and meeting intelligence. The marketing tagline is "AI Chief of Staff" but the actual product is broader.

Surface 1 · flagship

Nova (mobile)

Expo / React Native — Web PWA live on nova.zunou.ai; iOS Xcode project + Android bundle ai.zunou.nova + EAS config in repo. ~23,000 LOC across 14 main screens + extensive sheets/panels (Relays, Brain Dump, Instant Meeting all shipped).

Surface 2 · power-user

Dashboard (desktop)

Browser. Pulse AI chat surface. Where heavy users live during the workday. Full-featured admin / data / reports.

Surface 3 · platform play

Spaces

White-label container for events / communities / alumni networks. Phase 2+ GTM angle — a powerful lever we're not using in this proposal but worth a separate discussion.

In production
136+
AI tools across 14 categories
Tool-based selective retrieval — targeted tool calls instead of context-stuffing · ~100× token reduction
Session modes
8
Specialised AI session types
Daily Debrief · Quick Ask · Day Prep · Relay · Draft · …
Voice
18
Languages on the Realtime API
Sub-second latency · 8 voices · dialect support
Surface
11
Navigable Nova modules
Schedule · Tasks · Notes · Chats · Meetings · Relays · Pulse · Stream · Insights · Contacts · More
Architectural moat · why the product compounds

The "bigger context window" race hit a wall in December 2025. Zunou's architecture is on the right side of that line.

MIT CSAIL — "Recursive Language Models" (Zhang, Kraska, Khattab · Dec 31 2025) showed GPT-5 with a 272K context window collapses on quadratic reasoning tasks: 90% accuracy at 8K tokens → under 30% at 262K tokens → F1 of ~0.04% on OOLONG-Pairs. The "relational reasoning" executives actually need ("what Q4 decisions feed next week's budget review?") is exactly the kind of task that collapses. Stuffing more tokens into one prompt is a dead end. Every "AI bolted onto a silo" approach inherits this ceiling.

Zunou independently developed an MIT-aligned architecture before that paper was published. Instead of context-stuffing, the agent explores data through tool-based selective retrieval — 136+ production tools, a lightweight session-context registry (entities tracked as event_1, task_2, not full objects), sub-agent delegation that spawns fresh contexts for heavy operations, and chunked data processing (a 2-hour meeting transcript is never loaded whole). Net: ~100× token reduction with full accuracy maintained.
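For intuition, here is a minimal sketch of the pattern — every name below is hypothetical, not Zunou's code. The registry carries lightweight refs in the session context; full objects are resolved only at the moment a tool call needs them:

```python
# Illustrative sketch of tool-based selective retrieval (hypothetical names;
# NOT Zunou's actual implementation). The session registry tracks entities as
# lightweight refs; payloads are fetched only when a tool call requires them.

DATASTORE = {  # stands in for the real backing services
    "event_1": {"title": "Q4 budget review", "attendees": ["Aiko", "Ben"]},
    "task_2": {"title": "Draft budget memo", "due": "2026-05-15"},
}

class SessionRegistry:
    def __init__(self):
        self.refs = []  # e.g. ["event_1", "task_2"] — this is all the prompt carries

    def track(self, ref: str):
        if ref not in self.refs:
            self.refs.append(ref)

    def resolve(self, ref: str) -> dict:
        """Full object fetched only at the moment a tool call needs it."""
        return DATASTORE[ref]

def answer(question: str, registry: SessionRegistry) -> str:
    # A real agent lets the model choose which tool to call; the lookup is
    # hard-wired here just to show the shape of the loop.
    registry.track("event_1")
    event = registry.resolve("event_1")  # targeted retrieval, not context-stuffing
    return f"{event['title']} with {', '.join(event['attendees'])}"
```

The point is the `refs` list: the prompt context rides along with identifiers, not payloads — which is where the claimed ~100× token reduction comes from.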

The Lambda AI Proxy — Zunou's server-side prompt + tool engine — keeps prompts, tool definitions, and 11 behavioral rules server-side, never on the client. Competitors can't reverse-engineer the tuning; we ship agent improvements server-side in minutes, no app updates.

What this means for our pitch: we're not selling vapor. The product is launched, architecturally non-trivial, and substantially deeper than what zunou.ai shows publicly. The pitch problem isn't "can we ship?" — it's "given what's live today, which capabilities do we lead with for the community pilot, and which do we hold back?" Source: internal zunou-services investor doc (March 2026) cross-referenced with deployed code.

Production state · verified 2026-05-11 audit against zunou-services repo · no guessing

We have launched. Here's exactly what's live today vs what's still in flight.

Surfaces
Nova Web PWA
Live at nova.zunou.ai · CloudFront ap-northeast-1 (Tokyo) · production S3 bucket + deploy:prod script.
Nova iOS native
Xcode project, ZunouWidget, eas.json in repo · bundle ai.zunou.scoutapp (inherits Scout's App Store identity). App Store distribution status TBD.
Nova Android native
Bundle ai.zunou.nova in app.json · EAS profiles exist · Play Store distribution status TBD.
Dashboard (desktop web)
Full React SPA · 6-view task management · settings tabs · org admin · Stripe billing · live.
Scout (legacy hybrid)
Serving existing users · being superseded by Nova for new features.
Core capabilities
Voice + Text Agents
OpenAI Realtime API (voice) + Responses API (text) · services/ai-proxy/ · 7,000+ lines of tools · server-side prompts.
Relays · autonomous delegation
Backend service + 13 Nova components (~4,400 LOC) + dedicated services/relay-service/.
Brain Dump + Instant Meeting
Nova sheets shipped (1,221 + 1,898 LOC) · AssemblyAI transcription · AI summary pipeline.
Nova onboarding flow
7+ phase files (Welcome · Calendar · CreateOrg · Complete · ChooseTabs · MeetAgent) · services/nova/src/onboarding/.
Pusher real-time on Nova
pusher-js in package.json · channel-subscribe integration depth TBD.
Spaces (Event / Community / Managed)
Draft spec March 2026 (SPACES_SPEC.md) · org-with-type model + space_config JSON + join_code · deployment status TBD.
Multi-provider AI failover
ai-proxy is OpenAI-only currently · "Model-agnostic by design" is positioning, not yet implementation.

"Now what": the community pilot launches on what's already live (Voice + Text Agents · Relays · Brain Dump · Instant Meeting · Nova onboarding · web PWA on JP-region CDN). The ◐ items move from partial to shipped during Phase 0–1 readiness. The single ○ item (multi-provider failover) is a year-2 hardening goal — not a community-pilot blocker. The product is ready for the community push. The work that remains is mostly GTM-side — see §13.

The opening: the AI productivity stack is fragmenting, not consolidating. Personal-tier exec assistants — alfred_, Klaio, Alyna — solve email + scheduling for one person, not teams. AI workspace tools — Glean, Mem, Lindy — solve search and agents for English-speaking enterprise. Meeting tools — Otter, Fireflies, Granola — transcribe but don't act. The category's most-funded team-focused AI Chief of Staff, Xembly, shut down in 2024. The platform incumbents — Notion AI, Slack AI, Microsoft Copilot — have each shipped AI surfaces bounded to their own walled gardens.

What no one ships today: a unified workspace — chat that rivals Slack + task management + AI native — that lands via integrations rather than asking teams to migrate cold, AND speaks fluent Japanese (Keigo honorifics, Ringi-shaped approvals). That's the gap Zunou is positioned to close — provided we ship + distribute before any incumbent decides to unify their own stack. The window isn't a fixed countdown; it closes when an incumbent breaks out of its walled garden.

The proposal you're about to read answers one question: how do we manufacture density inside small Tokyo communities, fast enough to matter, while that window is still open?

Zunou's site has two use cases · neither is exactly the launch target

zunou.ai use case · 1 Not enough alone

For Founders

"Routine tasks are now automated."

Intelligent agents run in the background — route tasks, approvals, follow-ups. Approve / review only when human input is needed.

Founders alone don't give us critical mass. Tokyo has too few of them, and they're too spread across companies to hit the magic-number density threshold by themselves. The launch goes broader — see resolution below.
zunou.ai use case · 2 Too early

For Enterprises

"Strategic decisions are now real-time."

Live exec dashboard — progress, workload, and outcomes across teams, projects, priorities. No more chasing updates.

Out of scope until the platform is proven. Enterprises buy validated tools, not waitlist betas. They need paying logos, operational maturity, an APPI compliance posture, referenceable customers. Realistic open: Phase 5a (post-stage-gate). Pre-building advisor relationships earlier is fine — selling is not.
What the launch actually targets

Tokyo's broader builder + operator + founder community.

The four launch communities cover that whole population — not just founders. TAI (4,000+ AI engineers / researchers / PMs — mostly builders) · AI Tinkerers Ginza (~200 builders shipping in production) · Tokyo Founders (~150 operator-founders) · Venture Café Toranomon (the mix, every Thursday). The magic-number mechanic applies wherever there's a small team + many meetings + decisions to track — which is true across the segment, not just at the founder layer.


01 What this is · what it isn't

A battle-test plan: get real users in, make the product stick, and earn PMF.

Zunou is launched. This proposal is about earning the right to scale aggressively — by running a focused beachhead, instrumenting it well, getting real users in, and validating that the product sticks. Not an ARR forecast. Not a finished GTM strategy. A battle-test plan with explicit signals, predetermined actions, and a stop-or-go decision in ~6 months.

What this is

  • A battle-test plan

    Get real users into Zunou inside a focused community beachhead (TAI first). Pressure-test the product, the onboarding, the pricing, the unit economics — with people who'll tell us the truth.

  • A PMF-first GTM

    Hunt for product-market fit before optimizing for ARR. Five named PMF signals (§09.5), predetermined actions on each. PMF is the gate; everything else flows from it.

  • Instrumented week one

    Every metric in §06.7 + §09.5 has a target, a trigger threshold, and a documented action if it drifts. If we can't measure it, we don't claim it.

  • Stop-or-go-able

A pre-committed PMF stage-gate at ~6 months. Three explicit outcomes (fuel · extend · pivot), agreed in advance.

What this isn't

  • A year-1 ARR forecast

    Year-1 ambition is large (¥30M+) — but the specific number gets locked after month-3 paid signal, not before. The beachhead is PMF discovery, not a 12-month ramp target.

  • A finished plan

    Strategy is a living document. We refit the magic number, pricing, and channel mix monthly against real cohort data. The shape is durable; the numbers are iterations.

  • A request for consensus

    Explicit objections are more useful than reluctant nods. Use the vote chips on each decision; flag what's mis-framed.

  • An external sales pitch

    This is internal strategy + alignment. The investor / external-facing version comes after the team shapes this one.

03 The architectural strategy

MCP-native. Land in their stack while it gets replaced.

Zunou is a full workspace — chat that rivals Slack, task management, AI assistant. People won't migrate from their existing tools overnight. MCP is the bridge.

10,000+
Public MCP servers (Mar 2026)
Slack · Notion · Linear · Salesforce · GitHub · Google Workspace · M365 · Stripe · Sentry · Vercel · Supabase · Figma.
Source: DigitalApplied adoption stats
4,750%
MCP SDK download growth in 16 months
2M / month at launch (Nov 2024) → 97M / month (Mar 2026).
Source: Pento — Year of MCP
92%
Of new agent frameworks ship with MCP built-in
LangGraph · CrewAI · AutoGen + OpenAI / Microsoft / Google all on board.
Source: The New Stack — Why MCP won

What Zunou actually competes with: Slack at the chat layer · Notion / Linear at the task layer · alfred_ / Klaio at the personal-AI layer · Otter / Fireflies at the meeting layer. Zunou does what all of these do, in one place, with AI native. That's the end state.

The starting state is different. Most teams already have years of context in Slack, Notion, Asana, email, calendar. Telling them "switch to Zunou" on day one loses every conversation. They won't migrate overnight — and we shouldn't ask them to.

MCP solves this. We adopt the protocol as a host and inherit the entire 10,000+ server ecosystem on day one. Day-1 users keep their existing tools and use Zunou's AI on top of them. Day-90 users find themselves opening Zunou first because the context is already there. Day-365 users use Slack only for external comms — because internal happened in Zunou.

Land · day 1

Integrate, don't ask

Connect Slack + Notion + calendar + email via MCP. Zunou's AI works across them immediately. Zero migration friction. The user keeps every habit they have.

Compound · day 30–90

Context lives here now

Decisions, action items, follow-ups all surface in Zunou. The morning brief becomes the first surface opened. Slack becomes the second.

Expand · day 180+

Zunou is the workspace

Internal chat happens in Zunou. Tasks live in Zunou. Slack stays for external comms; Notion becomes the public-doc archive. The team's center of gravity has shifted.

MCP is the bridge — not the moat. The moat is the unified workspace that compounds once we're landed.

Same posture · the AI layer itself

Model-agnostic by design.

Zunou is designed not to depend on one provider. The product routes between Anthropic (Claude), OpenAI (GPT), Google (Gemini) — and others as they prove competitive — based on cost-per-task, latency, and quality benchmarks. (Today's implementation is OpenAI-only — see the production-state audit in §00; multi-provider routing is the design target.) Cheap models handle high-volume work (summarisation, classification); premium models handle heavy reasoning. The user never sees which one ran their query.

Why agnostic?
One provider outage doesn't down the product. We're not locked to a roadmap we can't see.
Why this matters for cost
Routing per task can cut inference spend 40–70% vs single-provider — material at the scale Zunou is targeting.
Why this matters for Japan
Future-proofs against bringing JP-domestic providers (Sakana, Rakuten 7B, ELYZA) into the stack for sovereign-data customers.

And underneath the bridge · the architectural moat

What stays defensible after a competitor also adds MCP.

MCP is a protocol — anyone can add it. The honest question is: when Notion AI or Slack AI ships MCP support, what's left? Four things, all already in production at Zunou. The MIT context-window collapse (§00) tells us why each one matters.

Moat 1

Tool-based selective retrieval

136+ production tools across 14 categories (Tasks · Calendar · Notes · Meetings · Insights · Messaging · People · Relays · Contacts · Voice Control · Session Mgmt · Drafting · Recording · Error Reporting). The agent decides what to look up — no speculative loading. ~100× token reduction vs context-stuffing approaches. The MCP layer doesn't replace this; it extends the toolset.

Moat 2

Session context registry + sub-agent delegation

Entities tracked as lightweight refs (event_1, task_2) — full objects resolved only when needed. Complex tasks spawn fresh sub-agents (clean context). Long sessions maintain quality after 30+ operations. No context rot. This is an architectural choice; bolting it onto an existing chat product is months of work.

Moat 3

Server-side prompt + behavior IP

The Lambda AI Proxy holds all prompts, tool definitions, and 11 shared behavioral rules — server-side, never exposed to clients. Session-type-based tool access: Daily Debrief gets all 136 tools; Relay Conversations get a focused subset to prevent scope creep. Competitors can't reverse-engineer this; we ship agent improvements in minutes without app updates.
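The session-type gating can be pictured as a server-side lookup table. Everything here is invented for illustration (the real proxy config is private); only the shape matters — a session type resolves to the tool subset it is allowed to call:

```python
# Hypothetical sketch of session-type-based tool access (names invented; the
# real Lambda AI Proxy config is server-side and not public). Each session
# type sees only the subset it needs; Daily Debrief alone gets the full set.

ALL_TOOLS = {f"tool_{i}" for i in range(136)}   # stand-in for the 136+ tools
RELAY_TOOLS = {"tool_1", "tool_7", "tool_12"}   # focused subset: no scope creep

SESSION_TOOL_ACCESS = {
    "daily_debrief": ALL_TOOLS,
    "relay_conversation": RELAY_TOOLS,
}

def tools_for(session_type: str) -> set:
    """Resolve the tool subset a session type may call (empty if unknown)."""
    return SESSION_TOOL_ACCESS.get(session_type, set())
```

Because the table lives server-side, tightening or widening a session type's access ships instantly — no client update, which is the "improvements in minutes" claim above.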

Moat 4

Production voice (and the 50–100× cost asymmetry)

Voice Agent on the OpenAI Realtime API — sub-second latency, VAD, interruption handling, 18 languages, 8 voices, camera integration, text-input fallback. A voice session costs ≈ $0.90 vs ≈ $0.02 for a Text Agent session (via the Responses API) — text dominates volume → 70%+ target gross margin at scale. Anyone can wrap Realtime; we've already shipped it across iOS, Android, and Web.
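The margin target is easy to sanity-check from the two unit costs above. The usage mix and session count below are assumptions for illustration — the document only claims that text dominates volume:

```python
# Back-of-envelope check of the voice/text cost asymmetry, using the two unit
# costs stated above. The 5% voice share and 30 sessions/user/month are
# ASSUMPTIONS for illustration, not measured numbers.

VOICE_COST = 0.90  # $ per voice session (Realtime API)
TEXT_COST = 0.02   # $ per text session (Responses API)

def blended_cost(sessions: int, voice_share: float) -> float:
    """Monthly inference cost per user for a given voice share of sessions."""
    voice = sessions * voice_share * VOICE_COST
    text = sessions * (1 - voice_share) * TEXT_COST
    return voice + text

cost = blended_cost(sessions=30, voice_share=0.05)  # ≈ $1.92 / user / month
margin_pro = 1 - cost / 19.0                        # vs the $19 Pro tier
```

Under these assumptions inference runs ≈ $1.92/user/month — roughly 90% gross margin against the $19 Pro tier, and still above the 70% target even if the voice share doubles.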

MCP gets us into the room. The architecture is why we're still in the room 18 months later.

Worth talking through · "but we already have text agent — why also do Slack / MCP?"

Text agent + MCP integrations aren't the same thing. They serve different purposes — and together they produce entanglement.

Fair question. We have a full Text Agent already — it can chat, look things up across our 136+ tools, draft anything. Why also invest in MCP integrations to Slack, Notion, Google Workspace, etc.? Three honest reasons + one we should discuss openly:

1 · Day-1 retention

Users keep their existing tools.

Telling a TAI builder to migrate Slack history into Zunou on day 1 loses 80% of them. MCP means they don't migrate — Zunou's AI reads their existing Slack / Notion / Linear / calendar in place. Day-1 friction goes from "rebuild your workspace" to "click connect." Materially higher activation.

2 · Cross-app intelligence

The text agent gets smarter when it sees more.

"What did the team decide in #product yesterday?" is only answerable if the agent can read Slack. Without MCP, our text agent is bounded to data inside Zunou. With MCP, it's bounded to data the user has access to anywhere. That's the difference between a smart chat box and a chief of staff.

3 · Entanglement (★ the strategic one)

Once integrated, leaving Zunou means losing cross-app workflows.

The text agent alone is replaceable — ChatGPT can also answer questions. But a text agent entangled across six of the user's tools isn't replaceable without rebuilding all those connections elsewhere. Entanglement is the moat that compounds with time — a month-1 user has 2 integrations, a month-12 user has 10, and by then switching to anything else costs 10× more.

4 · The honest cost · worth discussing

MCP isn't free — each integration is eng investment + maintenance.

We don't need to integrate everything day one. Open question: which 2–3 integrations do we lead with? Slack + Google Workspace + Notion are the most-used by TAI builders; Linear comes after; the rest can wait. Phase 0 ships 1–2; the rest are layered as users ask.

The short version

Text agent alone = good product. Text agent + MCP entanglement = indispensable workflow. The first wins users on capability; the second keeps them because leaving costs more than staying. We should do both — but pace the MCP build by user pull, not by checklist.

04 The category right now

No one builds exactly Zunou's product. Many build pieces of it.

Honest audit: who's a direct competitor, who's adjacent, who's a platform threat. Not everyone with 'AI' in their tagline is in our lane.

Capability matrix · the cohort the internal team benchmarks against

From the March 2026 investor document. The cells are deliberately binary — partial credit is hidden behind product complexity that doesn't survive contact with a real exec workflow. Source: internal capability audit, cross-referenced with each vendor's public docs.

Capability MS Copilot Google Gemini Notion AI Otter.ai Fireflies Zunou
Unified workspace (calendar + tasks + notes + chat + meetings) Partial Partial
Real-time voice agent (sub-second, full tool use) ✓ Realtime API
Text agent with deep tool execution Basic Basic Basic ✓ 136+ tools
Autonomous delegation (Relays — AI conversing with a teammate) ✓ Relays
Meeting recording + AI analysis (post-meeting actionables) Teams only Meet only
On-device voice recording (Brain Dump / Instant Meeting) ✓ 2-tap
Proactive insights (action items / risks surfaced unprompted) ✓ HITL
Cross-org connections (external DMs first-class) ✓ QR + deep link
Event / Conference Spaces (white-label per event) ✓ Spaces
Personalization memory (5 categories, persistent)
Mobile-native flagship (cross-platform, voice-first design) Weak Weak ✓ Nova
Japanese fluency (Keigo · Ringi-shaped output) Partial Partial Partial JP wedge

Six rows in this matrix are only a Zunou ✓ — Voice Agent · Relays · Proactive Insights · Cross-org Connections · Event Spaces · Personalization. That's the breadth gap the GTM has to convert into recognition before incumbents close it.

Lane / Competitor What it actually does Threat to Zunou
Direct · team-focused AI Chief of Staff
Xembly ↗
US · $20M raised
Was the closest direct competitor: meeting recording + action items + follow-ups for teams. Shut down June 2024 None — exited the market. Cautionary tale, not a threat.
Klaio ↗
EU · early
"AI Chief of Staff" branded; chat-style assistant for ops + tasks. Small team, individual + small-team tier. Low — no JP presence, no MCP-native land-and-expand story, no full workspace surface.
Alyna ↗
US · early
AI productivity assistant marketed as "AI Chief of Staff" for individuals + small teams. Low — individual-first, no team workspace, no JP.
Adjacent · personal exec assistant (1 user, email + calendar)
alfred_ ↗
$24.99/mo · individual
Email triage + scheduling + voice-matched drafts for one executive. Personal CoS. Low — single-user tool. Different category. Doesn't address team density.
Adjacent · AI workspace tools (search + agents, enterprise lean)
Glean ↗
$7B+ valuation
Enterprise AI work assistant — unified search across all company apps, custom agents. Sells top-down to large enterprises. Medium long-term — but they sell to 1,000+ seat enterprises through procurement. Different motion. No JP focus.
Mem ↗
Personal-first AI
AI-native note-taking and knowledge management. Personal use evolving to teams. Low — knowledge-base oriented, not workflow/operations.
Lindy ↗
Agent builder
AI agent platform — users build custom agents for email, calendar, follow-ups. DIY composability. Low — power-user tool, not a turnkey workspace. Different ICP.
Adjacent · meeting AI (transcribe + note, don't act)
Otter ↗ · Fireflies ↗ · Granola ↗
Mature category
Meeting transcription + AI summaries. Granola is the most loved by execs (notepad-style, no bot in room). Medium — Granola in particular is a feature competitor for Zunou's meeting-AI surface. We compete by going broader (chat + tasks + AI in one).
Platform incumbents · the long-run threat
Slack AI ↗
Salesforce · APAC +19% YoY
AI summaries + search inside Slack. Distribution = every Slack workspace. High long-run — but bounded to Slack. Can't see calendar / Notion / Linear. No Keigo / Ringi nuance.
Notion AI ↗
JP language ✓
AI inside Notion — write, summarise, find. Distribution = every Notion workspace. High long-run · bounded to Notion. Limited cross-app reach.
MS Copilot ↗
M365 + Teams
AI woven through M365 + Teams. Strong in JP enterprise on the Microsoft stack. High long-run for the Microsoft-stack share of JP enterprise. Limited reach on Slack-native + Google-native teams.

Notably not in this table: Ashley AI (askashley.com) — a retail/customer-service conversational AI, not exec ops; different category entirely. Other "AI Chief of Staff" branded tools that emerged in 2023 have either pivoted to consumer or gone quiet. Audit refreshed monthly; flag additions to Marco if a new entrant launches in this space.

Japanese AI companies = partners, not competitors

Sakana AI ($135M Series B, Nov 2025), LayerX ($100M Series B, Sep 2025), ELYZA (KDDI-backed), Rakuten AI 3.0 — all sell foundation models or back-office automation. None compete with the exec CoS surface. The right move is co-marketing (joint PR Times release, joint AiSalon Tokyo demo) — not competing.

What we can credibly own

The category is competitive but no one has built exactly Zunou's product: a unified workspace that lands via MCP integrations before asking for migration. Defensibility comes from three things the incumbents have weak incentives to build: the land-and-expand strategy via cross-app integrations (see §03), Japanese-localized affordances like Keigo (敬語, honorific speech, required for any external-facing AI output) and Ringi (稟議, Japan's consensus-based written approval process, where documents circulate bottom-up through the hierarchy), and the community-distributed habit loop. The window stays open as long as no incumbent decides to break its walled garden.

05 The mechanic

Density manufactures product-market fit.

Not features. Not virality. Density — the threshold past which a small group's behavior changes.

Precedent · the threshold pattern

Slack
~2,000 team messages

Past this threshold, retention jumps to 93%. Below it, teams churn. The product itself doesn't change — the team's behavior does.

Facebook
7 friends in 10 days

Chamath Palihapitiya's growth team identified this exact number. Cross it and the user stayed for life. Miss it and they churned. Every product roadmap decision was filtered through it.

Zunou's hypothesis · derived from how the product creates value

5
Colleagues from the same community, active
1
Connected calendar
3
AI actions accepted within 14 days

Why these numbers — the derivation

5
Colleagues from the same community. Zunou's value compounds when cross-context exists — when your AI knows what others in your circle decided this week. Below ~5, the AI has thin context and reads as a chat box. At 5+, the cross-references start producing insights you couldn't get elsewhere. Why not 3 or 10: Slack's network-effect studies cluster around 4–7 as the activation band; we set 5 as a defensible mid-point and refit it against cohort data.
1
Calendar connected. Without the calendar, Zunou can't surface meetings, prep, decisions, or follow-ups — and 80%+ of the product surface is dark. Calendar is the single integration that unlocks daily utility. Why exactly 1: empirical — every PLG study on calendar-adjacent products (Cron, Calendly, Reclaim) ties activation to first OAuth connection. There's no fractional version.
3
AI actions accepted. One accepted action is a fluke. Two is a coincidence. Three within 14 days is a habit. Acceptance (rather than impression / view) is what tells us the AI's output is actually trusted. Why 14 days: matches industry SaaS-onboarding studies showing the first 2 weeks predict W4 retention with ~80% confidence (Mode, Amplitude). Why 3: below this users haven't internalized the value; above this they reach for Zunou unprompted.

These specific numbers are a starting hypothesis, not a commitment. We instrument them on day 1 and refit monthly against real cohort data. If at month 2 the actual threshold is 7/1/4 — we update. If at month 3 only 5/1/3 in 21 days correlates with W4 retention — we update the window too. The contract is the framework; the numbers are an iteration.

The launch mechanic that produces this density is the part of the strategy that sounds unusual. Instead of launching one Tokyo community at a time (the Eventbrite playbook — city-by-city, the "campus model"), we light four overlapping communities in the same week, picked specifically because their members already see each other.

The physics term for this is sympathetic detonation — adjacent explosive charges igniting each other through shockwave coupling. The growth-theory term is percolation threshold — the moment a sparse graph flips from disconnected clusters into one giant connected component.

Sympathetic detonation · animated

Four launches, same week, in overlapping communities. The first ignites. Adjacent communities — sharing some of the same people — light up next. By the end of the week, the four clusters that started disconnected are one connected component.

[Animation] Sympathetic detonation across four Tokyo communities: TAI (~4,000 builders) ignites at T+0; bridges activate to AI Tinkerers Ginza (~200 shipping builders) at T+48h, Venture Café Toranomon (~500 weekly mix) at T+5d, and Le Wagon alumni (~300, tech bootcamp) at T+7d. Final state: one connected component.

What the picture shows: each cluster is a community; circles are members; the dashed lines between clusters are overlapping individuals who attend two or more. Once enough bridges activate, the disconnected clusters cross the percolation threshold and become one giant connected component. From that point on, "everyone I know is on Zunou" can be literally true for someone in the graph — the sentence in the dark panel below.

In plain terms: an attendee at AI Tinkerers Ginza on Tuesday sees three people at Venture Café Toranomon on Thursday. By the end of the launch week, one sentence becomes literally true inside the Tokyo English-speaking founder graph —

The sentence that signals we won
"Everyone I know
is on Zunou."

When this becomes literally true for one person inside a launch community, the percolation threshold is crossed. From there it spreads passively.

One launch leaks. Four overlapping launches chain-react.


06 The launch shape

One pilot first. Learn. Double. Scale.

We don't sympathetic-detonate four launches before we know the platform is ready. We onboard one community deeply, take the operational learnings, fix what breaks — then double, then scale to the rest.

Pilot · stage 1

One community

Deep onboarding. Marco + Malek hand-hold the first ~50 users. Every friction point gets logged. Platform readiness becomes a known quantity.

Learn · stage 2

Fix what breaks

Onboarding gaps, integration friction, support questions. Real magic-number data replaces the hypothesis. Stickiness mechanics validated or rebuilt.

Double · stage 3

Adjacent overlap

Add 1–2 communities that share members with the pilot. Validates the cross-community percolation thesis before scaling further.

Scale · stage 4

Sympathetic detonation

Now the multi-community simultaneous launch from a position of knowing the platform works. The original §05 mechanic, earned rather than assumed.

Why pilot-first, not synchronized

A simultaneous 4-community launch only works if the platform is ready to handle 200 cold signups with no operational drag. We don't know that yet. Falling flat across 4 communities at once damages 4 relationships at once and the recovery cost is brutal in the Tokyo founder graph — everyone knows everyone. A messy pilot in 1 community recovers; a messy launch across 4 doesn't. The sympathetic-detonation play in §05 stays — we just earn the right to run it before we run it.

Attack list · 13 community targets across 4 priority tiers

We pursue P1 first and only commit Marco's time at P2/P3 once pilot signal is validated. But we keep all 13 warm — community organizers take 2–6 weeks to respond and we don't want to be stuck waiting on a single channel. Backup communities are not afterthoughts; they're the insurance against P1 stalling.

Tier Community Audience · size Why us · why now Warmth Action
P1 TAI (Tokyo AI) ↗
tokyoai.jp
AI engineers · researchers · PMs
~4,000 members
Broadest overlap with every other community. Open membership. Builders run the most meetings + are most AI-fluent. 🟢 Direct — Marco knows Ilya Kulyatin (founder) personally Demo at monthly meetup → onboard 30–50
P2 AI Tinkerers Ginza ↗
tokyo.aitinkerers.org
Shipping AI builders
~200 selective
Highest-signal users — they build the tools others adopt. Heavy overlap with TAI. 🟢 Direct contact possible Demo at next demo night
P2 Venture Café Toranomon ↗
venturecafetokyo.org · weekly
Builders + founders + investors mix
~500 unique/yr
Weekly recurring touchpoint. Compound exposure. CIC Tokyo proximity helps later enterprise plays. 🟡 Public events — apply Speak at Thursday gathering
P3 Tokyo Founders Group
private Slack
Operator-founders
~150
Closer to Zunou's "For Founders" persona. Private list = warm-intro access only. 🟡 Warm intro likely — to confirm Confirm intro, then time to pilot
P3 Startup Grind Tokyo ↗
startupgrind.com/tokyo
Founders + investors
~400 reachable
Monthly fireside event rhythm. Established credibility. Overlap with Venture Café. 🟡 Apply for speaker slot Speak / be a featured guest
P3 Le Wagon Tokyo alumni ↗
blog.lewagon.com/tokyo
Tech bootcamp alumni
~300+
Slack-active, tools-curious. Strong builder overlap with TAI + AI Tinkerers. 🟢 Alumni network access Workshop at alumni event
P3 Tokyo Product Meetup ↗
PMs across Tokyo SaaS
Product managers
~600 active
PMs are ICP (Ideal Customer Profile: the specific type of company / user we're built for) — they run meeting-heavy cross-team work. High intent for Zunou. 🟡 Apply to organize a talk Talk + demo at meetup
P3 Founders Live Tokyo ↗
founderslive.com
Founders
~200/event
Pitch-format events. International founder mix. Good warmth-builder before IVS. 🟡 Apply or attend Sponsor or pitch
P4 Headline portfolio ↗
headline.com · runs IVS
JP VC portfolio
~80 cos
Portfolio-wide rollout = scale via VC relationship. Headline also runs IVS. 🟡 Advisor warm-intro Once we have paying logos
P4 Coral Capital portfolio ↗
coralcap.co · YC-like program
JP VC portfolio
~60 cos
English-friendly founders. Coral's portfolio program creates community already. 🟡 Cold-to-warm via James Riney Sponsor a portfolio event
P4 Genesia Ventures portfolio ↗
genesiaventures.com · SE Asia + JP
SE Asia + JP early-stage
~50 cos
Cross-border founders. Good fit for Zunou's English-tolerant user. 🟡 Advisor intro needed Investor demo
P4 IVS Kyoto ↗
ivs.events · annual July
JP startup conference
~13,000 attendees
Anchor event of the year. Launchpad / side-event slot is high-leverage with working platform. 🟢 Live conversation — meeting this week / next Discuss Launchpad / sponsor booth / Event Space partnership
P4 Open Network Lab ↗
onlab.jp · seed accelerator
Digital Garage accelerator
cohort: ~10–15
Cohort founders need ops tools. Onlab's network gives advisor access. 🟡 Direct application Sponsor cohort tooling

Warmth legend: 🟢 immediate access (advisor / direct contact) · 🟡 outreach needed (cold-to-warm in <2 weeks) · 🔴 long lead time (months ahead). Why 13 not 4: a P1 demo gets scheduled in weeks; P3 takes 2–3 months. Building warm chains across all 13 in parallel means no idle waiting if any one channel stalls. Marco's time-priority is P1; everyone else stays warm via email or one-pager.

Why events compound · the Zoom-playbook arithmetic

Every event is a download channel. Zunou's internal math is more aggressive than this proposal.

The investor document's Phase 2 GTM is built on this single observation: a 1,000-person conference where everyone scans a QR code = 1,000 Nova installs that day. Some convert to personal post-event use. This is exactly what we're proposing for Tokyo — the community side of the same mechanic.

One event
1,000
Attendees
QR scan at the door → Nova installed → AI agent pre-loaded with conference context.
Conversion
10%
To personal Nova use
Investor doc working number. Tokyo-specific actual TBD by pilot.
Per event
100
Retained monthly-actives
Net new MAU from a single conference partnership.

How this scales the §06 attack list: the 13-community plan isn't 13 meetups — it's 13 recurring event channels. TAI alone has ~4 events/month. Venture Café is weekly. IVS Kyoto in July = 13,000 attendees in 3 days (13k installs at 10% personal conversion = ~1,300 retained). The community attack list and the Zoom-playbook math are the same mechanic counted two ways. Both are bottlenecked by the same Phase 0 gate: the Event Spaces flow (part of Spaces, Zunou's white-label platform play: Event Spaces for conferences · Community Spaces for ongoing groups · Managed Spaces for enterprise) working end-to-end + venue sponsorship operational (§06.4 level 1).

The 10% conversion number is the investor doc's working assumption — not measured. Pilot data refits this. If Tokyo's actual is 5% we still get 50 retained users per 1,000-attendee event. If it's 20% (high-affinity AI community like TAI), the arithmetic doubles. Either way, events outperform any other distribution channel we can plausibly build at this stage.
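The per-event arithmetic is small enough to encode directly, which keeps the sensitivity band honest. A hedged sketch — the function name and the assumption that everyone at the door scans are ours, not the investor doc's:

```python
def retained_per_event(attendees, scan_rate=1.0, personal_conversion=0.10):
    """Retained monthly-actives from one event.
    scan_rate: fraction who scan the QR at the door (assumed ~everyone here).
    personal_conversion: the investor doc's 10% working number, to be refit
    from pilot data."""
    return round(attendees * scan_rate * personal_conversion)

# The sensitivity band from the caveat above: 5% pessimistic, 20% for a
# high-affinity AI community like TAI.
sensitivity = {conv: retained_per_event(1000, 1.0, conv)
               for conv in (0.05, 0.10, 0.20)}
```

At the documented 10%, a 1,000-attendee event yields 100 retained users and IVS Kyoto's 13,000 attendees yield ~1,300, matching the arithmetic in the text.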

The other side of the table

What community organizers actually need from us.

A community owner says yes to a partner the way a venue says yes to a tenant: only if we solve a pain they already have. Below: the four recurring complaints we've heard from TAI / AI Tinkerers / Venture Café / Le Wagon organizers, and the specific Zunou capability that addresses each. Plus a venue play that has a market price.

What organizers complain about Why it hurts What Zunou ships that addresses it Specific surface
Securing a venue The single most-cited operational pain. Company offices are gated by HR / facilities. CIC Tokyo + EGG JAPAN + WeWork charge per event. Cafés don't fit 50 people. Booking takes weeks of relationship work for one night of value. We don't run venues — we sponsor them, but with a model that incentivizes actual product adoption not just attendance. TokyoDev's ¥1,000/attendee program is the public market floor, but their audience is job-seeker-curated (lower Zunou-acquisition value). TAI / AI Tinkerers / Venture Café are higher-quality audiences for us — and at TAI's scale (~4,000 members, ~100–300 per meetup), a flat ¥1,000/attendee gets expensive fast without guaranteeing downloads. We propose a hybrid model — see below — that shares risk with the organizer and pays for outcomes, not seats. Sponsorship line-item (proposed addition to §09 panel D budget). Optional adjacent play: bulk meeting-room deal with CIC Tokyo / EGG JAPAN as Phase 2.
RSVP no-shows + low engagement Connpass + Doorkeeper + Lu.ma capture RSVPs but stop there. 30–50% no-show is normal. Pre-event hype dies in a Slack channel. Attendees show up cold and leave without knowing each other. Event Spaces (one of the three Spaces types: Event · Community · Managed) — attendees scan a QR at the door, Nova rebrands to the event, auto-channels per track, AI agent pre-loaded with the conference context (schedule, speakers, attendees). Pre-event push notifications via Pusher. Real-time talk-time + engagement analytics back to the organizer. Nova Event Spaces · auto-created per-track channels · QR onboarding flow already shipped.
Post-event content / recap Talks happen → no recording, no notes, no follow-up. Attendees forget what they learned by Monday. Sponsors get a thank-you email but no proof their money produced anything. Instant Meeting (Nova's 2-tap impromptu recording, with auto speaker diarization + retroactive calendar entry) records the talk — 2 taps, faster than Otter's 3. AI pipeline produces TLDR + per-speaker transcript + sentiment + strategic takeaways + actionables — automatically. Daily Debrief (Zunou's signature AI experience: a comprehensive voice or text briefing to start the day) surfaces "yesterday's session highlights" to every attendee the next morning. Nova Instant Meeting · AssemblyAI transcription · server-side AI summary pipeline (already in production).
Recurring engagement between events The hardest problem in community-building. Slack channels go quiet between meetups. Members forget the group exists until the next event email lands. Community Spaces are permanent (vs Event Spaces being time-bound). Member directory, shared notes, per-community insights, cross-org connections persist after the event. Each member's Pulse (Zunou's per-workspace command center, tracking overdue tasks, pending insights, and unreads) surfaces "needs attention" prompts so the community stays alive in the background, not just at events. Nova Community Spaces · cross-org Contacts · Pulse per community.
Sponsor reporting Sponsors paid ¥X; they get a photo and a thank-you email. No measurable engagement data. Hard to renew sponsorships without a clean report. Event Spaces attendee analytics: scans per session, channel activity, talk-time per attendee, AI agent queries per topic. Organizer exports a sponsor-ready report. This makes the organizer's job easier — not just ours. Dashboard's analytics views · event-scoped data export.

Validation calls to schedule · what we'll ask each organizer

We haven't spoken to them yet. Here are the relationships + the specific questions each call needs to answer.

The §06.4 thesis (community partnerships · hybrid sponsorship model · co-marketed events) needs validation from the actual organizers before it hardens into a plan. These calls happen in the next 4 weeks. Each card lists the relationship we have + the specific information we need to gather.

TAI · Tokyo AI ~4,000 members · WhatsApp

Relationship: Marco knows Ilya Kulyatin (founder) personally. Direct access.

Questions for the call
  • · Typical meetup attendance + active-member-vs-roster dynamic?
  • · Which 2–3 pain points TAI members repeatedly mention?
  • · Existing sponsorship models that have worked + what hasn't?
  • · Reaction to hybrid model: flat fee + per-user-join bonus?
  • · Demo slot at the next monthly meetup — feasible?
  • · Are members already paying for AI tools (Cursor, ChatGPT Plus, etc.)?
IVS · Kyoto ~13,000 attendees · July annual

Relationship: meeting scheduled this week or next with Headline Asia / IVS team.

Questions for the call
  • · LAUNCHPAD application timeline + selection criteria for 2026?
  • · Sponsor tiers — pricing + what each tier unlocks?
  • · Side-event Spaces deployment — feasible alongside main programming?
  • · Attendee composition (founder / investor / builder mix)?
  • · Co-marketed "AI assistant for every attendee" Spaces experience — interest level?
  • · Past sponsor examples — what worked / what didn't?
AI Tinkerers Ginza ~200 shipping builders · selective

Relationship: public events accessible. Need to identify the organizer + secure intro.

Questions for the call
  • · Demo-night cadence + selection process?
  • · Member overlap with TAI — rough %?
  • · Pain points shipping builders most cite when running their own teams?
  • · Sponsorship interest — hybrid model fit?
  • · Are members already paying for productivity tools?
  • · Anyone we should specifically not pitch to (competing AI Chief of Staff founders, etc.)?
Tokyo Founders Group ~150 operator-founders · private

Relationship: warm intro likely — need to confirm with the connecting party + identify the right TFG admin contact.

Questions for the call
  • · Active-member vs roster — how many actually engage?
  • · What channel(s) does TFG use (Slack / WhatsApp / something else)?
  • · Pain points operator-founders mention vs builder-founders?
  • · Have they sponsored / partnered with any product before?
  • · Closed-door demo vs. public event — what fits their culture?
  • · Willingness-to-pay signal — do members already pay for ops tools?

Why this is in the proposal: hardening the §06.4 thesis depends on real organizer conversations, not Marco's hypothesis. The calls happen in the next 4 weeks; quotes + corrected pain points + sponsorship-model feedback come back into the proposal as v2. If any of these conversations push back hard on the venue-sponsorship model, the proposal rebalances around what actually works.

The pitch to the organizer (one sentence)
"We cover your event costs with a flat sponsorship — plus we pay you a per-user bonus for every attendee who actually joins Zunou. Your members get Nova free for the partnership, and after each event we hand you an AI-generated recap + sponsor report that takes you zero minutes to produce."

Why this works: every line is something they already want, none of it requires them to migrate their community off Connpass / Doorkeeper / Lu.ma, and the financial cost is bounded and per-event. We're a sponsor with product utility, not a platform asking them to move.

Venue strategy · three escalating commitments

What we can do about the venue problem.

Level 1 · Phase 1 ★

Hybrid: flat fee + per-user-join bonus

Base sponsorship ¥75k–¥150k per event (covers venue / F&B / AV) + ¥2,000 per attendee who creates a Zunou account within 14 days, capped at ¥400k per event. Pays for outcomes, not seats. Organizer shares the conversion risk + upside.

Working line item: ~¥2.5–5M / 6 months across 3–4 P1–P2 communities (high end if conversion is strong — that's a feature, not a bug). Subject to validation with each organizer (§06.4 calls). Folds into §09 panel D.
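A minimal sketch of the Level 1 cost model. One reading is assumed here: that the ¥400k cap bounds the total (base + bonus); the proposal doesn't specify, so treat that as an assumption pending the §06.4 organizer calls:

```python
def event_sponsorship_cost(joins, base_fee=150_000, per_join=2_000, cap=400_000):
    """Level 1 hybrid model: flat base sponsorship (¥75k–150k band; high end
    used as the default) + ¥2,000 per attendee who creates a Zunou account
    within 14 days, with the per-event cap assumed to bound the total."""
    return min(base_fee + per_join * joins, cap)

# At these defaults the cap binds once (400k - 150k) / 2k = 125 attendees join.
```

Useful property of the hybrid: cost only approaches the cap when conversion is strong, which is exactly the "feature, not a bug" case the line item anticipates.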

Level 2 · Phase 2

Coworking partnership

Bulk meeting-room block with CIC Tokyo (Toranomon Hills · already canonical venue) and/or EGG JAPAN at Marunouchi. We become "the AI workspace partner — bookable rooms included" for any community we sponsor.

Procurement-light. Needs warm intro to CIC. Adjacent benefit: CIC tenants are themselves ICP.

Level 3 · Phase 3+

"Zunou Tokyo" recurring space

Lease a fixed evening slot at one canonical venue (Toranomon area) and host a rotating monthly schedule for partner communities. Zunou-branded, AI-powered onboarding at the door, all partners' members welcome.

Real commitment — only after Phase 1 proves communities want recurring presence. Operationally expensive; payback comes from the brand surface.

Sources + caveats: TokyoDev's published meetup-sponsorship program (¥1,000/attendee, reimbursed within 10 business days) is the market floor — but their audience (job-seekers) doesn't match Zunou's ICP. TAI / AI Tinkerers / Venture Café audiences are materially more valuable per acquired user, which is why a hybrid (flat + per-join) model fits better. Numbers above are starting hypotheses subject to validation with each organizer (§06.4 calls). CIC Tokyo as canonical venue: Toranomon Hills Business Tower 15F hosts Tokyo Tech Meetup Feb 2026 — public.

Before we invite anyone

Are we ready to onboard them?

A pilot only works if the platform handles the first 50 users without falling over. These are the questions we answer with platform / product / Malek before the TAI demo gets scheduled.

Onboarding experience

First 15 minutes, day 1

  • · Sign-up → time-to-first-value: target ≤ 5 min.
  • · Calendar OAuth: works on Google Workspace + iCloud + Outlook?
  • · Slack OAuth: works on personal + workspace?
  • · Voice setup: mic permission, fallback if denied?
  • · First AI action: prep brief for next real meeting — does it land or feel generic?
  • · What happens if integrations fail silently?
Their current pain

What TAI members complain about

  • · "Messages are scattered everywhere — WhatsApp for TAI, Slack at work, LINE for clients. I lose threads constantly."
  • · "I miss action items from meetings because note-taking sucks."
  • · "Calendar prep takes 20 min I don't have."
  • · "Event RSVPs scattered across Connpass / Lu.ma / Peatix."
  • · "Follow-ups slip through the cracks; my CRM is a notebook."
  • · If Zunou solves 2–3 of these well, they'll try it. If it solves 1 plus has rough edges, they won't.
What makes them stick

Day 7, 30, 90 retention

  • · Day 7: morning brief becomes the first app opened.
  • · Day 30: 5+ colleagues active in same workspace (the magic number) — cross-context insights are visible.
  • · Day 90: internal team chat shifts from Slack to Zunou for at least one workflow.
  • · Habit loop: morning brief → meeting prep → in-meeting capture → post-meeting follow-up → next morning's brief.
Open platform questions

Need answers before the demo

  • · Can we handle 50 concurrent active users without degradation?
  • · Inference cost per active user — what's the actual runrate?
  • · Multi-workspace isolation — solid or leaky?
  • · What's the support model when a TAI member DMs Malek with a bug at midnight?
  • · APPI compliance posture — even if not enterprise, JP users will ask.
  • · Disaster mode: what's the rollback plan if the launch demo glitches live?

The honest answer to "ready?": we don't know yet. Phase 0 (foundations) answers it. The TAI pilot demo doesn't get scheduled until the open-question column above is green. If platform readiness takes longer than expected, we slip the demo — not the platform's quality bar.

What they see · what makes them stay · what we watch

Discovery surface, driver tree, stickiness mechanics, analytics dashboard.

Four design questions that determine whether the pilot converts: how do unsigned-in visitors discover us, what makes people open Zunou again on day 2, what makes them open it on day 30, and how do we actually see whether any of it's working?

A · the discovery question what shows before sign-in

Should we have something public before sign-in?

Right now zunou.ai is a waitlist-only landing page. Anyone who hears about Zunou from a TAI demo, a friend, or a Tweet hits the same wall. No discovery surface = no organic top-of-funnel. This is the most concrete blocker we can fix early. Three options, none mutually exclusive:

Option 1 · marketing-only

Status quo

Keep zunou.ai as it is. Email capture for waitlist.

Risk: 0% discovery. Every user requires manual intro or community-event channel.

Option 2 · public utility ★

Free useful surface

A genuinely useful free tool that doesn't require sign-in: Tokyo AI events feed · meeting-summary template gallery · public showcase of community digests · weekly AI roundup.

Brings non-users to a Zunou-branded surface. Converts on trust + utility. SEO-indexable. Open question: which utility resonates with TAI / AI Tinkerers ICP enough to be worth building?

Option 3 · social proof

"Built with Zunou"

Public testimonial wall · case studies · a "see how teams use it" page anchored on real workflows from pilot users (with permission).

Builds credibility but useless until we have pilot users. Defer to Phase 2.

Recommendation: ship Option 2 in Phase 0 as a focused public utility — events feed is the obvious choice given TAI / AI Tinkerers / Venture Café cluster events on Connpass + Lu.ma. Cost is low (data already public, just aggregate + render). Open question for the team is which utility to lead with.

B · the driver question what we actually control

What are the levers we pull?

The KGI (ARR) breaks down into a driver tree. Each leaf is a lever Marco or the team can actually pull.

[Diagram] ARR driver tree: ARR (the KGI) decomposes into three branches (active users, paid conversion rate, revenue per paid user), each with 3–4 controllable leaf drivers:
  • Active users (size of the base): discovery-surface traffic ← public utility (panel A) · community demos / events ← Marco runs the calendar · word-of-mouth virality (k) ← shareable artifacts (§12 #17) · sign-up → activation rate ← onboarding (§06.5)
  • Paid conversion % (free → paid): magic-number completion ← product mechanic (§05) · free-tier limits hit ← pricing design (§09 B) · upgrade-prompt timing ← UX
  • ¥ per paid user / mo (ARPU): tier pricing ← decision §12 #14 · tier mix (Pro vs Business) ← upsell path · add-ons (AI overage · Spaces) ← future lever
Legend: branches roll up to the KGI; each leaf is a lever the team can pull.

Why this matters: if ARR isn't growing, we look at the driver tree and ask "which node is broken?" — not "let's try harder." Each driver maps to a KPI (panel D). When we say "the platform isn't ready" we mean a specific node, not a vibe.
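The tree's top-level identity is just a product of the three branches; a toy sketch, with numbers in the comment that are purely illustrative (not targets from this proposal):

```python
def arr_yen(active_users, paid_conversion, arpu_monthly_yen):
    """ARR = active users x paid conversion x monthly ARPU x 12.
    When ARR is off, ask which of the three factors moved, branch by branch,
    then drill into that branch's leaf drivers."""
    return active_users * paid_conversion * arpu_monthly_yen * 12

# Illustrative only: 1,000 active users at 12% paid and ¥1,500/mo -> ¥2.16M ARR.
```

The practical value is diagnostic: holding two factors fixed and re-running the third isolates which branch of the tree is broken.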

C · the stickiness question why do they come back tomorrow

What makes users love it, not just try it?

Trial-to-stick conversion is where most AI tools die — the Forrester / Anaconda 88% pilot-failure rate is exactly this gap. Five mechanics we design for explicitly:

1 First-session "aha"

User sees something they couldn't get elsewhere within their first 5 minutes. For Zunou: the meeting summary that catches a follow-up they would have missed.

2 Daily ritual hook

Morning brief at 8 AM with today's calendar + pending follow-ups + cross-context signals. One reason to open the app every day at the same time.

3 Compounding context

Every meeting / chat / decision makes the next AI output better. Switching cost goes up over time — leaving Zunou means losing months of trained context.

4 Social proof loop

"5 colleagues from your community are on Zunou." Visible, ambient, real. The magic-number's social dimension — you don't want to be the one who left.

5 Shareable artifacts

Meeting summaries → Slack DMs to non-users. "Here's what we decided" prep docs → emailed to collaborators. Every shared artifact is an ad for Zunou. Loom did this with video; we do it with structured outputs. This is decision #17 in §12.

D · the analytics question what we look at daily

How do we actually know what's working?

Without a daily-monitored dashboard, the strategy operates on vibes. The minimum we need before scaling community sign-ups is below. As GTM lead, Marco runs this dashboard weekly — every metric below has a target, a trigger threshold, and a documented action if it drifts.

The GTM dashboard · what Marco watches every Monday

Six tiles. One decision rule per tile. Zero ambiguity about the next move.

[Mockup] GTM weekly dashboard · Monday 08:00 JST · Phase 1, week 8 · illustrative values. Six KPI tiles, each with a current value, delta, target, and decision trigger:
  • NSM · WAU past magic number: 145 (↗ 22% WoW) · target 200 by stage-gate · if < +5% WoW × 4 wks → escalate
  • MoM growth · active users: +28%, on track · target 25–30% by M12 · if < 15% × 2 wks → channels review
  • Magic-number completion · 14d: 38%, on track · target 35%, stretch 45% · if < 30% × 2 wks → onboarding emergency
  • Paid conversion · activated→paid: 11% (+1pp / wk) · target 12% (industry mid) · if < 8% × 4 wks → pricing review
  • Member-of-N · viral indicator: 12%, trending up · target 15% by stage-gate · if < 10% by M4 → viral loop broken
  • Inference cost · ¥/AU/mo: ¥520, under cap · cap ¥600, stretch ¥400 · if > ¥800 × 2 wks → routing review
Stage-gate strip: 4 of 6 criteria hit, decision pending month-6 review (NSM ≥ 200 · activation ≥ 35% · ≥3 paid logos · inference ≤ ¥600/AU · density ≥ 25% · member-of-N ≥ 15%) · ≥4 → fuel · 3 → extend 60d · ≤2 → pivot. Source: PostHog · auto-refreshed Mon 07:30 JST · decision rules in §06.7 D.

Reading the mockup: illustrative values, not real data. Each tile shows current value · WoW or MoM delta · target · the explicit decision trigger if the metric drifts. The bottom strip shows stage-gate progress in real time (4/6 criteria green = "fuel pending"). This is the dashboard Marco runs weekly — and the artifact senior team uses to monitor without needing Marco in the room.

Beneath the dashboard, the operating cadence — what we look at, when, and the action if it drifts:

Cadence What we watch What it tells us Lever if off
Daily Sign-ups · activations · first-session completions · errors Onboarding health · platform stability Marco / Malek personally onboard problem signups
Weekly WAU · magic-number completion · cohort retention W1 / W2 / W4 · NPS pulse Whether pilot mechanics are firing Adjust onboarding · re-run pilot demo · tune nudges
Monthly Magic-number refit · paid conversion · Member-of-N · inference ¥/AU · CAC payback Economics + hypothesis validation Refit the magic-number definition · adjust pricing
Stage-gate All 6 PMF criteria (§11) Fuel · extend · pivot The big decision

Tool stack: PostHog or Mixpanel for product analytics · Linear for the issue tracker · a simple shared Notion or Linear-doc as the public dashboard so the team sees the same numbers daily. The dashboard itself is a Phase 0 deliverable — without it the pilot has no nervous system.
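The decision rules above are mechanical by design, which means the Monday review can run as code rather than judgment calls. A minimal sketch, assuming weekly metric histories pulled from the analytics store — the metric names, the `history` list shape, and the `evaluate_rules` function are illustrative, not a real PostHog schema; the thresholds are the mockup's triggers:

```python
def weeks_below(history, threshold):
    """Count consecutive trailing weeks strictly below `threshold`."""
    count = 0
    for value in reversed(history):
        if value < threshold:
            count += 1
        else:
            break
    return count

def evaluate_rules(metrics):
    """metrics: tile name -> list of weekly values, oldest first.
    Returns the triggered actions for the Monday review."""
    actions = []
    if weeks_below(metrics["nsm_wow_growth_pct"], 5) >= 4:
        actions.append("escalate: NSM WoW growth < +5% for 4 weeks")
    if weeks_below(metrics["magic_number_completion_pct"], 30) >= 2:
        actions.append("onboarding emergency: completion < 30% for 2 weeks")
    if weeks_below(metrics["paid_conversion_pct"], 8) >= 4:
        actions.append("pricing review: activated-to-paid < 8% for 4 weeks")
    # Inference cost triggers on the HIGH side: > ¥800/AU for 2 weeks.
    tail = metrics["inference_yen_per_au"][-2:]
    if len(tail) == 2 and all(v > 800 for v in tail):
        actions.append("routing review: inference > ¥800/AU for 2 weeks")
    return actions
```

The same shape extends to the remaining tiles; one function per Monday review keeps "zero ambiguity about the next move" literal.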

07 How it actually unfolds

Five phases. Each gated on readiness, not on the calendar.

We move fast. Phase 0 readiness in 4–6 weeks; TAI pilot launch by end of July or early August 2026. The gates are still readiness-based, not calendar-based — but the readiness bar is sized for a fast launch, not an open-ended timeline.

Timeline · May 2026 → Aug 2027 · earliest plausible
Zunou Japan-led GTM phase timeline (May 2026 → Aug 2027): Phase 0 Foundations → Phase 1 TAI pilot → Phase 2 IVS + density → Phase 3 Double · first paid → stage-gate decision (Feb 2027) → fork: Phase 5a Fuel (≥4 of 6 PMF · scale) · Extend 60d (3 of 6 · re-decide May 2027) · Phase 5b Pivot (≤2 of 6 · rotate wedge). Phase gates in order: 6 items shipped · 50 onboarded · NSM ≥ 60 · ≥5 paid + MoM ≥ 20%. Today marker: 2026-05-11.

Reading the timeline: bars are work; the diamond is a decision. Date labels are illustrative — actual cadence shifts if any gate slips. The fork after stage-gate is the whole point of §11: we don't ship a plan to year 2 before we know if year 1 hit fit.

  1. Phase 0 · Pre-pilot readiness · 4–6 weeks (June–mid-July 2026)

    Ship cost guardrails + community-pilot instrumentation.

    Per-AU inference telemetry · free-tier soft/hard caps · invite-gating via Spaces join_code (one code per partner community) · magic-number counters wired to real Zunou events: Daily Debrief opens (Zunou's signature AI experience — a comprehensive voice or text briefing that starts the day), Relays sent (Zunou's autonomous AI delegation — send an agent to gather info from a teammate; it has the conversation and reports back), calendar OAuth, colleague-joins · per-user anomaly detection (5× median = same-day flag) · PostHog cohort dashboards. Gate: cost-gating + magic-number instrumentation green, or pilot slips.

  2. Phase 1 · TAI pilot launch · end July / early August 2026

    TAI first (pilot-deep) — Ilya-introduced, partner-curated.

    Demo at TAI monthly meetup (Marco + Ilya warm-introduced). Invite codes distributed to attendees + WhatsApp group. First 50 onboarded with Marco / Malek in the room. Concurrent PR Times release. Gate: 50+ activations in the first 14 days, magic-number ≥ 35%.

  3. Phase 2 · Double + IVS · August–October 2026

    Layer 2nd–3rd communities + IVS Kyoto.

    AI Tinkerers Ginza demo + Venture Café Thursday Gathering. IVS-side-event Spaces deployment (IVS July annual if the calendar lines up, otherwise an IVS-class equivalent). Hybrid sponsorship model running (§06.4). Gate: NSM (North Star Metric — the single product metric that proxies for healthy growth, tracked weekly) ≥ 60 WAU (Weekly Active Users — users who took meaningful action this week) past the magic number by end of phase.

  4. Phase 3 · Paid signal + density · November 2026–January 2027

    First paying logos · tier preference data · scale-up readiness.

    First 5–10 paying logos by month 3 of pilot; 20–30 by month 6. Tier preference (Pro $19 vs Business $39) calibrated. Akai Wagon / Indelible portfolio rollouts warm-introduced. "This Week in Tokyo" digest hits 5,000 weekly uniques. Gate: paid signal real · Sean Ellis ≥ 40% · MoM growth accelerating.

  5. Phase 4 · Stage-gate review · February 2027

    The single fuel / extend / pivot decision against six PMF criteria.

    ~6 months after Phase 1 launch. NSM ≥ 200 · one launch community at 25%+ density · 35%+ activation in 14 days · ≥3 paid logos · inference ≤ ¥600 per AU (Active User — the denominator in per-user cost calculations) per month · Member-of-N (a user active in N partner communities — our percolation indicator across the launch graph) ≥ 15%. Hit ≥4 → fuel. ≤2 → pivot. 3 → extend 60 days.

  6. Phase 5a · If PMF — fuel

    Scale within the community lane that worked.

    Raise sponsored-seat caps on the original pilot community. Open adjacent communities #5–10 (P3 tier from §06). Akai Wagon + Indelible portfolios warm-introduced where overlap exists. Layer in Spaces-powered large events (IVS-class) + content / PR / partnerships running concurrently. Push toward ¥30M+ ARR via the parallel-track stack — communities are just one channel, not the ceiling. SI / large-enterprise deferred to year 2. Trading-house and major SI conversations (NTT Data, Fujitsu, Itochu) require a proven product, multiple referenceable customers, full APPI (Act on the Protection of Personal Information — Japan's primary data privacy law, now in its 2025–26 enforcement-focused regime) + ISMS (Information Security Management System — ISO/IEC 27001 certification, table stakes for Japanese enterprise SaaS contracts) posture, and a JP-domestic legal entity. Pre-building advisor warm chains earlier is fine. Selling at this layer is a year-2 decision conditional on community-phase outcomes.

  7. Phase 5b · If pivot

    Rotate the wedge — most likely Ringi-first vertical or events-only product.

    Re-run validation (10 calls in 4 weeks) on the new wedge. Keep the four community partnerships warm; don't burn them. Aim to re-launch a focused product within 90 days.

08 Prior art

The pattern repeats. We're not improvising.

Four billion-dollar outcomes. Each manufactured density inside overlapping groups before going broad.

Notion
2018–19

Simultaneously seeded YC + 500 Startups + Techstars portfolios + designer Twitter.

Over half of YC's recent batch became customers. 95% organic traffic.
First Round Review
Discord
2015–16

Won gaming guild leaders ('supernodes') first; shipped Twitch integration as cross-community accelerant.

133% MoM growth at 3M users.
Growthcurve case study
Figma Community
2019–20

Public gallery of design files / templates / plugins → SEO + activation + pull-mechanism.

300+ creators, 600+ public files. Became how non-users discovered Figma.
First Round on Figma's 5 phases
Lenny's Newsletter
2020+

Free weekly newsletter for 9 months before charging anything; paid Slack as the dense layer.

1M subscribers by 2024.
Growth In Reverse
09 Showing the math

Where the numbers come from — none of them are locked.

This GTM is about reaching product-market fit — not hitting an ARR number. The ambitious ARR comes after PMF is real, and we'll define that number once we've earned the right. Three financial discussions to have now: the operational envelope (panel D), the inference cost ceiling (panel C), and the pricing structure (panel B). The funnel + trajectory below are single-track beachhead sanity-checks for cost planning — not the year-1 target. The point of this section is unit economics, not forecasting.

A · the beachhead question · communities = PMF discovery, not the ARR engine

Communities are our beachhead. ARR is what comes after PMF — not what we forecast off the beachhead population.

The GTM shape · beachhead → parallel tracks → exponential

Goal: ¥30M+ ARR on a 12-month horizon — the number is reachable; the question is which tracks fire and how cleanly. We don't get there by squeezing more out of one beachhead. We get there by layering tracks in parallel as the beachhead's signal lets us.

Months 1–6 · beachhead pilot in TAI: dense onboarding, learn what breaks, hunt for PMF signal. Goal: Sean Ellis ≥ 40% · W4 retention ≥ 25% · 5–10 first paying logos by month 3 · 20–30 by month 6 (§09.5).

Months 6–12 · parallel tracks compound: additional communities (P2 → P3 → P4 in §06), large-scale events (IVS Kyoto, AiSalon Tokyo, Spaces-powered conferences), content/PR/partnerships running concurrently. Exponential MoM growth from month 7 onward as each track adds users without competing for the same channel.

What we don't lock yet: the specific year-1 ARR number. We commit to it after 3 months of cohort signal — not 30 minutes of bottom-up math. The funnel below sizes the beachhead alone; the parallel tracks multiply on top of it.

Below: what the first 1–2 months in the TAI beachhead are designed to teach us. Not a 12-month forecast — the beachhead is a small, focused learning vehicle. The market is much larger than TAI; parallel tracks (more communities, Spaces-powered events, content / PR) layer on top once the beachhead's signal lets us scale aggressively.

What the beachhead teaches us · 1–2 months, not 12

Six numbers the pilot calibrates — so we can scale exponentially after, with eyes open.

1 · Try rate
What % of TAI members actually try Zunou after a demo?
Hypothesis ~40–60%. Pilot measures the real number. Determines top-of-funnel for every parallel track that follows.
2 · Magic-number rate
% of triers who complete 5/1/3 in 14 days?
Hypothesis 35–45%. If it's higher, onboarding is great. If lower, refit the magic number itself against real user behavior.
3 · Paid signal
5–10 first paying logos by month 3?
Tier preference (Pro $19 vs Business $39) tells us where willingness-to-pay sits — unlocks year-2 ARR planning.
4 · Sean Ellis score
% "very disappointed" without Zunou?
≥40% = PMF (industry benchmark). Single most predictive qualitative signal.
5 · Cost per AU
Real ¥/AU/mo at pilot scale?
Working cap is ¥600 (panel C). Pilot measures actual. Calibrates free-tier limits before we onboard 10× more users via parallel tracks.
6 · Viral signal
Member-of-N rising organically?
Are activated users inviting colleagues without being prompted? Leading indicator for whether parallel-track scale-up gets viral assist or has to lean entirely on paid + earned acquisition.
Why year-1 ambition stays large — even though the beachhead is small

The beachhead is 1–2 months of PMF learning in TAI, not a 12-month ramp. Sizing year-1 ARR off the beachhead alone would be the wrong shape — it bakes in low ambition because the beachhead is deliberately a small, focused learning surface.

The year-1 ARR comes from the stack of tracks the beachhead's signal unlocks: more communities (the 13-target list in §06 is a start, not a ceiling) · Spaces-powered large events (IVS-class · 13k attendees per event) · content + PR + partnerships · enterprise warm chains. Year-1 ambition: ¥30M+ ARR, locked with real data after month-3 paid signal.

What this section is for (and what it's NOT)
  • For: the operational envelope (panel D), inference cost ceiling (panel C), and pricing structure (panel B) — the unit-economics decisions we need to make before the community pilot ramps users.
  • Not for: committing to a year-1 ARR number. We deliberately don't list one. ARR is a year-2+ conversation once PMF + paid signal are real. The right discipline in months 1–6 is measuring what's working (§09.5 below), not forecasting what hasn't happened.
  • The math above — funnel + trajectory — sizes the single-track beachhead alone, useful for inference-cost planning and stage-gate criteria. Real GTM layers parallel tracks (more communities, large events, content/PR) on top.
B · the pricing question · reference only, not for decision now

Pricing waits on paid signal — but here's the landscape for context.

Pricing isn't a decision for this proposal. Zunou's investor doc already proposes Free / Pro $19 / Business $39 / Enterprise (March 2026). Locking JP-launch tiers now — before we have paid signal from the beachhead (§09.5 signal #5) — would create confusion. The right cadence: month-3 paid validation tells us where willingness-to-pay actually sits, then we lock the JP-pilot tier shape with real data. The patterns below are reference material — not options to debate today.

Pattern A · Notion-style

Generous free · AI extra

Free tier: basic chat + tasks, limited AI calls. Plus: ~¥1,200/mo. AI add-on: ~¥1,500/mo on top of any paid tier.

Lets users feel value before paying anything. Risk: heavy AI users on free tier burn inference cost — needs the per-user cap (panel C).

Pattern B · Linear-style ★

Per-user · AI included

Free tier: solo / hobby use, capped at 1 workspace. Standard: ~¥2,000/mo per user, AI fully included. Pro / Business: ~¥3,500/mo with team features.

Pricing is simple. AI cost rolled into per-seat. Working estimate uses this. ¥2,000 = ~$13 USD, well below Slack Business ($12.50) + an AI add-on combined.

Pattern C · usage-metered

Pay for AI actions

Free seat ~unlimited; AI actions metered at ~¥10–40 each. Heavy users pay more, light users pay nothing.

Aligns cost with value perfectly. Risk: usage-anxiety friction; users hesitate to use AI freely. Unconventional for the workspace category.

Reference: what comparable tools charge (US$, per user per month)

· Notion: Free → Plus $10 → Business $18 + Notion AI $10/seat
· Linear: Free → Standard $8 → Plus $14 (AI included)
· Slack: Pro $7.25 → Business+ $12.50 → Enterprise (AI extra)
· Cursor: Free → Pro $20 → Business $40 (AI-native)
· Glean: Enterprise only, ~$40+/user typically
· Otter: Free → Pro $17 → Business $30

Zunou competes with all of these stacked. Charging less than each individual tool is the obvious early posture; charging more later (once we're indispensable across the stack) is the obvious longer arc.

What the investor doc actually plans · cross-reference

The internal pricing plan already exists. JP launch math should align with it, not invent a new one.

Tier Listed price ¥ equivalent (¥150/$) Target user
Free $0 Individual exploration · 1 org · limited AI sessions
Pro $19 / user / mo ≈ ¥2,850 Small teams · unlimited AI · Voice Agent · meeting recording · all 11 modules
Business $39 / user / mo ≈ ¥5,850 Growing teams · Relays · advanced insights · custom statuses · priority support
Enterprise Custom Managed Spaces · SSO · audit logs · dedicated support · APPI + ISMS posture
Event Spaces Per-event Conferences · white-label event app · attendee analytics · QR onboarding
Community Spaces Monthly Alumni networks · professional guilds · cohort programs (TBD pricing)

Implication for the forward build (panel A): the working estimate used a ~¥2,525 avg/user/mo blend (70% Standard ¥2,000 + 25% Pro ¥3,500 + 5% Business ¥5,000). With the investor-doc tiers, the same shape produces a higher blended ARPU (~¥3,500–4,000) — which raises forward-build exit-month ARR closer to ~¥3.5M at the same activation + conversion rates. This is one of the things decision #14 should ratify together.
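The two blends above are simple weighted averages, worth writing out so decision #14 ratifies the same arithmetic. A sketch — the 70/25/5 mix is the working estimate quoted above; the investor-doc mix shown is one plausible shape of the "same shape, higher tiers" claim, not a committed split:

```python
def blended_arpu(mix):
    """Weighted-average revenue per user: mix is a list of (share, yen_per_mo)."""
    assert abs(sum(share for share, _ in mix) - 1.0) < 1e-9  # shares must sum to 1
    return sum(share * price for share, price in mix)

# Working estimate: 70% Standard ¥2,000 + 25% Pro ¥3,500 + 5% Business ¥5,000
working = blended_arpu([(0.70, 2000), (0.25, 3500), (0.05, 5000)])  # ≈ ¥2,525

# One plausible investor-doc-tier shape (¥150/$): 70% Pro ¥2,850 + 30% Business ¥5,850
investor = blended_arpu([(0.70, 2850), (0.30, 5850)])  # lands inside the ~¥3,500–4,000 band
```

Shifting the tier mix is the sensitivity lever: the exit-month ARR claim moves linearly with this blend at fixed activation + conversion rates.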

C · the inference-cost question · what we pay providers per user per month

How much AI usage can we afford to give away?

The free tier needs an inference budget per user so cost stays predictable. ¥600/user/mo is a working estimate, not a target. Below: how it was derived, and where the actual ceiling will land.

The numbers that already exist · Zunou's measured unit economics

Voice Agent · per session: ~$0.90, on the OpenAI Realtime API (the sub-second voice-AI WebSocket that powers Zunou's Voice Agent). Sub-second latency · full 136-tool execution · 18 languages · VAD + interruption handling.

Plus meeting recording: ~$0.10 / hour (AssemblyAI). The investor doc's gross-margin target: 70%+ at scale — predicated on text agent dominating usage volume naturally (it's faster for most tasks and the user picks it without prompting).

What this means for the JP free tier budget: ¥600/user/mo of inference covers ~30 Voice sessions or ~3,000 Text sessions. The voice/text mix is the lever — not the price of either token. If pilot data shows JP users default to voice, ¥600 burns in 3 weeks of heavy use. If text dominates (likely for keyboard-faster JP power users), ¥600 holds for months. The cap design follows from this asymmetry.

Model-agnostic routing (§03) reduces this further: same workload on Gemini Flash vs GPT-5 vs Claude Sonnet swings cost by 5–10×. We route to the cheapest model that meets quality bar per task type. The Voice Agent's $0.90 floor is constrained by OpenAI's Realtime API specifically; if Anthropic or Google ship a competitive real-time voice API, the unit cost compresses further.

D · the budget question · total launch-year spend · TBD

What does the launch year cost?

Three line items dominate. None of them is "buy ads." This isn't a paid-acquisition budget; it's an operational + sponsored-seat budget for a community-led launch.

Line item Working estimate How we got there
Sponsored inference
(free tier of paying users + waitlist)
¥4–7M ~3,500 avg free users × ¥600 cap × 12 months — actual lands lower because not everyone hits cap.
Pilot community partnerships
(sponsored seats + events + marketing)
¥3–6M 50 sponsored seats × ¥600 × N months × 4 communities if we run synchronized at scale. Pilot-only is much less (~¥360k).
Ops + content + legal
(content production · APPI review · launch events)
¥3–5M Case studies, JP-language landing content, community digests, APPI legal review, founder-dinner / launch-event expenses.
Venue sponsorships (§06.4)
(hybrid: flat fee + per-user-join bonus, capped per event)
¥2.5–5M ~¥75–150k flat + ¥2,000/join, capped ¥400k/event · 3–4 communities × ~monthly × 6 months. Range scales with conversion — high end means more downloads (good problem).
Total working envelope ¥12.5–21M Operational + sponsored-seat + venue-sponsorship line items only. Compensation (Marco's contract, advisor fees) is a separate conversation, not included here.

¥12.5–21M ≈ $85–140k USD. Operational only — the team-comp side of any GTM commitment is a separate discussion that doesn't belong in this envelope. The hybrid-sponsorship range is wider than a flat rate because actual cost scales with download conversion — that's a feature, not a bug. The point of putting the methodology in §12 #11 is so we ratify the shape of spend, not the absolute total.

The GTM operating engine

Signals · cost gates · nudges · optimization — how this proposal earns the right to dream big.

Investors invest in evidence. We don't have ARR yet — but we'll have qualitative signal in 30 days, cohort retention in 60, paid validation in 90. Below: the operating engine that produces those signals. What we measure, how we keep inference costs gated while we ramp community users, how we nudge users toward the magic number, and how we use the resulting data to improve Zunou. ARR follows from this engine — not from a forecast.

A · the PMF signals we hunt for · what tells us product-market fit is real

Five signals. If they fire in the right order, we've earned the right to scale.

1
Sean Ellis "very disappointed" test

≥ 40% of active users say they'd be "very disappointed" without Zunou.

The single most-cited PMF signal in B2B SaaS (Superhuman / Hiten Shah's research). Survey at day 14 of activation. The threshold isn't arbitrary — it's predictive.

Where measured: in-product survey at day-14 + Customer.io trigger · NPS-style scale.
2
Retention curve flattening

W4 retention ≥ 25%, W8 ≥ 20% — and the curve flattens (doesn't keep dropping).

A flat tail past W4 is what PMF looks like in cohort analysis. If retention keeps declining at W8 / W12, the product hasn't found its keepers yet — keep iterating.

Where measured: PostHog cohort retention · weekly review · per-acquisition-source breakdown.
3
Organic share rises

≥ 25% of new signups arrive via referral or organic search (not Marco-touched).

If every signup needs Marco's hands-on help to convert, we don't have PMF — we have a high-touch service. Organic share rising month-over-month is the test for product-led pull.

Where measured: attribution tags on signup form · referral codes per community · UTM tracking.
4
Magic-number completion

≥ 35% of activated users complete the 5/1/3 magic number in 14 days.

The leading indicator for everything else. Refit monthly: if real users hit retention at 7/1/4 instead of 5/1/3, we update the definition. The bar is held against industry SaaS activation norms (40–55%).

Where measured: wired to real Zunou events — Daily Debrief opens, Relays sent, calendar OAuth, colleague-join.
5
Paid validation ★ · the most informative signal

First 5–10 paying logos by month 3 · 20–30 by month 6 · stable tier preference.

People paying with their own money is the cleanest PMF signal there is. Even with the beachhead alone, 5–10 paying logos by month 3 is realistic — TAI's 4,000 builders include people who already pay for Cursor, ChatGPT Plus, Linear, Notion. They have wallets and they pay for tools that earn it. By month 6, tier preference (Pro $19 vs Business $39) tells us where willingness-to-pay actually sits — and that data unlocks year-2 ARR planning.

Month 3: 5–10 paid · first paying logos. Month 6: 20–30 paid · tier preference data. Month 12: ARR unlocks · the year-2 number gets locked here.
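Signal #1 (Sean Ellis) is worth pinning down operationally, since the 40% bar drives a predetermined action in panel C. A sketch of the score itself, assuming the standard three-option survey — the response labels and function name are illustrative, not the Customer.io schema:

```python
def sean_ellis_score(responses):
    """Percent of respondents answering 'very disappointed' to
    'How would you feel if you could no longer use Zunou?'."""
    if not responses:
        return 0.0
    very = sum(1 for r in responses if r == "very_disappointed")
    return 100.0 * very / len(responses)

# Illustrative day-14 cohort: 9 of 20 respondents say "very disappointed"
cohort = (["very_disappointed"] * 9
          + ["somewhat_disappointed"] * 8
          + ["not_disappointed"] * 3)
score = sean_ellis_score(cohort)  # 45.0 -> above the 40% PMF bar
```

Only survey *active* users at day 14; scoring the whole signup list dilutes the denominator and makes the bar meaningless.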
B · cost gating + user monitoring · so the community ramp doesn't burn the inference budget

How we keep costs bounded as we open from beta to the community pilot.

Inference cost is the variable that can blow up fastest if a community ramp goes well. Voice sessions at $0.90 each × heavy users × no cap = a problem. The cost-gating framework below is what keeps the per-user economics bounded while we're learning what real users do with the product.

Free-tier limits · soft + hard caps

Three layers of cost protection.

  • Soft cap (80% of monthly quota) — in-app banner: "you've used 80%. Upgrade or wait." Logged for analysis.
  • Hard cap (100%) — feature gates: Voice Agent → Text Agent fallback · Brain Dump disabled until next month · upgrade modal.
  • Per-session cap (anomaly) — a single session > $X cost = auto-throttle to text-only mode + alert.
Open question for product: what are the JP-pilot free-tier numbers? Investor-doc plan implies "limited AI sessions" on Free; we need to pick concrete monthly quotas (e.g., 20 voice min, 200 text msgs, 5 recordings).
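A minimal sketch of the three layers as one gate check run before serving each AI request. Every name here is illustrative (not product API), and the quota in the usage example is the open question's placeholder, not a decided value:

```python
SOFT_CAP = 0.80          # fraction of monthly quota that triggers the banner
PER_SESSION_USD = 2.0    # illustrative single-session anomaly threshold ($X)

def gate_request(user, kind, session_cost_usd=0.0):
    """Return the action to take before serving an AI request.
    user: dict of used/quota counters per kind, e.g. 'voice_min'."""
    if session_cost_usd > PER_SESSION_USD:
        return "throttle_to_text_and_alert"       # per-session anomaly cap
    used, quota = user[f"{kind}_used"], user[f"{kind}_quota"]
    if used >= quota:
        return "hard_cap_fallback"                # feature gates + upgrade modal
    if used >= SOFT_CAP * quota:
        return "serve_with_soft_cap_banner"       # 80% banner, logged
    return "serve"

# e.g. with the placeholder 20 voice-minute quota from the open question:
user = {"voice_min_used": 16, "voice_min_quota": 20}
action = gate_request(user, "voice_min")  # -> "serve_with_soft_cap_banner"
```

The ordering matters: the anomaly check runs first so a runaway session is throttled even for users far under quota.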
Invite gating for the community pilot

Invite codes per community — not open self-serve.

  • Per-community invite codes — TAI-XXX · TINK-XXX · VC-XXX · LW-XXX. Each code = attribution + concierge-ish first 50 onboardings.
  • Invite-only signup for the pilot — open self-serve at zunou.ai stays unchanged; the community variant is gated. The existing Spaces join_code mechanism (SPACES_SPEC.md) is the same plumbing.
  • Limit per code — caps total claims so a leaked code can't flood signup with anonymous users.
Why this matters: we get clean attribution per community + bounded user volume + Spaces-aligned UX — without building a new auth layer.
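The claim-limit logic is small enough to sketch end to end. The in-memory store, the code values, and `claim_invite` are all hypothetical (real plumbing is the existing Spaces join_code path); the point is the two checks — attribution plus leaked-code protection:

```python
# Hypothetical per-community codes; real ones come from the Spaces join_code system.
invite_codes = {
    "TAI-001":  {"community": "TAI",          "claims": 0, "limit": 60},
    "TINK-001": {"community": "AI Tinkerers", "claims": 0, "limit": 60},
}

def claim_invite(code):
    """Validate a code at signup: returns the community for attribution,
    or None if the code is unknown or exhausted (leaked-code protection)."""
    entry = invite_codes.get(code)
    if entry is None or entry["claims"] >= entry["limit"]:
        return None
    entry["claims"] += 1
    return entry["community"]
```

Attribution falls out for free: every signup carries its community, so the per-community cohort dashboards need no extra tagging.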
Per-user anomaly detection

Outliers get flagged the same day, not the same month.

  • Daily cost-per-user job — flag any AU whose 24h cost exceeds 5× the cohort median.
  • Investigate — is this a power-user we should upgrade? Or a bot / scraper / abuse pattern?
  • Action — upgrade nudge for legit users · throttle + investigation for suspect ones.
The single biggest hidden cost in PLG SaaS: one heavy/abuse user can equal 100 normal users' inference cost. Catch within 24h, not 30 days.
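The daily job is one median and one comparison. A sketch, assuming `costs` maps user id → trailing-24h inference cost in ¥ (a real job would read this from the telemetry store; the function name is illustrative):

```python
import statistics

def flag_outliers(costs, multiplier=5.0):
    """Return user ids whose 24h cost exceeds multiplier x the cohort median."""
    if not costs:
        return []
    median = statistics.median(costs.values())
    return [uid for uid, cost in costs.items() if cost > multiplier * median]

# Illustrative day: three normal users and one runaway
costs = {"u1": 20, "u2": 25, "u3": 22, "u4": 300}
flagged = flag_outliers(costs)  # -> ["u4"]  (median 23.5, threshold 117.5)
```

Median, not mean: one runaway user drags the mean up and hides itself; the median stays anchored to normal behavior.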
Daily cost-per-AU dashboard

Tracked daily, reviewed weekly.

  • Cost per AU per day — by tier, by cohort, by community.
  • Voice/text mix per AU — voice burns 45× faster; mix shifts tell us a lot.
  • Cap-hit rate — % of free users hitting soft cap and hard cap, per week.
  • Routing efficiency — once multi-provider lands (year 2), cheapest-routed % by task type.
Wire-up: per-AU inference telemetry is a Phase 0 deliverable (§07).
Why this section is non-negotiable

A community ramp goes well = lots of new free users = inference cost spike. If we're not gating + monitoring by Phase 1, a successful pilot becomes a runaway cost problem. Cost discipline is what turns "we got 500 signups in a week" from a panic into a celebration.

C · signal → action mapping · what we DO when each signal fires

Every signal has a predetermined action. No "let's wait and see."

Signal Threshold Cadence Predetermined action
Sean Ellis "very disappointed" < 30% Day 14 / cohort Onboarding emergency — rework first-5-minutes flow · qualitative interviews with low-scorers in 7 days.
Sean Ellis ★ ≥ 40% Day 14 / cohort Greenlight to invest in channel scaling — content production, PR, +1 community.
W4 retention < 20% Weekly cohort review Pause new-community outreach. Onboarding + magic-number definition refit. Re-check Sean Ellis.
W4 retention ≥ 30% Weekly cohort review Push for paid validation — add upgrade prompt at magic-number completion · trial Pro tier on activated cohort.
Organic signup share < 10% by M3 Monthly Discovery surface broken — ship public utility (§06.7 panel A) or accelerate it.
Organic signup share ≥ 25% by M6 Monthly Product-led pull is forming — invest in viral mechanics (shareable artifacts · referral system · Spaces propagation).
Magic-number completion < 30% × 2 wks Weekly Re-fit the definition — observe what activated users actually do that low-engagement users don't. Update 5/1/3 if needed.
Paid conversion 0 paid logos by M3 Monthly Willingness-to-pay broken — qualitative pricing interviews · test different tiers / messaging · maybe price too high.
Inference ¥ / AU > ¥1,200 × 2 wks Weekly Routing review · tighten free-tier caps · investigate top-quartile users.
Per-user cost outlier > 5× median × 1 day Daily (auto-flagged) Investigation queue — power-user upgrade nudge OR throttle + abuse review same day.

Why predetermined actions matter: when signals fire under pressure (live demo glitches, viral spike, churn cluster), decisions made in advance are 5× faster and 10× better than decisions made in panic. This table is the playbook we follow when stuff happens.

D · user-nudge playbook · when we send what, to whom

Trigger-based nudges that push users toward magic-number completion.

Onboarding doesn't end at "welcome." Magic-number completion requires deliberate nudges across the first 14 days. Each nudge is trigger-based (event-driven), not scheduled (time-only) — so users who already completed step X don't get nudged about it again.

When Trigger Channel Message / action
Day 0 Sign-up complete In-app + email Welcome · "let's connect your calendar — Zunou's value compounds with calendar context" · CTA to OAuth.
Day 1 No calendar yet Push + email Single-action nudge: "30 seconds to connect your calendar — see tomorrow's prep brief." Skip if already connected.
Day 2 No Daily Debrief opened Push (8 AM JST) "Your morning briefing is ready — 3 follow-ups from yesterday's meetings." (Only fires if there's actual content.)
Day 3 0 colleagues invited In-app banner "Zunou's value compounds when 5 colleagues are on it. Invite 1 person — most people start with their COO or designer." Pre-filled invite copy in JP + EN.
Day 5 < 2 AI actions accepted Push "Try Relays — send Zunou to ask Sarah about Q3 timeline. The AI does the asking." First-time tutorial.
Day 7 Cohort midpoint Email digest "Here's what you accomplished this week with Zunou: X meetings prepped, Y action items captured." Show value.
Day 10 Soft-cap approaching In-app banner If > 60% of monthly quota used: "Heavy user? Pro removes limits — $19/mo, AI fully included."
Day 14 Magic-number checkpoint In-app + email + Sean Ellis survey Completed 5/1/3? → "You're activated. Here's the team-tier preview." Not completed? → ask which step blocked them.

Tooling: Customer.io or similar for the email/push side · in-app banners served from a feature-flag service · trigger events emitted from the GraphQL API. Each nudge is A/B-tested monthly — copy that drives magic-number completion stays, copy that doesn't gets cut.
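The "trigger-based, not scheduled" rule is the whole design, so it's worth sketching a few rows of the table as code. The `state` field names are assumptions about what the product events expose, not a real schema:

```python
def due_nudges(day, state):
    """Return nudges due on `day` of the first 14 days, skipping any
    step the user already completed (the table's core rule)."""
    nudges = []
    if day == 1 and not state["calendar_connected"]:
        nudges.append("push: 30 seconds to connect your calendar")
    if day == 2 and state["debrief_opens"] == 0 and state["has_briefing_content"]:
        nudges.append("push 8AM JST: your morning briefing is ready")
    if day == 3 and state["colleagues_invited"] == 0:
        nudges.append("banner: invite 1 colleague")
    if day == 5 and state["ai_actions_accepted"] < 2:
        nudges.append("push: try Relays")
    if day == 10 and state["quota_used_pct"] > 60:
        nudges.append("banner: Pro removes limits")
    return nudges
```

Each branch is a trigger AND a skip condition; a user who connected their calendar on day 0 never sees the day-1 nudge at all.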

E · feature optimization loop · how signal data drives product investment

Signal data → feature decisions → product investment. Quarterly cadence.

Every quarter, the signals dashboard data feeds a product-investment decision: which features to double down on, which to cut, which to reposition. This is how GTM signal becomes product roadmap — not just a marketing report.

High-engagement feature

Used by > 60% of activated users · cited in Sean Ellis "what would you miss most"

Action: double down. Invest in polish, performance, deeper integrations. This is what makes Zunou Zunou — protect it, promote it, sharpen the demo around it.

Mid-engagement feature

Used by 20–60% · positive sentiment but not load-bearing

Action: reposition or simplify. Maybe the feature is in the wrong place in the UX, maybe the messaging is off, maybe it's a power-user feature in a beginner surface. Iterate before investing.

Low-engagement feature

Used by < 20% · zero Sean-Ellis mentions · maintenance burden

Action: cut or sunset. Every feature in Zunou competes for cognitive space; removing dead weight makes everything else clearer. Brave product teams cut more than they ship.

Quarterly cadence: month-3 / month-6 / month-9 / month-12 product review. Marco brings the GTM signal data; product team brings the eng cost / opportunity. Decisions logged + reviewed against actuals at the next cycle.
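The three tiers sort mechanically from a usage-share number, which keeps the quarterly review about the decision rather than the bucketing. A sketch with the thresholds from the tiers above (feature names in the usage example are illustrative):

```python
def classify_feature(usage_share):
    """Map a feature's activated-user usage share to its quarterly action.
    usage_share is a fraction of activated users, 0.0-1.0."""
    if usage_share > 0.60:
        return "double down"          # high engagement: polish + promote
    if usage_share >= 0.20:
        return "reposition or simplify"  # mid: iterate before investing
    return "cut or sunset"            # low: maintenance burden, remove weight

# Illustrative quarterly pass over three hypothetical features
review = {name: classify_feature(share) for name, share in
          {"daily_debrief": 0.72, "relays": 0.35, "legacy_widget": 0.08}.items()}
```

The usage share alone isn't the whole input — Sean Ellis "what would you miss most" mentions break ties — but the first sort should be this cheap.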

Why this engine matters for the bigger story
"We don't have ARR yet. We'll have qualitative signal in 30 days,
cohort retention in 60, paid validation in 90.
That's what earns the right to dream big."

Investors invest in evidence. This operating engine is how we produce it — without faking a forecast off a 6,500-population beachhead. The ARR ambition is real and large; the path to lock the number is the work below, not the math above.

10 What could break this

88% of AI agent pilots fail to graduate to production.

That's the Gartner finding for 2026. Most enterprise buyers have been burned, or seen peers burned. Our discipline is the response.

Source citations on every AI output. Addresses Gartner's #1 blocker — evaluation gaps (64% of failed pilots). Every action Zunou takes is anchored to the meeting / message / document it came from.

Human-in-the-loop on every external action. Addresses governance friction (57% of failed pilots). AI drafts; humans send. We never auto-act on someone's behalf.

Refuse to ship anything below 65% acceptance in beta. Addresses model reliability (51% of failed pilots). The metric is gated — if a feature can't beat the bar, it doesn't reach launch.

11 The moment we'll know

A pre-committed stop-or-go decision in ~6 months.

No 18-month death march. Six PMF criteria; three possible outcomes; one explicit rule we agree to in advance.

≥ 4 of 6
Fuel

Open communities #5–10. Raise sponsored-seat caps. Push toward the KGI.

3 of 6
Extend 60 days

Then re-decide. Don't force it; don't kill it prematurely either.

≤ 2 of 6
Pivot

Most likely candidates: Ringi-first vertical, or events-only product.

The six criteria: NSM ≥ 200 weekly active users past the magic number · one launch community at 25%+ density · 35%+ activation in 14 days · ≥ 3 paid logos · inference cost ≤ ¥600 / AU / mo · Member-of-N ≥ 15%.
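The pre-committed rule is small enough to write down exactly, which is the point of agreeing to it in advance. A sketch — the criterion labels are illustrative; the inputs are whatever the dashboard's six criteria resolve to at review time:

```python
def stage_gate(criteria):
    """criteria: dict mapping each of the six PMF criteria to True/False.
    Returns the single pre-committed decision."""
    hits = sum(criteria.values())
    if hits >= 4:
        return "fuel"
    if hits == 3:
        return "extend 60 days"
    return "pivot"

# Illustrative review: 4 of 6 green
decision = stage_gate({
    "nsm_ge_200": True, "density_ge_25pct": True, "activation_ge_35pct": True,
    "paid_logos_ge_3": True, "inference_le_600": False, "member_of_n_ge_15pct": False,
})  # -> "fuel"
```

No weighting, no partial credit: a criterion is green or it isn't, so the February review is a count, not a debate.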

Before we commit · how this fails

Five ways I might be wrong — and the early signal that tells us.

Strong proposals say what would invalidate them. Below: the five biggest bets in this proposal, the failure mode if each one is wrong, and the first metric that would tell us inside Phase 1 (months 1–6). The point is to be specific about what would change our minds — not to list every theoretical risk.

1
The community thesis

Organizers say yes to the venue sponsorship — but the partnership doesn't move activation.

We sponsor TAI's venue, get to demo at the meetup, attendees install Nova — and most never come back. The community gives access; the product doesn't earn retention.

Early signal

Week 4 cohort retention < 25% (industry SaaS baseline ~35–40%). Or: pilot users hit the 14-day mark with < 2 of 3 magic-number criteria met.

Worth discussing now

Is the magic-number bar the right activation gate? §11.5 question for product.
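The week-4 tripwire in this card is a plain cohort ratio. A minimal sketch, assuming a per-user `activeInWeek4` flag (the field name is an assumption, not the real telemetry schema):

```javascript
// Week-4 cohort retention: share of a signup cohort still active in
// week 4. Pre-mortem #1 trips if this lands below 0.25.
function week4Retention(cohort) {
  if (cohort.length === 0) return 0;
  const retained = cohort.filter((u) => u.activeInWeek4).length;
  return retained / cohort.length;
}

const retentionTripwire = (cohort) => week4Retention(cohort) < 0.25;
```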

2
The "architecture wins" thesis

TAI builders try Zunou but don't see why it beats raw Claude / GPT.

The MIT-aligned architecture (§00) is real, but builder-ICP users don't feel it. They have Claude open in another tab and don't see the workflow integration value because they've already built personal scaffolding.

Early signal

< 15% of pilot users open Daily Debrief twice in their first 7 days. Or: feedback themes cluster around "I could do this in Claude already."

Worth discussing now

Is the TAI builder-ICP the right pilot audience, or should we lead with operators (Tokyo Founders Group, Venture Café) who have less personal-AI scaffolding?

3
Parallel tracks don't compound

Beachhead PMF lands, but the post-beachhead stack — more communities, Spaces events, content/PR — doesn't multiply on top of it.

Year-1 ambition assumes layered tracks compound after the beachhead. If each new track produces linear (not multiplicative) acquisition — because the audiences don't overlap, or Spaces events don't convert to retained users, or content/PR doesn't drive organic signups — the year-1 ARR comes in well below the ambition.

Early signal

Member-of-N stays below 10% through Phase 2 (target ≥15% by stage-gate). Or: Event-Spaces install-to-retain conversion below 5% across 3+ events. Or: organic-signup share stuck below 15% by month 6.

Worth discussing now

Which parallel tracks do we lean into first if compounding signal is weak? Spaces-led events vs more communities vs content/PR? The order matters — pre-decide it now so we're not improvising mid-Phase 2.

4
Phase 0 capacity

The six Phase 0 items take 12 weeks, not 4–6.

Eng absorption for MCP integrations + JP landing + events feed + telemetry + isolation testing is heavier than the proposal assumes. The TAI demo slot Marco has access to slips to Q4 2026, by which point IVS Kyoto has passed and the community partnerships have cooled.

Early signal

30 days post-review: fewer than 3 of 6 Phase 0 items have an owner with a committed timeline.

Worth discussing now

Which Phase 0 items are actually required for the TAI pilot, vs which can ship after? §11.5 question for eng.

5
Exogenous · the incumbent walled gardens crack open

Slack AI / Microsoft Copilot / Google Workspace ship genuine cross-app reasoning — not just within their silo.

The §03 thesis assumes incumbents stay bounded to their own walled gardens (and the §04 capability matrix is built around this gap). If Slack AI ships a real Notion / Linear / calendar integration, or Copilot escapes the M365 boundary, the differentiation collapses. This is the one risk Marco can't directly mitigate — only respond to.

Early signal

Salesforce / Microsoft / Google announce cross-app agents (vs. cross-app search). Track competitor product blogs + WWDC / Build / Ignite cycles.

Worth discussing now

What's the response plan if Slack AI ships cross-app in Q3 2026? Pivot to JP-specific wedge (Keigo · Ringi) faster? Lean harder on Spaces?

How to use this section

If you finish reading these five cards and have a sixth failure mode I missed, that's the most valuable comment you can leave. The point of writing the pre-mortem is to make our blind spots visible before we commit Phase 0 capacity to a plan that already has them.

If we go this direction · questions per team

What we'd want to talk through with each side of the room.

Not a commitment list. The point of this section is to surface the questions each team should be in the room for, so when we align on direction we already know which conversations to schedule next.

Conversation with · Open questions worth their input · Why their answer shapes the proposal
Engineering Phase 0 includes six candidate items: MCP integrations (Slack · Notion · Google Workspace), magic-number telemetry wired to real Zunou events (Daily Debrief open, Relays sent, calendar OAuth, colleague-join), JP landing on zunou.anysigma.com, events-feed v0 (Connpass + Doorkeeper APIs), per-AU inference telemetry, multi-workspace isolation under 50 concurrent. Which of these are realistic in the next 4–6 weeks, and which should we descope or sequence later? The Phase 0 list is what I'd hope for; eng owns what's actually possible. If 3 of 6 are realistic we still run the pilot, just with different instrumentation.
Product The magic number is currently "5 colleagues + 1 calendar + 3 AI actions accepted in 14 days." Generic "AI actions" should probably be replaced with real Zunou activation events. Are Daily Debrief opens · Relays sent · Brain Dumps the right substitutions, or are there better activation surfaces I'm not seeing? Also: free-tier limits for the JP pilot, and which 3 capabilities the demo leads with. Product owns what Zunou is actually built to make sticky. If the magic-number definition is wrong, the stage-gate criterion (§11) measures the wrong thing.
Design JP landing on zunou.anysigma.com needs to feel like Zunou, not a Marco side-project. Event-Spaces QR onboarding flow needs to nail the first-time-tap to active-session in < 5 min on iOS + Android. What's the realistic design capacity for these two surfaces in the next 3–4 weeks? Onboarding quality determines activation rate. The forward-build's 45% activation lives or dies on the first 5 minutes of the first session.
Founders / leadership The year-1 ARR ambition is ¥30M+ — but we explicitly defer locking the specific number until month-3 paid signal calibrates real willingness-to-pay. Are we aligned on that cadence (§12 #6)? Plus: are there ~2 warm intros into P3 / P4 communities you can broker, and what's the team's appetite for the ¥11–18M operational envelope methodology (§09 panel D, not the total — compensation lines are a separate conversation)? These three are the ones only the founders can answer. Everything else flows from them.
JP advisors I have direct access to TAI (Ilya) and a live conversation in flight with IVS; warm intros likely to TFG and AI Tinkerers. I haven't yet had the validation calls — those happen in the next 4 weeks (questions per call in §06.4). Who else should I be talking to before this hardens into a plan, and is the hybrid sponsorship model (flat fee + per-user-join bonus) something you've seen land with JP community organizers? The §06.4 thesis gets grounded in real organizer conversations once these calls happen. Adding 1–2 more named voices makes it much harder to dismiss as "Marco's theory."
Marco (self) Once direction is aligned: run the pilot end-to-end, weekly metrics dashboard, organizer-relationship cadence, monthly magic-number refit, monthly inference-cost refit, §12 decision tracking, stage-gate package preparation. Anything that doesn't have a named owner above is mine by default.
How this section is meant to be used

This isn't a sign-up sheet. It's a list of conversations I'd want to have if the team is broadly aligned on the shape (TL;DR + §06). If the shape is wrong, none of these conversations are worth scheduling yet — and that's also a useful answer.

World-class GTM · the consolidated ask

What I'm asking for · why · battle-tested or build · benefit · risk · timing.

A GTM proposal lives or dies on how clearly it states the trade — what's asked, what's already de-risked vs needs building, what we get if it works, what we lose if it doesn't. Each row below is a single concrete ask, structured so the team can react in one pass.

The ask · Number · Status · Why · Benefit if granted · Risk if not granted · Timing
Phase 0 engineering capacity
MCP integrations · magic-number telemetry · JP landing · events-feed v0 · isolation testing
4–6 wks Plausible Pilot demo can't be scheduled until the magic-number is instrumented against real Zunou events (Daily Debrief, Relays, calendar OAuth). TAI demo scheduled by July 2026 · clean activation telemetry from day 1 · the §11 stage-gate decision becomes evaluable. Demo slips to Q4; IVS Kyoto window closed; warm community momentum cools. Soon · 30 days
Design capacity
JP landing on zunou.anysigma.com · Event-Spaces QR onboarding polish
2–3 wks Partial Forward-build assumes 45% activation — entirely depends on first-5-minutes UX quality. Keigo-aware copy + sub-5min onboarding is non-trivial. Activation rate at or above 45% (the single biggest ARR lever). Each +5% activation = +¥1.3M exit-month ARR. Activation drops to 30–35%; year-1 run-rate floor lowers to ~¥1.5M; year-2 trajectory weakens. Soon · 30 days
Operational budget envelope
Inference + sponsored seats + hybrid venue sponsorship (flat + per-user-join) + ops/content/legal · compensation excluded (separate conversation)
¥12.5–21M Methodology Per-user inference math is honest; venue model pays for outcomes (downloads) not seats — shares risk with organizer. Ops sized to a community-led launch. Ratify the shape, not the total. ~6,500 individuals reachable in the beachhead; 4-community partnership program active for 6+ months; sufficient cohort scale to produce PMF signals (Sean Ellis, retention, paid validation). Single-channel launch (no venue sponsorship); activation depends on viral mechanics alone; cap on reach. This review
2 warm intros from leadership
P3 / P4 communities Marco doesn't have direct access to (e.g. Headline portfolio, Coral Capital portfolio, Genesia)
2 To request P3 cold-to-warm takes 2–3 months unassisted. Founder/leader warm intro compresses to days. Phase 3 "double" expansion (§06 stage 3) reaches a 2nd + 3rd community materially faster · sympathetic-detonation mechanic gets to test sooner. P3 stalls for months; the 13-community attack list shrinks to 4 in practice; sympathetic-detonation thesis remains unvalidated. This review + 30 days
APPI compliance posture sign-off
Plus plan for JP legal entity decision in year 2
Partial JP users will ask about APPI even in pilot. Account-deletion UI is built; data-export is the named gap (§13). Need a clear stance. JP users can be onboarded with confidence; enterprise conversations later in year 2 don't have a blocker; press releases hold up to legal scrutiny. JP enterprise sales blocked at year 2; community trust eroded if asked publicly. 60 days
5 organizer validation calls
TAI / IVS / AI Tinkerers / Venture Café / Tokyo Founders Group · question list in §06.4
5 All to schedule Marco has relationships (Ilya at TAI direct · IVS meeting this week/next · TFG warm intro likely) but no validation calls have happened yet. All 5 need to be scheduled + run. §06.4 thesis gets grounded in real organizer voices, hybrid sponsorship model gets pressure-tested before scaling spend, pilot launch has named co-conspirators. The community pain panel + sponsorship math read as Marco's hypothesis until validated. Next 4 weeks
Marketing channel investments
JP-language content production · PR Times release calendar · founder demos · community digests
¥2–4M To build Phase 1 launch needs PR Times release; Phase 2 needs content (case studies, JP-language landing copy, community digests). All §13 gaps. Discovery-surface traffic (§06.7 panel A) gets actual content to drive; PR amplifies launch beyond demo rooms; community digests sustain engagement between events. Awareness % drops from 60% → 35–40% (only people in demo rooms); funnel input shrinks ~⅓; year-1 ARR floor lowers. Post-stage-gate
GTM dashboard tooling
PostHog or Mixpanel · weekly cohort views · attribution v0 · automated stage-gate report
~$400/mo To set up Without instrumented metrics, the proposal runs on vibes. The stage-gate decision (§11) requires evaluating six measurable criteria. Stage-gate package automatically generated · cohort retention W1/W2/W4 visible · trigger-based escalation when metrics drift. Stage-gate becomes subjective; "the platform isn't ready" becomes a vibe, not a specific node. Phase 0
How to read the Status column
  • Battle-tested · built: shipped in production, code reference in §13.
  • Partial · plausible: foundation exists, completeness or commitment gap named.
  • Open · to build: work to create or commit; either GTM-side (Marco) or product investment.
What this section is NOT

Not a contract. Not a sign-off form. It's the consolidated trade the proposal asks the team to evaluate. "Soon · to discuss" is fine for any timing column — the point is naming the trade-offs explicitly so we can have one focused conversation instead of ten scattered ones.

If we say yes to all 8 asks: the math in §09 panel A is live, the trajectory in §09 holds, the pre-mortem failure modes (§11.4) have explicit mitigations, and the §11 stage-gate decision in ~6 months has the data to be made cleanly. If we say yes to fewer than 4: we should re-shape the proposal around what we can commit to — not pretend the full version is still on the table.

12 Where do you stand?

12 open questions I'd like your take on.

The vote chips are just a quick way to register a position — leave anything blank where you don't have a clear take yet; we'll talk through it. Pick 'concern' or 'object' if a row is mis-framed; even one explicit disagreement is more useful than ten silent shrugs. Five decisions are already settled and shown for context.

How votes are tracked

Every vote is written server-side (Cloudflare KV, keyed by your authenticated Cloudflare-Access email) — not just stored in your browser. Marco can review the full picture at /admin/votes (admin-only, allowlisted).

The admin view shows two things: (1) per-decision tally — how many people are on-board / concern / object on each row; (2) per-user matrix — who voted what across all 12 questions. Change your vote anytime by re-clicking; the latest value wins and the timestamp is recorded.
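The "latest value wins" semantics above can be modeled with a store keyed by decision + voter. This is an in-memory sketch of the described behavior (the real implementation is a Cloudflare Worker writing to KV; function and field names here are illustrative):

```javascript
// Model of the vote store: one record per (decision, voter email),
// latest click wins, timestamp recorded. Cloudflare KV writes are
// last-write-wins per key, which is the same shape.
const votes = new Map();

function castVote(decisionId, email, value, now = Date.now()) {
  // Re-clicking overwrites the prior record for this voter + decision.
  votes.set(`${decisionId}:${email}`, { value, votedAt: now });
}

function tally(decisionId) {
  // Per-decision tally for the /admin/votes view.
  const counts = { "on-board": 0, concern: 0, object: 0 };
  for (const [key, rec] of votes) {
    if (key.startsWith(`${decisionId}:`)) counts[rec.value] += 1;
  }
  return counts;
}
```

Keying on the authenticated email (rather than a browser token) is what makes the per-user matrix possible.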

Settled · for context, no vote needed
01
Geographic focus — Japan-led for 12 months Ratified

Home market; second geography sequenced post-PMF, not preemptively.

02
NSM = WAU past magic number, in a partner community Ratified

Knowing it grows slowly at first.

03
Calendar as gate Ratified

Accept that users without calendar see a deliberately broken-feeling product.

04
Stage-gate rule — ~6 months after Phase 1 launch Ratified

≥4 of 6 PMF criteria → fuel; ≤2 → pivot; 3 → extend 60 days.

05
Retire HN / PH / IH and cold-LinkedIn Ratified

Replace with warm-intro + community + earned media.

Open · strategy & launch
06
KGI — year-1 ARR ambition is large; lock the specific number after month-3 paid validation

Year-1 ambition: ¥30M+ ARR is plausible — but only because we don't stop at the beachhead. Beachhead = TAI + a few communities = 1–2 months of PMF learning. The real year-1 number comes from layering parallel tracks on top: more communities, Spaces-powered large events (IVS-class), content/PR, partnerships. We deliberately don't lock a specific year-1 ARR number until month-3 paid signal calibrates real willingness-to-pay × tier preference. Question to discuss: do we agree the year-1 framing is 'PMF in months 1–3, paid signal by 3–6, ARR number locks after that' — with ¥30M+ as the ambition we're working toward?

default = ship
07
Magic number (v0) — 5 / 1 / 3 in 14 days

Working hypothesis. Derivation in §05. Refit monthly against real cohort data once pilot starts. Pushback now if you think the inputs are off.

default = ship
08
Launch shape — pilot-first vs synchronized 4-community?

Current recommendation in §06: pilot one community deep first (TAI), learn, then double. Original §05 plan was 4-at-once synchronized. Pilot-first means slower but lower risk. Vote on which approach.

default = ship
09
Sponsored-seat cap + free-trial duration

Working estimate: 50 members × ¥600 × N months × 4 communities. Candidate Ns: 3 months vs 6 months. Question to test with community owners. Bounded liability either way — not 'free forever'.

default = ship
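The bounded-liability point in #09 is simple multiplication. A sketch with the working numbers from the card (50 members, ¥600/AU/mo, 4 communities; N is the open question):

```javascript
// Sponsored-seat liability cap: members × ¥/AU/mo × months × communities.
// Working estimate from §12 #09 — not 'free forever', a fixed ceiling.
const seatCapYen = (months) => 50 * 600 * months * 4;

// N = 3 months caps exposure at ¥360,000; N = 6 at ¥720,000.
```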
10
Public events feed — before / with / after waitlist?

Connpass + Doorkeeper APIs + Lu.ma at launch. The discovery question (§06.7): do we put this in front of unsigned-in users to drive top-of-funnel?

default = ship
11
Budget approach — methodology over total

Per-user inference math + sponsored-seat caps + hybrid venue sponsorship (flat fee + per-user-join bonus, capped ¥400k/event) + ops/content/legal placeholders. §09 panel D shows ¥12.5–21M operational envelope (compensation lines out of scope here — separate conversation). Ratify the methodology + sponsorship-model shape, not the absolute total.

default = ship
12
Adopt Masaru's positioning angle as the lead message

“Strategy in Notion. Tasks in Asana. Decisions in Slack. Good luck.” Effective for builders / operators; less so for enterprise.

default = ship
13
Community-discovery direction (A / B / C)

A = passive emergence (default); B = lightweight in-product recs (Phase 3+ test); C = community discovery as a core surface.

default = ship
Open · operational gaps
14
Pricing v0 — what tier shape + price point?

Three patterns spec'd in §09 panel B (Notion-style / Linear-style / usage-metered). Validate with community owners + early users before locking.

default = ship
15
Support model v0 — Marco + Malek covering JP/EN business hours

In-app widget + Slack channel. First CSM hire post-day-180 if Phase 5a fuel.

default = ship
16
Attribution v0 — last-touch for cohort analysis

Revisit at day 90 once data flows.

default = ship
17
Loop 6 commitment — invest product eng in meeting-prep viral loop

Loom rode this exact loop to 25M users — every shared video is an ad.

default = ship

How voting works: click any chip below an open decision. Your vote is saved to your authenticated session — Cloudflare Access already knows who you are, so no separate login. Marco sees the aggregated results at /admin/votes. You can change your vote any time; the most recent click wins. Hover the chip after clicking to confirm the saved state.

13 Audit · what's built vs what isn't

What's still open — Zunou is launched, this is what's left for the community push.

Zunou is already in production with real users — Nova Web PWA, Dashboard, and Scout all live; iOS Xcode project + Android bundle + EAS configured. This isn't a pre-launch audit; it's a 'now what' audit: what's left for this GTM push. Of 32 items audited against the repo: 5 are shipped, 7 are partial, 20 are open. Status badges are verified against code, not aspirational.

Audit finding · verified 2026-05-11

Of 32 items audited against the repo: 5 shipped, 7 partial, 20 open. Product is launched — most of what's left is GTM-side.

Built · 5 · Shipped in production · code reference cited
Partial · 7 · Foundation exists; named completeness gap
Open · 20 · Needs to be created · mostly GTM / measurement / risk

Status reconciliation: see §00 "Production state" callout for the verified per-feature audit. This section enumerates what's left to build / commission. Numbers below reconcile to 5 / 7 / 20 — if you spot a mismatch it's an error, please flag it.

Strategic · 5 items
Built 0 · Partial 1 · Open 4
Pricing v0 — tier shape + self-serve upgrade
Investor doc defines Free / Pro $19 / Business $39 / Enterprise (Mar 2026). JP-launch tiering + free-tier limits not yet locked — §12 #14.
Customer support model (JP / EN coverage)
§12 #15. Who answers, on what channel, what SLA.
Crisis / outage playbook for launch week
Rollback plan if TAI demo glitches live; comms script if AI provider has an incident on launch day.
Hiring plan tied to day-180 stage-gate
First CSM hire post-fuel decision. Eng / product / design hiring conditional on Phase 5a.
Advisor / investor update cadence
Monthly written + quarterly call. Standardised metric format.
Product · 11 items
Built 5 · Partial 5 · Open 1
Web PWA on JP-region CDN
Nova PWA shipped to nova.zunou.ai on AWS S3 + CloudFront, region ap-northeast-1 (Tokyo). See services/nova/docs/DEPLOY_PWA.md + services/dashboard/src/sw.ts.
Nova native iOS / Android distribution
Xcode project (services/nova/ios/Zunou.xcworkspace) + ZunouWidget + eas.json all in repo · iOS bundle ai.zunou.scoutapp (inherits Scout's App Store identity) · Android ai.zunou.nova. App Store / Play Store distribution status to confirm.
Nova onboarding flow
Multi-phase onboarding shipped: Welcome · Calendar · CreateOrg · ChooseTabs · MeetAgent · Complete. See services/nova/src/onboarding/ + services/nova/app/onboarding.tsx.
Pusher real-time on Nova
pusher-js in services/nova/package.json · channel-subscribe integration depth + push-notification triggers in flight per NOVA_FINISH_CHECKLIST.md.
Spaces (Event / Community / Managed)
Comprehensive spec drafted March 2026 — services/nova/docs/SPACES_SPEC.md. Architecture: org with type + space_config JSON + join_code. Deployment status to confirm — critical for the Zoom-playbook viral mechanic (§06).
Account deletion + data export (APPI)
Account deletion UI shipped — services/nova/src/components/panels/DeleteAccountPanelContent.tsx. APPI-grade data export endpoint not yet implemented; required before any JP enterprise customer signs.
Team admin / permissions UX
Dashboard ✓ — role-based access, org-user invites, Pulse-member invites, Auth0 RBAC backend. Nova-side admin UI surface to confirm. See services/dashboard/src/types/permissions.ts.
Notification preferences (Dashboard) + hub
Full services/notification-hub/ with preferences.mjs · Pusher Beams + Expo Push + Web Push handlers · Dashboard NotificationsTab with per-Pulse settings. Nova-side surface follows once Pusher integration completes.
Voice + Text Agents + 136+ tools
OpenAI Realtime API (voice $0.90/sess) + Responses API (text $0.02/sess) · services/ai-proxy/tools.mjs at 7,000+ lines · server-side prompts + behavioral rules · Tool-based selective retrieval architecture.
Accessibility (WCAG AA)
Aria-label coverage on 16+ components in Nova + Dashboard, but no formal WCAG AA audit run yet. Needed before enterprise procurement (year 2).
Error recovery during meetings
Dedicated services/error-assistant/ service + services/meet-bot/ with status tracking (waiting → in meeting → recording → finished). Meeting failure modes recoverable.
GTM artifacts · 8 items
Built 0 · Partial 1 · Open 7
Onboarding email cadence + copy
Day-0 / day-1 / day-3 / day-7 / day-14 magic-number nudge sequence. JP + EN copies. Customer.io or similar.
Founder demo script for AI Tinkerers / TAI
~12-minute live demo: 3 capabilities lead (per §11.5 question for product). Ilya can review before the meetup slot.
Community-owner pitch deck
Separate from this proposal. The "venue sponsorship + co-marketed event" pitch in §06.4 needs a 5-slide deck for organizer conversations.
Investor pitch deck
Internal services/investor-deck/INVESTOR_DOCUMENT.md (724 lines, Mar 2026) exists as source material. External-facing pitch deck (slide format, narrative-tightened) is a separate deliverable.
Partnership MoU template
For community partnerships, venue partnerships, JP advisor agreements. Light legal review.
PR Times release calendar
Phase 1 launch + Phase 2 first-paying-logos + Phase 3 stage-gate-outcome. Bilingual.
JP press relationships
Nikkei XTech / TechCrunch JP / The Bridge / Bridge Tokyo. Warm intros via advisors.
Customer reference library
Phase 2 deliverable. Post-launch case studies, video testimonials, written quotes.
Measurement + risk · 8 items
Built 0 · Partial 0 · Open 8
Cohort analysis cadence
Weekly review every Monday; monthly review on the first Friday. PostHog or Mixpanel. §06.7 panel D names the dashboard as a Phase 0 deliverable.
Attribution model — last-touch v0
§12 #16. Revisit at day 90 once cohort data exists.
Qualitative feedback loop post-launch
10-min user interviews monthly. NPS pulse weekly via in-product widget.
Churn / exit interview process
Anyone who cancels gets a 5-min call slot. Synthesized findings monthly.
Single-founder dependency contingency
If Marco gets sick, run-the-pilot runbook + warm-handoff to Malek or named Zunou owner.
Competitive-event response playbook
If Slack AI / Notion AI / Copilot ships cross-app reasoning during Phase 1 (pre-mortem #5). What we say publicly + what we do strategically.
Launch-community-cancellation backup
If TAI demo gets cancelled, fallback to AI Tinkerers Ginza or Venture Café within 2 weeks.
AI-provider failover (Anthropic / OpenAI / Gemini)
Honest state: ai-proxy is currently single-provider (OpenAI Realtime + Responses APIs). "Model-agnostic by design" is positioning, not implementation. Multi-provider routing is real engineering work to do post-stage-gate. See services/ai-proxy/index.mjs.

Legend: built (code reference cited) · foundation exists, completeness gap named · open · needs creation. The 17 decisions in §12 resolve the highest-priority OPEN items. The remainder are Phase 0 / Phase 1 work that needs owners + dates — not blockers, but tracked.

14 How to engage

React on the 17. Push back specifically. Default = ship.

We don't need consensus — we need explicit objections so we can address them or proceed with the disagreement noted.

If you're on board

A single thumbs-up reply on the 17 is enough. We move forward and lock the Phase 0 owners + dates.

If you have a concern

Name the specific decision number + the alternative you'd ratify instead. We'll add it to the agenda for the live discussion.

Three ways to respond

Reply to Marco

Email or Slack DM. Decision number + ✅ / ⚠️ / ❌. One line each is fine.

Team thread

Slack thread on the launch channel. Everyone sees the reactions at once — fastest path to alignment.

Live review slot

30-min decision meeting once we have written reactions. Calendar invite from Marco when scheduled.

"If you finish reading this and have no objections, we haven't written it well enough."

15 Define the terms

Glossary.

The international team doesn't all share the same vocabulary. Every acronym and strategic term used in this proposal — defined in one place.

Metrics & strategy

KGI 重要目標達成指標
Key Goal Indicator (重要目標達成指標) is the top-level outcome the company commits to. In Japanese strategy practice it sits above NSM and KPIs. Ours: ¥30M+ ARR within 12 months of launch (the ambition; the specific number locks after month-3 paid validation, per §12 #6).
NSM
North Star Metric — the single product metric that most closely tracks the value users get. Connects KGI (lagging) to KPIs (leading). Ours: weekly active users past the magic number, in a partner community.
KPI
Key Performance Indicator — the weekly-tracked leading metrics (activation rate, time-to-calendar-connect, W4 retention, etc.) that roll up into NSM. See §11 (Stage-gate) for our specific six.
PMF
Product-Market Fit — the point where customers pull the product through the funnel instead of being pushed. Often invisible from inside but unmistakable from outside.
ARR
Annual Recurring Revenue — the yearly value of subscription contracts. Standard B2B SaaS health metric. ¥30M ARR ≈ US$200k.
MoM
Month-over-Month growth rate — (this month − last month) / last month. We commit to ≥20% MoM in the second half of the launch window.
ICP
Ideal Customer Profile — the company / user we're built for and who we deliberately target. Ours: founder-led English-tolerant Tokyo scale-ups.
WAU
Weekly Active Users — users who engaged within the rolling 7-day window. We use this for NSM rather than DAU (daily) because executive use patterns are weekly, not daily.
DAU
Daily Active Users — meaningful action within 24h. DAU/WAU ratio is a 'stickiness' indicator.
AU
Active User — a single engaged user. Our inference budget is expressed per active user per month (¥600/AU/mo on free tier).

Growth & product

Magic number
The threshold (defined per product) above which retention jumps. Slack: 2,000 messages. Facebook: 7 friends in 10 days. Ours (v0 hypothesis): 5 colleagues from same community + 1 calendar + 3 AI actions accepted, in 14 days.
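The v0 hypothesis is a conjunction of three thresholds, which makes it easy to instrument. A minimal sketch, assuming we evaluate each user's day-14 snapshot (field names are assumptions, not the real telemetry schema):

```javascript
// Magic number v0: 5 colleagues from the same community + 1 connected
// calendar + 3 accepted AI actions, evaluated on the day-14 snapshot.
function pastMagicNumber(day14Snapshot) {
  return (
    day14Snapshot.sameCommunityColleagues >= 5 &&
    day14Snapshot.calendarsConnected >= 1 &&
    day14Snapshot.aiActionsAccepted >= 3
  );
}
```

The monthly refit (§12 #07) would adjust the three thresholds against real cohort retention, not the shape of the check.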
Density
Zunou-specific term — the condition where enough colleagues in the same community / team are using the product that its AI has the cross-context it needs to be useful. The product IS the density.
PLG
Product-Led Growth — go-to-market where the product (free tier + self-serve onboarding) does the selling. Notion, Linear, Figma, Slack all archetypal PLG.
CLG
Community-Led Growth — distribution model where the user community generates referrals, content, social proof. Often paired with PLG.
Member-of-N
A Zunou-specific KPI: the % of weekly active users who are members of 2+ partner communities. Tracks the cross-community percolation effect from the sympathetic-detonation launch.
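As a computation it's a one-line ratio over the WAU set. A sketch, assuming each active user carries a partner-community count (field name is an assumption):

```javascript
// Member-of-N: share of weekly active users belonging to 2+ partner
// communities. Stage-gate target (§11): ≥ 0.15.
function memberOfN(weeklyActiveUsers) {
  if (weeklyActiveUsers.length === 0) return 0;
  const multi = weeklyActiveUsers.filter((u) => u.partnerCommunities >= 2).length;
  return multi / weeklyActiveUsers.length;
}
```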

Physics & network theory

Sympathetic detonation
Physics term used for our launch mechanic. When multiple communities with overlapping members are launched in the same week, attendance at one triggers attendance at another. Adjacent fuses light each other.
Percolation threshold
Statistical physics concept describing when adding edges to a graph causes the structure to flip from sparse disconnected clusters into one giant connected component. Our 4-community launch is engineered to cross this threshold inside Tokyo's English-speaking founder graph.

AI & technical

Tool-based selective retrieval
Tool-based selective retrieval is Zunou's core architectural choice: the agent decides what to look up via 136+ production tools (calendar / tasks / meetings / notes / chats / insights / relays / contacts / etc.) and retrieves only what's relevant. MIT (Dec 2025) research showed context-stuffing collapses to ~0.04% accuracy on relational reasoning at scale. Zunou independently developed an architecture aligned with MIT's Recursive Language Model research before it was published.
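The contrast with context-stuffing can be made concrete with a toy dispatcher. This is purely illustrative: two tools standing in for the 136+, with names that loosely echo the surfaces listed above, not Zunou's actual registry:

```javascript
// Toy selective retrieval: the agent picks one narrow tool per query
// and pulls only matching records, instead of concatenating every
// store into the prompt.
const stores = { calendar: [], tasks: [], notes: [] };

const tools = {
  search_tasks: (q) => stores.tasks.filter((t) => t.title.includes(q)),
  todays_events: (date) => stores.calendar.filter((e) => e.date === date),
};

// Context-stuffing baseline: everything, every time.
const stuffEverything = () => Object.values(stores).flat();

// Selective path: token cost scales with the answer, not the workspace.
const retrieve = (toolName, arg) => tools[toolName](arg);
```

The ~100× token reduction claim is exactly this difference at workspace scale: `retrieve` returns a handful of records where `stuffEverything` returns the whole corpus.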
Realtime API
OpenAI Realtime API (late 2024) made sub-second conversational AI possible. Zunou's Voice Agent is built on it: VAD (voice activity detection), interruption handling, 18 languages with dialect support, 8 voice options, speed/style control, camera integration mid-conversation. Voice session cost ~$0.90 vs text ~$0.02 — roughly 45× more expensive, which is why the economics favor keeping most usage in Text Agent.
MCP
Model Context Protocol — open standard introduced by Anthropic in Nov 2024, adopted by OpenAI / Microsoft / Google in 2025–26, donated to the Linux Foundation in Dec 2025. 10,000+ public servers exist. Zunou is MCP-native.
HITL
Human-in-the-Loop — AI proposes / drafts, humans confirm before any externally-visible action (sending email, posting to Slack, creating events). Our discipline against the 88% agent-pilot failure rate.
LLM
Large Language Model — the family of AI systems (Claude / GPT / Gemini / Llama / Mistral / etc.) that Zunou routes through MCP-mediated context for each task.

Zunou product surfaces & features

Nova
Nova is Zunou's next-generation mobile client: Voice Agent · Meeting Intelligence · Relays · cross-org connections · customizable home dashboard (12+ widgets, 3 hero styles, 5 templates). Designed as a 'personal AI command center' with an always-present AI bubble inspired by iOS AssistiveTouch.
Dashboard
Zunou Dashboard is the desktop web client — Vitals home, 6-view task management (List · Table · Calendar · Kanban · Gantt · Timeline), full meeting replay with transcripts + sentiment + talk time, rich-text Slate-based chat, Quill-based notes, org chart, billing.
Scout
Scout is the original Capacitor-based hybrid client that proved the concept. Wraps the web experience with native auth, push notifications, voice. New product investment now flows to Nova; Scout continues serving existing users.
Relays
Relays are Zunou's most differentiated capability. An executive creates a Relay with an objective; the recipient receives a push notification and converses with Zunou's AI (voice or text); the AI synthesizes the findings and reports back. DynamoDB-backed (scout-errand-service Lambda) with Pusher real-time status. No major competitor ships anything analogous.
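The flow described above is a small state machine. State names here are hypothetical; the production service (scout-errand-service) may model the lifecycle differently:

```python
# Relay lifecycle as a linear state machine.

TRANSITIONS = {
    "created": "delivered",            # push notification sent to recipient
    "delivered": "in_conversation",    # recipient talks to the AI (voice/text)
    "in_conversation": "synthesized",  # AI condenses the findings
    "synthesized": "reported",         # summary returned to the executive
}

state, history = "created", ["created"]
while state in TRANSITIONS:
    state = TRANSITIONS[state]
    history.append(state)
```

Pusher would broadcast each transition so the executive's client shows live status without polling.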
Daily Debrief
Daily Debrief gathers today's + tomorrow's calendar, overdue tasks, recent actionables, and pending insights into a comprehensive briefing — voice (immersive) or text (efficient). Available as a home widget or a full session. The daily-ritual hook in our stickiness mechanics (§06.7 panel C).
Brain Dump
Brain Dump is one-tap voice recording on Nova with real-time AssemblyAI transcription. Speak your thoughts; Zunou creates an event with AI-generated summary, action items, insights, takeaways. Competitive with Otter.ai but with full business-context integration.
Instant Meeting
Instant Meeting records impromptu meetings with attendee tracking and speaker diarization. 2 taps to start (Otter is 3). Post-recording: AI assigns speakers, generates per-speaker transcripts, creates retroactive calendar entries, extracts insights.
Spaces
Spaces transform Zunou from a team tool into a platform. Event Spaces (time-bound: conferences scan a QR code; Nova rebrands; auto-channels per track). Community Spaces (permanent: alumni networks, professional guilds). Managed Spaces (enterprise white-label: admins push config). Spaces is the Zoom-playbook viral mechanic for Phase 2 GTM.
Pulse
Pulse is Zunou's workspace-health dashboard — overdue tasks, pending insights, unread messages, 'Needs Attention' signals. Not a chat inbox but a project-health surface. Each cross-org connection also gets its own dedicated Pulse with tasks / notes / messaging scoped to that relationship.
Lambda AI Proxy
The Lambda AI Proxy is Zunou's server-side IP: prompts, tool definitions, 11 shared behavioral rules across Voice and Text agents, session-type-based tool access. Competitors can't reverse-engineer our behavioral tuning. Improvements deploy in minutes, server-side — no app updates needed.
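Session-type-based tool access can be pictured as a server-side lookup table. Session types and tool names below are invented; the real rule set lives in the proxy:

```python
# Gate which tools an agent session may call, keyed by session type.

TOOL_ACCESS = {
    "voice": {"get_calendar", "get_tasks"},  # lean set keeps voice turns fast
    "text": {"get_calendar", "get_tasks", "search_notes", "create_task"},
}

def allowed_tools(session_type):
    return TOOL_ACCESS.get(session_type, set())  # unknown type -> no tools

voice_ok = "search_notes" in allowed_tools("voice")  # denied on voice
text_ok = "search_notes" in allowed_tools("text")    # allowed on text
```

Because the table is server-side, tightening or expanding access is a deploy, not an app update.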

Japan-specific

Ringi 稟議
稟議 (ringi) — the standard Japanese corporate decision process: a written proposal (ringisho) circulates from lower-level employees upward, each stamping approval, before a final senior sign-off. Slow but builds organisational buy-in. Zunou's Ringi-automation alpha is a defensible JP-specific feature.
Keigo 敬語
敬語 (keigo) — Japanese honorific speech system with three registers: teineigo (polite), sonkeigo (respectful; elevates the other party), kenjogo (humble; lowers the speaker). External-facing communication that gets keigo wrong reads as rude. The Zunou wedge against Notion AI / Slack AI / Copilot.
APPI
Act on the Protection of Personal Information — Japan's primary data privacy law. The 2025–26 amendments add administrative penalties and stricter cross-border transfer rules. Required compliance for any JP enterprise customer.
ISMS
Information Security Management System — the formal security-management framework defined by ISO/IEC 27001. Required by most Japanese enterprises before they sign a SaaS contract. Independent audit, ~6-month process, ongoing recertification. Together with APPI compliance, this is the gate to enterprise sales.
METI
経済産業省 — Japan's Ministry of Economy, Trade and Industry. Runs national AI strategy and the SME AI subsidy programs (50–66% project cost reimbursement, ¥300k–¥4.5M per grant). JP companies are eligible.
IVS
Infinity Ventures Summit — Japan's largest startup conference, held annually in Kyoto (July). 60+ alumni exits, freee and COVER among them. Organised by Headline Asia. Phase 2 of our rollout is anchored here.
TAI
Tokyo AI (TAI) — the largest technical AI community in Japan. Engineers, researchers, investors, PMs. Founded by Ilya Kulyatin. Recurring meetups + Connpass + WhatsApp as the persistent community channel. 4,000+ members as of May 2026.
AiSalon
AiSalon Tokyo — global community for AI-focused founders, builders, investors. Tokyo chapter co-hosted with Tokyo AI, JETRO-supported. Monthly in-person events with lightning talks.
JETRO
Japan External Trade Organization — government-affiliated agency that supports foreign businesses entering Japan and Japanese businesses expanding globally. Free advisory services; useful for non-JP companies setting up. Less relevant for Zunou (we're JP-native) except for inbound advisor relationships.

Business & process

PWA
Progressive Web App — a web app that installs to the user's home screen, runs offline, sends push notifications. Zunou's current shipping surface is a PWA.
MoU
Memorandum of Understanding — a written but typically non-binding agreement that signals commitment. Used in our portfolio-as-community play for the GP-Zunou agreement.
MoSCoW
Prioritization framework that buckets work into Must (do now), Should (after must), Could (if time), Won't (this cycle). Used in product-requirements.md to tier features for Phase 0 / 1 / 2 / deferred.
OAuth
Open Authorization — the protocol that lets you grant a third-party (like Zunou) access to your Google Calendar or Slack workspace without sharing your password. Foundation of all modern integrations.
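The first leg of the standard authorization-code flow — sending the user to the provider's consent page — can be sketched as below. The endpoint and client values are placeholders, not real Zunou configuration:

```python
from urllib.parse import urlencode

# Build the OAuth 2.0 authorization URL the user is redirected to.

def auth_url(base, client_id, redirect_uri, scopes, state):
    params = {
        "response_type": "code",    # request an authorization code
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
        "state": state,             # CSRF token, echoed back on redirect
    }
    return base + "?" + urlencode(params)

url = auth_url(
    "https://accounts.example.com/o/oauth2/auth",
    "zunou-client-id",
    "https://app.example.com/callback",
    ["calendar.readonly"],
    "xyz123",
)
# The provider redirects back with ?code=...&state=xyz123; the backend then
# exchanges the code for tokens, so the user's password never changes hands.
```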
SaaS
Software-as-a-Service — the dominant B2B software model: web-delivered, subscription-priced, continuously updated. Zunou is SaaS.
CSM
Customer Success Manager — a role responsible for post-sale customer adoption, retention, and expansion. Decision 15 commits to delaying our first CSM hire to post-day-180 stage-gate.
GTM
Go-to-Market — the strategy for launching, positioning, distributing, selling a product. This whole proposal is Zunou's Japan-led GTM plan.
CDN
Content Delivery Network — distributed servers around the world that cache static content close to users for fast delivery. Cloudflare runs one of the largest. zunou.anysigma.com is served from it.
16 The receipts

References.

Every numerical claim above traces back to a public source. Listed here for anyone who wants to verify or go deeper.

Last verified: 2026-05-11. If a source URL is broken or you want a section traced to a specific footnote, ask Marco. The detailed source-by-claim mapping lives in the internal strategy markdown.