Simultaneously seeded YC + 500 Startups + Techstars portfolios + designer Twitter.
The shift
Knowledge teams have stopped tolerating tool fragmentation.
For a decade we taught teams that the answer was more apps. Slack for chat. Notion for docs. Linear for tasks. Otter for meetings. A separate AI assistant bolted on top of each one. The cost was hidden in the context-switching tax — and it added up.
That tolerance just snapped. AI in 2026 made it obvious: when context lives in five places, the AI is useless. When context lives in one place, the AI becomes a chief of staff. Teams that figure this out compound. Teams that don't figure it out fall behind on every cycle — every meeting, every decision, every follow-up.
Stay fragmented · bolt AI on each tool
Notion AI in Notion. Slack AI in Slack. Copilot in M365. Each one smart in its silo. None of them can connect the meeting decision in Slack to the action item that should live in Linear. The team still does that work manually. The AI is a feature, not a force.
One workspace · context compounds
Chat, tasks, meetings, decisions — one surface. The AI sees everything. Today's meeting becomes tomorrow's prep brief automatically. The team's collective context becomes a competitive advantage. Switching to anything else means losing months of trained context.
Where we're going
Tokyo's working teams stop thinking in apps.
They think in Zunou.
By the end of year 1: small Tokyo teams open Zunou in the morning the way they opened Slack five years ago. Their meeting prep happens automatically. Decisions surface where they need to be acted on. Their AI knows everyone they work with — because everyone they work with is in Zunou too.
This isn't sold; it's experienced. When one Tokyo founder tells another "everyone I know is on Zunou" — that's the moment. From there it spreads passively.
What I'm proposing we discuss
This is my first GTM proposal at Zunou. I'm not asking you to ratify a finished plan — I'm asking you to react to a specific shape, push back where the inputs feel wrong, and shape the next version with me. The strategy below is genuinely good, but it gets twice as good with your judgment in it.
Specifically I want us to align on: the launch shape (pilot-first vs synchronized), the pilot community (TAI is my recommendation, others on the attack list), the year-1 ARR target (math is in §09 — different scenarios, different implications), and what we discover in pilot that should change the rest of this plan.
Unfamiliar with KGI (重要目標達成指標, Key Goal Indicator — the Japanese-standard top-level board commitment; the single number the company is held to), NSM (North Star Metric — the single product metric that proxies for healthy growth, tracked weekly), PMF (Product-Market Fit — the moment the product visibly pulls users in rather than being pushed at them), MCP (Model Context Protocol — open standard for AI systems to read/write external tools and data such as Slack, Notion, Linear; industry-default since early 2026), Ringi (稟議 — Japanese consensus-based written approval process; documents circulate bottom-up through hierarchy), Keigo (敬語 — Japanese honorific speech, required for any AI output that becomes external-facing), or APPI (Act on the Protection of Personal Information — Japan's primary data privacy law, under a 2025–26 enforcement-focused regime)? Hover any underlined term for an inline definition, or jump to the full glossary.
What Zunou actually is — and what we're not yet selling.
Internally Zunou frames itself as "the AI operating system for leadership productivity" — a multi-surface platform with three production clients (Nova mobile · Dashboard web · Scout legacy) on top of a real backend with 136+ AI tools, voice agent, autonomous delegation (Relays), and meeting intelligence. The marketing tagline is "AI Chief of Staff" but the actual product is broader.
Nova (mobile)
iOS · Android · Web. Voice agent · meeting recording · chat · tasks · cross-org connections. 30,000+ LOC across 50+ screens.
Dashboard (desktop)
Browser. Pulse AI chat surface. Where heavy users live during the workday. Full-featured admin / data / reports.
Spaces
White-label container for events / communities / alumni networks. Phase 2+ GTM angle — a powerful lever we're not using in this proposal but worth a separate discussion.
What this means for our pitch: we're not selling vapor. The product is real, deployed, and substantially deeper than what zunou.ai shows publicly. The pitch problem isn't "can we ship?" — it's "which of these capabilities do we lead with for the community pilot, and which do we hold back as the upgrade path?" Source: internal zunou-services investor doc, March 2026.
The opening: the AI productivity stack is fragmenting, not consolidating. Personal-tier exec assistants — alfred_, Klaio, Alyna — solve email + scheduling for one person, not teams. AI workspace tools — Glean, Mem, Lindy — solve search and agents for English-speaking enterprise. Meeting tools — Otter, Fireflies, Granola — transcribe but don't act. The category's most-funded team-focused AI Chief of Staff, Xembly, shut down in 2024. The platform incumbents — Notion AI, Slack AI, Microsoft Copilot — have each shipped AI surfaces bounded to their own walled gardens.
What no one ships today: a unified workspace — chat that rivals Slack + task management + AI native — that lands via integrations rather than asking teams to migrate cold, AND speaks fluent Japanese (Keigo, Ringi). That's the gap Zunou is positioned to close — provided we ship + distribute before any incumbent decides to unify their own stack. The window isn't a fixed countdown; it closes when an incumbent breaks out of its walled garden.
The proposal you're about to read answers one question: how do we manufacture density inside small Tokyo communities, fast enough to matter, while that window is still open?
Zunou's site has two use cases · neither is exactly the launch target
For Founders
"Routine tasks are now automated."
Intelligent agents run in the background — route tasks, approvals, follow-ups. Approve / review only when human input is needed.
For Enterprises
"Strategic decisions are now real-time."
Live exec dashboard — progress, workload, and outcomes across teams, projects, priorities. No more chasing updates.
Tokyo's broader builder + operator + founder community.
The four launch communities cover that whole population — not just founders. TAI (4,000+ AI engineers / researchers / PMs — mostly builders) · AI Tinkerers Ginza (~200 builders shipping in production) · Tokyo Founders (~150 operator-founders) · Venture Café Toranomon (the mix, every Thursday). The magic-number mechanic applies wherever there's a small team + many meetings + decisions to track — which is true across the segment, not just at the founder layer.
A proposal, not a verdict.
Every number here is a hypothesis. Every choice a defensible best-guess. If you finish reading and have no objections, we haven't written it well enough.
What this is
- Evidence-grounded
Every numerical claim is sourced and verifiable. The References section at the end is exhaustive.
- Stop-or-go-able
A pre-committed PMF stage-gate at ~6 months. Three explicit outcomes, agreed in advance.
- Instrumented week one
The KPI (Key Performance Indicator — leading metrics we manage weekly; they roll up to NSM, which rolls up to KGI) tree is the implementation contract. If we can't measure it, we don't claim it.
What this isn't
- A final answer
Strategy is a living document. We refit the magic number monthly against real cohort data.
- A request for consensus
Explicit objections are more useful than reluctant nods. Use the vote chips on each decision.
- A sales pitch
This is internal alignment. The external-facing version comes after the 17 decisions are ratified.
The window is open right now. It won't be in 12 months.
A category leader exited. Japan's adoption tipped past 30%. Every productivity app is racing to embed agents.
The counter-pressure: Gartner reports 88% of AI agent pilots fail to graduate to production — evaluation gaps (64%), governance (57%), reliability (51%). Most buyers have been burned. Our positioning is the response to that wound: not another agent pilot — the morning briefing your team will actually use tomorrow.
MCP-native. Land in their stack while it gets replaced.
Zunou is a full workspace — chat that rivals Slack, task management, AI assistant. People won't migrate from their existing tools overnight. MCP is the bridge.
What Zunou actually competes with: Slack at the chat layer · Notion / Linear at the task layer · alfred_ / Klaio at the personal-AI layer · Otter / Fireflies at the meeting layer. Zunou does what all of these do, in one place, with AI native. That's the end state.
The starting state is different. Most teams already have years of context in Slack, Notion, Asana, email, calendar. Telling them "switch to Zunou" on day one loses every conversation. They won't migrate overnight — and we shouldn't ask them to.
MCP solves this. We adopt the protocol as a host and inherit the entire 10,000+ server ecosystem on day one. Day-1 users keep their existing tools and use Zunou's AI on top of them. Day-90 users find themselves opening Zunou first because the context is already there. Day-365 users use Slack only for external comms — because internal happened in Zunou.
Integrate, don't ask
Connect Slack + Notion + calendar + email via MCP. Zunou's AI works across them immediately. Zero migration friction. The user keeps every habit they have.
Context lives here now
Decisions, action items, follow-ups all surface in Zunou. The morning brief becomes the first surface opened. Slack becomes the second.
Zunou is the workspace
Internal chat happens in Zunou. Tasks live in Zunou. Slack stays for external comms; Notion becomes the public-doc archive. The team's center of gravity has shifted.
MCP is the bridge — not the moat. The moat is the unified workspace that compounds once we're landed.
Model-agnostic by design.
Zunou is not built on one provider. The product routes between Anthropic (Claude), OpenAI (GPT), Google (Gemini) — and others as they prove competitive — based on cost-per-task, latency, and quality benchmarks. Cheap models handle high-volume work (summarisation, classification); premium models handle heavy reasoning. The user never sees which one ran their query.
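As a sketch of what model-agnostic routing can look like — the provider names are real, but the model names, costs, and task tiers below are illustrative placeholders, not Zunou's actual routing table:

```python
from dataclasses import dataclass

@dataclass
class Model:
    provider: str
    name: str
    cost_per_1k_tokens: float  # illustrative figures, not real pricing
    reasoning_tier: int        # 1 = cheap / fast, 3 = premium reasoning

CANDIDATES = [
    Model("google",    "gemini-flash",  0.0008, 1),
    Model("anthropic", "claude-haiku",  0.0010, 1),
    Model("openai",    "gpt-mini",      0.0010, 1),
    Model("anthropic", "claude-sonnet", 0.0100, 2),
    Model("openai",    "gpt-frontier",  0.0300, 3),
]

# High-volume work maps to tier 1; heavy reasoning to tier 3.
TASK_TIER = {"summarize": 1, "classify": 1, "draft": 2, "plan": 3}

def route(task_type: str) -> Model:
    """Cheapest model whose reasoning tier meets the task's requirement."""
    tier = TASK_TIER.get(task_type, 2)  # unknown tasks default to mid-tier
    eligible = [m for m in CANDIDATES if m.reasoning_tier >= tier]
    return min(eligible, key=lambda m: m.cost_per_1k_tokens)

print(route("summarize").name)  # gemini-flash: cheapest tier-1 model
print(route("plan").name)       # gpt-frontier: the only tier-3 model listed
```

The user-facing behavior is the point: the query runs, the router picks, and nothing about the model choice leaks into the UI.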
No one builds exactly Zunou's product. Many build pieces of it.
Honest audit: who's a direct competitor, who's adjacent, who's a platform threat. Not everyone with 'AI' in their tagline is in our lane.
| Lane / Competitor | What it actually does | Threat to Zunou |
|---|---|---|
| Direct · team-focused AI Chief of Staff | ||
| Xembly ↗ US · $20M raised | Was the closest direct competitor: meeting recording + action items + follow-ups for teams. Shut down June 2024 | None — exited the market. Cautionary tale, not a threat. |
| Klaio ↗ EU · early | "AI Chief of Staff" branded; chat-style assistant for ops + tasks. Small team, individual + small-team tier. | Low — no JP presence, no MCP-native land-and-expand story, no full workspace surface. |
| Alyna ↗ US · early | AI productivity assistant marketed as "AI Chief of Staff" for individuals + small teams. | Low — individual-first, no team workspace, no JP. |
| Adjacent · personal exec assistant (1 user, email + calendar) | ||
| alfred_ ↗ $24.99/mo · individual | Email triage + scheduling + voice-matched drafts for one executive. Personal CoS. | Low — single-user tool. Different category. Doesn't address team density. |
| Adjacent · AI workspace tools (search + agents, enterprise lean) | ||
| Glean ↗ $7B+ valuation | Enterprise AI work assistant — unified search across all company apps, custom agents. Sells top-down to large enterprises. | Medium long-term — but they sell to 1,000+ seat enterprises through procurement. Different motion. No JP focus. |
| Mem ↗ Personal-first AI | AI-native note-taking and knowledge management. Personal use evolving to teams. | Low — knowledge-base oriented, not workflow/operations. |
| Lindy ↗ Agent builder | AI agent platform — users build custom agents for email, calendar, follow-ups. DIY composability. | Low — power-user tool, not a turnkey workspace. Different ICP. |
| Adjacent · meeting AI (transcribe + note, don't act) | ||
| Otter ↗ · Fireflies ↗ · Granola ↗ Mature category | Meeting transcription + AI summaries. Granola is the most loved by execs (notepad-style, no bot in room). | Medium — Granola in particular is a feature competitor for Zunou's meeting-AI surface. We compete by going broader (chat + tasks + AI in one). |
| Platform incumbents · the long-run threat | ||
| Slack AI ↗ Salesforce · APAC +19% YoY | AI summaries + search inside Slack. Distribution = every Slack workspace. | High long-run — but bounded to Slack. Can't see calendar / Notion / Linear. No Keigo / Ringi nuance. |
| Notion AI ↗ JP language ✓ | AI inside Notion — write, summarise, find. Distribution = every Notion workspace. | High long-run · bounded to Notion. Limited cross-app reach. |
| MS Copilot ↗ M365 + Teams | AI woven through M365 + Teams. Strong in JP enterprise on the Microsoft stack. | High long-run for the Microsoft-stack share of JP enterprise. Limited reach on Slack-native + Google-native teams. |
Notably not in this table: Ashley AI (askashley.com) — a retail/customer-service conversational AI, not exec ops; different category entirely. Other "AI Chief of Staff" branded tools that emerged in 2023 have either pivoted to consumer or gone quiet. Audit refreshed monthly; flag additions to Marco if a new entrant launches in this space.
Japanese AI companies = partners, not competitors
Sakana AI ($135M Series B, Nov 2025), LayerX ($100M Series B, Sep 2025), ELYZA (KDDI-backed), Rakuten AI 3.0 — all sell foundation models or back-office automation. None compete with the exec CoS surface. The right move is co-marketing (joint PR Times release, joint AiSalon Tokyo demo) — not competing.
What we can credibly own
The category is competitive but no one has built exactly Zunou's product: a unified workspace that lands via MCP integrations before asking for migration. Defensibility comes from three things the incumbents have weak incentives to build: the land-and-expand strategy via cross-app integrations (see §03), Japanese-localized affordances like Keigo and Ringi, and the community-distributed habit loop. The window stays open as long as no incumbent decides to break its walled garden.
Density manufactures product-market fit.
Not features. Not virality. Density — the threshold past which a small group's behavior changes.
Precedent · the threshold pattern
Slack's threshold: once a team has exchanged ~2,000 messages, retention jumps to 93%. Below it, teams churn. The product itself doesn't change — the team's behavior does.
Facebook's threshold: Chamath Palihapitiya's growth team identified 7 friends in 10 days as the activation number. Cross it and the user retained for life. Miss it and they churned. Every product roadmap decision was filtered through it.
Zunou's hypothesis · derived from how the product creates value
Why these numbers — the derivation
- 5 · Colleagues from the same community. Zunou's value compounds when cross-context exists — when your AI knows what others in your circle decided this week. Below ~5, the AI has thin context and reads as a chat box. At 5+, the cross-references start producing insights you couldn't get elsewhere. Why not 3 or 10: Slack's network-effect studies cluster around 4–7 as the activation band; we set 5 as a defensible mid-point to refit.
- 1 · Calendar connected. Without the calendar, Zunou can't surface meetings, prep, decisions, or follow-ups — and 80%+ of the product surface is dark. Calendar is the single integration that unlocks daily utility. Why exactly 1: empirical — every PLG study on calendar-adjacent products (Cron, Calendly, Reclaim) ties activation to first OAuth connection. There's no fractional version.
- 3 · AI actions accepted. One accepted action is a fluke. Two is a coincidence. Three within 14 days is a habit. Acceptance (rather than impression / view) is what tells us the AI's output is actually trusted. Why 14 days: matches industry SaaS-onboarding studies showing the first 2 weeks predict W4 retention with ~80% confidence (Mode, Amplitude). Why 3: below this users haven't internalized the value; above this they reach for Zunou unprompted.
These specific numbers are a starting hypothesis, not a commitment. We instrument them on day 1 and refit monthly against real cohort data. If at month 2 the actual threshold is 7/1/4 — we update. If at month 3 only 5/1/3 in 21 days correlates with W4 retention — we update the window too. The contract is the framework; the numbers are an iteration.
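A minimal sketch of what instrumenting the 5/1/3 hypothesis could look like — the field names and event shape are assumptions for illustration, not Zunou's real schema:

```python
from datetime import datetime, timedelta

# Starting hypothesis from above; refit monthly against cohort data.
MAGIC = {"colleagues": 5, "calendars": 1, "accepted_actions": 3, "window_days": 14}

def is_activated(user: dict, now: datetime) -> bool:
    window_start = user["signup_at"]
    window_end = window_start + timedelta(days=MAGIC["window_days"])
    # Only actions accepted inside the first 14 days count as the habit signal.
    accepted = [t for t in user["accepted_action_times"]
                if window_start <= t <= min(window_end, now)]
    return (user["active_colleagues"] >= MAGIC["colleagues"]
            and user["calendars_connected"] >= MAGIC["calendars"]
            and len(accepted) >= MAGIC["accepted_actions"])

signup = datetime(2026, 6, 1)
user = {
    "signup_at": signup,
    "active_colleagues": 6,    # 5+ from the same community
    "calendars_connected": 1,  # the one integration that unlocks daily utility
    "accepted_action_times": [signup + timedelta(days=d) for d in (1, 3, 9)],
}
print(is_activated(user, now=signup + timedelta(days=14)))  # True
```

Because MAGIC is one dict, a monthly refit to 7/1/4 (or a 21-day window) is a one-line change — the framework is the contract, the numbers iterate.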
The launch mechanic that produces this density is the part of the strategy that sounds unusual. Instead of launching one Tokyo community at a time (the Eventbrite playbook — city-by-city, the "campus model"), we light four overlapping communities in the same week, picked specifically because their members already see each other.
The physics term for this is sympathetic detonation — adjacent explosive charges igniting each other through shockwave coupling. The growth-theory term is percolation threshold — the moment a sparse graph flips from disconnected clusters into one giant connected component.
In plain terms: an attendee at AI Tinkerers Ginza on Tuesday sees three people at Venture Café Toranomon on Thursday. By the end of the launch week, one sentence becomes literally true inside the Tokyo English-speaking founder graph —
"Everyone I know
is on Zunou."
When this becomes literally true for one person inside a launch community, the percolation threshold is crossed. From there it spreads passively.
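The percolation claim is easy to demo. A toy stdlib-only simulation (illustrative sizes, not a model of the actual Tokyo graph) shows a sparse random graph snapping from small clusters into one giant component as the average number of connections per person crosses the threshold:

```python
import random

def giant_component_fraction(n: int, p: float, seed: int = 0) -> float:
    """Fraction of nodes in the largest connected component of a random graph."""
    rng = random.Random(seed)
    parent = list(range(n))  # union-find forest

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:  # each possible edge exists with probability p
                parent[find(i)] = find(j)

    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

n = 400  # rough size of the overlapping community graph
for c in (0.5, 1.0, 2.0, 4.0):  # average connections per person
    frac = giant_component_fraction(n, c / n)
    print(f"avg degree {c}: largest cluster holds {frac:.0%} of the graph")
```

Below an average of ~1 connection per person the graph stays fragments; past it, one component dominates — which is exactly why four overlapping communities beat four isolated ones.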
One launch leaks. Four overlapping launches chain-react.
One pilot first. Learn. Double. Scale.
We don't sympathetic-detonate four launches before we know the platform is ready. We onboard one community deeply, take the operational learnings, fix what breaks — then double, then scale to the rest.
One community
Deep onboarding. Marco + Malek hand-hold the first ~50 users. Every friction point gets logged. Platform readiness becomes a known.
Fix what breaks
Onboarding gaps, integration friction, support questions. Real magic-number data replaces the hypothesis. Stickiness mechanics validated or rebuilt.
Adjacent overlap
Add 1–2 communities that share members with the pilot. Validates the cross-community percolation thesis before scaling further.
Sympathetic detonation
Now the multi-community simultaneous launch from a position of knowing the platform works. The original §05 mechanic, but earned not assumed.
Why pilot-first, not synchronized
A simultaneous 4-community launch only works if the platform is ready to handle 200 cold signups with no operational drag. We don't know that yet. Falling flat across 4 communities at once damages 4 relationships at once, and the recovery cost is brutal in the Tokyo founder graph — everyone knows everyone. A messy pilot in 1 community recovers; a messy launch across 4 doesn't. The sympathetic-detonation play in §05 stays — we just earn the right to run it before we run it.
Attack list · 13 community targets across 4 priority tiers
We pursue P1 first and only commit Marco's time at P2/P3 once pilot signal is validated. But we keep all 13 warm — community organizers take 2–6 weeks to respond and we don't want to be stuck waiting on a single channel. Backup communities are not afterthoughts; they're the insurance against P1 stalling.
| Tier | Community | Audience · size | Why us · why now | Warmth | Action |
|---|---|---|---|---|---|
| P1 | TAI (Tokyo AI) ↗ tokyoai.jp | AI engineers · researchers · PMs ~4,000 members | Broadest overlap with every other community. Open membership. Builders run the most meetings + are most AI-fluent. | 🟢 Warm via Tokyo AI advisors | Demo at monthly meetup → onboard 30–50 |
| P2 | AI Tinkerers Ginza ↗ tokyo.aitinkerers.org | Shipping AI builders ~200 selective | Highest-signal users — they build the tools others adopt. Heavy overlap with TAI. | 🟢 Direct contact possible | Demo at next demo night |
| P2 | Venture Café Toranomon ↗ venturecafetokyo.org · weekly | Builders + founders + investors mix ~500 unique/yr | Weekly recurring touchpoint. Compound exposure. CIC Tokyo proximity helps later enterprise plays. | 🟡 Public events — apply | Speak at Thursday gathering |
| P3 | Tokyo Founders Group private Slack | Operator-founders ~150 | Closer to Zunou's "For Founders" persona. Private list = warm-intro access only. | 🟡 Need warm intro | Once pilot is proven |
| P3 | Startup Grind Tokyo ↗ startupgrind.com/tokyo | Founders + investors ~400 reachable | Monthly fireside event rhythm. Established credibility. Overlap with Venture Café. | 🟡 Apply for speaker slot | Speak / be a featured guest |
| P3 | Le Wagon Tokyo alumni ↗ blog.lewagon.com/tokyo | Tech bootcamp alumni ~300+ | Slack-active, tools-curious. Strong builder overlap with TAI + AI Tinkerers. | 🟢 Alumni network access | Workshop at alumni event |
| P3 | Tokyo Product Meetup ↗ PMs across Tokyo SaaS | Product managers ~600 active | PMs are our ICP (Ideal Customer Profile — the specific type of company / user we're built for); they run meeting-heavy cross-team work. High intent for Zunou. | 🟡 Apply to organize a talk | Talk + demo at meetup |
| P3 | Founders Live Tokyo ↗ founderslive.com | Founders ~200/event | Pitch-format events. International founder mix. Good warmth-builder before IVS. | 🟡 Apply or attend | Sponsor or pitch |
| P4 | Headline portfolio ↗ headline.com · runs IVS | JP VC portfolio ~80 cos | Portfolio-wide rollout = scale via VC relationship. Headline also runs IVS. | 🟡 Advisor warm-intro | Once we have paying logos |
| P4 | Coral Capital portfolio ↗ coralcap.co · YC-like program | JP VC portfolio ~60 cos | English-friendly founders. Coral's portfolio program creates community already. | 🟡 Cold-to-warm via James Riney | Sponsor a portfolio event |
| P4 | Genesia Ventures portfolio ↗ genesiaventures.com · SE Asia + JP | SE Asia + JP early-stage ~50 cos | Cross-border founders. Good fit for Zunou's English-tolerant user. | 🟡 Advisor intro needed | Investor demo |
| P4 | IVS Kyoto ↗ ivs.events · annual July | JP startup conference ~13,000 attendees | Anchor event of the year. Launchpad pitch slot is high-leverage with working platform. | 🔴 Apply months ahead | Launchpad / sponsor booth |
| P4 | Open Network Lab ↗ onlab.jp · seed accelerator | Digital Garage accelerator cohort: ~10–15 | Cohort founders need ops tools. Onlab's network gives advisor access. | 🟡 Direct application | Sponsor cohort tooling |
Warmth legend: 🟢 immediate access (advisor / direct contact) · 🟡 outreach needed (cold-to-warm in <2 weeks) · 🔴 long lead time (months ahead). Why 13 not 4: a P1 demo gets scheduled in weeks; P3 takes 2–3 months. Building warm chains across all 13 in parallel means no idle waiting if any one channel stalls. Marco's time-priority is P1; everyone else stays warm via email or one-pager.
Are we ready to onboard them?
A pilot only works if the platform handles the first 50 users without falling over. These are the questions we answer with platform / product / Malek before the TAI demo gets scheduled.
First 15 minutes, day 1
- Sign-up → time-to-first-value: target ≤ 5 min.
- Calendar OAuth: works on Google Workspace + iCloud + Outlook?
- Slack OAuth: works on personal + workspace?
- Voice setup: mic permission, fallback if denied?
- First AI action: prep brief for next real meeting — does it land or feel generic?
- What happens if integrations fail silently?
What TAI members complain about
- "I can't keep up with my Slack DMs across multiple workspaces."
- "I miss action items from meetings because note-taking sucks."
- "Calendar prep takes 20 min I don't have."
- "Event RSVPs scattered across Connpass / Lu.ma / Peatix."
- "Follow-ups slip through the cracks; my CRM is a notebook."
- If Zunou solves 2–3 of these well, they'll try it. If it solves 1 plus has rough edges, they won't.
Day 7, 30, 90 retention
- Day 7: morning brief becomes the first app opened.
- Day 30: 5+ colleagues active in same workspace (the magic number) — cross-context insights are visible.
- Day 90: internal team chat shifts from Slack to Zunou for at least one workflow.
- Habit loop: morning brief → meeting prep → in-meeting capture → post-meeting follow-up → next morning's brief.
Need answers before the demo
- Can we handle 50 concurrent active users without degradation?
- Inference cost per active user — what's the actual run-rate?
- Multi-workspace isolation — solid or leaky?
- What's the support model when a TAI member DMs Malek with a bug at midnight?
- APPI compliance posture — even if not enterprise, JP users will ask.
- Disaster mode: what's the rollback plan if the launch demo glitches live?
The honest answer to "ready?": we don't know yet. Phase 0 (foundations) answers it. The TAI pilot demo doesn't get scheduled until the open-question column above is green. If platform readiness takes longer than expected, we slip the demo — not the platform's quality bar.
Discovery surface, driver tree, stickiness mechanics, analytics dashboard.
Four design questions that determine whether the pilot converts: how do unsigned-in visitors discover us, what makes people open Zunou again on day 2, what makes them open it on day 30, and how do we actually see whether any of it's working?
Should we have something public before sign-in?
Right now zunou.ai is a waitlist-only landing page. Anyone who hears about Zunou from a TAI demo, a friend, or a tweet hits the same wall. No discovery surface = no organic top-of-funnel. This is the most concrete blocker we can fix early. Three options, none mutually exclusive:
Status quo
Keep zunou.ai as it is. Email capture for waitlist.
Risk: 0% discovery. Every user requires manual intro or community-event channel.
Free useful surface
A genuinely useful free tool that doesn't require sign-in: Tokyo AI events feed · meeting-summary template gallery · public showcase of community digests · weekly AI roundup.
Brings non-users to a Zunou-branded surface. Converts on trust + utility. SEO-indexable. Open question: which utility resonates with TAI / AI Tinkerers ICP enough to be worth building?
"Built with Zunou"
Public testimonial wall · case studies · a "see how teams use it" page anchored on real workflows from pilot users (with permission).
Builds credibility but useless until we have pilot users. Defer to Phase 2.
Recommendation: ship Option 2 in Phase 0 as a focused public utility — events feed is the obvious choice given TAI / AI Tinkerers / Venture Café cluster events on Connpass + Lu.ma. Cost is low (data already public, just aggregate + render). Open question for the team is which utility to lead with.
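A sketch of what events-feed v0 amounts to: normalize two feeds into one chronological list. The payload shapes below are simplified stand-ins for the Connpass / Doorkeeper responses (real field names may differ), and the HTTP fetch is omitted:

```python
from datetime import datetime

# Simplified sample payloads — stand-ins for the real API responses.
connpass_sample = {"events": [
    {"title": "Tokyo AI Monthly", "started_at": "2026-07-02T19:00:00+09:00",
     "event_url": "https://example.connpass.com/event/1/"},
]}
doorkeeper_sample = [
    {"event": {"title": "AI Tinkerers Demo Night",
               "starts_at": "2026-07-01T19:00:00+09:00",
               "public_url": "https://example.doorkeeper.jp/events/2"}},
]

def normalize(connpass: dict, doorkeeper: list) -> list:
    feed = []
    for e in connpass["events"]:
        feed.append({"title": e["title"],
                     "starts": datetime.fromisoformat(e["started_at"]),
                     "url": e["event_url"], "source": "connpass"})
    for wrapper in doorkeeper:
        e = wrapper["event"]
        feed.append({"title": e["title"],
                     "starts": datetime.fromisoformat(e["starts_at"]),
                     "url": e["public_url"], "source": "doorkeeper"})
    return sorted(feed, key=lambda x: x["starts"])  # one chronological feed

feed = normalize(connpass_sample, doorkeeper_sample)
for e in feed:
    print(f"{e['starts']:%m-%d} · {e['title']} ({e['source']})")
```

Render that list on a public page and it's SEO-indexable from day one; a cron job and caching are the only other moving parts.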
What are the levers we pull?
The KGI (ARR) breaks down into a driver tree. Each leaf is something Marco or the team can actually do something about. Arrows (←) mark the controllable nodes.
ARR
│
├─ Active users
│ ├─ Discovery surface traffic ← ship public utility (panel A)
│ ├─ Community demos / events ← Marco runs the calendar
│ ├─ Word-of-mouth virality ← engineered via shareable artifacts
│ └─ Sign-up → activation rate ← onboarding (§06.5)
│
├─ Paid conversion rate
│ ├─ Magic-number completion ← product mechanic
│ ├─ Free-tier limits hit ← pricing design
│ └─ Upgrade prompt timing ← UX
│
└─ ¥ per paid user / mo
├─ Tier pricing ← decision §12 #14
├─ Tier mix ← upsell path
└─ Add-ons / AI overage ← future revenue lever
Why this matters: if ARR isn't growing, we look at the driver tree and ask "which node is broken?" — not "let's try harder." Each driver maps to a KPI (panel D). When we say "the platform isn't ready" we mean a specific node, not a vibe.
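The tree is multiplicative, which is worth making concrete. Every number below is a placeholder to show the arithmetic, not a forecast:

```python
def arr(active_users: int, paid_conversion: float, yen_per_paid_month: int) -> float:
    """ARR = active users x paid conversion x yen per paid user per month x 12."""
    return active_users * paid_conversion * yen_per_paid_month * 12

base = dict(active_users=300, paid_conversion=0.10, yen_per_paid_month=2500)
print(f"base ARR = {arr(**base):,.0f} yen")  # 900,000 yen with these placeholders

# "Which node is broken?" run in reverse: double one lever at a time.
for lever in base:
    scenario = {**base, lever: base[lever] * 2}
    print(f"{lever} x2 -> {arr(**scenario):,.0f} yen")
```

In a pure multiplicative tree every lever doubles ARR equally; the real prioritization question is which node is cheapest to move, which is what the KPI panel answers.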
What makes users love it, not just try it?
Trial-to-stick conversion is where most AI tools die — Gartner's 88% pilot-failure rate is exactly this gap. Five mechanics we design for explicitly:
1 First-session "aha"
User sees something they couldn't get elsewhere within their first 5 minutes. For Zunou: the meeting summary that catches a follow-up they would have missed.
2 Daily ritual hook
Morning brief at 8 AM with today's calendar + pending follow-ups + cross-context signals. One reason to open the app every day at the same time.
3 Compounding context
Every meeting / chat / decision makes the next AI output better. Switching cost goes up over time — leaving Zunou means losing months of trained context.
4 Social proof loop
"5 colleagues from your community are on Zunou." Visible, ambient, real. The magic-number's social dimension — you don't want to be the one who left.
5 Shareable artifacts
Meeting summaries → Slack DMs to non-users. "Here's what we decided" prep docs → emailed to collaborators. Every shared artifact is an ad for Zunou. Loom did this with video; we do it with structured outputs. This is decision #17 in §12.
How do we actually know what's working?
Without a daily-monitored dashboard, the strategy operates on vibes. The minimum we need before pilot launch:
| Cadence | What we watch | What it tells us | Lever if off |
|---|---|---|---|
| Daily | Sign-ups · activations · first-session completions · errors | Onboarding health · platform stability | Marco / Malek personally onboard problem signups |
| Weekly | WAU · magic-number completion · cohort retention W1 / W2 / W4 · NPS pulse | Whether pilot mechanics are firing | Adjust onboarding · re-run pilot demo · tune nudges |
| Monthly | Magic-number refit · paid conversion · Member-of-N · inference ¥/AU · CAC payback | Economics + hypothesis validation | Refit magic-number numbers · adjust pricing |
| Stage-gate | All 6 PMF criteria (§11) | Fuel · extend · pivot | The big decision |
Tool stack: PostHog or Mixpanel for product analytics · Linear for the issue tracker · a simple shared Notion or Linear-doc as the public dashboard so the team sees the same numbers daily. The dashboard itself is a Phase 0 deliverable — without it the pilot has no nervous system.
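For concreteness, here is what the weekly cohort-retention number means, as a minimal sketch — PostHog / Mixpanel compute this for us, and the event shape below is an assumption:

```python
from datetime import date, timedelta

signups = {"u1": date(2026, 6, 1), "u2": date(2026, 6, 2), "u3": date(2026, 6, 3)}
activity = [                     # (user, date of a meaningful action)
    ("u1", date(2026, 6, 9)),    # u1 active in week 1 after signup
    ("u1", date(2026, 6, 30)),   # ... and again in week 4
    ("u2", date(2026, 6, 10)),   # u2 active in week 1 only
]

def retention(week: int) -> float:
    """Share of the cohort active during week N after their own signup date."""
    retained = 0
    for user, signed in signups.items():
        start = signed + timedelta(weeks=week)
        end = start + timedelta(days=7)
        if any(u == user and start <= d < end for u, d in activity):
            retained += 1
    return retained / len(signups)

for w in (1, 2, 4):
    print(f"W{w} retention: {retention(w):.0%}")
```

Note that retention is measured against each user's own signup date, not calendar weeks — that's what makes W1 / W2 / W4 comparable across cohorts.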
Five phases. Each gated on readiness, not on the calendar.
We don't promise dates we can't keep. We promise readiness gates the team agrees on in advance. Earliest plausible launch is summer 2026; later is fine if Phase 0 isn't clean.
- Phase 0 · Foundations
Ship the cost guardrails and platform readiness checks.
Token budget meter + multi-model routing tier — cheap model for high-volume work, premium for heavy reasoning, model-agnostic (Claude / GPT / Gemini routable based on cost-per-task). JP landing page on zunou.anysigma.com. Magic-number counters in the PWA. Events-feed v0 (Connpass + Doorkeeper public APIs). Validate community overlap via roster sampling. Gate: all six items shipped, or no pilot.
- Phase 1 · The synchronized launch
Four Tokyo communities, same week.
Week 1: AI Tinkerers Ginza demo + Tokyo Founders private launch. Week 2: TAI presentation + Venture Café Thursday Gathering. Same-week PR Times release. Member-of-N badge live in product. Gate: 100 signups across the four, ≥30% calendar-connected.
- Phase 2 · IVS Kyoto + density push
Convert the conference into ARR-credible logos.
IVS LAUNCHPAD pitch (or side-event regardless). Booth with live "summarize this booth's pitches" demo. Pre-conference auto-prep emailed to RSVP'd attendees who connect calendar. Gate: NSM ≥ 60 WAU (Weekly Active Users — users who took meaningful action this week) past magic number by end of phase.
- Phase 3 · Density compound + first paying logos
Prove paid conversion before the free runway expires.
First paid logo + PR Times release. Akai Wagon / Indelible portfolio rollouts. "This Week in Tokyo" digest hits 5,000 weekly uniques. Ringi automation alpha to 3 enterprise design partners. Apply for METI IT subsidy. Gate: ≥5 paid logos, MoM growth ≥ 20%.
- Phase 4 · Stage-gate review
The single yes/no/extend decision against six PMF criteria.
~6 months after Phase 1 launch. NSM ≥ 200 · one launch community at 25%+ density · 35%+ activation in 14 days · ≥3 paid logos · inference ≤ ¥600/AU/mo · Member-of-N ≥ 15%. Hit ≥4 → fuel. ≤2 → pivot. 3 → extend 60 days.
- Phase 5a · If PMF — fuel
Scale within the community lane that worked.
Raise sponsored-seat caps on the original pilot community. Open adjacent communities #5–10 (P3 tier from §06). Akai Wagon + Indelible portfolios warm-introduced where overlap exists. Push toward ¥30M ARR through community-led paid conversion — not via enterprise sales. SI / large-enterprise deferred to year 2. Trading-house and major SI conversations (NTT Data, Fujitsu, Itochu) require a proven product, multiple referenceable customers, full APPI + ISMS posture, and a JP-domestic legal entity. Pre-building advisor warm chains earlier is fine. Selling at this layer is a year-2 decision conditional on community-phase outcomes.
- Phase 5b · If pivot
Rotate the wedge — most likely Ringi-first vertical or events-only product.
Re-run validation (10 calls in 4 weeks) on the new wedge. Keep the four community partnerships warm; don't burn them. Aim to re-launch a focused product within 90 days.
The pattern repeats. We're not improvising.
Four billion-dollar outcomes. Each manufactured density inside overlapping groups before going broad.
Won gaming guild leaders ('supernodes') first; shipped Twitch integration as cross-community accelerant.
Public gallery of design files / templates / plugins → SEO + activation + pull-mechanism.
Free weekly newsletter for 9 months before charging anything; paid Slack as the dense layer.
Where the numbers come from — none of them are locked.
Three financial questions the team needs to discuss: the KGI ARR target, the inference cost ceiling, and the budget envelope. Below: how the working estimates were derived, plus alternate scenarios. Numbers ratified together in §12 decisions, not unilaterally here.
Working forward from the population, not backward from the goal.
Most pitch decks pick an ARR target and back-solve for users + conversion + price. We do the opposite: start with the actual people we can reach, apply realistic funnel rates, multiply by sensible prices, and see what ARR falls out. If the working estimate disappoints, the answer is to fix specific drivers — not to inflate the inputs.
Start from the reachable population
≈ 6,500 reachable individuals
The Tokyo English-speaking AI / tech / founder graph is real and finite. Counting carefully (deduplicating overlap):
P4 layer (VC portfolios, IVS conference) adds ~2–3k more but those are post-stage-gate.
Apply realistic awareness × interest funnel
~1,950 sign-ups
Not everyone in the population will hear about Zunou; not everyone who hears will sign up. Two-step funnel:
Awareness % is the lever the discovery surface (§06.7) directly moves. Without a public utility, awareness drops closer to 35–40% (only people in demo rooms).
Subtract the activation drop-off
~880 active users
Sign-up is cheap. Hitting the magic number (5 colleagues + 1 calendar + 3 AI actions in 14 days) is the hard part. Industry SaaS benchmark for engaged free → activated is 40–55%:
This is the single biggest lever in the whole stack. Activation rate is what onboarding, integrations, and stickiness mechanics (§06.7 panel C) all serve. Each +5pp of activation adds roughly ¥300k of forward-build run-rate.
Convert activated users to paid
~105 paid seats
Among activated users (people getting genuine value), free-to-paid conversion in workspace SaaS clusters at 8–15%. Conservative middle:
Comparable benchmarks: Notion ~10% (consumer-mixed), Linear ~13% (engineering-focused), Slack ~30% (network-locked-in over time). Year 1 in a new market with a new product trends lower; year 2+ trends higher.
Apply tier mix × price = ARR run-rate
≈ ¥2.5M ARR run-rate at month 12
Not all paid users buy the same tier. Assuming Linear-style pricing (panel B option B): 70% on Standard (~¥2,000/mo), 25% on Pro (~¥3,500/mo), 5% on Business (~¥5,000/mo):
Realised year-1 cash collected is ~50–60% of exit-month ARR (revenue ramps as users sign up across the year). Exit-month ARR is the more important number — it's the run-rate going into year 2.
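The five steps above reduce to a product of rates. A minimal sketch, where every rate is this section's working assumption (not measured data) and the blended price is the panel-B tier mix:

```python
# Forward-build funnel: population → sign-ups → activated → paid → run-rate.
# Every rate below is a working assumption from this section, not measured data.
population = 6_500        # reachable Tokyo graph (§05 count)
awareness = 0.60          # hear about Zunou (lever: discovery surface, §06.7)
interest = 0.50           # of those aware, share that signs up
activation = 0.45         # hit the magic number within 14 days
paid_conv = 0.12          # activated → paying (workspace-SaaS middle)

signups = population * awareness * interest   # ≈ 1,950
active = signups * activation                 # ≈ 880
paid = active * paid_conv                     # ≈ 105

# Blended price from the assumed tier mix: 70/25/5 on ¥2,000 / ¥3,500 / ¥5,000.
arpu = 0.70 * 2_000 + 0.25 * 3_500 + 0.05 * 5_000   # = ¥2,525 per user per month

# Annualized exit-month run-rate. The straight product lands near ¥3.2M;
# the section's ~¥2.5M working figure sits below it, i.e. the headline
# already bakes in extra conservatism beyond the funnel rates themselves.
run_rate = paid * arpu * 12

print(f"sign-ups ≈ {signups:.0f}, active ≈ {active:.0f}, paid ≈ {paid:.0f}")
print(f"blended ¥{arpu:,.0f}/user/mo → run-rate ≈ ¥{run_rate/1e6:.1f}M")
```

If pilot data invalidates any input by month 3, the fix is to change that one constant and re-read the output — the same discipline the section asks for.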
How exit-month ARR moves with each driver
The forward build above produces ~¥2.5M run-rate. Below: how that moves with each input, holding others constant. The point of this table is to show which driver moves ARR most — and therefore where to focus.
| Scenario | Population reached | Activation rate | Paid conv. | Avg ¥/user/mo | Exit-month ARR |
|---|---|---|---|---|---|
| Conservative | 3,000 | 35% | 8% | ¥2,000 | ~¥800k ARR |
| Forward-build ★ | 3,900 | 45% | 12% | ¥2,525 | ~¥2.5M ARR |
| Optimistic | 5,000 | 55% | 15% | ¥3,000 | ~¥7.4M ARR |
| Stretch (everything works) | 6,500 | 60% | 18% | ¥3,500 | ~¥14.7M ARR |
The honest reframe — and what to actually discuss
- · Year 1 isn't where ARR is made. A community-led launch builds a base, not a balance sheet. The right year-1 question is: have we built something that compounds? A ¥2.5M exit-month run-rate with 880 active users and tight unit economics is far more valuable than ¥10M ARR with high churn and weak retention.
- · The original ¥30M target requires multiple non-default things to break right at once. Even the "everything works" stretch ends at ~¥15M. Reaching ¥30M in 12 months would require either bigger initial population (enterprise sales — but we deferred that to year 2), much faster conversion (pricing tested + product unmistakable), or a year-2 horizon.
- · Why this forward build makes sense: each step is testable against external benchmarks (TAI size · Notion conversion %s · Linear pricing). If pilot data invalidates any input by month 3, we adjust the next driver instead of pretending the original goal still applies.
- · What to actually decide in §12:
— Do we ratify the forward-build run-rate (¥2.5M) as the working target, with ¥7.4M as upside?
— Or do we commit to the ¥30M aspiration and accept it requires materially different assumptions (enterprise sales, year-2 trajectory, or higher pricing)?
— Or something else: ratify the framework rather than a single number?
Free tier + paid · what shape, what price?
Pricing isn't decided. The point of this panel is to expose the trade-offs and benchmarks — final decision belongs in §12 with input from community owners + early users. Three patterns worth considering, none locked.
Generous free · AI extra
Free tier: basic chat + tasks, limited AI calls. Plus: ~¥1,200/mo. AI add-on: ~¥1,500/mo on top of any paid tier.
Lets users feel value before paying anything. Risk: heavy AI users on free tier burn inference cost — needs the per-user cap (panel C).
Per-user · AI included
Free tier: solo / hobby use, capped at 1 workspace. Standard: ~¥2,000/mo per user, AI fully included. Pro / Business: ~¥3,500/mo with team features.
Pricing is simple. AI cost rolled into per-seat. The working estimate uses this. ¥2,000 ≈ $13 USD — below the combined cost of Slack Business ($12.50) plus a separate AI add-on.
Pay for AI actions
Free seats effectively unlimited; AI actions metered at ~¥10–40 each. Heavy users pay more; light users pay nothing.
Aligns cost with value perfectly. Risk: usage-anxiety friction; users hesitate to use AI freely. Unconventional for the workspace category.
Reference: what comparable tools charge (US$, per user per month)
Zunou competes with all of these stacked. Charging less than each individual tool is the obvious early posture; charging more later (once we're indispensable across the stack) is the obvious longer arc.
How much AI usage can we afford to give away?
The free tier needs an inference budget per user so cost stays predictable. ¥600/user/mo is a working estimate, not a target. Below: how it was derived, and where the actual ceiling will land.
Where ¥600 came from
- · Free-tier user activity assumption: ~30 AI actions/month (mix of summaries, prep, follow-ups, chat)
- · Average cost-per-action assumption: ~¥20 (¥10 for cheap model, ¥40 for premium · routed by complexity)
- · 30 × ¥20 = ¥600 baseline
- · All of these inputs are guesses until we measure real usage in the pilot. Could be ¥300. Could be ¥1,200. We won't know until ~50 real users in week 4.
- · The cap matters more than the number: free tier inference cost stays bounded per user. Heavy users hit a soft limit and either upgrade or wait until next month.
Model-agnostic routing (§03) reduces this further: same workload on Gemini Flash vs GPT-5 vs Claude Sonnet swings cost by 5–10×. We route to the cheapest model that meets quality bar per task type. The ceiling is set, the floor moves with our routing intelligence.
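A minimal sketch of the cap + routing mechanic, assuming illustrative model names, per-action costs (~¥10 cheap / ~¥40 premium), and a three-tier quality bar — none of these are the real price book:

```python
# Route each AI action to the cheapest model that meets its quality bar,
# and enforce the per-user monthly inference cap (soft limit, free tier).
MODELS = [                 # (name, cost_per_action_yen, quality_tier),
    ("cheap-fast", 10, 1), # sorted cheap → premium; names are illustrative
    ("mid", 20, 2),
    ("premium", 40, 3),
]
COST = {name: cost for name, cost, _ in MODELS}
FREE_CAP_YEN = 600         # working per-user monthly budget (30 × ¥20)

def route(required_tier: int, spent_yen: int):
    """Cheapest model meeting the quality bar; None → soft cap hit."""
    for name, cost, tier in MODELS:
        if tier >= required_tier:
            return None if spent_yen + cost > FREE_CAP_YEN else name
    return None

spent = 0
for tier in [1, 1, 3, 2, 1]:           # a sample mix of action complexities
    model = route(tier, spent)
    if model is not None:
        spent += COST[model]
print(f"spent ¥{spent} of ¥{FREE_CAP_YEN} cap")  # → spent ¥90 of ¥600 cap
```

The routing table is where the 5–10× cost swing lives: re-pricing one row (say, a cheaper Flash-class model) lowers the floor without touching the cap.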
What does the launch year cost?
Three line items dominate. None of them is "buy ads." This isn't a paid-acquisition budget; it's an operational + sponsored-seat budget for a community-led launch.
| Line item | Working estimate | How we got there |
|---|---|---|
| Sponsored inference (free tier of paying users + waitlist) | ¥4–7M | ~3,500 avg free users × ¥600 cap × 12 months — actual lands lower because not everyone hits cap. |
| Pilot community partnerships (sponsored seats + events + marketing) | ¥3–6M | 50 sponsored seats × ¥600 × N months × 4 communities if we run synchronized at scale. Pilot-only is much less (~¥360k). |
| Ops + content + advisor (Marco contract · content · legal · advisor compensation) | ¥8–12M | The biggest single bucket. Includes Marco's contract, JP advisor fees, content production, APPI legal review, founder-dinner expenses. |
| Total working envelope | ¥15–25M | A range — actual depends on (a) how many communities we activate, (b) duration of free runway, (c) how much advisor / content we choose to fund. |
¥15–25M ≈ $100–165k USD. This is consistent with what a Tokyo seed-stage SaaS spends on a community-led launch year — but we should test it against Zunou's actual cash runway and prioritisation before treating it as the plan.
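The envelope arithmetic made explicit. The line-item ranges are the table's; the implied cap-utilization share is derived here, not stated in the source:

```python
# Launch-year budget envelope (¥). The sponsored-inference range implies a
# utilization share well under the theoretical full-cap ceiling.
avg_free_users, cap_yen, months = 3_500, 600, 12
full_cap_ceiling = avg_free_users * cap_yen * months   # ¥25.2M theoretical max

inference = (4_000_000, 7_000_000)    # sponsored inference, working range
community = (3_000_000, 6_000_000)    # sponsored seats + events + marketing
ops = (8_000_000, 12_000_000)         # Marco contract · advisors · legal · content

total_lo = inference[0] + community[0] + ops[0]   # ¥15M
total_hi = inference[1] + community[1] + ops[1]   # ¥25M

print(f"full-cap ceiling ¥{full_cap_ceiling/1e6:.1f}M; ¥4–7M implies "
      f"{inference[0]/full_cap_ceiling:.0%}–{inference[1]/full_cap_ceiling:.0%} "
      f"of cap actually consumed")
print(f"envelope: ¥{total_lo/1e6:.0f}M – ¥{total_hi/1e6:.0f}M")
```

The derived 16–28% utilization is the number to validate in week 4 of the pilot: if real users consume more of the cap, the inference line — not ops — is what blows the envelope.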
88% of AI agent pilots fail to graduate to production.
That's the Gartner finding for 2026. Most enterprise buyers have been burned, or seen peers burned. Our discipline is the response.
Source citations on every AI output. Addresses Gartner's #1 blocker — evaluation gaps (64% of failed pilots). Every action Zunou takes is anchored to the meeting / message / document it came from.
Human-in-the-loop on every external action. Addresses governance friction (57% of failed pilots). AI drafts; humans send. We never auto-act on someone's behalf.
Refuse to ship anything below 65% acceptance in beta. Addresses model reliability (51% of failed pilots). The metric is gated — if a feature can't beat the bar, it doesn't reach launch.
A pre-committed stop-or-go decision in ~6 months.
No 18-month death march. Six PMF criteria; three possible outcomes; one explicit rule we agree to in advance.
Fuel: open communities #5–10. Raise sponsored-seat caps. Push toward the KGI.
Extend: 60 more days, then re-decide. Don't force it; don't kill it prematurely either.
Pivot: most likely candidates are a Ringi-first vertical or an events-only product.
The six criteria: NSM ≥ 200 weekly active users past the magic number · one launch community at 25%+ density · 35%+ activation in 14 days · ≥ 3 paid logos · inference cost ≤ ¥600 / AU / mo · Member-of-N ≥ 15%.
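The rule is mechanical by design — agreed in advance so it can't be renegotiated under month-6 pressure. As a predicate (criterion keys are shorthand invented here; thresholds are the §11 values):

```python
# Stage-gate: count how many of the six PMF criteria passed, then apply
# the pre-committed rule — ≥4 → fuel, exactly 3 → extend 60 days, ≤2 → pivot.
THRESHOLDS = {                      # criterion → minimum (higher is better)
    "nsm_wau_past_magic": 200,
    "top_community_density_pct": 25,
    "activation_14d_pct": 35,
    "paid_logos": 3,
    "member_of_n_pct": 15,
}
INFERENCE_CEILING_YEN = 600         # the one criterion where lower is better

def stage_gate(metrics: dict) -> str:
    passed = sum(metrics[k] >= v for k, v in THRESHOLDS.items())
    passed += metrics["inference_yen_per_au"] <= INFERENCE_CEILING_YEN
    if passed >= 4:
        return "fuel"
    return "extend-60-days" if passed == 3 else "pivot"

# Hypothetical month-6 readout: 4 of 6 pass → fuel.
print(stage_gate({"nsm_wau_past_magic": 210, "top_community_density_pct": 27,
                  "activation_14d_pct": 31, "paid_logos": 4,
                  "member_of_n_pct": 12, "inference_yen_per_au": 540}))
```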
12 open decisions. Click your vote on each.
We don't need consensus — we need explicit objections so we can address them or proceed with the disagreement noted. Five decisions are already settled and shown for context. The rest are open.
Home market; second geography sequenced post-PMF, not preemptively.
Knowing it grows slowly at first.
Accept that users without calendar see a deliberately broken-feeling product.
≥4 of 6 PMF criteria → fuel; ≤2 → pivot; 3 → extend 60 days.
Replace with warm-intro + community + earned media.
Working estimate ~¥2.5M exit-month run-rate (math in §09 panel A). Original aspiration was ¥30M. Stretch ~¥14.7M. Which scenario do we ratify?
Working hypothesis. Derivation in §05. Refit monthly against real cohort data once pilot starts. Pushback now if you think the inputs are off.
Current recommendation in §06: pilot one community deep first (TAI), learn, then double. Original §05 plan was 4-at-once synchronized. Pilot-first means slower but lower risk. Vote on which approach.
Working estimate: 50 members × ¥600 × N months × 4 communities. Candidate Ns: 3 months vs 6 months. Question to test with community owners. Bounded liability either way — not 'free forever'.
Connpass + Doorkeeper APIs + Lu.ma at launch. The discovery question (§06.7): do we put this in front of unsigned-in users to drive top-of-funnel?
Per-user inference math + sponsored-seat caps + line-item placeholders. §09 panel D shows ¥15–25M working envelope. Ratify the methodology, not the total.
“Strategy in Notion. Tasks in Asana. Decisions in Slack. Good luck.” Effective for builders / operators; less so for enterprise.
A = passive emergence (default); B = lightweight in-product recs (Phase 3+ test); C = community discovery as a core surface.
Three patterns spec'd in §09 panel B (Notion-style / Linear-style / usage-metered). Validate with community owners + early users before locking.
In-app widget + Slack channel. First CSM hire post-day-180 if Phase 5a fuel.
Revisit at day 90 once data flows.
Loom rode this exact loop to 25M users — every shared video is an ad.
How voting works: click any chip below an open decision. Your vote is saved to your authenticated session — Cloudflare Access already knows who you are, so no separate login. Marco sees the aggregated results at /admin/votes. You can change your vote any time; the most recent click wins. Hover the chip after clicking to confirm the saved state.
What we still need to build before launch.
A real plan names its gaps. Four operational gaps surfaced in the audit became Decisions 14–17 above. Here's the rest of the list.
- · Pricing v0 (tier shape + self-serve upgrade)
- · Customer support model (who answers in JP / EN, on what channel)
- · Crisis / outage playbook for launch week
- · Hiring plan tied to day-180 stage-gate
- · Advisor / investor update cadence
- · Mobile / PWA experience details
- · Account deletion + data export (APPI)
- · Team admin / permissions UX
- · Notification preferences
- · Performance budget + JP CDN
- · Accessibility (WCAG AA)
- · Error recovery during meetings
- · Onboarding email copy (cadence ready)
- · Founder demo script for AI Tinkerers
- · Community-owner pitch deck (separate from this)
- · Investor pitch deck
- · Partnership MoU template
- · PR Times release calendar
- · JP press relationships
- · Customer reference library (Phase 2)
- · Cohort analysis cadence (Mon / first Friday)
- · Attribution model (last-touch v0)
- · Qualitative feedback loop post-launch
- · Churn / exit interview process
- · Single-founder dependency contingency
- · Competitive-event response (Notion AI launch, etc.)
- · Launch-community-cancellation backup
- · AI provider outage graceful degradation (multi-provider failover: Anthropic / OpenAI / Gemini)
Tagged: MUST before Phase 1 · SHOULD before Phase 2 · DEFER to Phase 3+. The 17 decisions above resolve the highest-priority gaps. The remaining 24 need owners, dates, and tracking — Phase 0 work items.
React on the 17. Push back specifically. Default = ship.
We don't need consensus — we need explicit objections so we can address them or proceed with the disagreement noted.
If you're on board
A single thumbs-up reply on the 17 is enough. We move forward and lock the Phase 0 owners + dates.
If you have a concern
Name the specific decision number + the alternative you'd ratify instead. We'll add it to the agenda for the live discussion.
Three ways to respond
Email or Slack DM. Decision number + ✅ / ⚠️ / ❌. One line each is fine.
Slack thread on the launch channel. Everyone sees the reactions at once — fastest path to alignment.
30-min decision meeting once we have written reactions. Calendar invite from Marco when scheduled.
"If you finish reading this and have no objections, we haven't written it well enough."
Glossary.
The international team doesn't all share the same vocabulary. Every acronym and strategic term used in this proposal — defined in one place.
Metrics & strategy
- KGI 重要目標達成指標
- Key Goal Indicator (重要目標達成指標) is the top-level outcome the company commits to. In Japanese strategy practice it sits above NSM and KPIs. Ours is ¥30M ARR within 12 months of synchronized launch.
- NSM
- North Star Metric — the single product metric that most closely tracks the value users get. Connects KGI (lagging) to KPIs (leading). Ours: weekly active users past the magic number, in a partner community.
- KPI
- Key Performance Indicator — the weekly-tracked leading metrics (activation rate, time-to-calendar-connect, W4 retention, etc.) that roll up into NSM. See §11 (Stage-gate) for our specific six.
- PMF
- Product-Market Fit — the point where customers pull the product through the funnel instead of being pushed. Often invisible from inside but unmistakable from outside.
- ARR
- Annual Recurring Revenue — the yearly value of subscription contracts. Standard B2B SaaS health metric. ¥30M ARR ≈ US$200k.
- MoM
- Month-over-Month growth rate — (this month − last month) / last month. We commit to ≥20% MoM in the second half of the launch window.
- ICP
- Ideal Customer Profile — the company / user we're built for and who we deliberately target. Ours: founder-led English-tolerant Tokyo scale-ups.
- WAU
- Weekly Active Users — users who engaged within the rolling 7-day window. We use this for NSM rather than DAU (daily) because executive use patterns are weekly, not daily.
- DAU
- Daily Active Users — meaningful action within 24h. DAU/WAU ratio is a 'stickiness' indicator.
- AU
- Active User — a single engaged user. Our inference budget is expressed per active user per month (¥600/AU/mo on free tier).
Growth & product
- Magic number
- The threshold (defined per product) above which retention jumps. Slack: 2,000 messages. Facebook: 7 friends in 10 days. Ours (v0 hypothesis): 5 colleagues from same community + 1 calendar + 3 AI actions accepted, in 14 days.
- Density
- Zunou-specific term — the condition where enough colleagues in the same community / team are using the product that its AI has the cross-context it needs to be useful. The product IS the density.
- PLG
- Product-Led Growth — go-to-market where the product (free tier + self-serve onboarding) does the selling. Notion, Linear, Figma, Slack all archetypal PLG.
- CLG
- Community-Led Growth — distribution model where the user community generates referrals, content, social proof. Often paired with PLG.
- Member-of-N
- A Zunou-specific KPI: the % of weekly active users who are members of 2+ partner communities. Tracks the cross-community percolation effect from the sympathetic-detonation launch.
Physics & network theory
- Sympathetic detonation
- Physics term used for our launch mechanic. When multiple communities with overlapping members are launched in the same week, attendance at one triggers attendance at another. Adjacent fuses light each other.
- Percolation threshold
- Statistical physics concept describing when adding edges to a graph causes the structure to flip from sparse disconnected clusters into one giant connected component. Our 4-community launch is engineered to cross this threshold inside Tokyo's English-speaking founder graph.
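The threshold behavior is easy to see in a toy simulation (illustrative only — the real launch graph is small and structured, not a uniform random graph): below average degree ≈ 1 the graph stays in small disconnected clusters; above it, one giant connected component appears.

```python
import random

# Toy percolation demo: largest connected component of a G(n, p) random
# graph, below vs above the threshold at average degree ≈ 1.
def giant_component_fraction(n: int, avg_degree: float, seed: int) -> float:
    rng = random.Random(seed)
    parent = list(range(n))            # union-find over n "members"

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    p = avg_degree / n                 # edge probability for target degree
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:       # "these two know each other"
                parent[find(i)] = find(j)

    sizes: dict[int, int] = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

sparse = giant_component_fraction(1_000, 0.5, seed=42)   # below threshold
dense = giant_component_fraction(1_000, 2.0, seed=42)    # above threshold
print(f"largest cluster: {sparse:.0%} of graph below vs {dense:.0%} above")
```

The jump is discontinuous in character — which is exactly why the launch targets overlapping communities in the same week rather than adding members one at a time.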
AI & technical
- MCP
- Model Context Protocol — open standard introduced by Anthropic in Nov 2024, adopted by OpenAI / Microsoft / Google in 2025–26, donated to the Linux Foundation in Dec 2025. 10,000+ public servers exist. Zunou is MCP-native.
- HITL
- Human-in-the-Loop — AI proposes / drafts, humans confirm before any externally-visible action (sending email, posting to Slack, creating events). Our discipline against the 88% agent-pilot failure rate.
- LLM
- Large Language Model — the family of AI systems (Claude / GPT / Gemini / Llama / Mistral / etc.) that Zunou routes through MCP-mediated context for each task.
Japan-specific
- Ringi 稟議
- 稟議 (ringi) — the standard Japanese corporate decision process: a written proposal (ringisho) circulates from lower-level employees upward, each stamping approval, before a final senior sign-off. Slow but builds organisational buy-in. Zunou's Ringi-automation alpha is a defensible JP-specific feature.
- Keigo 敬語
- 敬語 (keigo) — Japanese honorific speech system, with three registers: teineigo (polite), sonkeigo (respectful, for the listener), kenjogo (humble, for the speaker). External-facing communication that gets keigo wrong reads as offensive. The Zunou wedge against Notion AI / Slack AI / Copilot.
- APPI
- Act on the Protection of Personal Information — Japan's primary data privacy law. The 2025–26 amendments add administrative penalties and stricter cross-border transfer rules. Required compliance for any JP enterprise customer.
- ISMS
- Information Security Management System — the formal security-management framework defined by ISO/IEC 27001. Required by most Japanese enterprises before they sign a SaaS contract. Independent audit, ~6-month process, ongoing recertification. Together with APPI compliance, this is the gate to enterprise sales.
- METI
- 経済産業省 — Japan's Ministry of Economy, Trade and Industry. Runs national AI strategy and the SME AI subsidy programs (50–66% project cost reimbursement, ¥300k–¥4.5M per grant). Japanese companies are eligible.
- IVS
- Infinity Ventures Summit — Japan's largest startup conference, held annually in Kyoto (July). 60+ alumni exits, freee and COVER among them. Organised by Headline Asia. Phase 2 of our rollout is anchored here.
- TAI
- Tokyo AI (TAI) — the largest technical AI community in Japan. Engineers, researchers, investors, PMs. Founded by Ilya Kulyatin. Monthly meetups + Connpass + Slack. 4,000+ members as of May 2026.
- AiSalon
- AiSalon Tokyo — global community for AI-focused founders, builders, investors. Tokyo chapter co-hosted with Tokyo AI, JETRO-supported. Monthly in-person events with lightning talks.
- JETRO
- Japan External Trade Organization — government-affiliated agency that supports foreign business with entry into Japan and Japanese business expansion globally. Free advisory; useful for non-JP companies setting up. Less relevant for Zunou (we're JP-native) except for inbound advisor relationships.
Business & process
- PWA
- Progressive Web App — a web app that installs to the user's home screen, runs offline, sends push notifications. Zunou's current shipping surface is a PWA.
- MoU
- Memorandum of Understanding — a written but typically non-binding agreement that signals commitment. Used in our portfolio-as-community play for the GP-Zunou agreement.
- MoSCoW
- Prioritization framework that buckets work into Must (do now), Should (after must), Could (if time), Won't (this cycle). Used in product-requirements.md to tier features for Phase 0 / 1 / 2 / deferred.
- OAuth
- Open Authorization — the protocol that lets you grant a third-party (like Zunou) access to your Google Calendar or Slack workspace without sharing your password. Foundation of all modern integrations.
- SaaS
- Software-as-a-Service — the dominant B2B software model: web-delivered, subscription-priced, continuously updated. Zunou is SaaS.
- CSM
- Customer Success Manager — a role responsible for post-sale customer adoption, retention, and expansion. Decision 15 commits to delaying our first CSM hire to post-day-180 stage-gate.
- GTM
- Go-to-Market — the strategy for launching, positioning, distributing, selling a product. This whole proposal is Zunou's Japan-led GTM plan.
- CDN
- Content Delivery Network — distributed servers around the world that cache static content close to users for fast delivery. Cloudflare runs one of the largest. zunou.anysigma.com is served from it.
References.
Every numerical claim above traces back to a public source. Listed here for anyone who wants to verify or go deeper.
Market & competitive
- GeekWire — Xembly discontinued service June 2024
- BigGo Finance — Yomiuri/Teikoku JP gen-AI adoption 34.6%
- Gartner — 40% of enterprise apps will embed AI agents by EOY 2026
- Joget summary — 88% of AI agent pilots fail to graduate to production (Gartner)
- TechCrunch — Sakana AI Series B ($135M at $2.65B valuation)
- TechCrunch — LayerX Series B ($100M)
Model Context Protocol
- Anthropic — Introducing the Model Context Protocol (Nov 2024)
- Anthropic — Donating MCP to the Linux Foundation (Dec 2025)
- Pento — A Year of MCP: from internal experiment to industry standard
- The New Stack — Why the Model Context Protocol won
- DigitalApplied — MCP adoption statistics 2026 (10,000+ servers, 97M SDK downloads)
- Wikipedia — Model Context Protocol (neutral overview)
Growth & PLG benchmarks
- Mode — Facebook's "7 friends in 10 days" aha moment
- First Round Review — Notion marketing playbook (YC + designer Twitter seed)
- Growthcurve — How Discord grew (133% MoM at 3M users)
- First Round Review — Figma's 5 phases of community-led growth
- Growth In Reverse — Lenny Rachitsky's path to 1M subscribers
- Wikipedia — Sympathetic detonation (physics)
- Wikipedia — Percolation threshold (statistical physics)
Japan-specific context
- White & Case — Japan's first AI law (2025, promotion-focused)
- OECD.AI — METI subsidies for AI (50–66% of project cost reimbursed)
- IDC — Japan AI infrastructure 7× growth ($5.5B by 2026)
- SecurePrivacy — Japan APPI compliance (2025–26 enforcement)
- DemandSage — Slack statistics 2026 (APAC 19% YoY, Japan 24.46% of traffic)
- Tokyo AI (TAI) — community page (4,000+ members)
- AI Tinkerers Tokyo (Ginza chapter)
- Venture Café Tokyo (Toranomon Thursday Gathering)
AI cost & infrastructure (model-agnostic routing)
- OpenAI — API pricing (GPT-5 family · GPT-5 Nano / mini / standard / Pro)
- Anthropic — Claude API pricing (Haiku / Sonnet / Opus)
- Google — Gemini API pricing (Flash / Pro / Ultra)
- Artificial Analysis — multi-provider model price + speed + quality comparisons
- OpenRouter — unified API across 100+ models for cost-optimised routing
Behavioral design
Last verified: 2026-05-11. If a source url is broken or you want a section traced to a specific footnote, ask Marco. The detailed source-by-claim mapping lives in the internal strategy markdown.