Evidence in
Product teams already write everything down: sales call notes, support tickets, customer-research transcripts, competitor scans, decision memos, retro outcomes. Cultivate asks for nothing new — just a single canonical place for it all to land. Winnow's `raw/` folder is that place. Sub-folders (`sales/`, `customer_success/`, `support/`, `research/`, `market/`, `competitors/`, `systems/`, `decisions/`, `delivery/`, `technologies/`) keep evidence routed by source type so the synthesis prompts can be tuned per category. The `technologies/` sub-folder is the team's curated reading of the technology landscape — release notes, library shifts, hardware launches that change what's now feasible. It rolls up into a separate Capabilities surface that widens what's feasible when recommendations get shaped.
Four input paths converge on the same destination:
- Editor or filesystem — drop a markdown file into `raw/sales/` from your editor of choice.
- Web UI — a Create-evidence form with a WYSIWYG markdown tab and a PDF/DOCX upload tab.
- Webhook API — bearer-gated `POST /api/raw/{subdir}` for n8n, Zapier, Make, or custom scrapers. Optional `auto_ingest` triggers the synthesis pipeline once the burst settles (see the sketch after this list).
- Cloud sync — symlink `raw/` into a Dropbox, iCloud, Google Drive, or OneDrive folder; drop files from any device.
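As one concrete example of the webhook path, here is a minimal Python sketch. The host, port, token variable, multipart upload, and the idea that `auto_ingest` travels as a query parameter are all assumptions for illustration; only the route shape and the bearer requirement come from the description above.

```python
import os
import requests  # third-party: pip install requests

# Placeholder host/port and env var name; adjust for your Winnow install.
BASE_URL = "http://localhost:8000"
TOKEN = os.environ["WINNOW_API_TOKEN"]

with open("2025-06-12-acme-discovery-call.md", "rb") as f:
    resp = requests.post(
        f"{BASE_URL}/api/raw/sales",                   # POST /api/raw/{subdir}
        headers={"Authorization": f"Bearer {TOKEN}"},  # bearer-gated
        params={"auto_ingest": "true"},                # assumed to travel as a query parameter
        files={"file": f},                             # assumed multipart file upload
    )
resp.raise_for_status()
```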
Files are immutable once they're in. The original binary is preserved untouched; the LLM reads an extracted-text view at synthesis time.
The wiki maintains itself
The synthesis pipeline is the same shape on every run: discover unprocessed sources, classify them by content hash (new / revision / unchanged), pick the wiki pages most relevant to each source via a cheap selector model, then call a smarter synthesis model with the foundations, the selected pages, and the raw evidence. The model returns a structured edit plan. The runtime stages every change to a temp directory, validates it, and either applies atomically or rolls back — there's no half-applied wiki.
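In code, that run shape looks roughly like the sketch below. Everything here is illustrative: the selector and synthesis functions are stubs standing in for LLM calls, the file names are assumptions, and the final copy loop only gestures at the atomic apply-or-rollback the runtime actually performs.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

# Illustrative stubs; in Winnow these steps are LLM-backed.
def select_relevant_pages(evidence: str, wiki_dir: Path) -> list[Path]:
    return sorted(wiki_dir.glob("*.md"))      # cheap selector model picks a subset

def synthesize_edit_plan(foundations: str, pages: list[Path], evidence: str) -> dict[str, str]:
    return {}                                 # smarter model returns {relative page path: new content}

def validate(plan: dict[str, str]) -> bool:
    return all(text.strip() for text in plan.values())

def run(raw_dir: Path, wiki_dir: Path, seen: dict[str, str]) -> None:
    foundations = (wiki_dir / "foundations.md").read_text()   # assumed filename
    for source in sorted(raw_dir.rglob("*.md")):
        digest = hashlib.sha256(source.read_bytes()).hexdigest()
        if seen.get(str(source)) == digest:
            continue                          # unchanged; new or revised sources fall through
        pages = select_relevant_pages(source.read_text(), wiki_dir)
        plan = synthesize_edit_plan(foundations, pages, source.read_text())

        # Stage every change to a temp directory, validate, then apply or discard as a unit.
        with tempfile.TemporaryDirectory() as tmp:
            staging = Path(tmp)
            for rel, text in plan.items():
                (staging / rel).parent.mkdir(parents=True, exist_ok=True)
                (staging / rel).write_text(text)
            if validate(plan):
                for rel in plan:
                    shutil.copy2(staging / rel, wiki_dir / rel)  # sketch only; not a true atomic swap

        seen[str(source)] = digest
```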
Seven commands cover the full lifecycle:
- `winnow ingest` — general-purpose evidence (sales / support / research / market / competitors / systems).
- `winnow decide` — documented decisions.
- `winnow deliver` — delivery outcomes.
- `winnow innovate` — technology shifts. Reads `raw/technologies/`; writes capability pages that name what's now possible without proposing what to build.
- `winnow discover` — scans the wiki for tension and proposes opportunities.
- `winnow recommend` — generates recommendations from validated opportunities. Mature capabilities tied to a candidate opportunity flow in as constraint-relaxers, never as primary justification.
- `winnow reconcile` — two-step propose/apply for baseline foundation updates. Opt-in `--include-capabilities` lets a mature capability propose striking through a constraint paragraph it now relaxes.
Every command supports `--dry-run` so you can preview the plan before anything writes. Reconcile is two-step by design: the propose step writes a sidecar; the apply step is always an explicit human action.
Capabilities widen what's feasible
Customer evidence drives what a team should build. Technology evidence changes how they'd build it — or whether a constraint that ruled an opportunity out last quarter still holds. Cultivate's seventh command, `winnow innovate`, is the lateral lane for that. Drop a release note in `raw/technologies/` (Anthropic + OpenAI changelogs, Hacker News show-HN, vendor update emails — whatever the team already reads) and the LLM synthesises it into a Capability page in `wiki/capabilities/` with three things: a verb-imperative (“extract structured fields from scanned PDFs at <$0.01 per page”), a maturity rating (proposed / early / mature), and a list of existing opportunities it could relax a constraint for.
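For illustration, those three fields might be modelled like the Python sketch below; the field names are hypothetical, and the real artifact is a markdown page under `wiki/capabilities/`, not a Python object.

```python
from dataclasses import dataclass, field

@dataclass
class CapabilityPage:
    statement: str                                       # verb-imperative: what is now possible
    maturity: str                                        # "proposed" | "early" | "mature"
    relaxes: list[str] = field(default_factory=list)     # opportunities whose constraints it could relax

example = CapabilityPage(
    statement="extract structured fields from scanned PDFs at <$0.01 per page",
    maturity="early",
    relaxes=["self-serve-invoice-capture"],              # hypothetical opportunity slug
)
```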
Two methodology guardrails hold the boundary tight. First, capabilities never generate ideas on their own: the synthesis prompt is forbidden from creating opportunities or recommendations, and the runtime denies any write outside `wiki/capabilities/`. Second, customer evidence still has to make the case for acting: a recommendation justified solely by “this technology now exists” is invalid. Capabilities can change how a recommendation gets shaped or which constraints to flag as stale — that's the limit.
Two consumers sit downstream. `winnow recommend` auto-loads mature capabilities tied to the candidate opportunity into the synthesis prompt as a “MATURE CAPABILITIES (constraint-relaxers; not primary justification)” block; recommendations that were informed by a capability persist a `capabilities_consulted:` frontmatter field. `winnow reconcile --include-capabilities` (opt-in, default off) lets a mature capability propose striking through a paragraph in `constraints_and_assumptions.md` that it now relaxes — the same propose/apply two-step every other reconcile path uses. Capability-triggered baseline edits may only target the constraints foundation, never vision / lifecycle / markets / proposition.
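A rough sketch of how that prompt block might be assembled; the capability shape mirrors the previous sketch, the block heading comes from the text above, and the filtering logic is an assumption.

```python
def mature_capabilities_block(capabilities: list[dict], opportunity: str) -> str:
    """Render the prompt block for one candidate opportunity (capability shape assumed)."""
    mature = [c for c in capabilities
              if c["maturity"] == "mature" and opportunity in c["relaxes"]]
    if not mature:
        return ""
    lines = ["MATURE CAPABILITIES (constraint-relaxers; not primary justification)"]
    lines += [f"- {c['statement']}" for c in mature]
    return "\n".join(lines)
```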
Recommendations into AI builders
A recommendation that lives in the wiki carries everything an AI builder needs to start prototyping: a clear opportunity, cited evidence, expected impact, risks, alignment with strategy. Cultivate delivers that bundle into the tools where prototypes actually get made, through three integration surfaces:
- Export to prompt — One click on a recommendation page assembles a paste-ready markdown brief: recommendation + parent opportunity + cited evidence + five foundations as non-negotiable constraints. Pure templating, no LLM call. Paste into Claude Code, Cursor, Lovable, v0, Replit — anything that takes markdown context.
- MCP server — AI builders pull context on demand instead of relying on copy-paste. Read-only by design. Two transports: stdio for local AI-builder subprocesses, HTTP (bearer-gated) for hosted use. Seven tools: list and fetch recommendations and opportunities, fetch any wiki page, fetch the foundations bundle, fetch the recent-changes log.
- OpenAI-compatible gateway — `winnow gateway` boots a chat-completions service on port 11434 (Ollama's default port, for client auto-discovery). Open WebUI, Continue.dev, LibreChat, LobeChat, Cursor's OpenAI-compatible mode — any client that speaks the OpenAI API can chat with the wiki as if it were a model.
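Because the gateway speaks the standard chat-completions API, any OpenAI-compatible client can be pointed at it. A minimal sketch with the official openai Python package, assuming the usual /v1 prefix and the default port; the model name is illustrative and the API key is a placeholder the gateway may or may not check.

```python
from openai import OpenAI  # pip install openai

# Point the standard client at the local Winnow gateway instead of api.openai.com.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="placeholder")

resp = client.chat.completions.create(
    model="winnow",  # illustrative model id; use whatever the gateway advertises
    messages=[{"role": "user",
               "content": "Which opportunities have the strongest customer evidence right now?"}],
)
print(resp.choices[0].message.content)
```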
The wiki is read-only over MCP and the gateway. Adding evidence still flows through the four input paths above, so the audit trail stays clean and the LLM remains the wiki's only editor.
Two ways to run it
Self-host. `pip install winnow`, or run the Docker image (~520 MB, single `/data` volume, backend + frontend in one container). Bring your own LLM key (Anthropic or OpenRouter). Runs on your laptop, your server, or behind a reverse proxy with trusted-header auth (Caddy, Nginx, Tailscale Funnel, Cloudflare Access).
Hosted at usewin.now. Same image, hosted for you. Each workspace gets a subdomain that reads as a sentence — cocacola.usewin.now, “Coca Cola use Winnow.” Bring your own LLM key (no token markup, no rate-limit accounting). Daily volume snapshots, 30-day export window on cancellation. Opening in waves — join the waitlist for early access.