How to navigate Vannus
Vannus is an AI tool selection platform with an elimination-first methodology. The site has a few different surfaces, and which one you use depends on what you're trying to accomplish. This is a short tour.
Three things to keep in mind throughout: the public catalog is open with no signup, paid features are clearly labeled, and the scoring engine has zero awareness of partnership data. Architecture, not policy.
The four primary surfaces
Five tabs in the nav — the logo plus four destinations. Each does one job. None overlap.
The full curated catalog. 284 AI tools scored across nine trust dimensions: Data Sovereignty, Allied Infrastructure, Training Privacy, Conditional Privacy, Compliance Standard, Operational Resilience, Exit Portability, Real-World Utility, and Caution Flag.
Use this when: you want to browse or search by tier (Sovereign, Durable, Moderate, Fragile, Wrapper), check whether a specific tool meets your jurisdiction or compliance criteria, or compare options within a category.
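As a mental model, a catalog entry can be pictured as a record of nine dimension scores plus a tier. This is an illustrative sketch only; the field names, scoring scale, and example tools are assumptions, not Vannus's actual schema:

```python
from dataclasses import dataclass

# Dimension names come from the published framework; everything else
# here (field names, 0-5 scale, example tools) is invented for illustration.
DIMENSIONS = [
    "data_sovereignty", "allied_infrastructure", "training_privacy",
    "conditional_privacy", "compliance_standard", "operational_resilience",
    "exit_portability", "real_world_utility", "caution_flag",
]

TIERS = ["Sovereign", "Durable", "Moderate", "Fragile", "Wrapper"]

@dataclass
class Tool:
    name: str
    tier: str                # one of TIERS
    scores: dict[str, int]   # dimension -> score

def by_tier(catalog: list[Tool], tier: str) -> list[Tool]:
    """Browse the catalog by tier, the way the Tools page lets you."""
    return [t for t in catalog if t.tier == tier]

catalog = [
    Tool("ExampleWriter", "Durable", {d: 3 for d in DIMENSIONS}),
    Tool("ExampleScraper", "Wrapper", {d: 1 for d in DIMENSIONS}),
]
print([t.name for t in by_tier(catalog, "Durable")])  # → ['ExampleWriter']
```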
The full evaluation framework, written publicly. Each of the nine trust dimensions is defined with the signals we use to score it. The exact weights are proprietary; the framework itself is open. Same posture established procurement frameworks have used for decades.
Use this when: you want to understand why a tool received its tier, what "Caution Flag" means and when it's applied, or how the elimination methodology differs from a paid-placement directory.
The Vannus Concierge audit. A founder-led, fixed-price evaluation of your full AI tool stack: $7,500, a 14-day deliverable, written report plus a walkthrough call. Every engagement is pre-screened on intake; if we don't think we can find meaningful savings in your stack, we decline before any contract is signed.
Use this when: you want a written, signed-off evaluation of every AI tool your team uses, with replacement candidates and quantified savings projections. The audit is for SMBs with $5,000–$50,000 in annual AI spend; established procurement platforms (Tropic, Zluri, Vendr) start much higher.
Long-form thinking about AI procurement, evaluation methodology, and the structural problems with how the market currently helps buyers (or doesn't). Updated weekly.
Use this when: you want context for why the framework looks the way it does, or you want to read first before evaluating tools.
The supporting surfaces (in the footer)
Five more tools live in the Resources section of the footer on every page. They're not in the top nav because they're supporting utilities rather than destinations, but they're real and useful.
The elimination engine, used interactively. Type your requirements; watch the catalog get pruned to the tools that match your constraints. The same scoring logic that powers the Tools page, but constraint-driven instead of catalog-driven.
Use this when: you have a specific use case (writing, research, coding, scraping) and want to see which tools survive the filter — not just browse alphabetically.
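The elimination idea can be sketched in a few lines: instead of ranking everything, drop any tool that fails a stated constraint and keep only the survivors. The fields and constraints below are hypothetical stand-ins, not the real scoring logic:

```python
# Toy catalog with invented fields; the real engine checks far more.
tools = [
    {"name": "ToolA", "use_cases": {"writing", "research"}, "eu_hosted": True},
    {"name": "ToolB", "use_cases": {"coding"}, "eu_hosted": True},
    {"name": "ToolC", "use_cases": {"writing"}, "eu_hosted": False},
]

def eliminate(tools, use_case, require_eu_hosting=False):
    """Return the names of tools that pass every constraint."""
    survivors = []
    for t in tools:
        if use_case not in t["use_cases"]:
            continue  # eliminated: doesn't serve the use case
        if require_eu_hosting and not t["eu_hosted"]:
            continue  # eliminated: fails the jurisdiction constraint
        survivors.append(t["name"])
    return survivors

print(eliminate(tools, "writing", require_eu_hosting=True))  # → ['ToolA']
```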
A four-question guided flow that asks about your task, industry, budget, and skill level. Returns a recommended stack of 3–4 tools that actually fit those constraints.
Use this when: you don't know what you're looking for yet, or you want a quick orientation before diving into the full catalog.
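The shape of that flow is simple: four answers narrow a candidate pool, and the top few matches come back as a stack. Everything in this sketch (the candidate data, the matching rules, the cheapest-first tie-break) is invented for illustration:

```python
# Hypothetical candidates; fields and prices are made up.
CANDIDATES = [
    {"name": "WriterX", "task": "writing", "monthly_price": 20, "min_skill": "beginner"},
    {"name": "CoderY",  "task": "coding",  "monthly_price": 30, "min_skill": "intermediate"},
    {"name": "NotesZ",  "task": "writing", "monthly_price": 0,  "min_skill": "beginner"},
    {"name": "ProseQ",  "task": "writing", "monthly_price": 90, "min_skill": "beginner"},
]

SKILL_ORDER = {"beginner": 0, "intermediate": 1, "advanced": 2}

def build_stack(task, industry, monthly_budget, skill, max_tools=4):
    # industry is accepted but unused in this toy example.
    fits = [
        c for c in CANDIDATES
        if c["task"] == task
        and c["monthly_price"] <= monthly_budget
        and SKILL_ORDER[skill] >= SKILL_ORDER[c["min_skill"]]
    ]
    fits.sort(key=lambda c: c["monthly_price"])  # arbitrary tie-break
    return [c["name"] for c in fits[:max_tools]]

print(build_stack("writing", "saas", monthly_budget=50, skill="beginner"))
# → ['NotesZ', 'WriterX']
```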
A back-of-envelope calculator that takes a tool's price, your team size, and how often you'd use it, and returns the cost-coverage window — how long it takes the tool to pay for itself in time saved.
Use this when: you're on the fence about a paid AI tool and want to sanity-check whether the math works for your team specifically.
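The underlying arithmetic is easy to sketch. This is a guess at the kind of formula such a calculator uses; the parameter names (minutes saved per use, an hourly rate) are assumptions, and the real tool may model the inputs differently:

```python
def cost_coverage_months(price_per_seat_month, team_size,
                         uses_per_week, minutes_saved_per_use, hourly_rate):
    """Months of subscription cost covered by one month of time savings.
    A value under 1.0 means the tool pays for itself within the month.
    Illustrative formula only, not the calculator's actual model."""
    monthly_cost = price_per_seat_month * team_size
    # ~4.33 weeks per month; every seat uses the tool uses_per_week times.
    hours_saved = team_size * uses_per_week * 4.33 * minutes_saved_per_use / 60
    monthly_value = hours_saved * hourly_rate
    if monthly_value == 0:
        return float("inf")  # never pays for itself
    return monthly_cost / monthly_value

# Example: $20/seat, 5 people, 5 uses/week each, 10 minutes saved per use,
# time valued at $60/hour -> pays for itself well inside the first month.
window = cost_coverage_months(20, 5, 5, 10, 60)
print(round(window, 2))  # → 0.09
```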
Generates a vendor request-for-proposal template covering data sovereignty, training privacy, compliance certifications, exit portability, and uptime SLA. Aligned to the same nine trust dimensions used in the catalog.
Use this when: you're about to negotiate with an AI vendor and want a checklist of the right questions to ask before signing anything.
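Mechanically, such a template is just question blocks keyed to the trust dimensions. A minimal sketch, with placeholder questions that are not Vannus's actual RFP language:

```python
# Placeholder questions keyed to a few of the nine trust dimensions;
# the real template's wording and coverage will differ.
RFP_QUESTIONS = {
    "Data Sovereignty": "In which jurisdictions is customer data stored and processed?",
    "Training Privacy": "Is customer data ever used to train or fine-tune models?",
    "Compliance Standard": "Which certifications (e.g. SOC 2, ISO 27001) do you hold?",
    "Exit Portability": "What export formats and deletion guarantees do you offer?",
    "Operational Resilience": "What uptime SLA and incident-response terms apply?",
}

def render_rfp(vendor_name):
    """Render a numbered RFP checklist for one vendor."""
    lines = [f"Request for Proposal: {vendor_name}", ""]
    for i, (dimension, question) in enumerate(RFP_QUESTIONS.items(), 1):
        lines.append(f"{i}. [{dimension}] {question}")
    return "\n".join(lines)

print(render_rfp("ExampleVendor"))
```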
Four interactive modes: analyze your current stack, optimize it, compare alternatives, or plan a migration. The most flexible of the catalog-driven tools — useful when you have a hypothesis to test.
Use this when: you have an existing AI stack and want a structured way to evaluate it without booking the full Concierge audit.
The pricing question
The catalog and every supporting tool listed above are open at no charge with no signup required. The two paid surfaces are the Concierge audit ($7,500 fixed price, one-time engagement) and the Room workspace (Pro at $24/month, Pro+ at $59/month — launching Q3 2026 with a Founding 30 lock-in for the first 30 Pro+ subscribers).
The catalog will stay open. That's the structural commitment: Vannus charges buyers (subscriptions and audits), not vendors (placements). The buyer's trust in Vannus depends on us not being beholden to the tools we evaluate.
The transparency layer
The Partners & Transparency page lists every commercial relationship Vannus maintains, including affiliate partnerships with tools that appear in the catalog. The separation between affiliate revenue and tool scoring is structural, not a matter of policy: the scoring code has no access to partnership data. You can verify this against the published methodology.
Where to start
If you don't know where to begin, three reasonable defaults:
- If you're a founder or ops lead with a paid AI stack you can't quite justify: skip ahead to the Audit page or use the ROI Calculator as a quick warmup.
- If you're evaluating a single tool: browse the catalog by tier, or use Diagnosis with your specific constraints.
- If you're building a new stack from scratch: try Build My Stack first, then drill into the catalog for tools that survive the filter.
The site is small on purpose. Every surface is supposed to do one job clearly. If something feels redundant or out of place, that's signal — email drake@vannus.co directly.
About Vannus — we're an AI tool selection platform built around an elimination-first methodology. No vendor influence, no paid placements. We're paid by buyers (subscriptions and audits), not by the vendors we evaluate. Read the methodology.