getbased supports six AI providers for PDF import, chat, and dashboard AI features. You can switch between them at any time in Settings → AI without losing any data. Your API keys are stored locally in your browser and are never sent to getbased servers.

Which features need AI?

| Feature | Requires AI? |
| --- | --- |
| PDF import | Yes |
| AI chat panel | Yes |
| Focus card (dashboard insight) | Yes |
| Health status dots on context cards | Yes |
| AI-generated card tips | Yes |
| Web search in chat | Yes (OpenRouter, PPQ, Venice) |
| Charts, tables, trend alerts | No |
| Manual entry | No |
| JSON export / import | No |
| Correlations, compare dates | No |
All non-AI features work fully without any provider configured. You can load a demo profile, enter results manually, and explore charts and trends before setting up AI.

Set up a provider

OpenRouter is the easiest way to get started. It gives you access to 200+ models — Claude, GPT, Gemini, DeepSeek, Grok, Qwen, and more — with a single account. Pay with a card or USDC. Supports web search in chat.

Connect with OAuth (easiest):
1. Select OpenRouter in Settings. Open Settings (gear icon in the header), go to the AI tab, and select OpenRouter.
2. Click Connect with OpenRouter. You’ll be redirected to OpenRouter to authorize getbased. No API key needed: the app handles the token exchange automatically.
3. Choose a model. Back in Settings, pick a model from the curated dropdown. Recommended models are sorted first and marked with a star.
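The automatic token exchange mentioned above is the standard OAuth 2.0 PKCE flow that OpenRouter supports. As a rough illustration of what happens under the hood (this is not getbased's actual code, and you should verify the authorization URL parameters against OpenRouter's current OAuth documentation), the client generates a code verifier and an S256 challenge, then sends the user's browser to OpenRouter with that challenge:

```python
import base64
import hashlib
import secrets
from urllib.parse import urlencode

def pkce_pair() -> tuple[str, str]:
    """Generate an OAuth PKCE code verifier and its S256 challenge."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def auth_url(callback_url: str, challenge: str) -> str:
    """Build the OpenRouter authorization URL the browser is redirected to."""
    params = urlencode({
        "callback_url": callback_url,   # where OpenRouter sends the user back
        "code_challenge": challenge,
        "code_challenge_method": "S256",
    })
    return f"https://openrouter.ai/auth?{params}"

verifier, challenge = pkce_pair()
url = auth_url("https://app.example/callback", challenge)
```

After authorization, the app exchanges the returned code plus the verifier for an API key, which is why you never have to copy a key by hand.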
Connect with an API key:
1. Get an API key. Create an account and generate an API key at openrouter.ai.
2. Paste your key in Settings. Open Settings → AI, select OpenRouter, and paste your API key.
3. Choose a model. Pick a model from the dropdown, or type any OpenRouter model ID into the custom input field.
The OAuth connect button also appears in the chat panel when no provider is configured — you can set up OpenRouter without going to Settings.
All providers show a tiered model dropdown with two groups:
  • Recommended — the latest, most capable models for lab interpretation, sorted first
  • Other — all remaining available models
Recommended models are chosen for accuracy with medical and scientific data. You can use any model, but recommended ones produce the most reliable results.
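The custom model ID mentioned above is any ID OpenRouter recognizes, in a vendor/model format. As a minimal sketch of what a client sends (the model ID below is a placeholder, not a real model; OpenRouter's chat endpoint is OpenAI-compatible):

```python
import json

# OpenRouter exposes an OpenAI-compatible chat completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def chat_request(model_id: str, user_message: str) -> dict:
    """Build the JSON body for a single-turn chat completion request.

    `model_id` is whatever you would type into the custom input field;
    the ID used below is purely illustrative.
    """
    return {
        "model": model_id,
        "messages": [{"role": "user", "content": user_message}],
    }

body = chat_request("some-vendor/some-model", "Summarize my latest labs.")
payload = json.dumps(body)  # sent with an Authorization: Bearer <key> header
```

Any model you select in the dropdown ends up as the `model` field of a request like this one.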
Use the same model for all your imports. When you import a lab PDF, the AI generates marker keys (like biochemistry.glucose) to map results. Different models may generate slightly different keys for the same marker, which can cause the same biomarker to appear as two separate entries in your charts. Pick a model and stick with it. If you do switch, getbased runs a pre-flight check before each import and warns you if your model has changed since the last import.

How much does it cost?

AI providers charge based on how much text is sent and received. getbased displays the exact cost of every interaction in the chat panel. Here is what real usage costs with the recommended models:
| Model | Provider | Import a lab PDF | Chat message | First month* | Ongoing month** |
| --- | --- | --- | --- | --- | --- |
| Claude Sonnet 4.6 | OpenRouter / Routstr / PPQ | ~$0.04 | ~$0.02 | ~$1.00 | ~$0.50 |
| GPT 5.4 | OpenRouter / Venice / Routstr / PPQ | ~$0.03 | ~$0.02 | ~$0.80 | ~$0.45 |
| Gemini 3.1 Pro | OpenRouter / Venice / Routstr / PPQ | ~$0.03 | ~$0.01 | ~$0.60 | ~$0.35 |
| Grok 4 | OpenRouter / Venice / Routstr / PPQ | ~$0.01 | ~$0.005 | ~$0.25 | ~$0.15 |
| Any model | Custom API (direct key) | Varies | Varies | Varies | Varies |
| Any local model | Local AI (Ollama, LM Studio, Jan) | Free | Free | Free | Free |
\* First month: importing your first labs and setting up your profile through chat — typically 3–5 imports and 30+ chat messages.
\*\* Ongoing month: 2–3 lab imports, 20–30 chat messages, dashboard AI features. Heavy users who chat daily may spend 2–3× more.
Most users spend well under $1 per month. Every AI response in getbased shows its cost below the message, so you always know exactly what you are spending. If your credits run out, the app shows a clear message with a link to add more.
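Since billing is per token sent and received, you can sanity-check the table yourself. The sketch below uses illustrative token counts and an illustrative blended price per million tokens; these are assumptions for the example, not any provider's actual rates:

```python
def monthly_cost_usd(
    imports: int,
    chats: int,
    tokens_per_import: int = 30_000,  # assumption: PDF text plus extraction output
    tokens_per_chat: int = 8_000,     # assumption: chat context plus reply
    price_per_million: float = 1.50,  # assumption: blended in/out rate in USD
) -> float:
    """Estimate monthly spend from token volume; every default is an assumption."""
    total_tokens = imports * tokens_per_import + chats * tokens_per_chat
    return total_tokens / 1_000_000 * price_per_million

# A typical "ongoing month" from the table above: 3 imports, 25 chat messages.
cost = monthly_cost_usd(imports=3, chats=25)  # about $0.44 with these assumptions
```

With numbers in this ballpark, the estimate lands well under $1 per month, consistent with the table above.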
Run a local model with Ollama, LM Studio, or Jan and pay nothing. You will need 8GB or more of VRAM — or a Mac with 16GB or more of unified memory — for capable models. The Model Advisor in Settings shows exactly what fits your hardware.
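All three local runtimes serve an HTTP API on your own machine, so pointing a client at them is just a matter of the right localhost URL. The default ports below are the ones these tools commonly use, and the helper itself is a sketch rather than getbased's configuration code; verify the port in each app's settings:

```python
# Default local ports commonly used by each runtime (check your app's settings).
LOCAL_PROVIDERS = {
    "ollama": 11434,    # Ollama's built-in server
    "lm_studio": 1234,  # LM Studio's local server default
    "jan": 1337,        # Jan's local API server default
}

def base_url(provider: str) -> str:
    """OpenAI-compatible base URL for a local provider on this machine."""
    port = LOCAL_PROVIDERS[provider]
    return f"http://localhost:{port}/v1"

url = base_url("ollama")
```

Because everything stays on localhost, lab data never leaves your machine and no API key or credit balance is involved.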