What’s in this Quick Start?
This Quick Start guides you through the end-to-end Credible workflow, focusing on a familiar and powerful pattern: a data modeler defines trusted business logic using an AI coding agent connected to Credible’s modeling tools, and anyone—regardless of data proficiency—can explore the governed data through Credible’s chat-based analysis. Once your data is modeled, analysis happens in natural language; data analysis expertise is not required. You will learn how to:
- Build a semantic model using an AI coding agent with Credible’s modeling tools
- Publish your work to the Credible service for broader consumption
- Analyze data via chat using Credible’s Context Engine and your published semantic model
Step 0: Get Set Up
Before starting, make sure:
- A Credible admin has set up your organization.
- You have a basic understanding of semantic models and Malloy, the language Credible is built on. View Malloy Docs →
Install the CLI and Log In
Open a terminal and install the Credible CLI globally:

Don't have npm installed?
Install Node.js (which includes npm) from nodejs.org.
Log in (replace <yourOrgName> with the organization name provided by Credible):
Then fetch the demo project (an example project for demonstration purposes):
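The exact commands come from your Credible onboarding materials; as a hedged sketch only (the npm package name, subcommands, and arguments below are assumptions, not documented Credible CLI syntax — the `cred` binary name is taken from the publish step later in this guide):

```shell
# All names below are assumptions -- substitute the commands from your onboarding guide.
npm install -g cred        # hypothetical npm package name for the `cred` binary
cred login <yourOrgName>   # hypothetical login subcommand; use your organization name
cred clone demo            # hypothetical subcommand to fetch the demo project
```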
Install the Credible Extension
Credible supports Cursor, VS Code, and Claude Code (running in VS Code) as IDEs for building models. The Credible Extension connects to Credible’s service, sets up managed connections, and configures modeling tools so the agent can help with semantic modeling workflows. The steps below use Cursor as the example, but the same workflow applies to VS Code and Claude Code in VS Code.
- In Cursor or VS Code, go to the Extensions view (Cmd+Shift+X)
- Search for Credible and install the extension — select Auto Update when prompted
- Also install the Malloy extension (make sure you install the release version, not pre-release)
You may see a pop-up asking to enable the Credible MCP server during setup. If it appears, click it to open MCP settings and enable Credible-Modeling. If you miss it, you can always enable it manually — see Enable the Credible-Modeling MCP Server below.
If you set up the demo project, you’ll see a “bq_demo” connection under Connections.

Don't see the Credible Service panel?
The sidebar may be hidden by default in Cursor:
- Open the Explorer panel (Cmd+Shift+E on macOS, Ctrl+Shift+E on Windows/Linux)
- Look for the Credible panel at the bottom of the Explorer sidebar
Panel controls
The Credible Extension can be configured via the Command Palette (Cmd+Shift+P) or by clicking icons in the Credible Service panel:
- Disable Credible: Turn off the extension for this workspace
- Enable/Disable Modeling Tools: Toggle MCP modeling tools on or off
- Refresh: Reload projects, connections, and packages
- Select Organization: Switch between organizations
- Sign Out: Log out of Credible
Create a Workspace Folder
The Credible modeling tools require an open folder in your IDE. Create a new folder for your quickstart workspace and open it:
- Create a new folder (e.g., quickstart-workspace)
- In your IDE, go to File → Open Folder and select the folder you just created
- Press Cmd+B to open the primary sidebar, then expand the Credible panel
Enable the Credible-Modeling MCP Server
When you open a workspace or switch projects, you’ll see a pop-up to enable the Credible-Modeling MCP server — this gives the agent access to Credible’s modeling tools. Click to open MCP settings, then toggle Credible-Modeling on.

Don't see the pop-up?
If you missed the pop-up or your agent can’t access MCP tools:
- Open Cursor Settings (Cmd+Shift+J) or VS Code Settings
- Navigate to Tools & MCP
- Find Credible-Modeling and toggle it on
- If the agent still can’t call MCP tools, reload the window (Cmd+Shift+P → “Reload Window”)
Using Claude Code in VS Code?
Claude Code requires manually registering the MCP server. After installing the Credible Extension, open .vscode/mcp.json in your workspace to find the url value, then register that URL with Claude Code. For example, if your .vscode/mcp.json shows "url": "http://127.0.0.1:62409/mcp", use that address. The port number is assigned dynamically and may differ in your workspace.
Step 1: Build a Semantic Model
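As a sketch, the server can be registered with Claude Code’s `claude mcp add` command (the server name below is an assumption, and the URL must come from your own `.vscode/mcp.json`):

```shell
# Replace the URL with the value from your workspace's .vscode/mcp.json;
# the server name "credible-modeling" is an assumed label.
claude mcp add --transport http credible-modeling http://127.0.0.1:62409/mcp

# Verify the registration:
claude mcp list
```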
At the core of Credible is the semantic model—a governed, versioned interface that defines your data’s meaning. Think of it as an API for your data: it captures not just structure, but business meaning.
Why semantic models matter
In this tutorial, the agent uses table schemas to generate your model. But the real power of semantic modeling is encoding your data’s meaning—capturing institutional knowledge from dbt models, existing semantic layers, query logs, and documentation. When meaning is explicit and maintained as code, it becomes part of your infrastructure, ensuring trust and consistency across every dashboard, API, and AI experience.
Generate Your Semantic Model
For best results, set your Cursor LLM model to Claude Opus 4.6 (not “Auto”). Open Cursor Settings → Models and select Claude Opus 4.6.
Start by asking the agent: “What data is available to model?” After reviewing what’s available, try:
“Build a model of the ecommerce dataset”
Review and Adjust Your Model
Open the generated .malloy model file in the editor. Review and approve the generated model. Explore the model by clicking “Explore” above any source definition, or click “Run” above a view or query to see the results.

Understanding Malloy
Sources define the underlying tables, join relationships, dimensions to slice by, and measures to compute aggregations on. Above each source definition, you’ll see three buttons:
- Schema: View table structure and joins
- Explore: Open the Data Explorer
- Preview: See the first 20 rows
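To make this concrete, here is a minimal hand-written sketch of what a Malloy source can look like. The table, connection, and field names below are hypothetical illustrations, not the actual output of the agent:

```malloy
// Hypothetical names throughout -- your generated model will differ.
source: order_items is bq_demo.table('ecommerce.order_items') extend {
  // Dimensions: fields to slice by
  dimension: is_large_order is sale_price > 100

  // Measures: aggregations to compute
  measure:
    order_count is count()
    total_sales is sale_price.sum()

  // Views: reusable, named queries
  view: by_status is {
    group_by: status
    aggregate: order_count, total_sales
  }
}
```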
Step 2: Publish to the Credible Service
When your model looks good, publish it to the Credible service. You can do this in one of two ways:

Option A — Use the agent: Ask the agent to publish your model. The agent has access to an MCP tool that handles publishing for you.

Option B — Use the CLI directly:
- Navigate to the folder containing your model
- Ensure there is a publisher.json file with a name and version
- Run cred publish --set-latest
After a successful publish, version 0.0.1 appears.
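A minimal publisher.json might look like the following (the package name is a placeholder; name and version are the two fields the publish step requires):

```json
{
  "name": "ecommerce",
  "version": "0.0.1"
}
```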
Learn more about publishing
- The --set-latest flag pins this version as the default for consumers. Omit it to publish without pinning.
- Published packages are accessible to any agent or workspace with the correct permissions.
- Publishing makes your package ready for enterprise-scale serving via the Credible service.
- For details on versioning, see Understanding Package Versions.
- For best practices on organizing projects and packages, see Projects & Packages.
Step 3: Analyze Your Data
You’ve just completed the modeler workflow — building and publishing trusted business logic. Now switch to the consumer experience: chat with your data in the Credible app. This analysis experience is for anyone on your team; no data analysis expertise is required.

Your model needs to be indexed before you can chat with it. You’ll see an alert in the chat if indexing is still in progress. You can also check indexing status on the project page under Packages & Connections.
Start a Conversation
- Navigate to https://<your-org>.app.credibledata.com and log in
- Click + New Chat under your personal workspace, or use the chat bar

Try a prompt like: “Let’s analyze sales by brand for ecommerce data”

Credible ensures trustworthy analysis by using the Credible Context Engine to match your questions to governed definitions in your semantic model, allowing agents to execute reliable queries against your data.
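For a question like sales by brand, the governed query the agent ultimately runs is a Malloy query; a hypothetical sketch, assuming a published source named `order_items` with a `products` join and a `total_sales` measure (none of which are guaranteed by this guide):

```malloy
// Hypothetical source and field names -- the real ones come from
// your published semantic model.
run: order_items -> {
  group_by: products.brand
  aggregate: total_sales
  limit: 10
}
```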
How workspaces work
Your personal workspace automatically has access to models published in your projects. Chats and reports in your personal workspace are private to you. To share with your team, create a shared workspace and add packages to it. All chats and reports created in a shared workspace are visible to its members. Workspaces can contain individual users or groups, making it easy to manage access for teams, departments, or projects.
How the chat works under the hood
The chat agent uses two MCP tools:
- get_context: The Credible Context Engine breaks your question into phrases and matches each to data entities in your semantic model. It returns ranked matches — dimensions, measures, views, relationships — grounded in the meaning you’ve encoded. These aren’t guesses; they’re precise matches.
- execute_query: Runs Malloy queries against your data and returns results.
This is why answers are trustworthy — the agent is grounded in governed definitions, not interpreting or guessing. No wrong answers from bad joins or misunderstood field names.
Generate a Report
After exploring your data, ask the agent to create a report: “Please generate a report of our analysis”

Reports created from chat are saved to your workspace and can be shared with your team.
What You’ve Accomplished
- Defined business logic in a semantic model using AI-assisted modeling
- Published the package organization-wide
- Analyzed data through chat-based natural language queries
- Generated a report to share insights with your team