
What’s in this Quick Start?

This Quick Start guides you through the end-to-end Credible workflow, focusing on a familiar and powerful pattern: a data modeler defines trusted business logic using an AI coding agent connected to Credible’s modeling tools, and then anyone, regardless of data proficiency, can explore the governed data through Credible’s chat-based analysis. Once your data is modeled, analysis happens in natural language; no data analysis expertise is required. You will learn how to:
  1. Build a semantic model using an AI coding agent with Credible’s modeling tools
  2. Publish your work to the Credible service for broader consumption
  3. Analyze data via chat using Credible’s Context Engine and your published semantic model
For this tutorial, you’ll wear both hats—publishing as a modeler, then analyzing as a consumer (no analyst skills needed). New to semantic models? Learn more about semantic models and why they matter →

Step 0: Get Set Up

Before starting, complete the setup steps below.

Install the CLI and Log In

Install Node.js (which includes npm) from nodejs.org if you don’t already have it. Then open a terminal and install the Credible CLI globally:
npm i -g @credibledata/cred-cli
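To confirm Node.js and npm are installed and on your PATH, check their versions in a terminal (any recent LTS release of Node.js should work):

```shell
# Print the installed Node.js version (e.g., v20.x.x)
node --version
# Print the bundled npm version
npm --version
```

If either command is not found, install Node.js from nodejs.org before continuing.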
Log in to your organization (replace <yourOrgName> with the organization name provided by Credible):
cred login <yourOrgName>
Set the CLI to point to the demo project (an example project for demonstration purposes):
cred set project demo

Install the Credible Extension

Credible supports Cursor, VS Code, and Claude Code (running in VS Code) as IDEs for building models. The Credible Extension connects to Credible’s service, sets up managed connections, and configures modeling tools so the agent can help with semantic modeling workflows. The steps below use Cursor as the example, but the same workflow applies to VS Code and Claude Code in VS Code.
  1. In Cursor or VS Code, go to the Extensions view (Cmd+Shift+X, or Ctrl+Shift+X on Windows/Linux)
  2. Search for Credible and install the extension — select Auto Update when prompted
  3. Also install the Malloy extension (make sure you install the release version, not pre-release)
You may see a pop-up asking to enable the Credible MCP server during setup. If it appears, click through to open MCP settings and enable Credible-Modeling. If you miss it, you can enable it manually later; see Enable the Credible-Modeling MCP Server below.
You should see a Credible Service panel in your sidebar. The extension automatically connects to the organization and project specified in your CLI login. In the demo project, you’ll see a “bq_demo” connection under Connections. [Screenshot: Credible Service panel in Cursor]
The sidebar may be hidden by default in Cursor:
  1. Open the Explorer panel (Cmd+Shift+E, or Ctrl+Shift+E on Windows/Linux)
  2. Look for the Credible panel at the bottom of the explorer sidebar
The Credible Extension can be configured via the Command Palette (Cmd+Shift+P) or by clicking the icons in the Credible Service panel:
  • Disable Credible: Turn off the extension for this workspace
  • Enable/Disable Modeling Tools: Toggle MCP modeling tools on or off
  • Refresh: Reload projects, connections, and packages
  • Select Organization: Switch between organizations
  • Sign Out: Log out of Credible
Click on a project to switch projects—you’ll see a list of available projects in the command palette.

Create a Workspace Folder

The Credible modeling tools require an open folder in your IDE. Create a new folder for your quickstart workspace and open it:
  1. Create a new folder (e.g., quickstart-workspace)
  2. In your IDE, go to File → Open Folder and select the folder you just created
  3. Press Cmd+B to open the primary sidebar, then expand the Credible panel
When the folder opens, you may see a popup asking to set up workspace modeling tools — click Yes.

Enable the Credible-Modeling MCP Server

When you open a workspace or switch projects, you’ll see a pop-up to enable the Credible-Modeling MCP server; this gives the agent access to Credible’s modeling tools. Click through to open MCP settings, then toggle Credible-Modeling on. [Screenshots: pop-up prompting to open MCP settings; toggling on the Credible-Modeling MCP server in Cursor settings]
If you missed the pop-up or your agent can’t access MCP tools:
  1. Open Cursor Settings (Cmd+Shift+J) or VS Code Settings
  2. Navigate to Tools & MCP
  3. Find Credible-Modeling and toggle it on
  4. If the agent still can’t call MCP tools, reload the window (Cmd+Shift+P → “Reload Window”)
Claude Code requires manually registering the MCP server. After installing the Credible Extension, open .vscode/mcp.json in your workspace to find the url value, then run:
claude mcp add --transport http Credible-Modeling <url>
For example, if your .vscode/mcp.json shows "url": "http://127.0.0.1:62409/mcp", run:
claude mcp add --transport http Credible-Modeling http://127.0.0.1:62409/mcp
The port number is assigned dynamically and may differ in your workspace.
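For reference, a minimal .vscode/mcp.json along these lines is what the extension generates; this is an illustrative sketch assuming the standard VS Code MCP configuration shape, and the port will differ in your workspace:

```json
{
  "servers": {
    "Credible-Modeling": {
      "type": "http",
      "url": "http://127.0.0.1:62409/mcp"
    }
  }
}
```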

Step 1: Build a Semantic Model

At the core of Credible is the semantic model—a governed, versioned interface that defines your data’s meaning. Think of it as an API for your data: it captures not just structure, but business meaning.
In this tutorial, the agent uses table schemas to generate your model. But the real power of semantic modeling is encoding your data’s meaning—capturing institutional knowledge from dbt models, existing semantic layers, query logs, and documentation. When meaning is explicit and maintained as code, it becomes part of your infrastructure, ensuring trust and consistency across every dashboard, API, and AI experience.

Generate Your Semantic Model

For best results, set your Cursor LLM model to Claude Opus 4.6 (not “Auto”). Open Cursor Settings → Models and select Claude Opus 4.6.
Open Cursor’s chat and ask the agent to build a model. For example:
What data is available to model?
After reviewing what’s available, try:
Build a model of the ecommerce dataset

Review and Adjust Your Model

Open the generated .malloy model file in the editor and review the generated model. Click Explore above any source definition to browse it interactively, or click Run above a view or query to see its results. [Screenshot: Malloy source with Schema, Explore, and Preview buttons] Make adjustments directly in the editor, or ask the agent to help in natural language (e.g., “make a dimension with customer age ranges”).
Sources define the underlying tables, join relationships, dimensions to slice by, and measures to compute aggregations on. Above each source definition, you’ll see three buttons:
  • Schema: View table structure and joins
  • Explore: Open the Data Explorer
  • Preview: See the first 20 rows
For more on Malloy syntax, see the Malloy documentation →
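To make sources, dimensions, measures, and views concrete, here is a minimal hypothetical Malloy source in the spirit of the ecommerce example. The table path and field names are illustrative assumptions, not the model the agent will generate; only the “bq_demo” connection name comes from the demo project:

```malloy
-- Illustrative only: adjust the table path and fields to your schema
source: order_items is bq_demo.table('ecommerce.order_items') extend {
  dimension: price_tier is pick 'high' when sale_price > 100 else 'standard'
  measure:
    order_count is count()
    total_revenue is sale_price.sum()
  view: by_brand is {
    group_by: brand
    aggregate: total_revenue, order_count
  }
}
```

Once defined this way, the dimension, measures, and view all appear in the Data Explorer and are reusable in chat-based analysis.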

Step 2: Publish to the Credible Service

When your model looks good, publish it to the Credible service. You can do this in one of two ways.
Option A (use the agent): Ask the agent to publish your model. The agent has access to an MCP tool that handles publishing for you.
Option B (use the CLI directly):
  1. Navigate to the folder containing your model
  2. Ensure there is a publisher.json file with a name and version
  3. Run cred publish --set-latest
Refresh the Credible Service panel to verify that your package’s version (e.g., 0.0.1) appears.
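As a sketch, a minimal publisher.json might contain just the two fields the CLI requires. The package name here is a hypothetical example; only the name and version fields are stated by this guide:

```json
{
  "name": "ecommerce",
  "version": "0.0.1"
}
```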
  • The --set-latest flag pins this version as the default for consumers. Omit to publish without pinning.
  • Published packages are accessible to any agent or workspace with the correct permissions.
  • Publishing makes your package ready for enterprise-scale serving via the Credible service.
  • For details on versioning, see Understanding Package Versions.
  • For best practices on organizing projects and packages, see Projects & Packages.

Step 3: Analyze Your Data

You’ve just completed the modeler workflow — building and publishing trusted business logic. Now switch to the consumer experience: chat with your data in the Credible app. This analysis experience is for anyone on your team; no data analysis expertise is required.
Your model needs to be indexed before you can chat with it. You’ll see an alert in the chat if indexing is still in progress. You can also check indexing status on the project page under Packages & Connections.

Start a Conversation

  1. Navigate to https://<your-org>.app.credibledata.com and log in
  2. Click + New Chat under your personal workspace, or use the chat bar
[Screenshot: Personal workspace with New Chat button]
Type a question to start analyzing your published model:
“Let’s analyze sales by brand for ecommerce data”
Credible ensures trustworthy analysis by using the Credible Context Engine to match your questions to governed definitions in your semantic model, allowing agents to execute reliable queries against your data.
Your personal workspace automatically has access to models published in your projects. Chats and reports in your personal workspace are private to you. To share with your team, create a shared workspace and add packages to it. All chats and reports created in a shared workspace are visible to its members. Workspaces can contain individual users or groups, making it easy to manage access for teams, departments, or projects.
The chat agent uses two MCP tools:
  • get_context: The Credible Context Engine breaks your question into phrases and matches each to data entities in your semantic model. It returns ranked matches (dimensions, measures, views, relationships) grounded in the meaning you’ve encoded. These aren’t guesses; they’re precise matches.
  • execute_query: Runs Malloy queries against your data and returns results.
This is why answers are trustworthy: the agent is grounded in governed definitions, not interpreting or guessing. No wrong answers from bad joins or misunderstood field names.
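For instance, behind a question like “sales by brand,” the query the agent executes might look something like the following Malloy sketch; the source and field names are hypothetical, assuming a model like the one built in Step 1:

```malloy
-- Illustrative query: group revenue by brand, top 10
run: order_items -> {
  group_by: brand
  aggregate: total_revenue is sale_price.sum()
  limit: 10
}
```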

Generate a Report

After exploring your data, ask the agent to create a report:
“Please generate a report of our analysis”
Reports created from chat are saved to your workspace and can be shared with your team.

What You’ve Accomplished

  • Defined business logic in a semantic model using AI-assisted modeling
  • Published the package organization-wide
  • Analyzed data through chat-based natural language queries
  • Generated a report to share insights with your team
You’ve experienced the power of Credible: one semantic model that works everywhere — from modeling to analysis. Once modeled, anyone can analyze via natural language, regardless of data proficiency.

What’s Next?