What’s in this Quick Start?

You will learn how to:
  1. Build a semantic model using an AI coding agent with Credible’s modeling tools
  2. Publish your work to the Credible service for broader consumption
  3. Analyze data via chat using Credible’s Context Engine and your published semantic model
You’ll wear both hats — data modeler and consumer. New to semantic models? Learn more →

Step 0: Get Set Up

Before starting, note that Credible supports Cursor, VS Code, and Claude Code (running in VS Code). The steps below use Cursor as the example, but the same workflow applies to all three.

Open a New Folder in Your IDE

Create a new folder and open it in your IDE (File → Open Folder).

Install the Credible Extension & Log In

  1. Go to the Extensions view (Cmd+Shift+X), search for Credible, and install the extension — select Auto Update when prompted
  2. Open the Explorer (Cmd+Shift+E), expand the Credible panel at the bottom of the sidebar, and click Login — then follow the steps in your browser
  3. Back in your IDE, select your organization from the list (if you only belong to one, it’s selected automatically), then select the demo project
The Credible Extension can be configured via the Command Palette (Cmd+Shift+P) or by clicking icons in the Credible Service panel:
  • Disable Credible: Turn off the extension for this workspace
  • Enable/Disable Modeling Tools: Toggle MCP modeling tools on or off
  • Refresh: Reload projects, connections, and packages
  • Select Organization: Switch between organizations
  • Sign Out: Log out of Credible
To switch projects, click a project in the panel; a list of available projects appears in the Command Palette.

Enable the Credible-Modeling MCP Server

After logging in, you’ll see two pop-ups:
  1. Enable Credible modeling tools for this workspace — click Yes
  2. Open MCP settings — click to open, then toggle Credible-Modeling on
If you dismissed the pop-ups or your agent can’t access MCP tools, follow the steps for your IDE:
  1. Open Cursor Settings (Cmd+Shift+J on Mac, Ctrl+Shift+J on Windows/Linux)
  2. Navigate to Tools & MCP
  3. Find Credible-Modeling and toggle it on
Toggling on the Credible-Modeling MCP server in Cursor settings
  4. If the agent still can’t call MCP tools, reload the window (Cmd+Shift+P → “Reload Window”)

Step 1: Build a Semantic Model

Generate Your Semantic Model

For best results, set your Cursor LLM model to Claude Opus 4.6 (not “Auto”). Open Cursor Settings → Models and select Claude Opus 4.6.
Open Cursor’s chat and ask the agent to build a model. For example:
What data is available to model?
After reviewing what’s available, try:
Build a model of the ecommerce dataset

Review and Adjust Your Model

Open the generated .malloy model file. Make adjustments directly in the editor, or ask the agent to help in natural language (e.g., “make a dimension with customer age ranges”).
Above each source definition, you’ll see three buttons:
  • Schema: View table structure and joins
  • Explore: Open the Data Explorer
  • Preview: See the first 20 rows
For more on Malloy syntax, see the Malloy documentation →
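For a sense of what the agent produces, a generated model might look roughly like the sketch below. The connection, table, and field names here are illustrative assumptions, not taken from the demo project; your generated model will reflect your actual schema. It includes the kind of customer age-range dimension mentioned above:

```malloy
// Hypothetical sketch of a generated model (names are illustrative)
source: order_items is demo_connection.table('ecommerce.order_items') extend {
  // Example dimension: bucket customers into age ranges
  dimension: customer_age_range is
    pick 'under 25' when customer_age < 25
    pick '25-44' when customer_age < 45
    pick '45-64' when customer_age < 65
    else '65+'

  measure: order_count is count()
  measure: total_sales is sum(sale_price)

  // A reusable view that consumers can run or refine
  view: sales_by_brand is {
    group_by: brand
    aggregate: total_sales, order_count
  }
}
```

Dimensions, measures, and views defined here become the governed vocabulary the chat agent draws on later, so it pays to review names and definitions before publishing.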

Step 2: Publish to the Credible Service

When your model looks good, ask the agent to publish your model (it has an MCP tool for this).
  • The --set-latest flag pins this version as the default for consumers; omit it to publish without pinning.
  • Published packages are accessible to any agent or workspace with the correct permissions.
  • Publishing makes your package ready for enterprise-scale serving via the Credible service.
  • For details on versioning, see Understanding Package Versions.
  • For best practices on organizing projects and packages, see Projects & Packages.

Step 3: Analyze Your Data

Now switch to the consumer experience: chat with your data in the Credible app. No data expertise required.
Your model needs to be indexed before you can chat with it. You’ll see an alert in the chat if indexing is still in progress. You can also check indexing status on the project page under Packages & Connections.

Start a Conversation

  1. Navigate to https://<your-org>.app.credibledata.com and log in
  2. Click + New Chat under your personal workspace, or use the chat bar
Type a question to start analyzing your published model:
“Let’s analyze sales by brand for ecommerce data”
Your personal workspace automatically has access to models published in your projects. Chats and reports in your personal workspace are private to you. To share with your team, create a shared workspace and add packages to it. All chats and reports created in a shared workspace are visible to its members. Workspaces can contain individual users or groups, making it easy to manage access for teams, departments, or projects.
The chat agent uses two MCP tools:
  • get_context: The Credible Context Engine breaks your question into phrases and matches each to data entities in your semantic model. It returns ranked matches — dimensions, measures, views, relationships — grounded in the meaning you’ve encoded. These aren’t guesses; they’re precise matches.
  • execute_query: Runs Malloy queries against your data and returns results.
This is why answers are trustworthy — the agent is grounded in governed definitions, not interpreting or guessing. No wrong answers from bad joins or misunderstood field names.
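For a question like “analyze sales by brand,” the query the agent runs via execute_query might look roughly like this sketch, assuming a model that defines a brand dimension and a total_sales measure (the source and field names are illustrative assumptions):

```malloy
// Illustrative query the agent might generate (names are assumptions)
run: order_items -> {
  group_by: brand
  aggregate: total_sales
  order_by: total_sales desc
  limit: 10
}
```

Because the query is composed from definitions in your published model rather than raw SQL against raw tables, the same measure means the same thing in every answer.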

Generate a Report

After exploring your data, ask the agent to create a report:
“Please generate a report of our analysis”
Reports are saved to your workspace and can be shared with your team.

What You’ve Accomplished

  • Built and published a semantic model using AI-assisted modeling
  • Analyzed data through chat-based natural language queries
  • Generated a report to share with your team

What’s Next?

Connect Your Database

Connect to BigQuery, Postgres, Snowflake, or other data sources

Connect Your LLM

Connect LLMs like Claude or ChatGPT to query your semantic models via MCP