Once your semantic models are published and indexed, you can analyze your data in several ways:

Credible App

Chat with data, create and share reports, explore models visually, and view published notebooks — all in governed workspaces at app.credibledata.com.

Your Own LLM

Connect Claude, ChatGPT, Copilot, or any MCP-compatible tool to your semantic models via Credible’s Consumption MCP server.
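For a local MCP-compatible client, registration typically happens in the client's MCP configuration file. The sketch below follows the common `mcpServers` config shape (used, for example, by Claude Desktop); the server name, URL, and the `mcp-remote` bridge are assumptions for illustration — consult Credible's setup docs for the actual Consumption MCP endpoint and auth:

```json
{
  "mcpServers": {
    "credible": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.example-credible-endpoint.com"]
    }
  }
}
```

Once registered, the client discovers the server's tools automatically and can call them during a chat.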

Slack

Ask data questions directly from Slack channels and DMs using the @CredibleData bot.
All of these options use the same Credible Context Engine and the same get_context / execute_query MCP tools — the only difference is whether the agent runs inside Credible (the app and Slack) or inside your own LLM client.

The quality of analysis depends on the #(doc) and #(index_values) metadata in your models; see Metadata Tags for ways to improve discoverability.
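The two-tool flow can be sketched as follows. This is a stand-in illustration, not Credible's implementation: only the tool names (get_context, execute_query) come from this page, and the stub inputs, return shapes, and model names are hypothetical.

```python
# Minimal sketch of the two-step flow an agent follows.
# Stubs below stand in for the real MCP tool calls.

def get_context(question: str) -> dict:
    """Stand-in for the get_context tool: returns semantic-model
    metadata relevant to the question (hypothetical shape)."""
    return {"model": "orders", "columns": ["region", "revenue"]}

def execute_query(sql: str) -> list[dict]:
    """Stand-in for the execute_query tool: runs a query against
    the published model and returns rows (hypothetical data)."""
    return [{"region": "EMEA", "revenue": 1200}]

# Step 1: fetch context, so the agent knows what it can query.
ctx = get_context("Which region had the most revenue?")

# Step 2: compose a query from that context and execute it.
sql = f"SELECT {', '.join(ctx['columns'])} FROM {ctx['model']}"
rows = execute_query(sql)
print(sql)   # SELECT region, revenue FROM orders
```

The pattern is the same regardless of where the agent runs: context first, then query — which is why the metadata in your models matters so much.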