In Credible, database connection configurations are stored securely in the Credible control plane. Database credentials never leave the Credible service. All access—whether for model development purposes or serving semantic models—goes through the Credible platform. In turn, the Credible service provides a secure perimeter around your databases.
Prerequisites
- Admin access to an organization in the Credible platform to create projects and connections.
If you don’t have admin access, contact your organization administrator to set up connections for you.
Setup Process
Let’s connect your data sources (BigQuery, Snowflake, PostgreSQL, Trino, MySQL, DuckDB, MotherDuck) to Credible to start building semantic models. Connections provide secure access to your databases and data warehouses. Connections are added to a project and become available to all packages within that project.
Credible App (Recommended)
- Access your organization at https://your-org.app.credibledata.com
- Select your project from the left sidebar under Packages & Connections
- Click "+ Add Connection" in the Connections section
- Choose your data source type and fill in the connection details:
Connection names cannot contain spaces or hyphens. Use underscores instead (e.g., my_connection).
BigQuery
Required:
- Connection name
- Service Account Key (JSON file) - upload or paste the JSON key file from GCP
Optional Configuration:
All of these can be specified in the service account key or overridden here:
- Default Project ID
- Billing Project ID
- Location (e.g., US, EU, asia-northeast1)
- Maximum Bytes Billed
- Query Timeout (milliseconds)
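For reference, a GCP service account key file is a JSON document of this general shape (project name, key material, and service account name below are placeholders; the project_id field is what supplies the default project):

```json
{
  "type": "service_account",
  "project_id": "my-gcp-project",
  "private_key_id": "abc123",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "credible-reader@my-gcp-project.iam.gserviceaccount.com",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```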
PostgreSQL
Option 1: Individual Fields
- Connection name
- Host (hostname or IP address)
- Port (default: 5432)
- Database Name
- Username
- Password
Option 2: Connection String
- Connection name
- Connection string:
postgresql://<username>:<password>@<host>:<port>/<database>
- Optional timeout:
postgresql://<username>:<password>@<host>:<port>/<database>?statement_timeout=30000
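As a concrete illustration (the user, host, and database names here are placeholders), a filled-in connection string with a 30-second statement timeout looks like:

```
postgresql://analyst:s3cret@db.example.com:5432/analytics?statement_timeout=30000
```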
Snowflake
Required:
- Connection name
- Account (e.g., myaccount.us-east-1)
- Username
Authentication (choose one):
- Password - Standard username/password authentication
- RSA Private Key - Key-pair authentication using an RSA private key
Optional:
- Warehouse
- Database
- Schema
- Role
- Response Timeout (milliseconds)
Trino
Server Configuration:
Configure your Trino server details. The server URL must be in the format http://server:port or https://server:port.
- Connection name (no spaces or hyphens, use underscores)
- Server URL (e.g., https://trino.example.com:4567)
Catalog and Schema:
Specify the catalog and schema to connect to.
- Catalog (e.g., hive, mysql, postgresql, data_warehouse) - represents a data source
- Schema (e.g., default, public) - the schema within the catalog to connect to
Authentication:
Provide credentials for Trino authentication.
- Username (e.g., trino)
- Password (optional for HTTPS connections with API key)
- API Key (optional for authentication)
MySQL
Required:
- Connection name
- Host (hostname or IP address)
- Port (default: 3306)
- Database Name
- Username
- Password
DuckDB
DuckDB connections use an in-process database engine. This is useful for working with embedded data files (CSV, Parquet) included in your packages.
MotherDuck
Required:
- Connection name
- Access Token - your MotherDuck API token
Optional:
- Database - the MotherDuck database to connect to
MotherDuck is a serverless cloud analytics platform built on DuckDB. Get your access token from your MotherDuck account settings.
- Test the connection to verify connectivity
- Click Next: Configure Scope to proceed
After configuring your connection details, you’ll select which schemas and tables to index for AI-assisted modeling. The Schema Browser lets you browse and select specific tables.
- Browse schemas on the left, select tables on the right
- Use Select All to include all tables in a schema, or pick individual tables
- Check “Do not include any tables for AI-assisted modeling” if you only need the connection for manual queries
These indexing limits apply to AI-assisted model creation:
- 100 tables per schema for metadata indexing
- 25 tables or fewer for automated join inference
You can manually write Malloy models for any size dataset. Published models are indexed separately for analysis.
Click Update Connection to save. You'll see an indexing status page confirming the connection is being indexed.
CLI Option
Use the Credible command-line tool for programmatic connection management and automation.
- Install the CLI
- Login to your organization:
cred login <organizationName>
- Add a connection:
cred add connection <connectionFileName>
The connection file should be a JSON file containing an array of connection objects. See the CLI reference for detailed connection file formats and examples.
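The CLI reference defines the authoritative schema, but as a rough sketch, a connection file of this kind might look like the following (the field names and the example PostgreSQL values are illustrative assumptions, not the confirmed format):

```json
[
  {
    "name": "my_postgres",
    "type": "postgres",
    "host": "db.example.com",
    "port": 5432,
    "database": "analytics",
    "username": "analyst",
    "password": "s3cret"
  }
]
```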
Next Steps
After connecting your data, you’ll move to building your model in Cursor: