In Credible, database connection configurations are stored securely in the Credible control plane, and database credentials never leave the Credible service. All access, whether for model development or for serving semantic models, goes through the Credible platform, which provides a secure perimeter around your databases.

Prerequisites

  • Admin access to an organization in the Credible platform in order to create a project and connections.
If you don’t have admin access, contact your organization administrator to set up connections for you.

Setup Process

Let’s connect your data sources (BigQuery, Snowflake, PostgreSQL, Trino, MySQL, DuckDB, MotherDuck) to Credible to start building semantic models. Connections provide secure access to your databases and data warehouses. Connections are added to a project and become available to all packages within that project.
  1. Access your organization at https://your-org.admin.credibledata.com
  2. Select your project from the project list
  3. Click “Add Connection” in the Connections section
  4. Choose your data source type and fill in the connection details:
Connection names cannot contain spaces or hyphens. Use underscores instead (e.g., my_connection).
  • BigQuery
  • PostgreSQL
  • Snowflake
  • Trino
  • MySQL
  • DuckDB
  • MotherDuck
BigQuery Connection

Required:
  • Connection name
  • Service Account Key (JSON file) - upload or paste the JSON key file from GCP
Optional Configuration: each of these can come from the service account key or be overridden here:
  • Default Project ID
  • Billing Project ID
  • Location (e.g., US, EU, asia-northeast1)
  • Maximum Bytes Billed
  • Query Timeout (milliseconds)
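Putting the required and optional fields together, a BigQuery connection definition might look like the sketch below. The field names here are illustrative assumptions, not the authoritative schema; the connection form in the Credible UI (and the CLI reference) is the source of truth.

```json
{
  "name": "analytics_bq",
  "type": "bigquery",
  "serviceAccountKeyJson": "<paste the GCP service account key JSON here>",
  "defaultProjectId": "my-gcp-project",
  "billingProjectId": "my-billing-project",
  "location": "US",
  "maximumBytesBilled": 1000000000,
  "queryTimeoutMilliseconds": 60000
}
```

Note that the connection name uses underscores, per the naming rule above.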
  5. Test the connection to verify connectivity
  6. Create the connection for use in your models
Once created, you can use the View Schema button to explore your connection’s tables and schemas:

[Connection Explorer]

Table Limits for AI-Assisted Modeling

When using AI-assisted model creation, the following limits apply:
  • 100 tables per schema for metadata indexing
  • 25 tables or fewer for automated join inference
These limits only apply to AI-assisted model creation. You can manually write Malloy models for any size dataset. Published models are indexed separately for analysis, regardless of these settings. Support for larger datasets is actively under development. Learn more about AI-Assisted Modeling.

Table Filters (Advanced)

If your dataset exceeds these limits, or if you want AI-assisted modeling to focus on specific tables, use the table filter settings to specify which tables to include or exclude from this connection:
  • Include tables (e.g., sales.orders, finance.*, public.users): tables to include in this connection. Use schema.* to include all tables in a schema.
  • Exclude tables (e.g., temp_data.*, backup.records): tables to exclude from this connection. Use schema.* to exclude all tables in a schema.
Schema refers to a collection of tables. In BigQuery, this is called a dataset.
Table filters help you:
  • Focus AI assistance on your most important tables and ensure you stay within AI-assisted modeling limits when you have large datasets
  • Exclude temporary or backup tables from indexing
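To make the pattern semantics concrete, here is a small Python sketch of how include/exclude patterns like finance.* or temp_data.* could resolve against fully qualified schema.table names. The precedence shown (excludes win over includes; an empty include list means "everything") is an assumption for illustration, not documented Credible behavior.

```python
import fnmatch

def table_allowed(table, include=None, exclude=None):
    """Decide whether a fully qualified table name ('schema.table') passes
    the filters. 'schema.*' matches every table in a schema; an exact name
    like 'sales.orders' matches a single table. Excludes take precedence."""
    if exclude and any(fnmatch.fnmatch(table, pat) for pat in exclude):
        return False
    if include:
        return any(fnmatch.fnmatch(table, pat) for pat in include)
    return True  # no include list: everything not excluded is allowed

print(table_allowed("sales.orders", include=["sales.orders", "finance.*"]))  # True
print(table_allowed("finance.ledger", include=["finance.*"]))                # True
print(table_allowed("temp_data.scratch", exclude=["temp_data.*"]))           # False
```
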

CLI Option

Use the Credible command-line tool for programmatic connection management and automation.
  1. Install the CLI:
    npm install -g cred
    
  2. Login to your organization:
    cred login <organizationName>
    
  3. Add a connection:
    cred add connection <connectionFileName>
    
    The connection file should be a JSON file containing an array of connection objects. See the CLI reference for detailed connection file formats and examples.
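As a sketch, a connection file passed to cred add connection might look like the following. The field names are illustrative assumptions; consult the CLI reference for the authoritative format.

```json
[
  {
    "name": "analytics_bq",
    "type": "bigquery",
    "serviceAccountKeyJson": "<paste the GCP service account key JSON here>"
  },
  {
    "name": "app_db",
    "type": "postgres",
    "host": "db.example.com",
    "port": 5432,
    "database": "app",
    "username": "credible_reader",
    "password": "<secret>"
  }
]
```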

Next Steps

After connecting your data, you’ll move to building your model in VS Code: