Atlas Project Configuration (atlas.hcl)


Atlas config files provide a convenient way to describe and interact with multiple environments when working with Atlas. In addition, they allow you to read data from external sources, define input variables, configure linting and migration policies, and more.

By default, when running an Atlas command with the --env flag, Atlas searches for a file named atlas.hcl in the current working directory. However, by using the -c/--config flag, you can specify the path to a config file in a different location or with a different name.
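
For example, a sketch of pointing Atlas at a config file outside the working directory (the path is a placeholder):

atlas schema apply -c file://path/to/atlas.hcl --env local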

  • MySQL
  • MariaDB
  • PostgreSQL
  • SQLite
  • SQL Server
  • ClickHouse
  • Redshift
// Define an environment named "local"
env "local" {
  // Declare where the schema definition resides.
  // Also supported: ["file://multi.hcl", "file://schema.hcl"].
  src = "file://project/schema.hcl"

  // Define the URL of the database which is managed
  // in this environment.
  url = "mysql://user:pass@localhost:3306/schema"

  // Define the URL of the Dev Database for this environment
  // See: https://atlasgo.io/concepts/dev-database
  dev = "docker://mysql/8/dev"
}

env "dev" {
  // ... a different env
}

Flags

Once the project configuration has been defined, you can interact with it using one of the following options:

  • Env
  • Custom File
  • Global Config (without --env)

To run the schema apply command using the local configuration defined in the atlas.hcl file located in your working directory:

atlas schema apply --env local

Will run the schema apply command against the database that is defined for the local environment.

Unlabeled env blocks

It is possible to define an env block whose name is dynamically set during command execution using the --env flag. This is useful when multiple environments share the same configuration and the arguments are dynamically set during execution:

env {
  name = atlas.env
  url  = var.url
  format {
    migrate {
      apply = format(
        "{{ json . | json_merge %q }}",
        jsonencode({
          EnvName : atlas.env
        })
      )
    }
  }
}

Projects with Versioned Migrations

Environments may declare a migration block to configure how versioned migrations work in the specific environment:

env"local"{
// ..
migration{
// URL where the migration directory resides.
dir="file://migrations"
}
}

Once defined, migrate commands can use this configuration, for example:

atlas migrate validate --env local

Will run the migrate validate command against the Dev Database defined in the local environment.

Passing Input Values

Config files may pass input values to variables defined in Atlas HCL schemas. To do this, define an hcl_schema data source, pass it the input values, and then designate it as the desired schema within the env block:

atlas.hcl
data"hcl_schema""app"{
path="schema.hcl"
vars={
// Variables are passed as input values to "schema.hcl".
tenant="ariga"
}
}

env"local"{
src= data.hcl_schema.app.url
url="sqlite://test?mode=memory&_fk=1"
}

Builtin Functions

file

The file function reads the content of a file and returns it as a string. The file path can be relative to the project directory or absolute.

variable "cloud_token"{
type= string
default= file("/var/run/secrets/atlas_token")
}

fileset

The fileset function returns the list of files that match the given pattern. The pattern is relative to the project directory.

data"hcl_schema""app"{
paths= fileset("schema/*.pg.hcl")
}

getenv

The getenv function returns the value of the environment variable named by the key. It returns an empty string if the variable is not set.

env"local"{
url= getenv("DATABASE_URL")
}

Project Input Variables

The atlas.hcl file may also declare input variables that can be supplied to the CLI at runtime. For example:

atlas.hcl
variable "tenant"{
type= string
}

data"hcl_schema""app"{
path="schema.hcl"
vars={
// Variables are passed as input values to "schema.hcl".
tenant= var.tenant
}
}

env"local"{
src= data.hcl_schema.app.url
url="sqlite://test?mode=memory&_fk=1"
}

To set the value for this variable at runtime, use the --var flag:

atlas schema apply --env local --var tenant=rotemtam

It is worth mentioning that when running Atlas commands within a project using the --env flag, all input values supplied at the command-line are passed only to the config file, and are not propagated automatically to child schema files. This is done with the purpose of creating an explicit contract between the environment and the schema file.

Supported Blocks

Atlas configuration files support various blocks and attributes. Below are the common examples; see the Atlas Config Schema for the full list.

Input Variables

Config files support defining input variables that can be injected through the CLI; read more here.

  • type - The type constraint of a variable.
  • default - Defines the variable as optional by setting its default value.
variable "tenants"{
type= list(string)
}

variable "url"{
type= string
default="mysql://root:pass@localhost:3306/"
}

variable "cloud_token"{
type= string
default= getenv("ATLAS_TOKEN")
}

env"local"{
// Reference an input variable.
url= var.url
}

Local Values

The locals block allows defining a list of local variables that can be reused multiple times in the project.

locals {
  tenants  = ["tenant_1", "tenant_2"]
  base_url = "mysql://${var.user}:${var.pass}@${var.addr}"

  // Reference local values.
  db1_url = "${local.base_url}/db1"
  db2_url = "${local.base_url}/db2"
}

Atlas Block

The atlas block allows configuring your Atlas account. The supported attributes are:

  • org - Specifies the organization to log in to. If Atlas executes using atlas.hcl without logging in to the specified organization, the command will be aborted.
  • token - CI/CD pipelines can use the token attribute for Atlas authentication.
atlas.hcl
atlas {
  cloud {
    org = "acme"
  }
}
tip

Atlas Pro users are advised to set the org in atlas.hcl to ensure that any engineer interacting with Atlas in the project context is running in logged-in mode. This ensures Pro features are enabled and the correct migration is generated.
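
For CI/CD pipelines, a token can be supplied in the same block; a minimal sketch, assuming the token is exposed through the ATLAS_TOKEN environment variable:

atlas {
  cloud {
    // Assumption: the token is read from the environment at runtime.
    token = getenv("ATLAS_TOKEN")
  }
}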

Data Sources

Data sources enable users to retrieve information stored in an external service or database. The currently supported data sources are described in the sections below.

note

Data sources are evaluated only if they are referenced by top-level blocks like locals or variables, or by the selected environment, for instance, atlas schema apply --env dev.

Data source: sql

The sql data source allows executing SQL queries on a database and using the results in the project.

Arguments
  • url - The URL of the target database.
  • query - The query to execute.
  • args - Optional arguments for any placeholder parameters in the query.
Attributes
  • count - The number of returned rows.
  • values - The returned values, e.g. list(string).
  • value - The first value in the list, or nil.
data"sql""tenants"{
url= var.url
query=<<EOS
SELECT `schema_name`
FROM `information_schema`.`schemata`
WHERE `schema_name` LIKE ?
EOS
args=[var.pattern]
}

env"prod"{
// Reference a data source.
for_each= toset(data.sql.tenants.values)
url= urlsetpath(var.url, each.value)
}
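
The count, values, and value attributes can be referenced directly as well; a minimal sketch, assuming the tenants data source above (the local name is hypothetical):

locals {
  // The first returned schema name, or nil if the query returned no rows.
  default_tenant = data.sql.tenants.value
}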

Data source: external

The external data source allows executing an external program and using its output in the project.

Arguments
  • program - The first element of the list is the program to run; the remaining elements are optional command-line arguments.
  • working_dir - The working directory to run the program from. Defaults to the current working directory.
Attributes
  • The command output is a string type with no attributes.

Usage example

atlas.hcl
data"external""dot_env"{
program=[
"npm",
"run",
"load-env.js"
]
}

locals{
dot_env= jsondecode(data.external.dot_env)
}

env"local"{
src= local.dot_env.URL
dev="docker://mysql/8/dev"
}

Data source: runtimevar

The runtimevar data source loads a variable at runtime from an external source, such as the services listed below, using the Go CDK runtimevar package.

Arguments
  • url - The URL identifies the variable. See the CDK documentation for more information. Use timeout=X to control the operation's timeout. If not specified, the timeout defaults to 10s.
Attributes
  • The loaded variable is a string type with no attributes.
  • GCP Runtime Configurator
  • GCP Secret Manager
  • AWS Parameter Store
  • AWS Secrets Manager
  • HTTP
  • File

The data source uses Application Default Credentials by default; if you have authenticated via gcloud auth application-default login, it will use those credentials.

atlas.hcl
data"runtimevar""db"{
url="gcpruntimeconfig://projects/<project>/configs/<config-id>/variables/<variable>?decoder=string"
}

env"dev"{
src="schema.hcl"
url="mysql://root:pass@host:3306/${data.runtimevar.db}"
}

Usage example

gcloud auth application-default login
atlas schema apply --env dev

GOOGLE_APPLICATION_CREDENTIALS="/path/to/credentials.json" atlas schema apply --env dev
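
Other backends follow the same pattern. A sketch reading from AWS Secrets Manager (the secret name and region are hypothetical):

data "runtimevar" "db_pass" {
  url = "awssecretsmanager://prod/db/pass?region=us-east-1"
}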

Data source: hcl_schema

The hcl_schema data source allows loading an Atlas HCL schema from a file or directory, with optional variables.

Arguments
  • path - The path to the HCL file or directory (cannot be used with paths).
  • paths - List of paths to HCL files or directories (cannot be used with path).
  • vars - A map of variables to pass to the HCL schema.
Attributes
  • url - The URL of the loaded schema.

atlas.hcl
variable "tenant"{
type= string
}

data"hcl_schema""app"{
path="schema.hcl"
vars={
tenant= var.tenant
}
}


env"local"{
src= data.hcl_schema.app.url
url="sqlite://test?mode=memory&_fk=1"
}

Data source: external_schema

The external_schema data source enables the import of an SQL schema from an external program into Atlas' desired state. With this data source, users have the flexibility to represent the desired state of the database schema in any language.

Arguments
  • program - The first element of the list is the program to run; the remaining elements are optional command-line arguments.
  • working_dir - The working directory to run the program from. Defaults to the current working directory.
Attributes
  • url - The URL of the loaded schema.

Usage example

By running atlas migrate diff with the given configuration, the external program will be executed and its loaded state will be compared against the current state of the migration directory. In case of a difference between the two states, a new migration file will be created with the necessary SQL statements.

atlas.hcl
data"external_schema""graph"{
program=[
"npm",
"run",
"generate-schema"
]
}

env"local"{
src= data.external_schema.graph.url
dev="docker://mysql/8/dev"
migration{
dir="file://migrations"
}
}

Data source: composite_schema (Atlas Pro)

The composite_schema data source allows the composition of multiple Atlas schemas into a unified schema graph. This functionality is useful when project schemas are split across various sources such as HCL, SQL, or application ORMs. For example, each service may have its own database schema, or an ORM schema may extend or rely on other database schemas.

Referring to the url returned by this data source allows any of the Atlas commands, such as migrate diff, schema apply, or schema inspect, to read the entire set of project schemas as a single unit.

Arguments

  • schema - One or more blocks containing the URL to read the schema from.

Usage Details
Mapping to Database Schemas

The name of the schema block represents the database schema to be created in the composed graph. For example, the following schemas refer to the public and private schemas within a PostgreSQL database:

data"composite_schema""project"{
schema"public"{
url= ...
}
schema"private"{
url= ...
}
}
Schema Dependencies

The order of the schema blocks defines the order in which Atlas will load the schemas to compose the entire database graph. This is useful in the case of dependencies between the schemas. For example, the following schemas refer to the inventory and auth schemas, where the auth schema depends on the inventory schema and therefore should be loaded after it:

data"composite_schema""project"{
schema"inventory"{
url= ...
}
schema"auth"{
url= ...
}
}
Schema Composition

Defining multiple schema blocks with the same name enables extending the same database schema from multiple sources. For example, the following configuration shows how an ORM schema, which relies on database types that cannot be defined within the ORM itself, can load them separately from another schema source that supports it:

data"composite_schema""project"{
schema"public"{
url="file://types.pg.hcl"
}
schema"public"{
url="ent://ent/schema"
}
}
Labeled vs. Unlabeled Schema Blocks

Note, if the schema block is labeled (e.g., schema "public"), the schema will be created if it does not exist, and the computation for loading the state from the URL will be done within the scope of this schema.

If the schema block is unlabeled (e.g., schema { ... }), no schema will be created, and the computation for loading the state from the URL will be done within the scope of the database. Read more about this in the Schema vs. Database Scope doc.
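
For example, a minimal sketch of an unlabeled schema block, assuming a database-scoped SQL file (the file name is hypothetical):

data "composite_schema" "project" {
  schema {
    url = "file://schema.sql"
  }
}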

Attributes
  • url - The URL of the composite schema.

Usage example

By running atlas migrate diff with the given configuration, Atlas loads the inventory schema from the SQLAlchemy schema, the graph schema from ent/schema, and the auth and internal schemas from HCL and SQL schemas defined in Atlas format. Then, the composite schema, which represents these four schemas combined, will be compared against the current state of the migration directory. In case of a difference between the two states, a new migration file will be created with the necessary SQL statements.

atlas.hcl
data"composite_schema""project"{
schema"inventory"{
url= data.external_schema.sqlalchemy.url
}
schema"graph"{
url="ent://ent/schema"
}
schema"auth"{
url="file://path/to/schema.hcl"
}
schema"internal"{
url="file://path/to/schema.sql"
}
}

env"dev"{
src= data.composite_schema.project.url
dev="docker://postgres/15/dev"
migration{
dir="file://migrations"
}
}

Data source: remote_dir

The remote_dir data source reads the state of a migration directory from Atlas Cloud. For instructions on how to connect a migration directory to Atlas Cloud, please refer to the cloud documentation.

Arguments
  • name - The slug of the migration directory, as defined in Atlas Cloud.
  • tag (optional) - The tag of the migration directory, such as a Git commit. If not specified, the latest tag (e.g., the master branch) will be used.
Attributes
  • url - A URL to the loaded migration directory.
note

The remote_dir data source predates the atlas:// URL scheme. The example below is equivalent to executing Atlas with --dir "atlas://myapp".
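
For instance, the same directory can be addressed directly from the CLI (a sketch; the slug and database URL are placeholders):

atlas migrate apply --dir "atlas://myapp" --url "<DATABASE_URL>"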

atlas.hcl
variable "database_url"{
type= string
default= getenv("DATABASE_URL")
}

data"remote_dir""migrations"{
// The slug of the migration directory in Atlas Cloud.
// In this example, the directory is named "myapp".
name="myapp"
}

env{
// Set environment name dynamically based on --env value.
name= atlas.env
url= var.database_url
migration{
dir= data.remote_dir.migrations.url
}
}

Usage example

ATLAS_TOKEN="<ATLAS_TOKEN>" \
  atlas migrate apply \
  --url "<DATABASE_URL>" \
  -c file://path/to/atlas.hcl \
  --env prod

DATABASE_URL="<DATABASE_URL>" ATLAS_TOKEN="<ATLAS_TOKEN>" \
  atlas migrate apply \
  -c file://path/to/atlas.hcl \
  --env prod
Reporting Cloud Deployments

In case the cloud block was activated with a valid token, Atlas logs migration runs in your cloud account to facilitate the monitoring and troubleshooting of executed migrations. The following is a demonstration of how it appears in action:

Screenshot example

Data source: template_dir

The template_dir data source renders a migration directory from a template directory. It does this by parsing the entire directory as Go templates, executing top-level (template) files that have the .sql file extension, and generating an in-memory migration directory from them.

Arguments
  • path - A path to the template directory.
  • vars - A map of variables to pass to the template.
Attributes
  • url - A URL to the generated migration directory.
atlas.hcl
variable "path"{
type= string
description="A path to the template directory"
}

data"template_dir""migrations"{
path= var.path
vars={
Key1="value1"
Key2="value2"
// Pass the --env value as a template variable.
Env= atlas.env
}
}

env"dev"{
url= var.url
migration{
dir= data.template_dir.migrations.url
}
}
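
The declared path variable can then be supplied at runtime using the --var flag described above; a sketch with a hypothetical directory name:

atlas migrate apply --env dev --var path=path/to/templates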

Data source: blob_dir (Atlas Pro)

The blob_dir data source uses the gocloud.dev/blob package to open the bucket and read the migration directory from it. It is useful for reading migration directories from cloud storage providers such as AWS S3.

Atlas only requires read permission on the bucket.

Arguments
  • url - The URL of the blob storage bucket. For AWS, the URL should be in the format s3://bucket-name/path/to/directory.
Attributes
  • url - A URL to the generated migration directory.
atlas.hcl
data"blob_dir""migrations"{
url="s3://my-bucket/path/to/migrations?profile=aws-profile"
}

env"dev"{
url= var.url
migration{
dir= data.blob_dir.migrations.url
}
}

You can provide the profile query parameter to use a specific AWS profile from your local AWS credentials file, or set the credentials using the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
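
For example, a sketch of supplying static credentials through the environment instead of a profile:

AWS_ACCESS_KEY_ID="<KEY_ID>" AWS_SECRET_ACCESS_KEY="<SECRET_KEY>" \
  atlas migrate apply --env dev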

Data source: aws_rds_token

The aws_rds_token data source generates a short-lived token for an AWS RDS database using IAM Authentication.

To use this data source:

  1. Enable IAM Authentication for your database. For instructions on how to do this, see the AWS documentation.
  2. Create a database user and grant it permission to authenticate using IAM; see the AWS documentation for instructions.
  3. Create an IAM role with the rds-db:connect permission for the specific database and user. For instructions on how to do this, see the AWS documentation.
Arguments
  • region - The AWS region of the database (Optional).
  • endpoint - The endpoint of the database (hostname:port).
  • username - The database user to authenticate as.
  • profile - The AWS profile to use for authentication (Optional).
Attributes
  • The loaded variable is a string type with no attributes. Notice that the token contains special characters that need to be escaped when used in a URL. To escape the token, use the urlescape function.
Example
atlas.hcl
locals {
  user     = "iamuser"
  endpoint = "hostname-of-db.example9y7k.us-east-1.rds.amazonaws.com:5432"
}

data "aws_rds_token" "db" {
  region   = "us-east-1"
  endpoint = local.endpoint
  username = local.user
}

env "rds" {
  url = "postgres://${local.user}:${urlescape(data.aws_rds_token.db)}@${local.endpoint}/postgres"
}

Data source: gcp_cloudsql_token

The gcp_cloudsql_token data source generates a short-lived token for a GCP CloudSQL database using IAM Authentication.

To use this data source:

  1. Enable IAM Authentication for your database. For instructions on how to do this, see the GCP documentation.
  2. Create a database user and grant it permission to authenticate using IAM; see the GCP documentation for instructions.
Attributes
  • The loaded variable is a string type with no attributes. Notice that the token contains special characters that need to be escaped when used in a URL. To escape the token, use the urlescape function.
Example
atlas.hcl
locals {
  user     = "iamuser"
  endpoint = "34.143.100.1:3306"
}

data "gcp_cloudsql_token" "db" {}

env "rds" {
  url = "mysql://${local.user}:${urlescape(data.gcp_cloudsql_token.db)}@${local.endpoint}/?allowCleartextPasswords=1&tls=skip-verify&parseTime=true"
}
note

The allowCleartextPasswords and tls parameters are required for the MySQL driver to connect to CloudSQL. For PostgreSQL, use sslmode=require to connect to the database.
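
For example, a sketch of the PostgreSQL variant, assuming the same locals point at a CloudSQL Postgres endpoint:

env "pg" {
  url = "postgres://${local.user}:${urlescape(data.gcp_cloudsql_token.db)}@${local.endpoint}/postgres?sslmode=require"
}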

Environments

The env block defines an environment that can be selected by using the --env flag.

Arguments
  • for_each - A meta-argument that accepts a map or a set of strings and is used to compute an env instance for each set or map item. See the example below.

  • src - The URL of, or a reference to, the desired schema of this environment. For example:

    • file://schema.hcl
    • file://schema.sql
    • file://relative/path/to/file.hcl
    • Directories are also accepted: file://schema/
    • Lists are accepted as well:
      env "local" {
        src = [
          "file://a.hcl",
          "file://b.hcl"
        ]
      }
    • As mentioned, references to data sources such as external_schema or composite_schema are valid values for the src attribute.
  • url - The URL of the target database.

  • dev - The URL of the Dev Database.

  • schemas - A list of strings defining the schemas that Atlas manages.

  • exclude - A list of strings defining glob patterns used to filter resources on inspection (see the sketch after the end of this list).

  • migration - A block that defines the migration configuration of the env.

    • dir - The URL to the migration directory.
    • baseline - An optional version to start the migration history from. Read more here.
    • exec_order - Set the file execution order [LINEAR (default), LINEAR_SKIP, NON_LINEAR]. Read more here.
    • lock_timeout - An optional timeout to wait for a database lock to be released. Defaults to 10s.
    • revisions_schema - An optional name to control the schema that the revisions table resides in.
    • repo - The repository configuration for the migrations directory in the registry.
      • name - The repository name.
  • schema - The configuration for the desired schema.

    • src - The URL to the desired schema state.
    • repo - The repository configuration for the desired schema in the registry.
      • name - The repository name.
  • format - A block that defines the formatting configuration of the env per command (previously named log).

    • migrate
      • apply - Set custom formatting for migrate apply.
      • diff - Set custom formatting for migrate diff.
      • lint - Set custom formatting for migrate lint.
      • status - Set custom formatting for migrate status.
    • schema
      • inspect - Set custom formatting for schema inspect.
      • apply - Set custom formatting for schema apply.
      • diff - Set custom formatting for schema diff.
  • lint - A block that defines the migration linting configuration of the env.

    • format - Override the --format flag by setting a custom logging format for migrate lint (previously named log).
    • latest - A number that configures the --latest option.
    • git.base - Run the analysis against the base Git branch.
    • git.dir - A path to the repository working directory.
    • review - The policy to use when deciding whether the user should be prompted to review and approve the changes. Currently works with declarative migrations and requires the user to log in. Supported options:
      • ALWAYS - Always prompt the user to review and approve the changes.
      • WARNING - Prompt if any diagnostics are found.
      • ERROR - Prompt if any severe diagnostics (errors) are found. By default, this happens on destructive changes only.
  • diff - A block that defines the schema diffing policy.
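
A minimal sketch combining several of the attributes above; the schema names and exclude patterns are hypothetical:

env "local" {
  src = "file://schema.hcl"
  url = var.url
  dev = "docker://mysql/8/dev"
  // Manage only these schemas.
  schemas = ["app", "auth"]
  // Skip resources matching these glob patterns on inspection.
  exclude = ["app.logs", "*.tmp_*"]
}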

Multi Environment Example

Atlas adopts the for_each meta-argument that Terraform uses for env blocks. Setting the for_each argument will compute an env block for each item in the provided value. Note that for_each accepts a map or a set of strings.

atlas.hcl
env"prod"{
for_each= toset(data.sql.tenants.values)
url= urlsetpath(var.url, each.value)
migration{
dir="file://migrations"
}
format{
migrate{
apply= format(
"{{ json . | json_merge %q }}",
jsonencode({
Tenant : each.value
})
)
}
}
}

Configure Migration Linting

Config files may declare lint blocks to configure how migration linting runs in a specific environment or globally.

lint {
  destructive {
    // By default, destructive changes cause migration linting to error
    // on exit (code 1). Setting `error` to false disables this behavior.
    error = false
  }
  // Custom logging can be enabled using the `format` attribute (previously named `log`).
  format = <<EOS
{{- range $f := .Files }}
  {{- json $f }}
{{- end }}
EOS
}

env "local" {
  // Define a specific migration linting config for this environment.
  // This block inherits and overrides all attributes of the global config.
  lint {
    latest = 1
  }
}

env "ci" {
  lint {
    git {
      base = "master"
      // An optional attribute for setting the working
      // directory of the git command (-C flag).
      dir = "<path>"
    }
  }
}

Configure Diff Policy

Config files may define diff blocks to configure how schema diffing runs in a specific environment or globally.

diff {
  skip {
    // By default, none of the changes are skipped.
    drop_schema = true
    drop_table  = true
  }
  concurrent_index {
    create = true
    drop   = true
  }
}

env "local" {
  // Define a specific schema diffing policy for this environment.
  diff {
    skip {
      drop_schema = true
    }
  }
}
