ALTER TABLE


The ALTER TABLE statement changes the definition of a table. For information on using ALTER TABLE, see the pages for its subcommands.

Note:

This schema change statement is registered as a job. You can view long-running jobs with SHOW JOBS.

Required privileges

Refer to the respective subcommands.

Synopsis

ALTER TABLE [IF EXISTS] table_name
    alter_table_cmd [, ...]
  | RENAME TO table_new_name
  | SET SCHEMA schema_name
  | SET locality
  | OWNER TO role_spec

ALTER TABLE table_name
    SPLIT AT select_stmt [WITH EXPIRATION a_expr]
  | UNSPLIT AT select_stmt
  | UNSPLIT ALL
  | CONFIGURE ZONE USING { variable = value | variable = COPY FROM PARENT } [, ...]
  | CONFIGURE ZONE DISCARD


where alter_table_cmd is:

ADD [COLUMN] [IF NOT EXISTS] column_name typename [col_qualification ...]
  | ADD CONSTRAINT [IF NOT EXISTS] constraint_name constraint_elem [NOT VALID]
  | RENAME [COLUMN] column_name TO column_new_name
  | RENAME CONSTRAINT constraint_name TO constraint_new_name
  | ALTER [COLUMN] column_name
        { SET DEFAULT a_expr
        | SET ON UPDATE a_expr
        | SET VISIBLE | SET NOT VISIBLE
        | SET NOT NULL
        | DROP DEFAULT | DROP ON UPDATE | DROP NOT NULL | DROP IDENTITY [IF EXISTS] | DROP STORED
        | ADD { generated_always_as | generated_by_default_as } IDENTITY [( opt_sequence_option_list )]
        | set_generated_always | set_generated_default | identity_option_list
        | SET DATA TYPE typename [COLLATE collation_name] [USING a_expr] }
  | ALTER PRIMARY KEY USING COLUMNS ( index_params ) [USING HASH [WITH ( storage_parameter_key = value [, ...] )]]
  | DROP [COLUMN] [IF EXISTS] column_name [CASCADE | RESTRICT]
  | DROP CONSTRAINT [IF EXISTS] constraint_name [CASCADE | RESTRICT]
  | VALIDATE CONSTRAINT constraint_name
  | EXPERIMENTAL_AUDIT SET { READ WRITE | OFF }
  | PARTITION [ALL] BY { LIST ( name_list ) ( list_partitions ) | RANGE ( name_list ) ( range_partitions ) | NOTHING }
  | SET ( storage_parameter_key = value [, ...] )
  | RESET ( storage_parameter_key [, ...] )

Parameters

Parameter Description
IF EXISTS Change the table only if a table with the current name exists; if one does not exist, do not return an error.
table_name The name of the table you want to change.

Additional parameters are documented for the respective subcommands.

Subcommands

Tip:

Some subcommands can be used in combination in a single ALTER TABLE statement. For example, you can atomically rename a column and add a new column with the old name of the existing column.

Subcommand Description Can combine with other subcommands?
ADD COLUMN Add columns to tables. Yes
ADD CONSTRAINT Add constraints to columns. Yes
ALTER COLUMN Change an existing column. Yes
ALTER PRIMARY KEY Change the primary key of a table. Yes
CONFIGURE ZONE Configure replication zones for a table. No
DROP COLUMN Remove columns from tables. Yes
DROP CONSTRAINT Remove constraints from columns. Yes
EXPERIMENTAL_AUDIT Enable per-table audit logs, for security purposes. Yes
OWNER TO Change the owner of the table. No
PARTITION BY Partition, re-partition, or un-partition a table. Yes
RENAME COLUMN Change the names of columns. Yes
RENAME CONSTRAINT Change the names of constraints on columns. Yes
RENAME TO Change the names of tables. No
RESET {storage parameter} Reset a storage parameter on a table to its default value. Yes
SET {storage parameter} Set a storage parameter on a table. Yes
SET LOCALITY Set the table locality for a table in a multi-region database. No
SET SCHEMA Change the schema of a table. No
SPLIT AT Force a range split at the specified row in the table. No
UNSPLIT AT Remove a range split enforcement in the table. No
VALIDATE CONSTRAINT Check whether values in a column match a constraint on the column. Yes

ADD COLUMN

Use ALTER TABLE ... ADD COLUMN to add columns to existing tables.

For examples, see Add columns.

Required privileges

The user must have the CREATE privilege on the table.

Parameters

Parameter Description
IF NOT EXISTS Add a column only if a column of the same name does not already exist; if one does exist, do not return an error.
column_name The name of the column you want to add. The column name must follow these identifier rules and must be unique within the table but can have the same name as indexes or constraints.
typename The data type of the new column.
col_qualification An optional list of column qualifications.

For usage, see Synopsis.

ADD CONSTRAINT

Use ALTER TABLE ... ADD CONSTRAINT to add the following constraints to columns: CHECK, FOREIGN KEY, and UNIQUE.

To add a primary key constraint to a table, you should explicitly define the primary key at table creation. To replace an existing primary key, you can use ADD CONSTRAINT ... PRIMARY KEY. For details, see Changing primary keys with ADD CONSTRAINT ... PRIMARY KEY.

The DEFAULT and NOT NULL constraints are managed through ALTER COLUMN.

For examples, see Add constraints.

Required privileges

The user must have the CREATE privilege on the table.

Parameters

Parameter Description
IF NOT EXISTS Add a constraint only if a constraint of the same name does not already exist; if one does exist, do not return an error.
constraint_name The name of the constraint, which must be unique to its table and follow these identifier rules.
constraint_elem The CHECK, foreign key, or UNIQUE constraint you want to add.

Adding/changing a DEFAULT constraint is done through ALTER COLUMN.

Changing the table's PRIMARY KEY is handled separately; see Changing primary keys with ADD CONSTRAINT ... PRIMARY KEY.
NOT VALID Create unvalidated constraints. When creating an unvalidated constraint, the system does not check that existing table data satisfies the constraint. The constraint is still enforced when table data is modified. An unvalidated constraint can later be validated using VALIDATE CONSTRAINT.

For usage, see Synopsis.
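
As a sketch, assuming a hypothetical orders table with a quantity column, you could add an unvalidated CHECK constraint and validate it later:

ALTER TABLE orders ADD CONSTRAINT check_quantity_positive CHECK (quantity > 0) NOT VALID;

-- Existing rows are not checked until the constraint is validated:
ALTER TABLE orders VALIDATE CONSTRAINT check_quantity_positive;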

Changing primary keys with ADD CONSTRAINT ... PRIMARY KEY

When you change a primary key with ALTER TABLE ... ALTER PRIMARY KEY, the existing primary key index becomes a secondary index. The secondary index created by ALTER PRIMARY KEY takes up node memory and can slow down write performance to a cluster. If you do not have queries that filter on the primary key that you are replacing, you can use ADD CONSTRAINT to replace the existing primary index without creating a secondary index.

You can use ADD CONSTRAINT ... PRIMARY KEY to add a primary key to an existing table if one of the following is true:

  • No primary key was explicitly defined at table creation, in which case the table was created with the default rowid primary key.
  • The existing primary key constraint is dropped with DROP CONSTRAINT earlier in the same transaction.

Aliases

In CockroachDB, the following are aliases for ALTER TABLE ... ADD CONSTRAINT ... PRIMARY KEY:

  • ALTER TABLE ... ADD PRIMARY KEY

ALTER COLUMN

Use ALTER TABLE ... ALTER COLUMN to do the following:

  • Set, change, or drop a column's DEFAULT constraint.
  • Set or drop a column's NOT NULL constraint.
  • Set, change, or drop a column's ON UPDATE expression.
  • Change a column's visibility.
  • Alter a column's data type.

Note:

Support for altering column data types is in preview, with certain limitations. For details, see Altering column data types.

For examples, see Alter columns.

Required privileges

The user must have the CREATE privilege on the table.

Parameters

Parameter Description
column_name The name of the column to modify.
a_expr The new default value to set.
b_expr The ON UPDATE expression to set.
[NOT] VISIBLE The visibility of a column when using * in a SELECT clause.
typename The new data type you want to use.
Support for altering column types is in preview, with certain limitations. For details, see Alter column data types.
USING a_expr How to compute a new column value from the old column value.

For usage, see Synopsis.

Alter column data types

Support for altering column data types is in preview, with certain limitations. To enable column type altering, set the enable_experimental_alter_column_type_general session variable to true.

The following are equivalent in CockroachDB:

  • ALTER TABLE ... ALTER ... TYPE
  • ALTER TABLE ... ALTER COLUMN TYPE
  • ALTER TABLE ... ALTER COLUMN SET DATA TYPE
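
For example, assuming a hypothetical orders table with a quantity column of type INT, a session might look like the following sketch:

SET enable_experimental_alter_column_type_general = true;

-- Convert the column, computing new values from the old ones:
ALTER TABLE orders ALTER COLUMN quantity SET DATA TYPE DECIMAL USING quantity::DECIMAL;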

Known limitations

You cannot alter the data type of a column if:

  • The column is part of an index.
  • The column has CHECK constraints.
  • The column owns a sequence.
  • The column has a DEFAULT expression. This will result in an ERROR: ... column ... cannot also have a DEFAULT expression with SQLSTATE: 42P16.
  • The ALTER COLUMN TYPE statement is part of a combined ALTER TABLE statement.
  • The ALTER COLUMN TYPE statement is inside an explicit transaction.
Note:

Most ALTER COLUMN TYPE changes are finalized asynchronously. Schema changes on the table with the altered column may be restricted, and writes to the altered column may be rejected until the schema change is finalized.

ALTER PRIMARY KEY

Use ALTER TABLE ... ALTER PRIMARY KEY to change the primary key of a table.

Note the following:

  • You cannot change the primary key of a table that is currently undergoing a primary key change, or any other schema change.

  • ALTER PRIMARY KEY might need to rewrite multiple indexes, which can make it an expensive operation.

  • When you change a primary key with ALTER PRIMARY KEY, the old primary key index becomes a UNIQUE secondary index. This helps optimize the performance of queries that still filter on the old primary key column.

  • ALTER PRIMARY KEY does not alter the partitions on a table or its indexes, even if a partition is defined on a column in the original primary key. If you alter the primary key of a partitioned table, you must update the table partition accordingly.

  • The secondary index created by ALTER PRIMARY KEY will not be partitioned, even if a partition is defined on a column in the original primary key. To ensure that the table is partitioned correctly, you must create a partition on the secondary index, or drop the secondary index.

  • Any new primary key column set by ALTER PRIMARY KEY must have an existing NOT NULL constraint. To add a NOT NULL constraint to an existing column, use ALTER TABLE ... ALTER COLUMN ... SET NOT NULL.

Tip:

To change an existing primary key without creating a secondary index from that primary key, use DROP CONSTRAINT ... PRIMARY KEY/ADD CONSTRAINT ... PRIMARY KEY. For examples, see Add constraints and Drop constraints.

For examples, see Alter a primary key.

Required privileges

The user must have the CREATE privilege on a table to alter its primary key.

Parameters

Parameter Description
index_params The name of the column(s) that you want to use for the primary key. These columns replace the current primary key column(s).
USING HASH Creates a hash-sharded index.

For usage, see Synopsis.
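
As a sketch, assuming a hypothetical orders table with NOT NULL columns region and id, the primary key could be replaced as follows:

ALTER TABLE orders ALTER PRIMARY KEY USING COLUMNS (region, id);

-- Or create a hash-sharded primary key:
ALTER TABLE orders ALTER PRIMARY KEY USING COLUMNS (id) USING HASH;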

CONFIGURE ZONE

ALTER TABLE ... CONFIGURE ZONE is used to add, modify, reset, or remove replication zones for a table. To view details about existing replication zones, use SHOW ZONE CONFIGURATIONS. For more information about replication zones, see Replication Controls.

You can use replication zones to control the number and location of replicas for specific sets of data, both when replicas are first added and when they are rebalanced to maintain cluster equilibrium.

For examples, see Replication Controls.

Required privileges

The user must be a member of the admin role or have been granted CREATE or ZONECONFIG privileges. To configure system objects, the user must be a member of the admin role.

Parameters

Parameter Description
variable The name of the replication zone variable to change.
value The value of the replication zone variable to change.
DISCARD Remove a replication zone.

For usage, see Synopsis.
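
For example, assuming a hypothetical orders table, a replication zone might be set and later discarded as follows (num_replicas and gc.ttlseconds are standard replication zone variables):

ALTER TABLE orders CONFIGURE ZONE USING num_replicas = 5, gc.ttlseconds = 100000;

-- Remove the table's replication zone:
ALTER TABLE orders CONFIGURE ZONE DISCARD;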

DROP COLUMN

Use ALTER TABLE ... DROP COLUMN to remove columns from a table.

Warning:

When used in an explicit transaction combined with other schema changes to the same table, DROP COLUMN can result in data loss if one of the other schema changes fails or is canceled. To work around this, move the DROP COLUMN statement to its own explicit transaction or run it in a single statement outside the existing transaction.

By default, DROP COLUMN drops any indexes on the column being dropped, and any indexes that reference the column, including indexes with STORING clauses that reference the column.

For examples, see Drop columns.

Known limitations

CockroachDB prevents a column from being dropped using ALTER TABLE ... DROP COLUMN if it is referenced by a partial index predicate. To drop such a column, the partial indexes need to be dropped first using DROP INDEX. #97813.

Required privileges

The user must have the CREATE privilege on the table.

Parameters

Parameter Description
column_name The name of the column you want to drop.

When a column with a CHECK constraint is dropped, the CHECK constraint is also dropped.
CASCADE Drop the column even if objects (such as views) depend on it; drop the dependent objects, as well. CASCADE will drop a column with a foreign key constraint if it is the only column in the reference.

CASCADE does not list the objects it drops, so should be used cautiously.

CASCADE is not required to drop an indexed column, or a column that is referenced by an index. By default, DROP COLUMN drops any indexes on the column being dropped, and any indexes that reference the column, including partial indexes with predicates that reference the column and indexes with STORING clauses that reference the column.
RESTRICT (Default) Do not drop the column if any objects (such as views) depend on it.

For usage, see Synopsis.
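
As a sketch, assuming a hypothetical orders table with an unused notes column:

-- Dropping a column in a session with sql_safe_updates enabled requires disabling it first:
SET sql_safe_updates = false;
ALTER TABLE orders DROP COLUMN IF EXISTS notes;
SET sql_safe_updates = true;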

DROP CONSTRAINT

Use ALTER TABLE ... DROP CONSTRAINT to remove CHECK and FOREIGN KEY constraints from columns.

PRIMARY KEY constraints can be dropped with DROP CONSTRAINT if an ADD CONSTRAINT statement follows the DROP CONSTRAINT statement in the same transaction.

Tip:

When you change a primary key with ALTER TABLE ... ALTER PRIMARY KEY, the old primary key index becomes a secondary index. If you do not want the old primary key to become a secondary index, use DROP CONSTRAINT/ADD CONSTRAINT to change the primary key.

Note:

For information about removing other constraints, see Constraints: Remove Constraints.

For examples, see Drop constraints.

Required privileges

The user must have the CREATE privilege on the table.

Parameters

Parameter Description
constraint_name The name of the constraint you want to drop.

For usage, see Synopsis.
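
For example, assuming a hypothetical orders table with a CHECK constraint named check_quantity_positive:

ALTER TABLE orders DROP CONSTRAINT check_quantity_positive;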

EXPERIMENTAL_AUDIT

ALTER TABLE ... EXPERIMENTAL_AUDIT enables or disables the recording of SQL audit events to the SENSITIVE_ACCESS logging channel for a table. The SENSITIVE_ACCESS log output is also called the SQL audit log. For details on using SQL audit logs, see SQL Audit Logging.

Note:

This feature is in preview and subject to change. To share feedback or report issues, contact Support.

SQL audit logs contain detailed information about queries being executed against your system, including:

  • Full text of the query (which may include personally identifiable information (PII))
  • Date/Time
  • Client address
  • Application name

CockroachDB stores audit log information in a way that ensures durability, but negatively impacts performance. As a result, we recommend using SQL audit logs for security purposes only. For more information, see Performance considerations.

For examples, see Configure audit logging.

Required privileges

Only members of the admin role can enable audit logs on a table. By default, the root user belongs to the admin role.

Parameters

Parameter Description
READ Log all table reads to the audit log file.
WRITE Log all table writes to the audit log file.
OFF Turn off audit logging.

For usage, see Synopsis.

Note:

This command logs all reads and writes, and both the READ and WRITE parameters are required (as shown in the examples).
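
As a sketch, assuming a hypothetical orders table, auditing is enabled and later disabled as follows:

ALTER TABLE orders EXPERIMENTAL_AUDIT SET READ WRITE;

-- Turn audit logging back off:
ALTER TABLE orders EXPERIMENTAL_AUDIT SET OFF;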

OWNER TO

ALTER TABLE ... OWNER TO is used to change the owner of a table.

For examples, see Change table owner.

Required privileges

To change the owner of a table, the user must be an admin user, or the current owner of the table and a member of the new owner role. The new owner role must also have the CREATE privilege on the schema to which the table belongs.

Parameters

Parameter Description
role_spec The role to set as the owner of the table.

For usage, see Synopsis.
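
For example, assuming a hypothetical orders table and an existing sales_admin role:

ALTER TABLE orders OWNER TO sales_admin;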

PARTITION BY

ALTER TABLE ... PARTITION BY is used to partition, re-partition, or un-partition a table. After defining partitions, CONFIGURE ZONE is used to control the replication and placement of partitions.

For examples, see Define partitions.

Required privileges

The user must have the CREATE privilege on the table.

Parameters

Parameter Description
name_list List of columns you want to define partitions on (in the order they are defined in the primary key).
list_partitions Name of list partition followed by the list of values to be included in the partition.
range_partitions Name of range partition followed by the range of values to be included in the partition.

For usage, see Synopsis.
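
As a sketch, assuming a hypothetical orders table whose primary key begins with a region column:

ALTER TABLE orders PARTITION BY LIST (region) (
  PARTITION us VALUES IN ('us-east1', 'us-west1'),
  PARTITION eu VALUES IN ('europe-west1')
);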

RENAME COLUMN

ALTER TABLE ... RENAME COLUMN changes the name of a column in a table.

Note:

It is not possible to rename a column referenced by a view. For more details, see View Dependencies.

For examples, see Rename columns.

Required privileges

The user must have the CREATE privilege on the table.

Parameters

Parameter Description
IF EXISTS Rename the column only if a table of table_name exists; if one does not exist, do not return an error.
column_name The current name of the column.
column_new_name The name you want to use for the column, which must be unique to its table and follow these identifier rules.

For usage, see Synopsis.
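
For example, assuming a hypothetical orders table with a notes column:

ALTER TABLE orders RENAME COLUMN notes TO comments;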

RENAME CONSTRAINT

ALTER TABLE ... RENAME CONSTRAINT changes the name of a constraint on a column.

Note:

It is not possible to rename a constraint for a column referenced by a view. For more details, see View Dependencies.

For examples, see Rename constraints.

Required privileges

The user must have the CREATE privilege on the table.

Parameters

Parameter Description
IF EXISTS Rename the constraint only if a constraint of current_name exists; if one does not exist, do not return an error.
constraint_name The current name of the constraint.
constraint_new_name The new name you want to use for the constraint, which must be unique to its table and follow these identifier rules.

For usage, see Synopsis.
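
For example, assuming a hypothetical orders table with a constraint named check_quantity_positive:

ALTER TABLE orders RENAME CONSTRAINT check_quantity_positive TO check_order_quantity;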

RENAME TO

ALTER TABLE ... RENAME TO changes the name of a table.

Note:

ALTER TABLE ... RENAME TO cannot be used to move a table from one schema to another. To change a table's schema, use SET SCHEMA.

ALTER TABLE ... RENAME TO cannot be used to move a table from one database to another. To change a table's database, use BACKUP and RESTORE.

Note:

It is not possible to rename a table referenced by a view. For more details, see View Dependencies.

For examples, see Rename tables.

Required privileges

The user must have the DROP privilege on the table and the CREATE privilege on the parent database. When moving a table from one database to another, the user must have the CREATE privilege on both the source and target databases.

Parameters

Parameter Description
table_new_name The new name of the table, which must be unique within its database and follow these identifier rules. When the parent database is not set as the default, the name must be formatted as database.name.

The UPSERT and INSERT ON CONFLICT statements use a temporary table called excluded to handle uniqueness conflicts during execution. It's therefore not recommended to use the name excluded for any of your tables.

For usage, see Synopsis.
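
For example, assuming a hypothetical orders table:

ALTER TABLE IF EXISTS orders RENAME TO customer_orders;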

RESET {storage parameter}

ALTER TABLE ... RESET {storage parameter} reverts the value of a storage parameter on a table to its default value.

Note:

To reset a storage parameter on an existing index, you must drop and recreate the index without the storage parameter.

For examples, see Set and reset storage parameters.

Required privileges

The user must be a member of the admin or owner roles, or have the CREATE privilege on the table.

Parameters

Parameter Description
storage_parameter_key The name of the storage parameter you are changing. See Table storage parameters for a list of available parameters.

For usage, see Synopsis.
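
For example, assuming a hypothetical orders table on which sql_stats_automatic_collection_enabled was previously set:

ALTER TABLE orders RESET (sql_stats_automatic_collection_enabled);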

SET {storage parameter}

ALTER TABLE ... SET {storage parameter} sets a storage parameter on an existing table.

Note:

To set a storage parameter on an existing index, you must drop and recreate the index with the storage parameter.

For examples, see Set and reset storage parameters.

Required privileges

The user must be a member of the admin or owner roles, or have the CREATE privilege on the table.

Parameters

Parameter Description
storage_parameter_key The name of the storage parameter. See Table storage parameters for a list of available parameters.
value The value to assign the storage parameter.

For usage, see Synopsis.
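
For example, assuming a hypothetical orders table:

ALTER TABLE orders SET (sql_stats_automatic_collection_enabled = false, sql_stats_automatic_collection_min_stale_rows = 2000);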

Table storage parameters

Parameter name Description Data type Default value
exclude_data_from_backup Exclude the data in this table from any future backups. Boolean false
schema_locked Disallow schema changes on this table. Enabling schema_locked can help improve performance of changefeeds running on this table. Boolean false
sql_stats_automatic_collection_enabled Enable automatic statistics collection for this table. Boolean true
sql_stats_automatic_collection_min_stale_rows Minimum number of stale rows in this table that will trigger a statistics refresh. Integer 500
sql_stats_automatic_collection_fraction_stale_rows Fraction of stale rows in this table that will trigger a statistics refresh. Float 0.2
sql_stats_forecasts_enabled Enable forecasted statistics collection for this table. Boolean true

The following parameters are included for PostgreSQL compatibility and do not affect how CockroachDB runs:

  • autovacuum_enabled
  • fillfactor

For the list of storage parameters that affect how Row-Level TTL works, see the list of TTL storage parameters.

SET LOCALITY

ALTER TABLE ... SET LOCALITY changes the table locality of a table in a multi-region database.

While CockroachDB is processing an ALTER TABLE ... SET LOCALITY statement that enables or disables REGIONAL BY ROW on a table within a database, any ADD REGION and DROP REGION statements on that database will fail.

For examples, see Set localities.

Required privileges

The user must be a member of the admin or owner roles, or have the CREATE privilege on the table.

Parameters

Parameter Description
locality The LOCALITY clause, followed by the locality to apply to this table. Allowed values: REGIONAL BY TABLE [IN region], REGIONAL BY ROW [AS column_name], and GLOBAL.

For usage, see Synopsis.

For more information about which table locality is right for your use case, see Table localities.
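
As a sketch, assuming a hypothetical orders table in a multi-region database:

ALTER TABLE orders SET LOCALITY REGIONAL BY ROW;

-- Or make the table read-optimized from every region:
ALTER TABLE orders SET LOCALITY GLOBAL;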

SET SCHEMA

ALTER TABLE ... SET SCHEMA changes the schema of a table.

Note:

CockroachDB supports SET SCHEMA as an alias for setting the search_path session variable.

For examples, see Set table schema.

Required privileges

The user must have the DROP privilege on the table, and the CREATE privilege on the schema.

Parameters

Parameter Description
schema_name The name of the new schema for the table.

For usage, see Synopsis.
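
For example, assuming a hypothetical orders table and an existing archive schema:

ALTER TABLE orders SET SCHEMA archive;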

SPLIT AT

ALTER TABLE ... SPLIT AT forces a range split at a specified row in the table.

CockroachDB breaks data into ranges. By default, CockroachDB attempts to keep ranges below the default range size. To do this, the system will automatically split a range if it grows larger than this limit. For most use cases, this automatic range splitting is sufficient, and you should never need to worry about when or where the system decides to split ranges.

However, there are reasons why you may want to perform manual splits on the ranges that store tables or indexes:

  • When a table only consists of a single range, all writes and reads to the table will be served by that range's leaseholder. If a table only holds a small amount of data but is serving a large amount of traffic, load distribution can become unbalanced and a hot spot can occur. Splitting the table's ranges manually can allow the load on the table to be more evenly distributed across multiple nodes. For tables consisting of more than a few ranges, load will naturally be distributed across multiple nodes and this will not be a concern.

  • When a table is created, it will only consist of a single range. If you know that a new table will immediately receive significant write traffic, you may want to preemptively split the table based on the expected distribution of writes before applying the load. This can help avoid reduced workload performance that results when automatic splits are unable to keep up with write traffic and a hot spot occurs.

For examples, see Split and unsplit tables.

Required privileges

The user must have the INSERT privilege on the table.

Parameters

Parameter Description
select_stmt A selection query that produces one or more rows at which to split the table.
a_expr The expiration of the split enforcement on the table. This can be a DECIMAL, INTERVAL, TIMESTAMP, or TIMESTAMPTZ.

For usage, see Synopsis.
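
As a sketch, assuming a hypothetical orders table keyed on an integer id:

ALTER TABLE orders SPLIT AT VALUES (1000), (2000), (3000);

-- Optionally, let the split enforcement expire at a specific time:
ALTER TABLE orders SPLIT AT VALUES (4000) WITH EXPIRATION '2030-01-01 00:00:00';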

UNSPLIT AT

ALTER TABLE ... UNSPLIT AT removes a split enforcement on a range split, at a specified row in the table.

Removing a split enforcement from a table or index ("unsplitting") allows CockroachDB to merge ranges as needed, to help improve your cluster's performance. For more information, see Range Merges.

For examples, see Split and unsplit tables.

Required privileges

The user must have the INSERT privilege on the table.

Parameters

Parameter Description
select_stmt A selection query that produces one or more rows at which to unsplit a table.
ALL Remove all split enforcements for a table.

For usage, see Synopsis.
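
For example, continuing the hypothetical orders table from the SPLIT AT sketch above:

ALTER TABLE orders UNSPLIT AT VALUES (1000);

-- Or remove every split enforcement on the table:
ALTER TABLE orders UNSPLIT ALL;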

VALIDATE CONSTRAINT

ALTER TABLE ... VALIDATE CONSTRAINT checks whether values in a column match a constraint on the column.

This statement is especially useful after applying a constraint to an existing column via ADD CONSTRAINT. In this case, VALIDATE CONSTRAINT can be used to find values already in the column that do not match the constraint.

For examples, see Validate constraints.

Required privileges

The user must have the CREATE privilege on the table.

Parameters

Parameter Description
constraint_name The name of the constraint to validate.

For usage, see Synopsis.
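
For example, assuming a hypothetical orders table with an unvalidated CHECK constraint named check_quantity_positive:

ALTER TABLE orders VALIDATE CONSTRAINT check_quantity_positive;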

View schema changes

This schema change statement is registered as a job. You can view long-running jobs with SHOW JOBS.

Examples

Add columns

The following examples use the bank demo database schema.

To follow along, run cockroach demo bank to start a temporary, in-memory cluster with the bank schema and dataset preloaded:

$ cockroach demo bank

Add a single column

> ALTER TABLE bank ADD COLUMN active BOOL;
> SHOW COLUMNS FROM bank;
  column_name | data_type | is_nullable | column_default | generation_expression |  indices    | is_hidden
--------------+-----------+-------------+----------------+-----------------------+-------------+------------
  id          | INT8      |    false    | NULL           |                       | {bank_pkey} |   false
  balance     | INT8      |    true     | NULL           |                       | {bank_pkey} |   false
  payload     | STRING    |    true     | NULL           |                       | {bank_pkey} |   false
  active      | BOOL      |    true     | NULL           |                       | {bank_pkey} |   false
(4 rows)

Add multiple columns

> ALTER TABLE bank ADD COLUMN location STRING, ADD COLUMN currency STRING;
> SHOW COLUMNS FROM bank;
  column_name | data_type | is_nullable | column_default | generation_expression |  indices    | is_hidden
--------------+-----------+-------------+----------------+-----------------------+-------------+------------
  id          | INT8      |    false    | NULL           |                       | {bank_pkey} |   false
  balance     | INT8      |    true     | NULL           |                       | {bank_pkey} |   false
  payload     | STRING    |    true     | NULL           |                       | {bank_pkey} |   false
  active      | BOOL      |    true     | NULL           |                       | {bank_pkey} |   false
  location    | STRING    |    true     | NULL           |                       | {bank_pkey} |   false
  currency    | STRING    |    true     | NULL           |                       | {bank_pkey} |   false
(6 rows)

Add a column with a NOT NULL constraint and a DEFAULT value

> ALTER TABLE bank ADD COLUMN interest DECIMAL NOT NULL DEFAULT (DECIMAL '1.3');
> SHOW COLUMNS FROM bank;
  column_name | data_type | is_nullable |     column_default     | generation_expression |  indices    | is_hidden
--------------+-----------+-------------+------------------------+-----------------------+-------------+------------
  id          | INT8      |    false    | NULL                   |                       | {bank_pkey} |   false
  balance     | INT8      |    true     | NULL                   |                       | {bank_pkey} |   false
  payload     | STRING    |    true     | NULL                   |                       | {bank_pkey} |   false
  active      | BOOL      |    true     | NULL                   |                       | {bank_pkey} |   false
  location    | STRING    |    true     | NULL                   |                       | {bank_pkey} |   false
  currency    | STRING    |    true     | NULL                   |                       | {bank_pkey} |   false
  interest    | DECIMAL   |    false    | 1.3:::DECIMAL::DECIMAL |                       | {bank_pkey} |   false
(7 rows)

Add a column with a UNIQUE constraint

> ALTER TABLE bank ADD COLUMN address STRING UNIQUE;
> SHOW COLUMNS FROM bank;
  column_name | data_type | is_nullable | column_default | generation_expression |          indices             | is_hidden
--------------+-----------+-------------+----------------+-----------------------+------------------------------+------------
  id          | INT8      |    false    | NULL           |                       | {bank_address_key,bank_pkey} |   false
  balance     | INT8      |    true     | NULL           |                       | {bank_pkey}                  |   false
  payload     | STRING    |    true     | NULL           |                       | {bank_pkey}                  |   false
  active      | BOOL      |    true     | NULL           |                       | {bank_pkey}                  |   false
  location    | STRING    |    true     | NULL           |                       | {bank_pkey}                  |   false
  currency    | STRING    |    true     | NULL           |                       | {bank_pkey}                  |   false
  interest    | DECIMAL   |    false    | 1.3:::DECIMAL  |                       | {bank_pkey}                  |   false
  address     | STRING    |    true     | NULL           |                       | {bank_address_key,bank_pkey} |   false
(8 rows)

Add a column with a FOREIGN KEY constraint

> CREATE TABLE customers (
  id INT PRIMARY KEY,
  name STRING
);
> ALTER TABLE bank ADD COLUMN cust_number INT REFERENCES customers(id);
> SHOW COLUMNS FROM bank;
  column_name | data_type | is_nullable | column_default | generation_expression |          indices             | is_hidden
--------------+-----------+-------------+----------------+-----------------------+------------------------------+------------
  id          | INT8      |    false    | NULL           |                       | {bank_address_key,bank_pkey} |   false
  balance     | INT8      |    true     | NULL           |                       | {bank_pkey}                  |   false
  payload     | STRING    |    true     | NULL           |                       | {bank_pkey}                  |   false
  active      | BOOL      |    true     | NULL           |                       | {bank_pkey}                  |   false
  location    | STRING    |    true     | NULL           |                       | {bank_pkey}                  |   false
  currency    | STRING    |    true     | NULL           |                       | {bank_pkey}                  |   false
  interest    | DECIMAL   |    false    | 1.3:::DECIMAL  |                       | {bank_pkey}                  |   false
  address     | STRING    |    true     | NULL           |                       | {bank_address_key,bank_pkey} |   false
  cust_number | INT8      |    true     | NULL           |                       | {bank_pkey}                  |   false

(9 rows)
> SHOW CONSTRAINTS FROM bank;
  table_name |    constraint_name    | constraint_type |                      details                       | validated
-------------+-----------------------+-----------------+----------------------------------------------------+------------
  bank       | bank_address_key      | UNIQUE          | UNIQUE (address ASC)                               |     t
  bank       | bank_cust_number_fkey | FOREIGN KEY     | FOREIGN KEY (cust_number) REFERENCES customers(id) |     t
  bank       | bank_pkey             | PRIMARY KEY     | PRIMARY KEY (id ASC)                               |     t
(3 rows)

Add a column with collation

> ALTER TABLE bank ADD COLUMN more_names STRING COLLATE en;
> SHOW COLUMNS FROM bank;
  column_name |     data_type     | is_nullable | column_default | generation_expression |          indices             | is_hidden
--------------+-------------------+-------------+----------------+-----------------------+------------------------------+------------
  id          | INT8              |    false    | NULL           |                       | {bank_address_key,bank_pkey} |   false
  balance     | INT8              |    true     | NULL           |                       | {bank_pkey}                  |   false
  payload     | STRING            |    true     | NULL           |                       | {bank_pkey}                  |   false
  active      | BOOL              |    true     | NULL           |                       | {bank_pkey}                  |   false
  location    | STRING            |    true     | NULL           |                       | {bank_pkey}                  |   false
  currency    | STRING            |    true     | NULL           |                       | {bank_pkey}                  |   false
  interest    | DECIMAL           |    false    | 1.3:::DECIMAL  |                       | {bank_pkey}                  |   false
  address     | STRING            |    true     | NULL           |                       | {bank_address_key,bank_pkey} |   false
  cust_number | INT8              |    true     | NULL           |                       | {bank_pkey}                  |   false
  more_names  | STRING COLLATE en |    true     | NULL           |                       | {bank_pkey}                  |   false
(10 rows)

Add a column and assign it to a column family

Add a column and assign it to a new column family

> ALTER TABLE bank ADD COLUMN location1 STRING CREATE FAMILY f1;
> SHOW CREATE TABLE bank;
  table_name |                                                          create_statement
-------------+--------------------------------------------------------------------------------------------------------------------------------------
  bank       | CREATE TABLE bank (
             |     id INT8 NOT NULL,
             |     balance INT8 NULL,
             |     payload STRING NULL,
             |     active BOOL NULL,
             |     location STRING NULL,
             |     currency STRING NULL,
             |     interest DECIMAL NOT NULL DEFAULT 1.3:::DECIMAL,
             |     address STRING NULL,
             |     cust_number INT8 NULL,
             |     more_names STRING COLLATE en NULL,
             |     location1 STRING NULL,
             |     CONSTRAINT bank_pkey PRIMARY KEY (id ASC),
             |     CONSTRAINT fk_cust_number_ref_customers FOREIGN KEY (cust_number) REFERENCES customers(id),
             |     UNIQUE INDEX bank_address_key (address ASC),
             |     FAMILY fam_0_id_balance_payload (id, balance, payload, active, location, currency, interest, address, cust_number, more_names),
             |     FAMILY f1 (location1)
             | )
(1 row)

Add a column and assign it to an existing column family

> ALTER TABLE bank ADD COLUMN location2 STRING FAMILY f1;
> SHOW CREATE TABLE bank;
  table_name |                                                          create_statement
-------------+--------------------------------------------------------------------------------------------------------------------------------------
  bank       | CREATE TABLE bank (
             |     id INT8 NOT NULL,
             |     balance INT8 NULL,
             |     payload STRING NULL,
             |     active BOOL NULL,
             |     location STRING NULL,
             |     currency STRING NULL,
             |     interest DECIMAL NOT NULL DEFAULT 1.3:::DECIMAL,
             |     address STRING NULL,
             |     cust_number INT8 NULL,
             |     more_names STRING COLLATE en NULL,
             |     location1 STRING NULL,
             |     location2 STRING NULL,
             |     CONSTRAINT bank_pkey PRIMARY KEY (id ASC),
             |     CONSTRAINT fk_cust_number_ref_customers FOREIGN KEY (cust_number) REFERENCES customers(id),
             |     UNIQUE INDEX bank_address_key (address ASC),
             |     FAMILY fam_0_id_balance_payload (id, balance, payload, active, location, currency, interest, address, cust_number, more_names),
             |     FAMILY f1 (location1, location2)
             | )
(1 row)

Add a column and create a new column family if column family does not exist

> ALTER TABLE bank ADD COLUMN new_name STRING CREATE IF NOT EXISTS FAMILY f2;
> SHOW CREATE TABLE bank;
  table_name |                                                          create_statement
-------------+--------------------------------------------------------------------------------------------------------------------------------------
  bank       | CREATE TABLE bank (
             |     id INT8 NOT NULL,
             |     balance INT8 NULL,
             |     payload STRING NULL,
             |     active BOOL NULL,
             |     location STRING NULL,
             |     currency STRING NULL,
             |     interest DECIMAL NOT NULL DEFAULT 1.3:::DECIMAL,
             |     address STRING NULL,
             |     cust_number INT8 NULL,
             |     more_names STRING COLLATE en NULL,
             |     location1 STRING NULL,
             |     location2 STRING NULL,
             |     new_name STRING NULL,
             |     CONSTRAINT bank_pkey PRIMARY KEY (id ASC),
             |     CONSTRAINT fk_cust_number_ref_customers FOREIGN KEY (cust_number) REFERENCES customers(id),
             |     UNIQUE INDEX bank_address_key (address ASC),
             |     FAMILY fam_0_id_balance_payload (id, balance, payload, active, location, currency, interest, address, cust_number, more_names),
             |     FAMILY f1 (location1, location2),
             |     FAMILY f2 (new_name)
             | )
(1 row)

Move a column from one column family to another

Moving frequently updated columns to their own column family can increase performance.

To move a column from one column family to another column family, create a temporary, non-visible stored computed column in the target column family, then rename the columns. Once this succeeds, you can drop the original, now renamed column.

For example, to move the new_name column from f2 to f1:

  1. Create a temporary computed column in the target column family of the same data type as the column you want to move:

    ALTER TABLE bank
        ADD COLUMN newer_name STRING
        FAMILY f1 NOT VISIBLE AS (new_name) STORED;
    

    This causes newer_name to have the same values as new_name.

  2. Rename the columns:

    ALTER TABLE bank
        ALTER COLUMN newer_name DROP STORED,
        ALTER COLUMN newer_name SET VISIBLE,
        RENAME COLUMN new_name TO old_name,
        RENAME COLUMN newer_name TO new_name,
        ALTER COLUMN old_name SET NOT VISIBLE;
    
  3. Drop the old column:

    SET sql_safe_updates = false;
    ALTER TABLE bank DROP COLUMN old_name;
    SET sql_safe_updates = true;
    
    Note:

    You must set the sql_safe_updates session variable to false to drop a column in a table that has data.

Moving a column to another column family executes writes to the underlying storage equal to two times the number of rows. For example, if the table has 10 million rows, there will be 20 million writes to the storage layer: 10 million writes when creating the temporary stored computed column, and 10 million writes when removing the original column.

Add a column with an ON UPDATE expression

ON UPDATE expressions set the value for a column when other values in a row are updated.

For example, suppose you add a new column to the bank table:

> ALTER TABLE bank ADD COLUMN last_updated TIMESTAMPTZ DEFAULT now() ON UPDATE now();
> SELECT id, balance, last_updated FROM bank LIMIT 5;
  id | balance |         last_updated
-----+---------+--------------------------------
   0 |       0 | 2021-10-21 17:03:41.213557+00
   1 |       0 | 2021-10-21 17:03:41.213557+00
   2 |       0 | 2021-10-21 17:03:41.213557+00
   3 |       0 | 2021-10-21 17:03:41.213557+00
   4 |       0 | 2021-10-21 17:03:41.213557+00
(5 rows)

When any value in any row of the bank table is updated, CockroachDB re-evaluates the ON UPDATE expression and updates the last_updated column with the result.

> UPDATE bank SET balance = 500 WHERE id = 0;
> SELECT id, balance, last_updated FROM bank LIMIT 5;
  id | balance |         last_updated
-----+---------+--------------------------------
   0 |     500 | 2021-10-21 17:06:42.211261+00
   1 |       0 | 2021-10-21 17:03:41.213557+00
   2 |       0 | 2021-10-21 17:03:41.213557+00
   3 |       0 | 2021-10-21 17:03:41.213557+00
   4 |       0 | 2021-10-21 17:03:41.213557+00
(5 rows)

Add constraints

Setup

To follow along, run cockroach demo to start a temporary, in-memory cluster with the movr sample dataset preloaded:

$ cockroach demo

Add the UNIQUE constraint

Adding the UNIQUE constraint requires that all of a column's values be distinct from one another (except for NULL values).

> ALTER TABLE users ADD CONSTRAINT id_name_unique UNIQUE (id, name);

Add the CHECK constraint

Adding the CHECK constraint requires that all of a column's values evaluate to TRUE for a Boolean expression.

> ALTER TABLE rides ADD CONSTRAINT check_revenue_positive CHECK (revenue >= 0);

In the process of adding the constraint, CockroachDB will run a background job to validate existing table data. If CockroachDB finds a row that violates the constraint during the validation step, the ADD CONSTRAINT statement will fail.

Add constraints to columns created during a transaction

You can add CHECK constraints to columns created earlier in the same transaction. For example:

> BEGIN;
> ALTER TABLE users ADD COLUMN is_owner STRING;
> ALTER TABLE users ADD CONSTRAINT check_is_owner CHECK (is_owner IN ('yes', 'no', 'unknown'));
> COMMIT;
BEGIN
ALTER TABLE
ALTER TABLE
COMMIT
Note:

The entire transaction will be rolled back, including any new columns that were added, in the following cases:

  • If an existing column is found containing values that violate the new constraint.
  • If a new column has a default value or is a computed column that would have contained values that violate the new constraint.

Add the foreign key constraint with CASCADE

To add a foreign key constraint, use the following steps.

Given two tables, users and vehicles, without foreign key constraints:

> SHOW CREATE users;
  table_name |                      create_statement
-------------+--------------------------------------------------------------
  users      | CREATE TABLE users (
             |     id UUID NOT NULL,
             |     city VARCHAR NOT NULL,
             |     name VARCHAR NULL,
             |     address VARCHAR NULL,
             |     credit_card VARCHAR NULL,
             |     CONSTRAINT users_pkey PRIMARY KEY (city ASC, id ASC)
             | )
(1 row)
> SHOW CREATE vehicles;
  table_name |                                       create_statement
-------------+------------------------------------------------------------------------------------------------
  vehicles   | CREATE TABLE vehicles (
             |     id UUID NOT NULL,
             |     city VARCHAR NOT NULL,
             |     type VARCHAR NULL,
             |     owner_id UUID NULL,
             |     creation_time TIMESTAMP NULL,
             |     status VARCHAR NULL,
             |     current_location VARCHAR NULL,
             |     ext JSONB NULL,
             |     CONSTRAINT vehicles_pkey PRIMARY KEY (city ASC, id ASC),
             | )
(1 row)

You can include a foreign key action to specify what happens when a foreign key is updated or deleted.

Using ON DELETE CASCADE will ensure that when the referenced row is deleted, all dependent objects are also deleted.

Warning:

CASCADE does not list the objects it drops or updates, so it should be used with caution.

> ALTER TABLE vehicles ADD CONSTRAINT users_fk FOREIGN KEY (city, owner_id) REFERENCES users (city, id) ON DELETE CASCADE;

For an example of validating this constraint, see Validate a constraint.

Note:

By default, referenced columns must be in the same database as the referencing foreign key column. To enable cross-database foreign key references, set the sql.cross_db_fks.enabled cluster setting to true.

Drop and add a primary key constraint

Suppose that you want to add name to the composite primary key of the users table, without creating a secondary index of the existing primary key.

> SHOW CREATE TABLE users;
  table_name |                      create_statement
-------------+--------------------------------------------------------------
  users      | CREATE TABLE users (
             |     id UUID NOT NULL,
             |     city VARCHAR NOT NULL,
             |     name VARCHAR NULL,
             |     address VARCHAR NULL,
             |     credit_card VARCHAR NULL,
             |     CONSTRAINT users_pkey PRIMARY KEY (city ASC, id ASC)
             | )
(1 row)
  1. Add a NOT NULL constraint to the name column with ALTER COLUMN.

    > ALTER TABLE users ALTER COLUMN name SET NOT NULL;
    
  2. In the same transaction, DROP the old "primary" constraint and ADD the new one:

    > BEGIN;
    > ALTER TABLE users DROP CONSTRAINT "primary";
    > ALTER TABLE users ADD CONSTRAINT "primary" PRIMARY KEY (city, name, id);
    > COMMIT;
    
    NOTICE: primary key changes are finalized asynchronously; further schema changes on this table may be restricted until the job completes
    
  3. View the table structure:

    > SHOW CREATE TABLE users;
    
      table_name |                          create_statement
    -------------+---------------------------------------------------------------------
      users      | CREATE TABLE users (
                |     id UUID NOT NULL,
                |     city VARCHAR NOT NULL,
                |     name VARCHAR NOT NULL,
                |     address VARCHAR NULL,
                |     credit_card VARCHAR NULL,
                |     CONSTRAINT "primary" PRIMARY KEY (city ASC, name ASC, id ASC),
                |     FAMILY "primary" (id, city, name, address, credit_card)
                | )
    (1 row)
    

Using ALTER PRIMARY KEY would have created a UNIQUE secondary index called users_city_id_key. Instead, there is just one index for the primary key constraint.

Add a unique index to a REGIONAL BY ROW table

In multi-region deployments, most users should use REGIONAL BY ROW tables instead of explicit index partitioning. When you add an index to a REGIONAL BY ROW table, it is automatically partitioned on the crdb_region column. Explicit index partitioning is not required.

While CockroachDB processes an ADD REGION or DROP REGION statement on a particular database, creating or modifying an index will throw an error. Similarly, all ADD REGION and DROP REGION statements will be blocked while an index is being modified on a REGIONAL BY ROW table within the same database.

This example assumes you have a simulated multi-region database running on your local machine following the steps described in Low Latency Reads and Writes in a Multi-Region Cluster. It shows how a UNIQUE index is partitioned, but it's similar to how all indexes are partitioned on REGIONAL BY ROW tables.

To show how the automatic partitioning of indexes on REGIONAL BY ROW tables works, we will:

  1. Add a column to the users table in the MovR dataset.
  2. Add a UNIQUE constraint to that column.
  3. Verify that the index is automatically partitioned for better multi-region performance by using SHOW INDEXES and SHOW PARTITIONS.

First, add a column and its unique constraint. We'll use email since that is something that should be unique per user.

ALTER TABLE users ADD COLUMN email STRING;
ALTER TABLE users ADD CONSTRAINT user_email_unique UNIQUE (email);

Next, issue the SHOW INDEXES statement. You will see that the implicit region column that was added when the table was converted to regional by row is now indexed:

SHOW INDEXES FROM users;
  table_name |    index_name     | non_unique | seq_in_index | column_name | direction | storing | implicit| visible
-------------+-------------------+------------+--------------+-------------+-----------+---------+---------+--------
  users      | users_pkey        |     f      |            1 | region      | ASC       |   f     |   t     |   t
  users      | users_pkey        |     f      |            2 | id          | ASC       |   f     |   f     |   t
  users      | users_pkey        |     f      |            3 | city        | N/A       |   t     |   f     |   t
  users      | users_pkey        |     f      |            4 | name        | N/A       |   t     |   f     |   t
  users      | users_pkey        |     f      |            5 | address     | N/A       |   t     |   f     |   t
  users      | users_pkey        |     f      |            6 | credit_card | N/A       |   t     |   f     |   t
  users      | users_pkey        |     f      |            7 | email       | N/A       |   t     |   f     |   t
  users      | user_email_unique |     f      |            1 | region      | ASC       |   f     |   t     |   t
  users      | user_email_unique |     f      |            2 | email       | ASC       |   f     |   f     |   t
  users      | user_email_unique |     f      |            3 | id          | ASC       |   f     |   t     |   t
  users      | users_city_idx    |     t      |            1 | region      | ASC       |   f     |   t     |   t
  users      | users_city_idx    |     t      |            2 | city        | ASC       |   f     |   f     |   t
  users      | users_city_idx    |     t      |            3 | id          | ASC       |   f     |   t     |   t
(13 rows)

Next, issue the SHOW PARTITIONS statement. The following output (which is edited for length) will verify that the unique index was automatically partitioned for you. It shows that the user_email_unique index is now partitioned by the database regions europe-west1, us-east1, and us-west1.

SHOW PARTITIONS FROM TABLE users;
  database_name | table_name | partition_name | column_names |       index_name        | partition_value  |  ...
----------------+------------+----------------+--------------+-------------------------+------------------+-----
  movr          | users      | europe-west1   | region       | users@user_email_unique | ('europe-west1') |  ...
  movr          | users      | us-east1       | region       | users@user_email_unique | ('us-east1')     |  ...
  movr          | users      | us-west1       | region       | users@user_email_unique | ('us-west1')     |  ...

To ensure that the uniqueness constraint is enforced properly across regions when rows are inserted, or the email column of an existing row is updated, the database needs to do the following additional work when indexes are partitioned:

  1. Run a one-time-only validation query to ensure that the existing data in the table satisfies the unique constraint.
  2. Thereafter, the optimizer will automatically add a "uniqueness check" when necessary to any INSERT, UPDATE, or UPSERT statement affecting the columns in the unique constraint.

Note that the SQL engine will avoid sending requests to nodes in other regions when it can instead read a value from a unique column that is stored locally. This capability is known as locality optimized search.

Using DEFAULT gen_random_uuid() in REGIONAL BY ROW tables

To auto-generate unique row identifiers in REGIONAL BY ROW tables, use a UUID column with the gen_random_uuid() function as the default value:

> CREATE TABLE users (
        id UUID NOT NULL DEFAULT gen_random_uuid(),
        city STRING NOT NULL,
        name STRING NULL,
        address STRING NULL,
        credit_card STRING NULL,
        CONSTRAINT users_pkey PRIMARY KEY (city ASC, id ASC)
);
> INSERT INTO users (name, city) VALUES ('Petee', 'new york'), ('Eric', 'seattle'), ('Dan', 'seattle');
> SELECT * FROM users;
                   id                  |   city   | name  | address | credit_card
+--------------------------------------+----------+-------+---------+-------------+
  cf8ee4e2-cd74-449a-b6e6-a0fb2017baa4 | new york | Petee | NULL    | NULL
  2382564e-702f-42d9-a139-b6df535ae00a | seattle  | Eric  | NULL    | NULL
  7d27e40b-263a-4891-b29b-d59135e55650 | seattle  | Dan   | NULL    | NULL
(3 rows)
Note:

When using DEFAULT gen_random_uuid() on columns in REGIONAL BY ROW tables, uniqueness checks on those columns are disabled by default for performance purposes. CockroachDB assumes uniqueness based on the way this column generates UUIDs. To enable this check, you can modify the sql.optimizer.uniqueness_checks_for_gen_random_uuid.enabled cluster setting. Note that while there is virtually no chance of a collision occurring when enabling this setting, it is not truly zero.

Using implicit vs. explicit index partitioning in REGIONAL BY ROW tables

In REGIONAL BY ROW tables, all indexes are partitioned on the region column (usually called crdb_region).

These indexes can either include or exclude the partitioning key (crdb_region) as the first column in the index definition:

  • If crdb_region is included in the index definition, a UNIQUE index will enforce uniqueness on the set of columns, just like it would in a non-partitioned table.
  • If crdb_region is excluded from the index definition, that serves as a signal that CockroachDB should enforce uniqueness on only the columns in the index definition.

In the latter case, the index alone cannot enforce uniqueness on columns that are not a prefix of the index columns, so any time rows are inserted or updated in a REGIONAL BY ROW table that has an implicitly partitioned UNIQUE index, the optimizer must add uniqueness checks.

Whether or not to explicitly include crdb_region in the index definition depends on the context:

  • If you only need to enforce uniqueness at the region level, then including crdb_region in the UNIQUE index definition will enforce these semantics and allow you to get better performance on INSERTs, UPDATEs, and UPSERTs, since there will not be any added latency from uniqueness checks.
  • If you need to enforce global uniqueness, you should not include crdb_region in the UNIQUE (or PRIMARY KEY) index definition, and the database will automatically ensure that the constraint is enforced.

To illustrate the different behavior of explicitly vs. implicitly partitioned indexes, we will perform the following tasks:

  • Create a schema that includes an explicitly partitioned index, and an implicitly partitioned index.
  • Check the output of several queries using EXPLAIN to show the differences in behavior between the two.
  1. Start cockroach demo as follows:

    cockroach demo --geo-partitioned-replicas
    
  2. Create a multi-region database and an employee table. There are three indexes in the table, all UNIQUE and all partitioned by the crdb_region column. The table schema guarantees that both id and email are globally unique, while desk_id is only unique per region. The indexes on id and email are implicitly partitioned, while the index on (crdb_region, desk_id) is explicitly partitioned. UNIQUE indexes can only directly enforce uniqueness on all columns in the index, including partitioning columns, so each of these indexes enforces uniqueness for id, email, and desk_id per region, respectively.

    CREATE DATABASE multi_region_test_db PRIMARY REGION "europe-west1" REGIONS "us-west1", "us-east1";
    
    USE multi_region_test_db;
    
    CREATE TABLE employee (
      id INT PRIMARY KEY,
      email STRING UNIQUE,
      desk_id INT,
      UNIQUE (crdb_region, desk_id)
    ) LOCALITY REGIONAL BY ROW;
    
  3. In the following statement, we add a new employee with the required id, email, and desk_id columns. CockroachDB needs to do additional work to enforce global uniqueness for the id and email columns, which are implicitly partitioned. This additional work is in the form of "uniqueness checks" that the optimizer adds as part of mutation queries.

    EXPLAIN INSERT INTO employee VALUES (1, '[email protected]', 1);
    

    The following EXPLAIN output shows that the optimizer has added two constraint-check post queries to check the uniqueness of the implicitly partitioned indexes id and email. There is no check needed for desk_id (really (crdb_region, desk_id)), since that constraint is automatically enforced by the explicitly partitioned index we added in the preceding CREATE TABLE statement.

                                             info
    --------------------------------------------------------------------------------------
      distribution: local
      vectorized: true
    
      • root
      │
      ├── • insert
      │   │ into: employee(id, email, desk_id, crdb_region)
      │   │
      │   └── • buffer
      │       │ label: buffer 1
      │       │
      │       └── • values
      │             size: 5 columns, 1 row
      │
      ├── • constraint-check
      │   │
      │   └── • error if rows
      │       │
      │       └── • lookup join (semi)
      │           │ table: employee@primary
      │           │ equality: (lookup_join_const_col_@15, column1) = (crdb_region,id)
      │           │ equality cols are key
      │           │ pred: column10 != crdb_region
      │           │
      │           └── • cross join
      │               │ estimated row count: 3
      │               │
      │               ├── • values
      │               │     size: 1 column, 3 rows
      │               │
      │               └── • scan buffer
      │                     label: buffer 1
      │
      └── • constraint-check
          │
          └── • error if rows
              │
              └── • lookup join (semi)
                  │ table: employee@employee_email_key
                  │ equality: (lookup_join_const_col_@25, column2) = (crdb_region,email)
                  │ equality cols are key
                  │ pred: (column1 != id) OR (column10 != crdb_region)
                  │
                  └── • cross join
                      │ estimated row count: 3
                      │
                      ├── • values
                      │     size: 1 column, 3 rows
                      │
                      └── • scan buffer
                            label: buffer 1
    
  4. The following statement updates the user's email column. Because the unique index on the email column is implicitly partitioned, the optimizer must perform a uniqueness check.

    EXPLAIN UPDATE employee SET email = '[email protected]' WHERE id = 1;
    

    In the following EXPLAIN output, the optimizer performs a uniqueness check for email since we're not updating any other columns (see the constraint-check section).

                                                      info
    --------------------------------------------------------------------------------------------------------
      distribution: local
      vectorized: true
    
      • root
      │
      ├── • update
      │   │ table: employee
      │   │ set: email
      │   │
      │   └── • buffer
      │       │ label: buffer 1
      │       │
      │       └── • render
      │           │ estimated row count: 1
      │           │
      │           └── • union all
      │               │ estimated row count: 1
      │               │ limit: 1
      │               │
      │               ├── • scan
      │               │     estimated row count: 1 (100% of the table; stats collected 1 minute ago)
      │               │     table: employee@primary
      │               │     spans: [/'us-east1'/1 - /'us-east1'/1]
      │               │
      │               └── • scan
      │                     estimated row count: 1 (100% of the table; stats collected 1 minute ago)
      │                     table: employee@primary
      │                     spans: [/'europe-west1'/1 - /'europe-west1'/1] [/'us-west1'/1 - /'us-west1'/1]
      │
      └── • constraint-check
          │
          └── • error if rows
              │
              └── • lookup join (semi)
                  │ table: employee@employee_email_key
                  │ equality: (lookup_join_const_col_@18, email_new) = (crdb_region,email)
                  │ equality cols are key
                  │ pred: (id != id) OR (crdb_region != crdb_region)
                  │
                  └── • cross join
                      │ estimated row count: 3
                      │
                      ├── • values
                      │     size: 1 column, 3 rows
                      │
                      └── • scan buffer
                            label: buffer 1
    
  5. If we only update the user's desk_id, no uniqueness checks are needed, since the index on that column is explicitly partitioned (it's really (crdb_region, desk_id)).

    EXPLAIN UPDATE employee SET desk_id = 2 WHERE id = 1;
    

    Because no uniqueness check is needed, there is no constraint-check section in the EXPLAIN output.

                                                  info
    ------------------------------------------------------------------------------------------------
      distribution: local
      vectorized: true
    
      • update
      │ table: employee
      │ set: desk_id
      │ auto commit
      │
      └── • render
          │ estimated row count: 1
          │
          └── • union all
              │ estimated row count: 1
              │ limit: 1
              │
              ├── • scan
              │     estimated row count: 1 (100% of the table; stats collected 2 minutes ago)
              │     table: employee@primary
              │     spans: [/'us-east1'/1 - /'us-east1'/1]
              │
              └── • scan
                    estimated row count: 1 (100% of the table; stats collected 2 minutes ago)
                    table: employee@primary
                    spans: [/'europe-west1'/1 - /'europe-west1'/1] [/'us-west1'/1 - /'us-west1'/1]
    

Alter columns

Set or change a DEFAULT value

The DEFAULT value constraint supplies a value when data is written to the table without explicitly defining a value for the column. If the column already has a DEFAULT value set, you can use this statement to change it.

The following example inserts the Boolean value true whenever data is inserted into the subscriptions table without a value defined for the newsletter column.

> ALTER TABLE subscriptions ALTER COLUMN newsletter SET DEFAULT true;

Remove DEFAULT constraint

If the column has a DEFAULT value, you can remove the constraint. The column will then no longer be populated with a default value when one is not explicitly provided.

> ALTER TABLE subscriptions ALTER COLUMN newsletter DROP DEFAULT;

Set NOT NULL constraint

To specify that the column cannot contain NULL values, set the NOT NULL constraint.

> ALTER TABLE subscriptions ALTER COLUMN newsletter SET NOT NULL;

Remove NOT NULL constraint

If the column has the NOT NULL constraint applied to it, you can remove the constraint, which means the column becomes optional and can have NULL values written into it.

> ALTER TABLE subscriptions ALTER COLUMN newsletter DROP NOT NULL;

Convert a computed column into a regular column

You can convert a stored, computed column into a regular column by using ALTER TABLE.

In this example, create a simple table with a computed column:

> CREATE TABLE office_dogs (
    id INT PRIMARY KEY,
    first_name STRING,
    last_name STRING,
    full_name STRING AS (CONCAT(first_name, ' ', last_name)) STORED
  );

Then, insert a few rows of data:

> INSERT INTO office_dogs (id, first_name, last_name) VALUES
    (1, 'Petee', 'Hirata'),
    (2, 'Carl', 'Kimball'),
    (3, 'Ernie', 'Narayan');
> SELECT * FROM office_dogs;
+----+------------+-----------+---------------+
| id | first_name | last_name |   full_name   |
+----+------------+-----------+---------------+
|  1 | Petee      | Hirata    | Petee Hirata  |
|  2 | Carl       | Kimball   | Carl Kimball  |
|  3 | Ernie      | Narayan   | Ernie Narayan |
+----+------------+-----------+---------------+
(3 rows)

The full_name column is computed from the first_name and last_name columns without the need to define a view. You can view the column details with the SHOW COLUMNS statement:

> SHOW COLUMNS FROM office_dogs;
+-------------+-----------+-------------+----------------+------------------------------------+-------------+
| column_name | data_type | is_nullable | column_default |       generation_expression        |   indices   |
+-------------+-----------+-------------+----------------+------------------------------------+-------------+
| id          | INT       |    false    | NULL           |                                    | {"primary"} |
| first_name  | STRING    |    true     | NULL           |                                    | {}          |
| last_name   | STRING    |    true     | NULL           |                                    | {}          |
| full_name   | STRING    |    true     | NULL           | concat(first_name, ' ', last_name) | {}          |
+-------------+-----------+-------------+----------------+------------------------------------+-------------+
(4 rows)

Now, convert the computed column (full_name) to a regular column:

> ALTER TABLE office_dogs ALTER COLUMN full_name DROP STORED;

Check that the computed column was converted:

> SHOW COLUMNS FROM office_dogs;
+-------------+-----------+-------------+----------------+-----------------------+-------------+
| column_name | data_type | is_nullable | column_default | generation_expression |   indices   |
+-------------+-----------+-------------+----------------+-----------------------+-------------+
| id          | INT       |    false    | NULL           |                       | {"primary"} |
| first_name  | STRING    |    true     | NULL           |                       | {}          |
| last_name   | STRING    |    true     | NULL           |                       | {}          |
| full_name   | STRING    |    true     | NULL           |                       | {}          |
+-------------+-----------+-------------+----------------+-----------------------+-------------+
(4 rows)

The computed column is now a regular column and can be updated as such:

> INSERT INTO office_dogs (id, first_name, last_name, full_name) VALUES (4, 'Lola', 'McDog', 'This is not computed');
> SELECT * FROM office_dogs;
+----+------------+-----------+----------------------+
| id | first_name | last_name |      full_name       |
+----+------------+-----------+----------------------+
|  1 | Petee      | Hirata    | Petee Hirata         |
|  2 | Carl       | Kimball   | Carl Kimball         |
|  3 | Ernie      | Narayan   | Ernie Narayan        |
|  4 | Lola       | McDog     | This is not computed |
+----+------------+-----------+----------------------+
(4 rows)

Alter the formula for a computed column

To alter the formula for a computed column, you must DROP the column and ADD it back with the new definition. Take the following table, for instance:

> CREATE TABLE x (
a INT NULL,
b INT NULL AS (a * 2) STORED,
c INT NULL AS (a + 4) STORED,
FAMILY "primary" (a, b, rowid, c)
);
CREATE TABLE


Time: 4ms total (execution 4ms / network 0ms)

Add a computed column d:

> ALTER TABLE x ADD COLUMN d INT AS (a // 2) STORED;
ALTER TABLE


Time: 199ms total (execution 199ms / network 0ms)

If you try to alter it, you'll get an error:

> ALTER TABLE x ALTER COLUMN d INT AS (a // 3) STORED;
invalid syntax: statement ignored: at or near "int": syntax error
SQLSTATE: 42601
DETAIL: source SQL:
ALTER TABLE x ALTER COLUMN d INT AS (a // 3) STORED
                             ^
HINT: try \h ALTER TABLE

However, you can drop it and then add it with the new definition:

> SET sql_safe_updates = false;
> ALTER TABLE x DROP COLUMN d;
> ALTER TABLE x ADD COLUMN d INT AS (a // 3) STORED;
> SET sql_safe_updates = true;
SET


Time: 1ms total (execution 0ms / network 0ms)

ALTER TABLE


Time: 195ms total (execution 195ms / network 0ms)

ALTER TABLE


Time: 186ms total (execution 185ms / network 0ms)

SET


Time: 0ms total (execution 0ms / network 0ms)

Convert to a different data type

The TPC-C database has a customer table with a column c_credit_lim of type DECIMAL(10,2):

> WITH x AS (SHOW COLUMNS FROM customer) SELECT column_name, data_type FROM x WHERE column_name='c_credit_lim';
  column_name  |   data_type
---------------+----------------
  c_credit_lim | DECIMAL(10,2)
(1 row)

To change the data type from DECIMAL to STRING:

  1. Set the enable_experimental_alter_column_type_general session variable to true:

    > SET enable_experimental_alter_column_type_general = true;
    
  2. Alter the column type:

    > ALTER TABLE customer ALTER c_credit_lim TYPE STRING;
    
    NOTICE: ALTER COLUMN TYPE changes are finalized asynchronously; further schema changes on this table may be restricted until the job completes; some writes to the altered column may be rejected until the schema change is finalized
    
  3. Verify the type:

    > WITH x AS (SHOW COLUMNS FROM customer) SELECT column_name, data_type FROM x WHERE column_name='c_credit_lim';
    
      column_name  | data_type
    ---------------+------------
      c_credit_lim | STRING
    (1 row)
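
    Because the column type change is finalized asynchronously as a schema change job (see the notice in step 2), you can check on its progress. A minimal sketch using SHOW JOBS (the column list and LIMIT are chosen for illustration):

    WITH x AS (SHOW JOBS) SELECT job_type, status, description FROM x ORDER BY created DESC LIMIT 5;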
    

Change a column type's precision

The TPC-C customer table contains a column c_balance of type DECIMAL(12,2):

> WITH x AS (SHOW COLUMNS FROM customer) SELECT column_name, data_type FROM x WHERE column_name='c_balance';
  column_name |   data_type
--------------+----------------
  c_balance   | DECIMAL(12,2)
(1 row)

To increase the precision of the c_balance column from DECIMAL(12,2) to DECIMAL(14,2):

> ALTER TABLE customer ALTER c_balance TYPE DECIMAL(14,2);
> WITH x AS (SHOW COLUMNS FROM customer) SELECT column_name, data_type FROM x WHERE column_name='c_balance';
  column_name |   data_type
--------------+----------------
  c_balance   | DECIMAL(14,2)
(1 row)

Change a column's type using an expression

You can change the data type of a column and create a new, computed value from the old column values, with a USING clause. For example:

> WITH x AS (SHOW COLUMNS FROM customer) SELECT column_name, data_type FROM x WHERE column_name='c_discount';
  column_name |  data_type
--------------+---------------
  c_discount  | DECIMAL(4,4)
(1 row)
> SELECT c_discount FROM customer LIMIT 10;
  c_discount
--------------
      0.1569
      0.4629
      0.2932
      0.0518
      0.3922
      0.1106
      0.0622
      0.4916
      0.3072
      0.0316
(10 rows)
> ALTER TABLE customer ALTER c_discount TYPE STRING USING ((c_discount*100)::DECIMAL(4,2)::STRING || ' percent');
NOTICE: ALTER COLUMN TYPE changes are finalized asynchronously; further schema changes on this table may be restricted until the job completes; some writes to the altered column may be rejected until the schema change is finalized
> WITH x AS (SHOW COLUMNS FROM customer) SELECT column_name, data_type FROM x WHERE column_name='c_discount';
  column_name | data_type
--------------+------------
  c_discount  | STRING
(1 row)
> SELECT c_discount FROM customer LIMIT 10;
   c_discount
-----------------
  15.69 percent
  46.29 percent
  29.32 percent
  5.18 percent
  39.22 percent
  11.06 percent
  6.22 percent
  49.16 percent
  30.72 percent
  3.16 percent
(10 rows)

Set the visibility of a column

To specify that a column won't be returned when using * in a SELECT clause, set the NOT VISIBLE property. You can set the NOT VISIBLE property only on individual columns.

For example, the users table of the movr database contains the credit_card column. If you don't want users to see that column when running SELECT * FROM users;, you can hide it as follows:

> ALTER TABLE users ALTER COLUMN credit_card SET NOT VISIBLE;

When you run SELECT *, the column does not appear:

> SELECT * FROM users WHERE city = 'rome';
id                                     | city |       name        |            address
---------------------------------------+------+-------------------+--------------------------------
e6666666-6666-4800-8000-00000000002d   | rome | Misty Adams       | 82289 Natasha River Suite 12
eb851eb8-51eb-4800-8000-00000000002e   | rome | Susan Morse       | 49364 Melissa Squares Suite 4
f0a3d70a-3d70-4000-8000-00000000002f   | rome | Victoria Jennings | 31562 Krista Squares Suite 62
f5c28f5c-28f5-4000-8000-000000000030   | rome | Eric Perez        | 57624 Kelly Forks
fae147ae-147a-4000-8000-000000000031   | rome | Richard Bullock   | 21194 Alexander Estate
(5 rows)

The column is still selectable if you name it directly in the target_elem parameter:

> SELECT id, credit_card FROM users WHERE city = 'rome';
id                                     | credit_card
---------------------------------------+--------------
e6666666-6666-4800-8000-00000000002d   | 4418943046
eb851eb8-51eb-4800-8000-00000000002e   | 0655485426
f0a3d70a-3d70-4000-8000-00000000002f   | 2232698265
f5c28f5c-28f5-4000-8000-000000000030   | 2620636730
fae147ae-147a-4000-8000-000000000031   | 2642076323
(5 rows)

To unhide the column, run:

> ALTER TABLE users ALTER COLUMN credit_card SET VISIBLE;
> SELECT * FROM users WHERE city = 'rome';
                   id                  | city |       name        |            address            | credit_card
---------------------------------------+------+-------------------+-------------------------------+--------------
  e6666666-6666-4800-8000-00000000002d | rome | Misty Adams       | 82289 Natasha River Suite 12  |  4418943046
  eb851eb8-51eb-4800-8000-00000000002e | rome | Susan Morse       | 49364 Melissa Squares Suite 4 |  0655485426
  f0a3d70a-3d70-4000-8000-00000000002f | rome | Victoria Jennings | 31562 Krista Squares Suite 62 |  2232698265
  f5c28f5c-28f5-4000-8000-000000000030 | rome | Eric Perez        | 57624 Kelly Forks             |  2620636730
  fae147ae-147a-4000-8000-000000000031 | rome | Richard Bullock   | 21194 Alexander Estate        |  2642076323
(5 rows)

Alter a primary key


Alter a single-column primary key

Suppose that you are storing the data for users of your application in a table called users, defined by the following CREATE TABLE statement:

> CREATE TABLE users (
  name STRING PRIMARY KEY,
  email STRING
);

The primary key of this table is on the name column. This is a poor choice, as some users likely have the same name, and all primary keys enforce a UNIQUE constraint on row values of the primary key column. Per our best practices, you should instead use a UUID for single-column primary keys, and populate the rows of the table with generated, unique values.

You can add a column and change the primary key with a couple of ALTER TABLE statements:

> ALTER TABLE users ADD COLUMN id UUID NOT NULL DEFAULT gen_random_uuid();
> ALTER TABLE users ALTER PRIMARY KEY USING COLUMNS (id);
> SHOW CREATE TABLE users;
  table_name |                create_statement
-------------+--------------------------------------------------
  users      | CREATE TABLE users (
             |     name STRING NOT NULL,
             |     email STRING NULL,
             |     id UUID NOT NULL DEFAULT gen_random_uuid(),
             |     CONSTRAINT users_pkey PRIMARY KEY (id ASC),
             |     UNIQUE INDEX users_name_key (name ASC)
             | )
(1 row)

Alter an existing primary key to use hash sharding

Let's assume the events table already exists:

> CREATE TABLE events (
    product_id INT8,
    owner UUID,
    serial_number VARCHAR,
    event_id UUID,
    ts TIMESTAMP,
    data JSONB,
    PRIMARY KEY (product_id, owner, serial_number, ts, event_id),
    INDEX (ts) USING HASH
);

You can change an existing primary key to use hash sharding by adding the USING HASH clause at the end of the key definition:

> ALTER TABLE events ALTER PRIMARY KEY USING COLUMNS (product_id, owner, serial_number, ts, event_id) USING HASH;
> SHOW INDEX FROM events;
  table_name |  index_name   | non_unique | seq_in_index |                            column_name                            | direction | storing | implicit
-------------+---------------+------------+--------------+-------------------------------------------------------------------+-----------+---------+-----------
  events     | events_pkey   |   false    |            1 | crdb_internal_event_id_owner_product_id_serial_number_ts_shard_16 | ASC       |  false  |   true
  events     | events_pkey   |   false    |            2 | product_id                                                        | ASC       |  false  |  false
  events     | events_pkey   |   false    |            3 | owner                                                             | ASC       |  false  |  false
  events     | events_pkey   |   false    |            4 | serial_number                                                     | ASC       |  false  |  false
  events     | events_pkey   |   false    |            5 | ts                                                                | ASC       |  false  |  false
  events     | events_pkey   |   false    |            6 | event_id                                                          | ASC       |  false  |  false
  events     | events_pkey   |   false    |            7 | data                                                              | N/A       |  true   |  false
  events     | events_ts_idx |    true    |            1 | crdb_internal_ts_shard_16                                         | ASC       |  false  |   true
  events     | events_ts_idx |    true    |            2 | ts                                                                | ASC       |  false  |  false
  events     | events_ts_idx |    true    |            3 | crdb_internal_event_id_owner_product_id_serial_number_ts_shard_16 | ASC       |  false  |   true
  events     | events_ts_idx |    true    |            4 | product_id                                                        | ASC       |  false  |   true
  events     | events_ts_idx |    true    |            5 | owner                                                             | ASC       |  false  |   true
  events     | events_ts_idx |    true    |            6 | serial_number                                                     | ASC       |  false  |   true
  events     | events_ts_idx |    true    |            7 | event_id                                                          | ASC       |  false  |   true
(14 rows)
> SHOW COLUMNS FROM events;
                             column_name                            | data_type | is_nullable | column_default |                                     generation_expression                                     |           indices           | is_hidden
--------------------------------------------------------------------+-----------+-------------+----------------+-----------------------------------------------------------------------------------------------+-----------------------------+------------
  product_id                                                        | INT8      |    false    | NULL           |                                                                                               | {events_pkey,events_ts_idx} |   false
  owner                                                             | UUID      |    false    | NULL           |                                                                                               | {events_pkey,events_ts_idx} |   false
  serial_number                                                     | VARCHAR   |    false    | NULL           |                                                                                               | {events_pkey,events_ts_idx} |   false
  event_id                                                          | UUID      |    false    | NULL           |                                                                                               | {events_pkey,events_ts_idx} |   false
  ts                                                                | TIMESTAMP |    false    | NULL           |                                                                                               | {events_pkey,events_ts_idx} |   false
  data                                                              | JSONB     |    true     | NULL           |                                                                                               | {events_pkey}               |   false
  crdb_internal_ts_shard_16                                         | INT8      |    false    | NULL           | mod(fnv32(crdb_internal.datums_to_bytes(ts)), 16)                                             | {events_ts_idx}             |   true
  crdb_internal_event_id_owner_product_id_serial_number_ts_shard_16 | INT8      |    false    | NULL           | mod(fnv32(crdb_internal.datums_to_bytes(event_id, owner, product_id, serial_number, ts)), 16) | {events_pkey,events_ts_idx} |   true
(8 rows)

Note that when you change a primary key with ALTER PRIMARY KEY, the old primary key index becomes a secondary index; in the earlier users example, this is the UNIQUE INDEX users_name_key. If you do not want the old primary key to become a secondary index, you can use DROP CONSTRAINT/ADD CONSTRAINT to change the primary key instead.
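
For example, a sketch of the DROP CONSTRAINT/ADD CONSTRAINT approach on the original users table, assuming its existing primary key constraint is named users_pkey:

ALTER TABLE users ADD COLUMN id UUID NOT NULL DEFAULT gen_random_uuid();
ALTER TABLE users DROP CONSTRAINT users_pkey, ADD CONSTRAINT users_pkey PRIMARY KEY (id);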

Configure replication zones

Setup

The following examples use MovR, a fictional vehicle-sharing application, to demonstrate CockroachDB SQL statements. For more information about the MovR example application and dataset, see MovR: A Global Vehicle-sharing App.

To follow along, run cockroach demo with the --geo-partitioned-replicas flag. This command opens an interactive SQL shell to a temporary, 9-node in-memory cluster with the movr database.

$ cockroach demo --geo-partitioned-replicas

Create a replication zone for a table

To control replication for a specific table, use the ALTER TABLE ... CONFIGURE ZONE statement to define the relevant values (other values will be inherited from the parent zone):

> ALTER TABLE users CONFIGURE ZONE USING num_replicas = 5, gc.ttlseconds = 100000;
CONFIGURE ZONE 1
> SHOW ZONE CONFIGURATION FROM TABLE users;
    target    |             raw_config_sql
--------------+-----------------------------------------
  TABLE users | ALTER TABLE users CONFIGURE ZONE USING
              |     range_min_bytes = 134217728,
              |     range_max_bytes = 536870912,
              |     gc.ttlseconds = 100000,
              |     num_replicas = 5,
              |     constraints = '[]',
              |     lease_preferences = '[]'
(1 row)

Edit a replication zone

> ALTER TABLE users CONFIGURE ZONE USING range_min_bytes = 0, range_max_bytes = 90000, gc.ttlseconds = 89999, num_replicas = 4;
CONFIGURE ZONE 1

Reset a replication zone

> ALTER TABLE t CONFIGURE ZONE USING DEFAULT;
CONFIGURE ZONE 1

Remove a replication zone

Note:

When you discard a zone configuration, the objects it was applied to will then inherit a configuration from an object "the next level up"; e.g., if the object whose configuration is being discarded is a table, it will use its parent database's configuration.

You cannot DISCARD any zone configurations on multi-region tables, indexes, or partitions if the multi-region abstractions created the zone configuration.

> ALTER TABLE t CONFIGURE ZONE DISCARD;
CONFIGURE ZONE 1

Drop columns

Setup

To follow along, run cockroach demo to start a temporary, in-memory cluster with the movr sample dataset preloaded:

$ cockroach demo

Drop a column

If you no longer want a column in a table, you can drop it.

> SHOW COLUMNS FROM users;
  column_name | data_type | is_nullable | column_default | generation_expression |  indices  | is_hidden
--------------+-----------+-------------+----------------+-----------------------+-----------+------------
  id          | UUID      |    false    | NULL           |                       | {primary} |   false
  city        | VARCHAR   |    false    | NULL           |                       | {primary} |   false
  name        | VARCHAR   |    true     | NULL           |                       | {primary} |   false
  address     | VARCHAR   |    true     | NULL           |                       | {primary} |   false
  credit_card | VARCHAR   |    true     | NULL           |                       | {primary} |   false
(5 rows)

If there is data in the table, the sql_safe_updates session variable must be set to false.

> ALTER TABLE users DROP COLUMN credit_card;
ERROR: rejected (sql_safe_updates = true): ALTER TABLE DROP COLUMN will remove all data in that column
SQLSTATE: 01000
> SET sql_safe_updates = false;
> ALTER TABLE users DROP COLUMN credit_card;
> SHOW COLUMNS FROM users;
  column_name | data_type | is_nullable | column_default | generation_expression |  indices  | is_hidden
--------------+-----------+-------------+----------------+-----------------------+-----------+------------
  id          | UUID      |    false    | NULL           |                       | {primary} |   false
  city        | VARCHAR   |    false    | NULL           |                       | {primary} |   false
  name        | VARCHAR   |    true     | NULL           |                       | {primary} |   false
  address     | VARCHAR   |    true     | NULL           |                       | {primary} |   false
(4 rows)

Prevent dropping columns with dependent objects (RESTRICT)

If the column has dependent objects, such as views, CockroachDB will not drop the column by default. To make this behavior explicit, include the RESTRICT clause.

> CREATE VIEW expensive_rides AS SELECT id, city FROM rides WHERE revenue > 90;
> ALTER TABLE rides DROP COLUMN revenue RESTRICT;
ERROR: cannot drop column "revenue" because view "expensive_rides" depends on it
SQLSTATE: 2BP01
HINT: you can drop expensive_rides instead.

Drop a column and its dependent objects (CASCADE)

If you want to drop the column and all of its dependent objects, include the CASCADE clause.

Warning:

CASCADE does not list the objects it drops, so it should be used cautiously.

> SHOW CREATE expensive_rides;
    table_name    |                                              create_statement
------------------+-------------------------------------------------------------------------------------------------------------
  expensive_rides | CREATE VIEW public.expensive_rides (id, city) AS SELECT id, city FROM movr.public.rides WHERE revenue > 90
(1 row)
> ALTER TABLE rides DROP COLUMN revenue CASCADE;
> SHOW CREATE expensive_rides;
ERROR: relation "expensive_rides" does not exist
SQLSTATE: 42P01

Drop an indexed column

DROP COLUMN drops a column and any indexes on the column being dropped.

> CREATE INDEX start_end_idx ON rides(start_time, end_time);
> WITH x AS (SHOW INDEXES FROM rides) SELECT * FROM x WHERE index_name='start_end_idx';
  table_name |  index_name   | non_unique | seq_in_index | column_name | direction | storing | implicit | visible
-------------+---------------+------------+--------------+-------------+-----------+---------+----------+----------
  rides      | start_end_idx |     t      |            1 | start_time  | ASC       |    f    |    f     |    t
  rides      | start_end_idx |     t      |            2 | end_time    | ASC       |    f    |    f     |    t
  rides      | start_end_idx |     t      |            3 | city        | ASC       |    f    |    t     |    t
  rides      | start_end_idx |     t      |            4 | id          | ASC       |    f    |    t     |    t
(4 rows)
> ALTER TABLE rides DROP COLUMN start_time;
NOTICE: the data for dropped indexes is reclaimed asynchronously
HINT: The reclamation delay can be customized in the zone configuration for the table.
ALTER TABLE
> WITH x AS (SHOW INDEXES FROM rides) SELECT * FROM x WHERE index_name='start_end_idx';
  table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit
-------------+------------+------------+--------------+-------------+-----------+---------+-----------
(0 rows)
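
As the hint in the preceding output notes, the reclamation delay for the dropped index data is governed by the table's zone configuration. A sketch of shortening that delay (the gc.ttlseconds value is illustrative):

ALTER TABLE rides CONFIGURE ZONE USING gc.ttlseconds = 600;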

Drop constraints

Setup

To follow along, run cockroach demo to start a temporary, in-memory cluster with the movr sample dataset preloaded:

$ cockroach demo

Drop a foreign key constraint

> SHOW CONSTRAINTS FROM vehicles;
  table_name |  constraint_name  | constraint_type |                         details                         | validated
-------------+-------------------+-----------------+---------------------------------------------------------+------------
  vehicles   | fk_city_ref_users | FOREIGN KEY     | FOREIGN KEY (city, owner_id) REFERENCES users(city, id) |   true
  vehicles   | vehicles_pkey     | PRIMARY KEY     | PRIMARY KEY (city ASC, id ASC)                          |   true
(2 rows)
> ALTER TABLE vehicles DROP CONSTRAINT fk_city_ref_users;
> SHOW CONSTRAINTS FROM vehicles;
  table_name | constraint_name | constraint_type |            details             | validated
-------------+-----------------+-----------------+--------------------------------+------------
  vehicles   | vehicles_pkey   | PRIMARY KEY     | PRIMARY KEY (city ASC, id ASC) |   true
(1 row)

Configure audit logging

Turn on audit logging

Let's say you have a customers table that contains personally identifiable information (PII). To turn on audit logs for that table, run the following command:

ALTER TABLE customers EXPERIMENTAL_AUDIT SET READ WRITE;

Now, every access of customer data is logged to the SENSITIVE_ACCESS channel in a sensitive_table_access event that looks like the following:

I210323 18:50:10.951550 1182 8@util/log/event_log.go:32 ⋮ [n1,client=‹[::1]:49851›,hostnossl,user=root] 4 ={"Timestamp":1616525410949087000,"EventType":"sensitive_table_access","Statement":"‹SELECT * FROM \"\".\"\".customers›","User":"‹root›","DescriptorID":52,"ApplicationName":"‹$ cockroach sql›","ExecMode":"exec","NumRows":2,"Age":2.514,"FullTableScan":true,"TxnCounter":38,"TableName":"‹defaultdb.public.customers›","AccessMode":"r"}
Note:

The preceding example shows the default crdb-v2 log format. This can be changed to a different format (e.g., JSON). For details, see Configure Logs.

Tip:

For descriptions of all SQL audit event types and their fields, see Notable Event Types.

To turn on auditing for more than one table, issue a separate ALTER statement for each table.
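
For example, to audit both the customers table and a second table (orders is illustrative), issue one statement per table:

ALTER TABLE customers EXPERIMENTAL_AUDIT SET READ WRITE;
ALTER TABLE orders EXPERIMENTAL_AUDIT SET READ WRITE;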

Tip:

For a more detailed example, see SQL Audit Logging.

Turn off audit logging

To turn off logging, issue the following command:

ALTER TABLE customers EXPERIMENTAL_AUDIT SET OFF;

Change table owner

Change a table's owner

Suppose that the current owner of the promo_codes table is root and you want to change the owner to a new user named max.

> ALTER TABLE promo_codes OWNER TO max;

To verify that the owner is now max, query the pg_catalog.pg_tables table:

> SELECT tableowner FROM pg_catalog.pg_tables WHERE tablename = 'promo_codes';
  tableowner
--------------
  max
(1 row)
Note:

If the user running the command is not an admin user, they must own the table and be a member of the new owning role. In addition, the new owner role must have the CREATE privilege on the schema to which the table belongs.
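
For example, a sketch of satisfying these requirements ahead of the transfer; the non-admin user alice is illustrative, and an admin would typically run these statements:

-- Create the new owner role and give it CREATE on the table's schema.
CREATE USER IF NOT EXISTS max;
GRANT CREATE ON SCHEMA public TO max;
-- A non-admin table owner (here, alice) must also be a member of the new owning role.
GRANT max TO alice;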

Define partitions

Define a list partition on a table

Suppose we have a table called students_by_list, and the primary key of the table is defined as (country, id). We can define partitions on the table by list:

> ALTER TABLE students_by_list PARTITION BY LIST (country) (
    PARTITION north_america VALUES IN ('CA','US'),
    PARTITION australia VALUES IN ('AU','NZ'),
    PARTITION DEFAULT VALUES IN (default)
  );

Define a range partition on a table

Suppose we have a table called students_by_range, and the primary key of the table is defined as (expected_graduation_date, id). We can define partitions on the table by range:

> ALTER TABLE students_by_range PARTITION BY RANGE (expected_graduation_date) (
    PARTITION graduated VALUES FROM (MINVALUE) TO ('2017-08-15'),
    PARTITION current VALUES FROM ('2017-08-15') TO (MAXVALUE)
  );

Define subpartitions on a table

Suppose we have a table named students, and the primary key is defined as (country, expected_graduation_date, id). We can define partitions and subpartitions on the table:

> ALTER TABLE students PARTITION BY LIST (country) (
    PARTITION australia VALUES IN ('AU','NZ') PARTITION BY RANGE (expected_graduation_date) (
      PARTITION graduated_au VALUES FROM (MINVALUE) TO ('2017-08-15'),
      PARTITION current_au VALUES FROM ('2017-08-15') TO (MAXVALUE)
    ),
    PARTITION north_america VALUES IN ('US','CA') PARTITION BY RANGE (expected_graduation_date) (
      PARTITION graduated_us VALUES FROM (MINVALUE) TO ('2017-08-15'),
      PARTITION current_us VALUES FROM ('2017-08-15') TO (MAXVALUE)
    )
  );

Repartition a table

> ALTER TABLE students_by_range PARTITION BY RANGE (expected_graduation_date) (
    PARTITION graduated VALUES FROM (MINVALUE) TO ('2018-08-15'),
    PARTITION current VALUES FROM ('2018-08-15') TO (MAXVALUE)
  );

Unpartition a table

> ALTER TABLE students PARTITION BY NOTHING;

Rename columns

Rename a column

> CREATE TABLE users (
    id INT PRIMARY KEY,
    first_name STRING,
    family_name STRING
  );
> ALTER TABLE users RENAME COLUMN family_name TO last_name;

> SHOW CREATE TABLE users;
  table_name |                 create_statement
+------------+--------------------------------------------------+
  users      | CREATE TABLE users (
             |     id INT8 NOT NULL,
             |     first_name STRING NULL,
             |     last_name STRING NULL,
             |     CONSTRAINT "primary" PRIMARY KEY (id ASC),
             |     FAMILY "primary" (id, first_name, last_name)
             | )
(1 row)

Add and rename columns atomically

Some subcommands can be used in combination in a single ALTER TABLE statement. For example, let's say you create a users table with 2 columns, an id column for the primary key and a name column for each user's last name:

> CREATE TABLE users (
    id INT PRIMARY KEY,
    name STRING
  );

Then you decide you want distinct columns for each user's first name, last name, and full name, so you execute a single ALTER TABLE statement renaming name to last_name, adding first_name, and adding a computed column called name that concatenates first_name and last_name:

> ALTER TABLE users
    RENAME COLUMN name TO last_name,
    ADD COLUMN first_name STRING,
    ADD COLUMN name STRING
      AS (CONCAT(first_name, ' ', last_name)) STORED;
> SHOW CREATE TABLE users;
  table_name |                           create_statement
+------------+----------------------------------------------------------------------+
  users      | CREATE TABLE users (
             |     id INT8 NOT NULL,
             |     last_name STRING NULL,
             |     first_name STRING NULL,
             |     name STRING NULL AS (concat(first_name, ' ', last_name)) STORED,
             |     CONSTRAINT "primary" PRIMARY KEY (id ASC),
             |     FAMILY "primary" (id, last_name, first_name, name)
             | )
(1 row)

Rename constraints

Rename a constraint

> CREATE TABLE logon (
    login_id INT PRIMARY KEY,
    customer_id INT NOT NULL,
    sales_id INT,
    UNIQUE (customer_id, sales_id)
  );
> SHOW CONSTRAINTS FROM logon;
  table_name |        constraint_name         | constraint_type |                details                 | validated
+------------+--------------------------------+-----------------+----------------------------------------+-----------+
  logon      | logon_customer_id_sales_id_key | UNIQUE          | UNIQUE (customer_id ASC, sales_id ASC) |   true
  logon      | logon_pkey                     | PRIMARY KEY     | PRIMARY KEY (login_id ASC)             |   true
(2 rows)
> ALTER TABLE logon RENAME CONSTRAINT logon_customer_id_sales_id_key TO unique_customer_id_sales_id;
> SHOW CONSTRAINTS FROM logon;
  table_name |       constraint_name       | constraint_type |                details                 | validated
+------------+-----------------------------+-----------------+----------------------------------------+-----------+
  logon      | logon_pkey                  | PRIMARY KEY     | PRIMARY KEY (login_id ASC)             |   true
  logon      | unique_customer_id_sales_id | UNIQUE          | UNIQUE (customer_id ASC, sales_id ASC) |   true
(2 rows)

Rename tables

Setup

To follow along, run cockroach demo to start a temporary, in-memory cluster with the movr sample dataset preloaded:

$ cockroach demo

Rename a table

> SHOW TABLES;
  schema_name |         table_name         | type  | estimated_row_count
--------------+----------------------------+-------+----------------------
  public      | promo_codes                | table |                1000
  public      | rides                      | table |                 500
  public      | user_promo_codes           | table |                   0
  public      | users                      | table |                  50
  public      | vehicle_location_histories | table |                1000
  public      | vehicles                   | table |                  15
(6 rows)
> ALTER TABLE users RENAME TO riders;
> SHOW TABLES;
  schema_name |         table_name         | type  | estimated_row_count
--------------+----------------------------+-------+----------------------
  public      | promo_codes                | table |                1000
  public      | rides                      | table |                 500
  public      | user_promo_codes           | table |                   0
  public      | riders                     | table |                  50
  public      | vehicle_location_histories | table |                1000
  public      | vehicles                   | table |                  15
(6 rows)

To avoid an error in case the table does not exist, you can include IF EXISTS:

> ALTER TABLE IF EXISTS customers RENAME TO clients;

Set and reset storage parameters

Exclude a table's data from backups

In some situations, you may want to exclude a table's row data from a backup. For example, you have a table that contains high-churn data that you would like to garbage collect more quickly than the incremental backup schedule for the database or cluster holding the table. You can use the exclude_data_from_backup = true parameter with a CREATE TABLE or ALTER TABLE statement to mark a table's row data for exclusion from a backup.

For more detail and an example through the backup and restore process using this parameter, see Take Full and Incremental Backups.

To set the exclude_data_from_backup parameter for a table, run the following:

ALTER TABLE movr.user_promo_codes SET (exclude_data_from_backup = true);

The CREATE statement for this table will now show the parameter set:

SHOW CREATE user_promo_codes;
table_name         |                                                create_statement
-------------------+------------------------------------------------------------------------------------------------------------------
user_promo_codes   | CREATE TABLE public.user_promo_codes (
                   |     city VARCHAR NOT NULL,
                   |     user_id UUID NOT NULL,
                   |     code VARCHAR NOT NULL,
                   |     "timestamp" TIMESTAMP NULL,
                   |     usage_count INT8 NULL,
                   |     CONSTRAINT user_promo_codes_pkey PRIMARY KEY (city ASC, user_id ASC, code ASC),
                   |     CONSTRAINT user_promo_codes_city_user_id_fkey FOREIGN KEY (city, user_id) REFERENCES public.users(city, id)
                   | ) WITH (exclude_data_from_backup = true)
(1 row)

Backups will no longer include the data within the user_promo_codes table. The table will still be present in the backup, but it will be empty.

To remove this parameter from a table, run:

ALTER TABLE movr.user_promo_codes SET (exclude_data_from_backup = false);

This will ensure that the table's data is stored in subsequent backups that you take.
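
To confirm the parameter has been removed, inspect the table definition again; the WITH (exclude_data_from_backup = ...) clause should no longer appear:

SHOW CREATE TABLE movr.user_promo_codes;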

Reset a storage parameter

The following ttl_test table has three TTL-related storage parameters active on the table:

SHOW CREATE TABLE ttl_test;
  table_name |                                                                                         create_statement
-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  ttl_test   | CREATE TABLE public.ttl_test (
             |     id UUID NOT NULL DEFAULT gen_random_uuid(),
             |     description STRING NULL,
             |     inserted_at TIMESTAMP NULL DEFAULT current_timestamp():::TIMESTAMP,
             |     crdb_internal_expiration TIMESTAMPTZ NOT VISIBLE NOT NULL DEFAULT current_timestamp():::TIMESTAMPTZ + '3 mons':::INTERVAL ON UPDATE current_timestamp():::TIMESTAMPTZ + '3 mons':::INTERVAL,
             |     CONSTRAINT ttl_test_pkey PRIMARY KEY (id ASC)
             | ) WITH (ttl = 'on', ttl_expire_after = '3 mons':::INTERVAL, ttl_job_cron = '@hourly')
(1 row)

To remove these settings, run the following command:

ALTER TABLE ttl_test RESET (ttl);
SHOW CREATE TABLE ttl_test;
  table_name |                            create_statement
-------------+--------------------------------------------------------------------------
  ttl_test   | CREATE TABLE public.ttl_test (
             |     id UUID NOT NULL DEFAULT gen_random_uuid(),
             |     description STRING NULL,
             |     inserted_at TIMESTAMP NULL DEFAULT current_timestamp():::TIMESTAMP,
             |     CONSTRAINT ttl_test_pkey PRIMARY KEY (id ASC)
             | )
(1 row)

Set localities

Note:

RESTORE on REGIONAL BY TABLE, REGIONAL BY ROW, and GLOBAL tables is supported with some limitations — see Restoring to multi-region databases for more detail.

Set the table locality to REGIONAL BY TABLE

To optimize read and write access to the data in a table from the primary region, use the following statement, which sets the table's home region to the primary region:

ALTER TABLE {table} SET LOCALITY REGIONAL BY TABLE IN PRIMARY REGION;

To optimize read and write access to the data in a table from the us-east-1 region, use the following statement, which sets the table's home region to us-east-1:

ALTER TABLE {table} SET LOCALITY REGIONAL BY TABLE IN "us-east-1";
Note:

If no region is supplied, REGIONAL BY TABLE defaults the table's home region to the primary region.
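
For example, the following statement supplies no region and therefore homes the table in the primary region:

ALTER TABLE {table} SET LOCALITY REGIONAL BY TABLE;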

For more information about how this table locality works, see Regional tables.

Set the table locality to REGIONAL BY ROW

Note:

Before setting the locality to REGIONAL BY ROW on a table targeted by a changefeed, read the considerations in Changefeeds on regional by row tables.

To make an existing table a regional by row table, use the following statement:

ALTER TABLE {table} SET LOCALITY REGIONAL BY ROW;

Every row in a regional by row table has a column of type crdb_internal_region that represents the row's home region. By default, this column is called crdb_region and is hidden. To see a row's home region, issue a statement like the following:

SELECT crdb_region, id FROM {table};

To update an existing row's home region, use an UPDATE statement like the following:

UPDATE {table} SET crdb_region = 'eu-west' WHERE id IN (...);

To add a new row to a regional by row table, you must choose one of the following options.

  • Let CockroachDB set the row's home region automatically. It will use the region of the gateway node from which the row is inserted.

  • Set the home region explicitly using an INSERT statement like the following:

    INSERT INTO {table} (crdb_region, ...) VALUES ('us-east-1', ...);
    

This is necessary because every row in a regional by row table must have a home region.

If you do not set a home region for a row in a regional by row table, it defaults to the value returned by the built-in function gateway_region(). If the value returned by gateway_region() does not belong to the multi-region database the table is a part of, the home region defaults to the database's primary region.

For more information about how this table locality works, see Regional by row tables.

Note that you can use a name other than crdb_region for the hidden column by using the following statements:

ALTER TABLE foo SET LOCALITY REGIONAL BY ROW AS bar;
SELECT bar, id FROM foo;
INSERT INTO foo (bar, ...) VALUES ('us-east-1', ...);

In fact, you can specify any column definition you like for the REGIONAL BY ROW AS column, as long as the column is of type crdb_internal_region and is not nullable. For example, you could modify the movr schema to have a region column generated as:

ALTER TABLE rides ADD COLUMN region crdb_internal_region AS (
  CASE
    WHEN city IN ('new york', 'boston', 'washington dc', 'chicago', 'detroit', 'minneapolis') THEN 'us-east-1'
    WHEN city IN ('san francisco', 'seattle', 'los angeles') THEN 'us-west-1'
    WHEN city IN ('amsterdam', 'paris', 'rome') THEN 'eu-west-1'
  END
) STORED;

Note that the SQL engine will avoid sending requests to nodes in other regions when it can instead read a value from a unique column that is stored locally. This capability is known as locality optimized search.

Turn on auto-rehoming for REGIONAL BY ROW tables

Note:

This feature is in preview and subject to change. To share feedback or issues, contact Support.

This feature is disabled by default.

When auto-rehoming is enabled, the home regions of rows in REGIONAL BY ROW tables are automatically set to the gateway region of any UPDATE or UPSERT statements that write to those rows. This functionality is provided by adding an ON UPDATE expression to the crdb_region column of newly created regional by row tables.

To enable auto-rehoming using the session setting, issue the following statement:

SET enable_auto_rehoming = on;

Once enabled, the auto-rehoming behavior described here has the following limitations:

  • It will only apply to newly created REGIONAL BY ROW tables, using an ON UPDATE expression that is added to the crdb_region column. Existing REGIONAL BY ROW tables will not be auto-rehomed.
  • The crdb_region column from a REGIONAL BY ROW table cannot be referenced as a foreign key from another table.

To enable auto-rehoming for an existing REGIONAL BY ROW table, manually update it using an ALTER TABLE ... ALTER COLUMN statement with an ON UPDATE expression:

ALTER TABLE {table} ALTER COLUMN crdb_region SET ON UPDATE rehome_row()::db.public.crdb_internal_region;
Example
  1. Follow steps 1 and 2 from the Low Latency Reads and Writes in a Multi-Region Cluster tutorial. This will involve starting a cockroach demo cluster in a terminal window (call it terminal 1).

  2. From the SQL client running in terminal 1, set the setting that enables auto-rehoming. You must issue this setting before creating the REGIONAL BY ROW tables that you want auto-rehomed.

    SET enable_auto_rehoming = on;
    
  3. In a second terminal window (call it terminal 2), finish the tutorial starting from step 3 onward to finish loading the cluster with data and applying the multi-region SQL configuration.

  4. Switch back to terminal 1, and check the gateway region of the node you are currently connected to:

    SELECT gateway_region();
    
      gateway_region
    ------------------
      us-east1
    (1 row)
    
  5. Open another terminal (call it terminal 3), and use cockroach sql to connect to a node in a different region in the demo cluster:

    cockroach sql --insecure --host localhost --port 26262
    
    # Welcome to the CockroachDB SQL shell.
    # All statements must be terminated by a semicolon.
    # To exit, type: \q.
    #
    # Server version: CockroachDB CCL v24.2.5 (x86_64-apple-darwin19, built 2024-11-14 00:00:00) (same version as client)
    # Cluster ID: 87b22d9b-b9ce-4f3a-8635-acad89c5981f
    # Organization: Cockroach Demo
    #
    # Enter \? for a brief introduction.
    #
    root@localhost:26262/defaultdb>
    
  6. From the SQL shell prompt that appears in terminal 3, switch to the movr database, and verify that the current gateway node is in a different region (us-west1):

    USE movr;
    
    SELECT gateway_region();
    
      gateway_region
    ------------------
      us-west1
    (1 row)
    
  7. Still in terminal 3, update a row in the vehicles table that is homed in the us-east1 region. After the update, it should be homed in the current gateway node's home region, us-west1.

    1. First, pick a row at random from the us-east1 region:

         SELECT * FROM vehicles WHERE region = 'us-east1' LIMIT 1;
      
                            id                  |  city  | type |               owner_id               |       creation_time        |  status   |       current_location       |                    ext                     |  region
         ---------------------------------------+--------+------+--------------------------------------+----------------------------+-----------+------------------------------+--------------------------------------------+-----------
           3e127e68-a3f9-487d-aa56-bf705beca05a | boston | bike | 2f057d6b-ba8d-4f56-8fd9-894b7c082713 | 2021-10-28 16:19:22.309834 | available | 039 Stacey Plain             | {"brand": "FujiCervelo", "color": "green"} | us-east1
                                                |        |      |                                      |                            |           | Lake Brittanymouth, LA 09374 |                                            |
         (1 row)
      
    2. Next, update that row's city and current_location to addresses in Seattle, WA (USA). Note that this UUID is different than what you will see in your cluster, so you'll have to update the query accordingly.

         UPDATE vehicles SET (city, current_location) = ('seattle', '2604 1st Ave, Seattle, WA 98121-1305') WHERE id = '3e127e68-a3f9-487d-aa56-bf705beca05a';
      
         UPDATE 1
      
    3. Finally, verify that the row has been auto-rehomed in this gateway's region by running the following statement and checking that the region column is now us-west1:

         SELECT * FROM vehicles WHERE id = '3e127e68-a3f9-487d-aa56-bf705beca05a';
      
                            id                  |  city   | type |               owner_id               |       creation_time        |  status   |           current_location           |                    ext                     |  region
         ---------------------------------------+---------+------+--------------------------------------+----------------------------+-----------+--------------------------------------+--------------------------------------------+-----------
           3e127e68-a3f9-487d-aa56-bf705beca05a | seattle | bike | 2f057d6b-ba8d-4f56-8fd9-894b7c082713 | 2021-10-28 16:19:22.309834 | available | 2604 1st Ave, Seattle, WA 98121-1305 | {"brand": "FujiCervelo", "color": "green"} | us-west1
         (1 row)
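
At any point in this tutorial, you can confirm that auto-rehoming is enabled for the current SQL session. This is a quick sanity check, assuming you run it in the same session (terminal 1) where you issued the SET statement; it should return on:

SHOW enable_auto_rehoming;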
      

Set the table locality to GLOBAL

To optimize read access to the data in a table from any region (that is, globally), use the following statement:

ALTER TABLE {table} SET LOCALITY GLOBAL;

For more information about how this table locality works, see Global tables. For the full statement syntax, see ALTER TABLE ... SET LOCALITY.
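
For example, in the movr database used elsewhere on this page, a small, read-mostly reference table such as promo_codes is a typical candidate for this locality. This is a minimal sketch; it assumes the database has already been configured as a multi-region database with a primary region:

ALTER TABLE promo_codes SET LOCALITY GLOBAL;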

Set table schema

Setup

To follow along, run cockroach demo to start a temporary, in-memory cluster with the movr sample dataset preloaded:

$ cockroach demo

Change the schema of a table

Suppose you want to add the promo_codes table to a new schema called cockroach_labs.

By default, unqualified tables created in the database belong to the public schema:

> SHOW TABLES;
  schema_name |         table_name         | type  | estimated_row_count
--------------+----------------------------+-------+----------------------
  public      | promo_codes                | table |                1000
  public      | rides                      | table |                 500
  public      | user_promo_codes           | table |                   0
  public      | users                      | table |                  50
  public      | vehicle_location_histories | table |                1000
  public      | vehicles                   | table |                  15
(6 rows)

If the new schema does not already exist, create it:

> CREATE SCHEMA IF NOT EXISTS cockroach_labs;

Then, change the table's schema:

> ALTER TABLE promo_codes SET SCHEMA cockroach_labs;
> SHOW TABLES;
   schema_name   |         table_name         | type  | estimated_row_count
-----------------+----------------------------+-------+----------------------
  cockroach_labs | promo_codes                | table |                1000
  public         | rides                      | table |                 500
  public         | user_promo_codes           | table |                   0
  public         | users                      | table |                  50
  public         | vehicle_location_histories | table |                1000
  public         | vehicles                   | table |                  15
(6 rows)
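
After the schema change, an unqualified reference to promo_codes no longer resolves through the public schema. Qualify the table name with its new schema (or add cockroach_labs to your search_path). A minimal check, assuming the statements above have been run:

> SELECT * FROM cockroach_labs.promo_codes LIMIT 1;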

Split and unsplit tables

Setup

The following examples use MovR, a fictional vehicle-sharing application, to demonstrate CockroachDB SQL statements. For more information about the MovR example application and dataset, see MovR: A Global Vehicle-sharing App.

To follow along, run cockroach demo with the --geo-partitioned-replicas flag. This command opens an interactive SQL shell to a temporary, 9-node in-memory cluster with the movr database.

$ cockroach demo --geo-partitioned-replicas

Split a table

> SHOW RANGES FROM TABLE users;
                                     start_key                                     |                                     end_key                                      | range_id | range_size_mb | lease_holder |  lease_holder_locality   | replicas |                             replica_localities
-----------------------------------------------------------------------------------+----------------------------------------------------------------------------------+----------+---------------+--------------+--------------------------+----------+-----------------------------------------------------------------------------
  NULL                                                                             | /"amsterdam"/"\xb333333@\x00\x80\x00\x00\x00\x00\x00\x00#"                       |       37 |      0.000116 |            1 | region=us-east1,az=b     | {1,6,8}  | {"region=us-east1,az=b","region=us-west1,az=c","region=europe-west1,az=c"}
  /"amsterdam"/"\xb333333@\x00\x80\x00\x00\x00\x00\x00\x00#"                       | /"boston"/"333333D\x00\x80\x00\x00\x00\x00\x00\x00\n"                            |       46 |      0.000886 |            8 | region=europe-west1,az=c | {1,6,8}  | {"region=us-east1,az=b","region=us-west1,az=c","region=europe-west1,az=c"}
  /"boston"/"333333D\x00\x80\x00\x00\x00\x00\x00\x00\n"                            | /"los angeles"/"\x99\x99\x99\x99\x99\x99H\x00\x80\x00\x00\x00\x00\x00\x00\x1e"   |       45 |       0.00046 |            8 | region=europe-west1,az=c | {2,4,8}  | {"region=us-east1,az=c","region=us-west1,az=a","region=europe-west1,az=c"}
  /"los angeles"/"\x99\x99\x99\x99\x99\x99H\x00\x80\x00\x00\x00\x00\x00\x00\x1e"   | /"new york"/"\x19\x99\x99\x99\x99\x99J\x00\x80\x00\x00\x00\x00\x00\x00\x05"      |       44 |      0.001015 |            8 | region=europe-west1,az=c | {1,6,8}  | {"region=us-east1,az=b","region=us-west1,az=c","region=europe-west1,az=c"}
  /"new york"/"\x19\x99\x99\x99\x99\x99J\x00\x80\x00\x00\x00\x00\x00\x00\x05"      | /"paris"/"\xcc\xcc\xcc\xcc\xcc\xcc@\x00\x80\x00\x00\x00\x00\x00\x00("            |       77 |      0.000214 |            8 | region=europe-west1,az=c | {1,6,8}  | {"region=us-east1,az=b","region=us-west1,az=c","region=europe-west1,az=c"}
  /"paris"/"\xcc\xcc\xcc\xcc\xcc\xcc@\x00\x80\x00\x00\x00\x00\x00\x00("            | /"san francisco"/"\x80\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x19" |       43 |      0.001299 |            8 | region=europe-west1,az=c | {3,4,8}  | {"region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=c"}
  /"san francisco"/"\x80\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x19" | /"seattle"/"ffffffH\x00\x80\x00\x00\x00\x00\x00\x00\x14"                         |       61 |      0.000669 |            3 | region=us-east1,az=d     | {3,4,8}  | {"region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=c"}
  /"seattle"/"ffffffH\x00\x80\x00\x00\x00\x00\x00\x00\x14"                         | /"washington dc"/"L\xcc\xcc\xcc\xcc\xccL\x00\x80\x00\x00\x00\x00\x00\x00\x0f"    |       57 |      0.000671 |            3 | region=us-east1,az=d     | {3,4,7}  | {"region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=b"}
  /"washington dc"/"L\xcc\xcc\xcc\xcc\xccL\x00\x80\x00\x00\x00\x00\x00\x00\x0f"    | NULL                                                                             |       87 |      0.000231 |            4 | region=us-west1,az=a     | {3,4,7}  | {"region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=b"}
(9 rows)
> ALTER TABLE users SPLIT AT VALUES ('chicago'), ('new york'), ('seattle');
              key              |   pretty    |        split_enforced_until
-------------------------------+-------------+--------------------------------------
  \275\211\022chicago\000\001  | /"chicago"  | 2262-04-11 23:47:16.854776+00:00:00
  \275\211\022new york\000\001 | /"new york" | 2262-04-11 23:47:16.854776+00:00:00
  \275\211\022seattle\000\001  | /"seattle"  | 2262-04-11 23:47:16.854776+00:00:00
(3 rows)
> SHOW RANGES FROM TABLE users;
                                     start_key                                     |                                     end_key                                      | range_id | range_size_mb | lease_holder |  lease_holder_locality   | replicas |                                 replica_localities
-----------------------------------------------------------------------------------+----------------------------------------------------------------------------------+----------+---------------+--------------+--------------------------+----------+-------------------------------------------------------------------------------------
  NULL                                                                             | /"amsterdam"/"\xb333333@\x00\x80\x00\x00\x00\x00\x00\x00#"                       |       37 |      0.000116 |            1 | region=us-east1,az=b     | {1,6,8}  | {"region=us-east1,az=b","region=us-west1,az=c","region=europe-west1,az=c"}
  /"amsterdam"/"\xb333333@\x00\x80\x00\x00\x00\x00\x00\x00#"                       | /"amsterdam"/PrefixEnd                                                           |       46 |      0.000446 |            8 | region=europe-west1,az=c | {7,8,9}  | {"region=europe-west1,az=b","region=europe-west1,az=c","region=europe-west1,az=d"}
  /"amsterdam"/PrefixEnd                                                           | /"boston"                                                                        |       70 |             0 |            8 | region=europe-west1,az=c | {3,6,8}  | {"region=us-east1,az=d","region=us-west1,az=c","region=europe-west1,az=c"}
  /"boston"                                                                        | /"boston"/"333333D\x00\x80\x00\x00\x00\x00\x00\x00\n"                            |       71 |       0.00044 |            1 | region=us-east1,az=b     | {1,6,8}  | {"region=us-east1,az=b","region=us-west1,az=c","region=europe-west1,az=c"}
  /"boston"/"333333D\x00\x80\x00\x00\x00\x00\x00\x00\n"                            | /"boston"/PrefixEnd                                                              |       45 |      0.000225 |            2 | region=us-east1,az=c     | {2,3,8}  | {"region=us-east1,az=c","region=us-east1,az=d","region=europe-west1,az=c"}
  /"boston"/PrefixEnd                                                              | /"chicago"                                                                       |       72 |             0 |            8 | region=europe-west1,az=c | {2,4,8}  | {"region=us-east1,az=c","region=us-west1,az=a","region=europe-west1,az=c"}
  /"chicago"                                                                       | /"los angeles"                                                                   |       74 |             0 |            8 | region=europe-west1,az=c | {2,4,8}  | {"region=us-east1,az=c","region=us-west1,az=a","region=europe-west1,az=c"}
  /"los angeles"                                                                   | /"los angeles"/"\x99\x99\x99\x99\x99\x99H\x00\x80\x00\x00\x00\x00\x00\x00\x1e"   |       73 |      0.000235 |            8 | region=europe-west1,az=c | {2,4,8}  | {"region=us-east1,az=c","region=us-west1,az=a","region=europe-west1,az=c"}
  /"los angeles"/"\x99\x99\x99\x99\x99\x99H\x00\x80\x00\x00\x00\x00\x00\x00\x1e"   | /"los angeles"/PrefixEnd                                                         |       44 |      0.000462 |            4 | region=us-west1,az=a     | {4,6,8}  | {"region=us-west1,az=a","region=us-west1,az=c","region=europe-west1,az=c"}
  /"los angeles"/PrefixEnd                                                         | /"new york"                                                                      |       68 |             0 |            8 | region=europe-west1,az=c | {2,6,8}  | {"region=us-east1,az=c","region=us-west1,az=c","region=europe-west1,az=c"}
  /"new york"                                                                      | /"new york"/"\x19\x99\x99\x99\x99\x99J\x00\x80\x00\x00\x00\x00\x00\x00\x05"      |       69 |      0.000553 |            8 | region=europe-west1,az=c | {1,3,8}  | {"region=us-east1,az=b","region=us-east1,az=d","region=europe-west1,az=c"}
  /"new york"/"\x19\x99\x99\x99\x99\x99J\x00\x80\x00\x00\x00\x00\x00\x00\x05"      | /"new york"/PrefixEnd                                                            |       77 |      0.000111 |            1 | region=us-east1,az=b     | {1,6,8}  | {"region=us-east1,az=b","region=us-west1,az=c","region=europe-west1,az=c"}
  /"new york"/PrefixEnd                                                            | /"paris"                                                                         |       62 |             0 |            8 | region=europe-west1,az=c | {3,4,8}  | {"region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=c"}
  /"paris"                                                                         | /"paris"/"\xcc\xcc\xcc\xcc\xcc\xcc@\x00\x80\x00\x00\x00\x00\x00\x00("            |       63 |      0.000103 |            8 | region=europe-west1,az=c | {7,8,9}  | {"region=europe-west1,az=b","region=europe-west1,az=c","region=europe-west1,az=d"}
  /"paris"/"\xcc\xcc\xcc\xcc\xcc\xcc@\x00\x80\x00\x00\x00\x00\x00\x00("            | /"paris"/PrefixEnd                                                               |       43 |      0.000525 |            8 | region=europe-west1,az=c | {7,8,9}  | {"region=europe-west1,az=b","region=europe-west1,az=c","region=europe-west1,az=d"}
  /"paris"/PrefixEnd                                                               | /"rome"                                                                          |       64 |             0 |            8 | region=europe-west1,az=c | {3,4,8}  | {"region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=c"}
  /"rome"                                                                          | /"rome"/PrefixEnd                                                                |       65 |      0.000539 |            8 | region=europe-west1,az=c | {7,8,9}  | {"region=europe-west1,az=b","region=europe-west1,az=c","region=europe-west1,az=d"}
  /"rome"/PrefixEnd                                                                | /"san francisco"                                                                 |       66 |             0 |            8 | region=europe-west1,az=c | {3,4,8}  | {"region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=c"}
  /"san francisco"                                                                 | /"san francisco"/"\x80\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x19" |       67 |      0.000235 |            4 | region=us-west1,az=a     | {4,5,8}  | {"region=us-west1,az=a","region=us-west1,az=b","region=europe-west1,az=c"}
  /"san francisco"/"\x80\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x19" | /"san francisco"/PrefixEnd                                                       |       61 |      0.000365 |            4 | region=us-west1,az=a     | {4,5,6}  | {"region=us-west1,az=a","region=us-west1,az=b","region=us-west1,az=c"}
  /"san francisco"/PrefixEnd                                                       | /"seattle"                                                                       |       88 |             0 |            3 | region=us-east1,az=d     | {3,4,8}  | {"region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=c"}
  /"seattle"                                                                       | /"seattle"/"ffffffH\x00\x80\x00\x00\x00\x00\x00\x00\x14"                         |       89 |      0.000304 |            3 | region=us-east1,az=d     | {3,4,8}  | {"region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=c"}
  /"seattle"/"ffffffH\x00\x80\x00\x00\x00\x00\x00\x00\x14"                         | /"seattle"/PrefixEnd                                                             |       57 |      0.000327 |            3 | region=us-east1,az=d     | {3,4,7}  | {"region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=b"}
  /"seattle"/PrefixEnd                                                             | /"washington dc"                                                                 |       90 |             0 |            3 | region=us-east1,az=d     | {3,4,7}  | {"region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=b"}
  /"washington dc"                                                                 | /"washington dc"/"L\xcc\xcc\xcc\xcc\xccL\x00\x80\x00\x00\x00\x00\x00\x00\x0f"    |       91 |      0.000344 |            3 | region=us-east1,az=d     | {1,2,3}  | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d"}
  /"washington dc"/"L\xcc\xcc\xcc\xcc\xccL\x00\x80\x00\x00\x00\x00\x00\x00\x0f"    | /"washington dc"/PrefixEnd                                                       |       87 |      0.000231 |            3 | region=us-east1,az=d     | {1,2,3}  | {"region=us-east1,az=b","region=us-east1,az=c","region=us-east1,az=d"}
  /"washington dc"/PrefixEnd                                                       | NULL                                                                             |      157 |             0 |            4 | region=us-west1,az=a     | {3,4,7}  | {"region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=b"}
(27 rows)

Split a table with a compound primary key

You may want to split a table with a compound primary key.

Suppose that you want MovR to offer ride-sharing services, in addition to vehicle-sharing services. Some users need to sign up to be drivers, so you need a drivers table to store driver information.

> CREATE TABLE drivers (
    id UUID DEFAULT gen_random_uuid(),
    city STRING,
    name STRING,
    dl STRING DEFAULT left(md5(random()::text),8) UNIQUE CHECK (LENGTH(dl) < 9),
    address STRING,
    CONSTRAINT "primary" PRIMARY KEY (city ASC, dl ASC)
);

The table's compound primary key is on the city and dl columns. Note that if you do not provide id or dl values, the table automatically generates them using the DEFAULT expressions in the column definitions.
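
If you want to see these defaults in action, you can insert a row without specifying id or dl and return the generated values. This is purely illustrative; the name and address below are made up, and the returned values will differ on every run:

> INSERT INTO drivers (city, name, address)
    VALUES ('seattle', 'Alex Driver', '101 5th Ave')
    RETURNING id, dl;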

Because this table has several columns in common with the users table, you can populate the table with values from the users table with an INSERT statement:

> INSERT INTO drivers (id, city, name, address)
    SELECT id, city, name, address FROM users;
> SHOW RANGES FROM TABLE drivers;
  start_key | end_key | range_id | range_size_mb | lease_holder |  lease_holder_locality   | replicas |                             replica_localities
------------+---------+----------+---------------+--------------+--------------------------+----------+-----------------------------------------------------------------------------
  NULL      | NULL    |      310 |      0.007218 |            7 | region=europe-west1,az=b | {1,4,7}  | {"region=us-east1,az=b","region=us-west1,az=a","region=europe-west1,az=b"}
(1 row)

Now you can split the table based on the compound primary key. Note that you do not have to specify values for every primary key column; a prefix of the primary key columns is sufficient.

> ALTER TABLE drivers SPLIT AT VALUES ('new york', '3'), ('new york', '7'), ('chicago', '3'), ('chicago', '7'), ('seattle', '3'), ('seattle', '7');
                     key                    |     pretty      |        split_enforced_until
--------------------------------------------+-----------------+--------------------------------------
  \303\211\022new york\000\001\0223\000\001 | /"new york"/"3" | 2262-04-11 23:47:16.854776+00:00:00
  \303\211\022new york\000\001\0227\000\001 | /"new york"/"7" | 2262-04-11 23:47:16.854776+00:00:00
  \303\211\022chicago\000\001\0223\000\001  | /"chicago"/"3"  | 2262-04-11 23:47:16.854776+00:00:00
  \303\211\022chicago\000\001\0227\000\001  | /"chicago"/"7"  | 2262-04-11 23:47:16.854776+00:00:00
  \303\211\022seattle\000\001\0223\000\001  | /"seattle"/"3"  | 2262-04-11 23:47:16.854776+00:00:00
  \303\211\022seattle\000\001\0227\000\001  | /"seattle"/"7"  | 2262-04-11 23:47:16.854776+00:00:00
(6 rows)
> SHOW RANGES FROM TABLE drivers;
     start_key    |     end_key     | range_id | range_size_mb | lease_holder |  lease_holder_locality   | replicas |                             replica_localities
------------------+-----------------+----------+---------------+--------------+--------------------------+----------+-----------------------------------------------------------------------------
  NULL            | /"chicago"/"3"  |      310 |      0.001117 |            7 | region=europe-west1,az=b | {3,4,7}  | {"region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=b"}
  /"chicago"/"3"  | /"chicago"/"7"  |      314 |             0 |            7 | region=europe-west1,az=b | {3,4,7}  | {"region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=b"}
  /"chicago"/"7"  | /"new york"/"3" |      315 |      0.000933 |            7 | region=europe-west1,az=b | {3,4,7}  | {"region=us-east1,az=d","region=us-west1,az=a","region=europe-west1,az=b"}
  /"new york"/"3" | /"new york"/"7" |      311 |             0 |            7 | region=europe-west1,az=b | {1,4,7}  | {"region=us-east1,az=b","region=us-west1,az=a","region=europe-west1,az=b"}
  /"new york"/"7" | /"seattle"/"3"  |      312 |      0.001905 |            7 | region=europe-west1,az=b | {1,4,7}  | {"region=us-east1,az=b","region=us-west1,az=a","region=europe-west1,az=b"}
  /"seattle"/"3"  | /"seattle"/"7"  |      316 |      0.000193 |            7 | region=europe-west1,az=b | {1,6,7}  | {"region=us-east1,az=b","region=us-west1,az=c","region=europe-west1,az=b"}
  /"seattle"/"7"  | NULL            |      317 |       0.00307 |            7 | region=europe-west1,az=b | {1,4,7}  | {"region=us-east1,az=b","region=us-west1,az=a","region=europe-west1,az=b"}
(7 rows)
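
Because a split point only needs to be a prefix of the primary key columns, you can also split on city alone. This is a minimal sketch with an arbitrarily chosen city; the resulting range boundaries in your cluster will differ:

> ALTER TABLE drivers SPLIT AT VALUES ('los angeles');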

Set the expiration on a split enforcement

You can specify the time at which a split enforcement expires by adding a WITH EXPIRATION clause to your SPLIT AT statement. Supported expiration values include DECIMAL, INTERVAL, TIMESTAMP, and TIMESTAMPTZ.

> ALTER TABLE vehicles SPLIT AT VALUES ('chicago'), ('new york'), ('seattle') WITH EXPIRATION '2022-01-10 23:30:00+00:00';
              key              |   pretty    |     split_enforced_until
-------------------------------+-------------+-------------------------------
  \276\211\022chicago\000\001  | /"chicago"  | 2022-01-10 23:30:00+00:00:00
  \276\211\022new york\000\001 | /"new york" | 2022-01-10 23:30:00+00:00:00
  \276\211\022seattle\000\001  | /"seattle"  | 2022-01-10 23:30:00+00:00:00
(3 rows)

You can see each split's expiration date in the split_enforced_until column of the output. The crdb_internal.ranges table also contains this information for every range in your CockroachDB cluster:

> SELECT range_id, start_pretty, end_pretty, split_enforced_until FROM crdb_internal.ranges WHERE table_name='vehicles';
  range_id |                                       start_pretty                                        |                                        end_pretty                                         |        split_enforced_until
-----------+-------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------+--------------------------------------
        38 | /Table/54                                                                                 | /Table/54/1/"amsterdam"                                                                   | NULL
        55 | /Table/54/1/"amsterdam"                                                                   | /Table/54/1/"amsterdam"/PrefixEnd                                                         | NULL
       109 | /Table/54/1/"amsterdam"/PrefixEnd                                                         | /Table/54/1/"boston"                                                                      | NULL
       114 | /Table/54/1/"boston"                                                                      | /Table/54/1/"boston"/"\"\"\"\"\"\"B\x00\x80\x00\x00\x00\x00\x00\x00\x02"                  | NULL
        50 | /Table/54/1/"boston"/"\"\"\"\"\"\"B\x00\x80\x00\x00\x00\x00\x00\x00\x02"                  | /Table/54/1/"boston"/"333333D\x00\x80\x00\x00\x00\x00\x00\x00\x03"                        | 2262-04-11 23:47:16.854776+00:00:00
        49 | /Table/54/1/"boston"/"333333D\x00\x80\x00\x00\x00\x00\x00\x00\x03"                        | /Table/54/1/"boston"/PrefixEnd                                                            | 2262-04-11 23:47:16.854776+00:00:00
       129 | /Table/54/1/"boston"/PrefixEnd                                                            | /Table/54/1/"chicago"                                                                     | NULL
       241 | /Table/54/1/"chicago"                                                                     | /Table/54/1/"los angeles"                                                                 | 2022-01-10 23:30:00+00:00:00
       130 | /Table/54/1/"los angeles"                                                                 | /Table/54/1/"los angeles"/PrefixEnd                                                       | NULL
       131 | /Table/54/1/"los angeles"/PrefixEnd                                                       | /Table/54/1/"new york"                                                                    | NULL
       132 | /Table/54/1/"new york"                                                                    | /Table/54/1/"new york"/"\x11\x11\x11\x11\x11\x11A\x00\x80\x00\x00\x00\x00\x00\x00\x01"    | 2022-01-10 23:30:00+00:00:00
        48 | /Table/54/1/"new york"/"\x11\x11\x11\x11\x11\x11A\x00\x80\x00\x00\x00\x00\x00\x00\x01"    | /Table/54/1/"new york"/PrefixEnd                                                          | 2262-04-11 23:47:16.854776+00:00:00
...
(46 rows)
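
The expiration can also be specified relative to the current time by passing an INTERVAL value. The following is a minimal sketch, using an arbitrary city and a 10-minute window:

> ALTER TABLE vehicles SPLIT AT VALUES ('los angeles') WITH EXPIRATION '10 minutes'::INTERVAL;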

Unsplit a table

Create a drivers table and split the table based on the compound primary key as described in Split a table with a compound primary key.

To remove the split enforcements, run the following:

> ALTER TABLE drivers UNSPLIT AT VALUES ('new york', '3'), ('new york', '7'), ('chicago', '3'), ('chicago', '7'), ('seattle', '3'), ('seattle', '7');
                     key                    |           pretty
--------------------------------------------+-----------------------------
  \xc3\x89\x12new york\x00\x01\x123\x00\x01 | /Table/59/1/"new york"/"3"
  \xc3\x89\x12new york\x00\x01\x127\x00\x01 | /Table/59/1/"new york"/"7"
  \xc3\x89\x12chicago\x00\x01\x123\x00\x01  | /Table/59/1/"chicago"/"3"
  \xc3\x89\x12chicago\x00\x01\x127\x00\x01  | /Table/59/1/"chicago"/"7"
  \xc3\x89\x12seattle\x00\x01\x123\x00\x01  | /Table/59/1/"seattle"/"3"
  \xc3\x89\x12seattle\x00\x01\x127\x00\x01  | /Table/59/1/"seattle"/"7"
(6 rows)

To verify that the split enforcements were removed, query the crdb_internal.ranges table, which contains information about the ranges in your CockroachDB cluster, including the split_enforced_until column:

> SELECT range_id, start_pretty, end_pretty, split_enforced_until FROM crdb_internal.ranges WHERE table_name='drivers';
  range_id |        start_pretty        |         end_pretty         | split_enforced_until
-----------+----------------------------+----------------------------+-----------------------
        74 | /Table/59                  | /Table/59/1/"chicago"/"3"  | NULL
        77 | /Table/59/1/"chicago"/"3"  | /Table/59/1/"chicago"/"7"  | NULL
        78 | /Table/59/1/"chicago"/"7"  | /Table/59/1/"new york"/"3" | NULL
        75 | /Table/59/1/"new york"/"3" | /Table/59/1/"new york"/"7" | NULL
        76 | /Table/59/1/"new york"/"7" | /Table/59/1/"seattle"/"3"  | NULL
        79 | /Table/59/1/"seattle"/"3"  | /Table/59/1/"seattle"/"7"  | NULL
        80 | /Table/59/1/"seattle"/"7"  | /Max                       | NULL
(7 rows)

The drivers table is still split into ranges at specific primary key column values, but the split_enforced_until column is now NULL for all ranges in the table. The split is no longer enforced, and CockroachDB can merge the data in the table as needed.
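
To remove every split enforcement on a table without listing the individual split points, you can instead use UNSPLIT ALL:

> ALTER TABLE drivers UNSPLIT ALL;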

Validate constraints

Setup

To follow along, run cockroach demo to start a temporary, in-memory cluster with the movr sample dataset preloaded:

$ cockroach demo

Validate a constraint

In the example Add the foreign key constraint with CASCADE, we add a foreign key constraint as follows:

> ALTER TABLE vehicles ADD CONSTRAINT users_fk FOREIGN KEY (city, owner_id) REFERENCES users (city, id) ON DELETE CASCADE;

To ensure that the data added to the vehicles table prior to the creation of the users_fk constraint conforms to that constraint, run the following:

> ALTER TABLE vehicles VALIDATE CONSTRAINT users_fk;
Note:

If a constraint is defined as part of a CREATE TABLE statement, it is considered validated from the start, because an empty table trivially meets its constraints.
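
To confirm the result, you can list the table's constraints and inspect the validated column; after VALIDATE CONSTRAINT succeeds, users_fk should appear as validated:

> SHOW CONSTRAINTS FROM vehicles;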
