MySQL 8.0 Reference Manual

10.5.5 Bulk Data Loading for InnoDB Tables

These performance tips supplement the general guidelines for fast inserts in Section 10.2.5.1, “Optimizing INSERT Statements”.

  • When importing data into InnoDB, turn off autocommit mode, because autocommit performs a log flush to disk for every insert. To disable autocommit during your import operation, surround it with SET autocommit and COMMIT statements:

    SET autocommit=0;
    ... SQL import statements ...
    COMMIT;

    The mysqldump option --opt creates dump files that are fast to import into an InnoDB table, even without wrapping them with the SET autocommit and COMMIT statements.

  • If you have UNIQUE constraints on secondary keys, you can speed up table imports by temporarily turning off the uniqueness checks during the import session:

    SET unique_checks=0;
    ... SQL import statements ...
    SET unique_checks=1;

    For big tables, this saves a lot of disk I/O because InnoDB can use its change buffer to write secondary index records in a batch. Be certain that the data contains no duplicate keys.

  • If you have FOREIGN KEY constraints in your tables, you can speed up table imports by turning off the foreign key checks for the duration of the import session:

    SET foreign_key_checks=0;
    ... SQL import statements ...
    SET foreign_key_checks=1;

    For big tables, this can save a lot of disk I/O.

  • Use the multiple-row INSERT syntax to reduce communication overhead between the client and the server if you need to insert many rows:

    INSERT INTO yourtable VALUES (1,2), (5,5), ...;

    This tip is valid for inserts into any table, not just InnoDB tables.

  • When doing bulk inserts into tables with auto-increment columns, set innodb_autoinc_lock_mode to 2 (interleaved) instead of 1 (consecutive). See Section 17.6.1.6, “AUTO_INCREMENT Handling in InnoDB” for details.
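
    Note that innodb_autoinc_lock_mode is not a dynamic variable, so it cannot be changed with SET at runtime; it must be in effect when the server starts. A minimal sketch of the option file entry (the file location and any other settings are up to your deployment):

    [mysqld]
    # Interleaved mode: concurrent multi-row inserts do not hold a
    # table-level AUTO-INC lock, at the cost of gapless, consecutive
    # auto-increment values within a statement.
    innodb_autoinc_lock_mode = 2

    See the linked section for the replication implications of interleaved mode.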

  • When performing bulk inserts, it is faster to insert rows in PRIMARY KEY order. InnoDB tables use a clustered index, which makes it relatively fast to use data in the order of the PRIMARY KEY. Performing bulk inserts in PRIMARY KEY order is particularly important for tables that do not fit entirely within the buffer pool.
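
    One way to apply this when the source data arrives unsorted is to copy it into the target in key order. A sketch, with hypothetical table and column names:

    -- Copy rows into the InnoDB target sorted by its PRIMARY KEY column.
    INSERT INTO orders
    SELECT * FROM staging_orders
    ORDER BY order_id;  -- order_id is the PRIMARY KEY of orders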

  • For optimal performance when loading data into an InnoDB FULLTEXT index, follow this set of steps:

    1. Define a column FTS_DOC_ID at table creation time, of type BIGINT UNSIGNED NOT NULL, with a unique index named FTS_DOC_ID_INDEX. For example:

      CREATE TABLE t1 (
          FTS_DOC_ID BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
          title VARCHAR(255) NOT NULL DEFAULT '',
          text MEDIUMTEXT NOT NULL,
          PRIMARY KEY (`FTS_DOC_ID`)
      ) ENGINE=InnoDB;

      CREATE UNIQUE INDEX FTS_DOC_ID_INDEX ON t1 (FTS_DOC_ID);

    2. Load the data into the table.

    3. Create the FULLTEXT index after the data is loaded.

    Note

    When adding the FTS_DOC_ID column at table creation time, ensure that the FTS_DOC_ID column is updated when the FULLTEXT indexed column is updated, as the FTS_DOC_ID must increase monotonically with each INSERT or UPDATE. If you choose not to add the FTS_DOC_ID column at table creation time and have InnoDB manage DOC IDs for you, InnoDB adds FTS_DOC_ID as a hidden column with the next CREATE FULLTEXT INDEX call. This approach, however, requires a table rebuild, which can impact performance.
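
    Continuing the example above, steps 2 and 3 might look like this (the input file path and index name are placeholders, not part of this section):

    -- Step 2: load the data while no FULLTEXT index exists yet.
    LOAD DATA INFILE '/tmp/articles.csv' INTO TABLE t1
        FIELDS TERMINATED BY ',' ENCLOSED BY '"'
        (FTS_DOC_ID, title, text);

    -- Step 3: build the FULLTEXT index in one pass over the loaded data.
    CREATE FULLTEXT INDEX ft_idx ON t1 (title, text);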

  • If loading data into a new MySQL instance, consider disabling redo logging using the ALTER INSTANCE {ENABLE|DISABLE} INNODB REDO_LOG syntax. Disabling redo logging helps speed up data loading by avoiding redo log writes. For more information, see Disabling Redo Logging.

    Warning

    This feature is intended only for loading data into a new MySQL instance. Do not disable redo logging on a production system. It is permitted to shut down and restart the server while redo logging is disabled, but an unexpected server stoppage while redo logging is disabled can cause data loss and instance corruption.
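
    A sketch of the surrounding statements (the executing account needs the INNODB_REDO_LOG_ENABLE privilege):

    ALTER INSTANCE DISABLE INNODB REDO_LOG;
    ... SQL import statements ...
    -- Re-enable redo logging as soon as the load completes.
    ALTER INSTANCE ENABLE INNODB REDO_LOG;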

  • Use MySQL Shell to import data. MySQL Shell's parallel table import utility util.importTable() provides rapid data import to a MySQL relational table for large data files. MySQL Shell's dump loading utility util.loadDump() also offers parallel load capabilities. See MySQL Shell Utilities.
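
    For example, in MySQL Shell's JavaScript mode (the file path, schema, table, and option values below are illustrative, not taken from this section):

    util.importTable("/tmp/orders.csv", {
        schema: "mydb",
        table: "orders",
        dialect: "csv-unix",  // Unix-style CSV: LF line terminators
        threads: 8            // number of parallel load threads
    });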