Since writing about the choice of database for Shopware 6.4, I have been running into the same difficulties many other people have had before me when it comes to restoring a database backup for a Shopware 6.4 installation: MySQL rejecting values for generated columns (such as order_date), or problems setting the DEFINER for triggers.
Creating a backup
Long story short: before you can import a database dump into Shopware 6.4, you first need to create one. I'll create two files here, the first called structure.sql and the second called data.sql, with the obvious contents.
We'll pipe the database dump through sed in order to remove all DEFINER = `<username>`@`<host_or_IP>` statements, which mysqldump automatically adds to procedures, triggers and functions. This is only necessary if the source and target databases use different users, or are listening on different host names or IPs.
Remember that in all of the code snippets in this article, you'll need to replace <database> with your actual database name.
mysqldump -uroot --no-data <database> | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' | sed -e 's/DEFINER[ ]*=[ ]*[^*]*PROCEDURE/PROCEDURE/' | sed -e 's/DEFINER[ ]*=[ ]*[^*]*FUNCTION/FUNCTION/' > structure.sql
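To see what the sed expressions actually do, here is a small self-contained demo (the user name `admin` and host `localhost` are made up for illustration). mysqldump wraps the DEFINER clause in a versioned comment; the first sed expression strips the clause while leaving the comment itself intact:

```shell
# a typical versioned comment as mysqldump emits it for a trigger
line='/*!50017 DEFINER=`admin`@`localhost`*/'

# the same first sed expression used in the dump pipeline above
echo "$line" | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/'
# prints: /*!50017 */
```

The other two expressions do the same for `CREATE PROCEDURE` and `CREATE FUNCTION` lines, where the DEFINER clause is not wrapped in a comment.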
Likewise, creating the data export looks like this:
mysqldump -uroot --hex-blob --force --single-transaction --extended-insert --quick --no-create-info --compact <database> | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' | sed -e 's/DEFINER[ ]*=[ ]*[^*]*PROCEDURE/PROCEDURE/' | sed -e 's/DEFINER[ ]*=[ ]*[^*]*FUNCTION/FUNCTION/' > data.sql
The only important parameter here is --hex-blob, which makes sure binary values are exported as hex values. Everything else is there to speed up the later import (hence the funny --quick parameter, as if leaving it out meant we specifically wanted a slow export). To transfer less data to the target server, you should compress the SQL file before uploading it (using tar, gzip or bzip2). I use pigz, which is simply a parallel, and therefore faster, version of gzip that uses all threads of your CPU.
# compress
pigz -k data.sql

# decompress
pigz -dc data.sql.gz > data.sql
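Since pigz writes standard gzip streams, the two tools are interchangeable: a file compressed with pigz can be decompressed with plain gzip and vice versa. Here is a quick round-trip sanity check (using gzip, in case pigz is not installed on the target machine; the sample INSERT line just stands in for a real dump):

```shell
# stand-in for the real data.sql
printf 'INSERT INTO product VALUES (1);\n' > data.sql

# compress while keeping the original file (-k), just like pigz -k
gzip -k data.sql

# decompress to stdout and verify it matches the original byte for byte
gzip -dc data.sql.gz | cmp - data.sql && echo 'round-trip OK'
# prints: round-trip OK
```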
Restoring a backup
First we restore the database structure.
mysql -p -u root --default-character-set=utf8mb4 -e "SET sql_mode=''; SET names utf8mb4; SET FOREIGN_KEY_CHECKS=0; USE <database>; SOURCE structure.sql; SET FOREIGN_KEY_CHECKS=1;"
Now we are going to delete all triggers, since they are also included in data.sql and we don't want a 'Trigger already exists' error.
-- set `group_concat_max_len`
SET @@session.group_concat_max_len = @@global.max_allowed_packet;

-- select all the triggers and build the `DROP TRIGGER` SQL
-- replace <database> with your schema name (e.g. your database name)
SELECT GROUP_CONCAT(sql_string SEPARATOR '\n')
FROM (
    SELECT CONCAT('DROP TRIGGER IF EXISTS `', TRIGGER_NAME, '`;') AS sql_string, '1'
    FROM information_schema.TRIGGERS
    WHERE TRIGGER_SCHEMA = '<database>'
) AS sql_strings
GROUP BY '1';
This will give you something like the following, which we can then execute.
DROP TRIGGER IF EXISTS `country_tax_free_insert`;
DROP TRIGGER IF EXISTS `country_tax_free_update`;
DROP TRIGGER IF EXISTS `customer_address_vat_id_insert`;
DROP TRIGGER IF EXISTS `customer_address_vat_id_update`;
DROP TRIGGER IF EXISTS `order_cash_rounding_insert`;
Now you’re ready to actually import the data:
mysql -p -u root --default-character-set=utf8mb4 -e "SET sql_mode=''; SET names utf8mb4; SET FOREIGN_KEY_CHECKS=0; USE <database>; SOURCE data.sql; SET FOREIGN_KEY_CHECKS=1;"
This will of course take some time, depending on the size of your database and the hardware you're using. I hope these instructions are helpful for anyone else who has run into these problems.