mysql notes
MySQL query profiling is a useful technique for analyzing the overall
performance of a database-driven application. The MySQL slow query log is a log to which
MySQL writes slow, potentially problematic queries. This logging functionality
ships with MySQL but is turned off by default. Which queries are logged is determined
by customizable server variables that allow query profiling to be tailored to an
application's performance requirements. Generally, the queries that are logged are
those that take longer than a specified amount of time to execute, or that do not
properly use indexes. Configuring the parameters below is your first step in
MySQL performance tuning: slow_query_log, slow_query_log_file,
long_query_time, log_queries_not_using_indexes, and
min_examined_row_limit. With these parameters set to sensible values, your
application's long-running queries are dumped to a log, giving you a fair chance to
examine and correct them. Fixing even one slow query can make your application
noticeably faster, depending on application-specific conditions.
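As a sketch, the parameters above can be set in my.cnf (the threshold values here are illustrative assumptions, not recommendations — tune them to your workload):

```ini
# my.cnf — slow query log settings (example values)
[mysqld]
slow_query_log                = 1
slow_query_log_file           = /var/log/mysql/slow-query.log
long_query_time               = 2     # log queries running longer than 2 seconds
log_queries_not_using_indexes = 1     # also log queries that do not use indexes
min_examined_row_limit        = 1000  # skip queries that examine fewer than 1000 rows
```

Changes to my.cnf take effect after a server restart; the same variables can also be changed at runtime with SET GLOBAL.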
The next thing to do is to enable strict SQL mode (for example, sql_mode =
STRICT_ALL_TABLES). This will probably bring your application to its knees, because
MySQL simply stops executing queries it considers bad or unsafe. Hence, this is best
tried on a local or development instance first. After fixing all the slow queries from
the log in step 1, this step is another chance to fix not only the slow queries, but
also all those queries which MySQL considers bad, unindexed, or unsafe.
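A sketch of what this looks like in my.cnf — the exact list of mode flags is an assumption; pick the strict flags appropriate for your MySQL version:

```ini
# my.cnf — enable strict SQL mode (example flag list; adjust to your needs)
[mysqld]
sql_mode = "STRICT_ALL_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_ZERO_DATE,NO_ZERO_IN_DATE"
```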
This means the application is now in a good, issue-free state, ready for the next
step. Next come two small modifications that can be applied quickly and
efficiently to the database schema without affecting application uptime.
Now, let's move ahead and apply a few quick modifications to the MySQL config file. These
settings enable faster query responses and apply a layer of caching to query
execution.
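The specific settings are not listed here; as a rough sketch, config changes of this kind usually touch the buffer and cache variables. The values below are illustrative assumptions only (and note that the query cache applies to MySQL 5.x — it was removed in MySQL 8.0):

```ini
# my.cnf — example cache/buffer tuning (illustrative values only)
[mysqld]
innodb_buffer_pool_size = 1G    # in-memory cache for InnoDB data and indexes
query_cache_type        = 1     # enable the query cache (MySQL 5.x only; removed in 8.0)
query_cache_size        = 64M   # memory reserved for cached query results
```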
Well, and that is it. With the above steps, your application speed should improve by
a good margin. The rest can be gained by altering DB schemas and modifying the
underlying infrastructure.
Have a good time tuning MySQL performance. Happy work day!!!
In light of my recent work with mysqldump, I've taken the initiative to compile a
reference cheat sheet. This resource is crafted to serve as a consolidated guide
whenever a quick reference is required.
Basic Usage
mysqldump -u [username] -p[password] [database_name] > [dumpfile.sql]
Common Flags:
-u or --user: Username to connect to the MySQL server.
-p or --password: Used to specify the password for the MySQL server connection.
For security reasons, it's advisable not to append the password directly after
the -p flag within the command line. If the password is omitted, the system will
securely prompt you for it.
-h or --host: Specify the host to connect to (default is localhost)
--port: Specify the port number to use for connection
-B or --databases: Dump several databases. Database names are separated by
spaces
--all-databases: Dump all databases
--no-data: Dump only the schema, no data
--skip-triggers: Do not dump triggers
--no-create-info: Don't write table creation info (schema)
--create-options: Includes all table options in the CREATE TABLE statements in
the dump.
--skip-comments: Do not add comments in the dump
--add-drop-table: Add a DROP TABLE statement before each CREATE TABLE
statement
--skip-lock-tables: Do not lock tables during dump
--add-locks: Adds LOCK TABLES and UNLOCK TABLES statements around the
INSERT statements.
--lock-tables: Locks all tables before dumping them.
--single-transaction: Dump with a single transaction (useful for InnoDB tables to
ensure consistency)
--compact: Produce more compact output
--complete-insert: Use complete INSERT statements (includes column names)
--quick: Retrieve rows one at a time instead of buffering the whole result set in memory (useful for large tables)
--where: Allows you to filter rows based on a condition
mysqldump -u [username] -p [database_name] users --where="user_status = 'active'"
--disable-keys: For MyISAM tables, surrounds the INSERT statements with
/*!40000 ALTER TABLE tbl_name DISABLE KEYS */; and /*!40000 ALTER TABLE
tbl_name ENABLE KEYS */; to enhance insertion performance during restoration.
--skip-disable-keys: Avoids adding statements to temporarily disable keys.
--extended-insert: Uses multiple-row INSERT syntax. This results in a smaller
dump file and speeds up inserts when the dump is reloaded.
--skip-set-charset: Omits the SET NAMES statement.
--set-charset: Adds a SET NAMES statement to the output to set the character
set.
Options related to output format:
--xml: Output as XML
--tab=[DIR]: Save the result in tab-separated format in the given directory (the
files are written by the MySQL server process, so the directory must be writable by it)
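Several of the flags above are commonly combined. As an illustrative example (the bracketed names are placeholders), a consistent dump of an InnoDB database might look like:

```shell
# Consistent, fast dump of an InnoDB database without locking tables
mysqldump -u [username] -p \
  --single-transaction \
  --quick \
  --extended-insert \
  [database_name] > [dumpfile.sql]
```

With --single-transaction, the dump runs inside one transaction for a consistent snapshot of InnoDB tables, so explicit table locks are not needed.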
Exporting and importing specific tables:
Export:
mysqldump -u [username] -p[password] [database_name] [table_name1] [table_name2] > [dumpfile.sql]
Import:
mysql -u [username] -p[password] [database_name] < [dumpfile.sql]
Dealing with large databases:
Use the --compress option to compress the data transferred between client and
server. This can be beneficial when dumping large databases over a network,
especially if the connection between client and server is slow. (Note that --compress
is deprecated as of MySQL 8.0.18 in favor of --compression-algorithms.)
mysqldump -u [username] -p --compress [database_name] > outputfile.sql
Use gzip for compression:
mysqldump -u [username] -p[password] [database_name] | gzip > [dumpfile.sql.gz]
Decompress and import:
gunzip < [dumpfile.sql.gz] | mysql -u [username] -p[password] [database_name]