
Jet Profiler for MySQL 2.0 released

Posted October 14th by Jet Profiler team
Jet Profiler for MySQL 2.0 is now available!
Multi-language Support

We are pleased to announce that this release adds multi-language support, in response to user feedback and requests. Three languages are available: English, German and Swedish. If you would like Jet Profiler translated into your language, please let us know. And if you feel you can help us with the translation, even better!

Adjustable data retention timeframe

Adjustable data retention is another exciting feature: you can now specify how long the application keeps recorded data before discarding it. This enables extended profiling over days or weeks without sacrificing much performance or disk space. You can access this setting via the recording settings dialog.

Top IP statistics

Another powerful feature in this release is Top IPs. It shows you which client IPs are using the database the most, which is helpful if you have many client nodes using the database, such as a group of web servers. With the help of Top IPs you'll be able to quickly ascertain whether the load pattern is evenly distributed. Uneven distribution is common when some machines have dedicated workloads such as cron jobs or an admin GUI.

This release also contains bug fixes concerning performance and other issues.

For a 1-minute tour, click here.

Jet Profiler 1.0.8 released

Posted March 9th by Jet Profiler team
Jet Profiler v1.0.8 is now available. The biggest change is for Windows Vista / Windows 7 users, where the software no longer needs to run as administrator. This improves usability for companies where running as an administrator is a security concern. For this to work, a reinstall is required.

Jet Profiler now works on 32-bit Intel Macs! Snow Leopard is required, as it includes the necessary Java version.

The release also contains some bug fixes concerning memory leaks and software updates.

To upgrade the software, select Help, Check for upgrades from the menu. More details...

Tail -f table with myterm

Posted January 20th by Björn Melinder
Some new features in myterm, the extensible MySQL command line client:
Tail

Ever found yourself running the same query over and over again to see if a table has got new content, just waiting for that magic row to appear? The tail command will watch for new lines in a table, just like you might tail a log file in Linux. Example:

  myterm> tail -f bid
  +--------+---------+-----------+--------+---------------------+
  | bid_id | item_id | bidder_id | price  | created             |
  +--------+---------+-----------+--------+---------------------+
  |  67925 |  107751 |     81594 |  92500 | 2010-01-20 19:12:58 |
  |  67926 |   34248 |     32530 |  10000 | 2010-01-20 19:12:59 |
  |  67927 |  111211 |    108032 |  22900 | 2010-01-20 19:12:59 |
  |  67928 |  116016 |     46720 |  90400 | 2010-01-20 19:13:00 |
  |  67929 |  129101 |     98949 |  61300 | 2010-01-20 19:13:03 |
  |  67930 |   44235 |     91806 |  46400 | 2010-01-20 19:35:03 |
  |  67931 |   59403 |    114210 |  30700 | 2010-01-20 19:35:06 |
  |  67932 |  110582 |    101572 |  41800 | 2010-01-20 19:35:07 |
  |  67933 |   46759 |     73844 |  32600 | 2010-01-20 19:35:07 |
  |  67934 |  122248 |     18709 | 105600 | 2010-01-20 19:35:09 |
  |  67935 |   87468 |     70444 |  28500 | 2010-01-20 19:35:17 |
  ... (more rows appear over time)


The command will ask the server for the primary or unique key(s) of the table. It will then ask for new rows every second, remembering the maximum primary key value returned last time. This works if you insert data by ascending primary key (e.g. auto_increment). The query for the above would be:

  SELECT * FROM bid WHERE bid_id > ? ORDER BY bid_id


You can also specify the column to use as key using the -k option, e.g.

  tail -f -kcreated bid


which will run:

  SELECT * FROM bid WHERE created > ? ORDER BY created

Repeat

Similar but different, repeat allows you to run a query multiple times with some delay.

  repeat SELECT COUNT(*) FROM some_table;


By default, it will repeat 10 times and sleep 1 second in-between, but you can adjust this. Go forever and sleep 100 ms in-between:

  repeat -n0 -i0.1 SELECT COUNT(*) FROM some_table;


Say you want to watch some global status variable. The -t option adds a timestamp as the first column so you can see changes over time:

  myterm> repeat -t SELECT * FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME LIKE 'Questions';
  +---------------------+---------------+----------------+
  | Time                | VARIABLE_NAME | VARIABLE_VALUE |
  +---------------------+---------------+----------------+
  | 2010-01-20 19:11:25 | QUESTIONS     | 8394418        |
  | 2010-01-20 19:11:26 | QUESTIONS     | 8394420        |
  | 2010-01-20 19:11:27 | QUESTIONS     | 8394422        |
  | 2010-01-20 19:11:28 | QUESTIONS     | 8394424        |
  | 2010-01-20 19:11:29 | QUESTIONS     | 8394426        |
  | 2010-01-20 19:11:30 | QUESTIONS     | 8394428        |
  | 2010-01-20 19:11:31 | QUESTIONS     | 8394430        |
  | 2010-01-20 19:11:32 | QUESTIONS     | 8394432        |
  | 2010-01-20 19:11:33 | QUESTIONS     | 8394434        |
  | 2010-01-20 19:11:34 | QUESTIONS     | 8394436        |
  +---------------------+---------------+----------------+
  10 rows in set (0.013 sec)


More or less tailor-made for this purpose, -d allows you to calculate a delta value for some column:

  myterm> repeat -t -d3 SELECT * FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME LIKE 'Questions';
  +---------------------+---------------+----------------+----------------------+
  | Time                | VARIABLE_NAME | VARIABLE_VALUE | Delta_VARIABLE_VALUE |
  +---------------------+---------------+----------------+----------------------+
  | 2010-01-20 19:12:52 | QUESTIONS     | 8395641        | 0                    |
  | 2010-01-20 19:12:53 | QUESTIONS     | 8396281        | 640                  |
  | 2010-01-20 19:12:54 | QUESTIONS     | 8396729        | 448                  |
  | 2010-01-20 19:12:55 | QUESTIONS     | 8397162        | 433                  |
  | 2010-01-20 19:12:56 | QUESTIONS     | 8397634        | 472                  |
  | 2010-01-20 19:12:57 | QUESTIONS     | 8398254        | 620                  |
  | 2010-01-20 19:12:58 | QUESTIONS     | 8398919        | 665                  |
  | 2010-01-20 19:12:59 | QUESTIONS     | 8399470        | 551                  |
  | 2010-01-20 19:13:00 | QUESTIONS     | 8399994        | 524                  |
  | 2010-01-20 19:13:01 | QUESTIONS     | 8400509        | 515                  |
  +---------------------+---------------+----------------+----------------------+
  10 rows in set (0.004 sec)


Exercise for the reader: see if you can modify the command above to show the result as a line chart.

The repeat command can also do updates or deletes. You can tell it to repeat until no more rows are affected using the -a option. Useful if you want to delete some data chunk by chunk:

  repeat -a DELETE FROM message WHERE created < DATE_SUB(NOW(), INTERVAL 30 DAY) LIMIT 1000;

Named connections

If you often connect to the same db machines, you can name them in your configuration file. You can then specify the name when starting myterm, e.g.

  ./myterm @workhorse1


Your conf file .myterm contains the login credentials:

  con_workhorse1_user = "admin_john_doe"
  con_workhorse1_host = "192.168.23.25"
  ..

Abbreviations

Abbreviations let you edit the word list used for auto completion. Some abbreviations are added by default. For instance, if you type:

  myterm> sel (TAB)


the abbreviation sel will auto expand into:

  myterm> select * from


You can see the list of existing abbreviations by typing abbrev list. You can also add your own abbreviations by typing abbrev add.

Download

Myterm is still in alpha; get it here: https://launchpad.net/myterm

See also this post.

Jet Profiler 1.0.7 released

Posted November 17th by Jet Profiler team
Jet Profiler v1.0.7 is now available. We've made a couple of releases the past month with focus on stability and performance. We've fixed numerous bugs, such as slow scrolling on some Windows machines, many memory problems and some internal database problems. We've also improved the speed of the program, which will make it easier to work with a lot of data.

The explain dialog has been slightly enhanced for better usability. And we now support profiling with limited rights. So if you are using a big web hosting company such as Hostgator, you should now be able to profile even though you don't have full access to the database servers.

To upgrade the software, select Help, Check for upgrades from the menu. More details...

myterm - extensible mysql command line client

Posted November 5th by Bjorn Melinder
What if I type this:

  myterm> SELECT engine, count(*) AS count FROM information_schema.tables GROUP BY 1 ORDER BY 2 DESC | chart pie | browser


and Firefox shows this: (screenshot: Myterm pie chart)

What's that?


I just launched an open-source project on Launchpad called myterm. Myterm is a crossover between the standard mysql command line client and the concept of pipes and filters in bash. You can use it to run queries and filter the produced result set in various ways using pipe chaining. This lends itself to quite a lot of different use cases: graphical charts, md5 checksums and different presentation forms, to name a few. It has browser integration using shell exec, which means it can render HTML result sets or charts in your browser. And since most functionality is written as plugins, it can serve as a hub for hooking in more and more tools for data transfer, dumping, backup, monitoring and so on. Sort of like inverted bash scripting: you start inside the db and work outward.

The model is based on commands, filters, presenters and dests. The COMMAND (usually a query) produces a result set which is sent off to a chain of filters. Each FILTER can process the result set and reformat the data. A PRESENTER takes a result set and renders it in some form, e.g. plain text, tab-separated values, an HTML table or a chart. Finally, a DEST is simply the destination of the rendered output, such as standard out, a file or the browser. A full command chain:

COMMAND --resultset--> FILTER --resultset--> PRESENTER --mime--> DEST
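As an illustration (this is not myterm's actual PHP internals; every name below is hypothetical), the chain can be sketched in a few lines of Python:

```python
# Sketch of the COMMAND -> FILTER -> PRESENTER -> DEST chain.
# Each stage is just a function over a result set (columns + rows).

def command():
    # Stand-in for a query; returns (columns, rows).
    return (["Db", "Tables"], [("test", 12), ("information_schema", 28)])

def cols_filter(result, keep):
    # Like the cols filter: keep only the named columns.
    columns, rows = result
    idx = [columns.index(c) for c in keep]
    return ([columns[i] for i in idx],
            [tuple(r[i] for i in idx) for r in rows])

def tsv_presenter(result):
    # Render the result set as tab-separated text (the "mime" payload).
    columns, rows = result
    lines = ["\t".join(columns)]
    lines += ["\t".join(str(v) for v in r) for r in rows]
    return "\n".join(lines)

def std_dest(payload):
    # The simplest dest: standard out.
    print(payload)

# COMMAND --resultset--> FILTER --resultset--> PRESENTER --mime--> DEST
std_dest(tsv_presenter(cols_filter(command(), ["Db"])))
```

The point is that every stage consumes and produces a well-defined shape, so stages can be chained in any order that type-checks, just like bash pipes.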

Some examples


First some standard output:

  myterm> SHOW PROCESSLIST
  +------+------+-----------------+------+---------+------+-------+------------------+
  | Id   | User | Host            | db   | Command | Time | State | Info             |
  +------+------+-----------------+------+---------+------+-------+------------------+
  | 4789 | root | 127.0.0.1:59047 | test | Query   |    0 |       | SHOW PROCESSLIST |
  +------+------+-----------------+------+---------+------+-------+------------------+
  1 row in set (0.000 sec)


Then, let's use the cols filter:

  myterm> SHOW PROCESSLIST | cols 2-3
  +------+-----------------+
  | User | Host            |
  +------+-----------------+
  | root | 127.0.0.1:59047 |
  +------+-----------------+
  1 row in set (0.000 sec)


which filters out some columns. Similarly, a basic grep filter exists. Let's hope people won't stop using indexes just because of this:

  myterm> SHOW DATABASES | grep info
  +--------------------+
  | Database           |
  +--------------------+
  | information_schema |
  +--------------------+
  1 row in set (0.001 sec)


For chart rendering, myterm uses libchart by Jean-Marc Tremeaux. Take a look at the biggest tables:

  myterm> SELECT CONCAT(table_schema, '.', table_name) AS 'Table', data_length + index_length AS Bytes FROM information_schema.tables ORDER BY 2 DESC | other 7 | chart hbar


The other filter reduces the result to at most 7 rows here, collapsing everything after the first 6 into a total row named Other. Resulting bar chart (screenshot: Myterm hbar).

The insertify plugin will reverse engineer a result set into a create statement and an insert statement. This is not as rock-solid or performant as CREATE...SELECT (it won't pick up indexes), but works for creating temporary snapshots or test data:

  myterm> SHOW PROCESSLIST|insertify|tsv -N -E
  CREATE TABLE `some_table` (
    `Id` bigint(11) NOT NULL DEFAULT '',
    `User` varchar(16) NOT NULL DEFAULT '',
    `Host` varchar(64) NOT NULL DEFAULT '',
    `db` varchar(64) DEFAULT NULL,
    `Command` varchar(16) NOT NULL DEFAULT '',
    `Time` int(7) UNSIGNED NOT NULL DEFAULT '',
    `State` varchar(30) DEFAULT NULL,
    `Info` varchar(100) DEFAULT NULL
  ) ENGINE=InnoDB;
  INSERT INTO some_table (Id, User, Host, db, Command, Time, State, Info) VALUES ('4789', 'root', '127.0.0.1:59047', 'test', 'Query', '0', NULL, 'SHOW PROCESSLIST');
  2 rows in set (0.000 sec)


Here are the plugins so far:

  myterm> plugins list
  15 plugins loaded:
  Filters:
    cols       Filters columns and column ranges.
    grep       Filters lines containing the specified text in any column.
    insertify  Creates insert statements based on a result set.
    other      Reduces a result to max N rows, collapsing any extra rows to a row titled Other at the end.
  Presenters:
    chart      Renders a chart using php Libchart.
    html       Formats the data to a html table.
    md5        Calculates an md5 checksum of all rows and column values.
    plain      Formats the data to the default plain text table grid.
    tsv        Formats the data to tab-separated-values plain text.
    vert       Presents the data in a vertical plain-text fashion, similar to mysql \G output format.
    vhtml      Formats the data to a record by record, vertical html table.
  Dests:
    browser    Sends the output to the browser.
    file       Sends output to a file with the given filename.
    mailto     Sends output to the registered email application using a mailto: link
    std        Sends output to standard out.

Current status


It's currently written in PHP, which is kind of bad because of PHP's limited console integration, signal handling (ctrl-c) and threading (think asynchronous / multi-threaded queries). Maybe a rewrite in Python would fix these issues. Eric Day has initiated a similar project at Portland State University which is about to start... and they're thinking of Python! So I've contacted him about possibly combining these projects in some way later on, once both projects have gotten further.

Download


To download it, you need Bazaar and the PHP 5.2+ command line with the mysqli extension. I've only tested it on Ubuntu Linux. Type:

  bzr branch lp:myterm


then follow the README file.

Contribute


For now, myterm's parsing and option handling are limited (don't do multi-line queries, comments or too much quoting), but most basic stuff works. There are probably tons of bugs in it and things that don't work, I know :) So... if you'd like to contribute, join the project and mailing list on Launchpad. There you can also look at the blueprints, which are some ideas for additional features.

Feel free to leave comments and feature suggestions!

New license model

Posted August 31st by Bjorn, Jet Profiler team
We've received a lot of feedback from our customers regarding our licensing model. Many people felt annual subscriptions were limiting since they would have to renew the license every year. Another limitation was that the software would only run on one single computer.

We're now pleased to announce that the license model has changed:
No more subscriptions

We've removed the one year limitation of the subscription-based model. The new license allows you to use the software for as long as you like. There will be no more recurring fees, and the software will not stop working after one year of use.

More installations

Another improvement is that we now support two installations per license. This allows for much greater flexibility. You can install it on your laptop and desktop at the same time (or at work and at home). Or you can share the license with a colleague or give it to a friend.

Note to existing subscribers:
All customers with subscription-based licenses will be offered a conversion to a normal license at their next renewal. So instead of paying $399 for a completely new license, you can pay $99 the next time you renew your subscription and receive a normal license. An added benefit: if you have multiple licenses, you can cut the number of licenses in half, since each license is now valid for two computers. A sales representative will contact you in time with more information.

Jet Profiler for MySQL 1.0 released

Posted February 10th by Bjorn, Jet Profiler team
(Screenshot: Top Queries)
This is the first public release. Features:

  • Top Queries - See which queries are being run the most on your server.
  • Top Users - See which users are using your server the most.
  • Top Tables - See which database tables are opened the most.
  • Top States - See which states your database is most busy doing, such as creating temp tables.
  • Replication Profiling - You can measure how much capacity you have left on the replication SQL thread on slaves. If you are using MyISAM a lot, a MyISAM lock analysis will help discover any locks associated with replication, as well as lock-prone queries in general.
  • Query Ratings and Visualization - You can get your queries rated using EXPLAIN and see which queries are most likely to cause load due to missing indices etc. The execution plan can be visualized in a diagram, which shows the table lookups involved, the join size and any bad indicators.
  • Zoomable GUI - You can easily zoom in on spikes in your load and see the corresponding queries for that time interval.
  • Low Overhead - Running the tool against MySQL typically costs around 1% of server capacity. Recording granularity is customizable from 100 ms to 10 s.
  • Works on Windows, Linux and Mac


Download the free version, try it and let us know what you think is good and what isn't. It doesn't cost anything and isn't time limited. Read more...

Query profiling using SHOW PROCESSLIST

Posted November 20th by Bjorn, Jet Profiler team
The MySQL database server can display a lot of performance statistics. However, most of them relate to general server parameters, such as buffer sizes and query cache settings. Of course, such settings are important to get right, and can make a huge difference in performance. Once you've tuned them however, you need to start looking at the tables and queries. When a single bad query can cost 90 - 99% of the total performance, server tuning doesn't fix the problem. But obtaining query profiling information from MySQL can be tricky. Here are some of the options:
Slow query log

You can use the built-in slow query log from within the server. It will show you queries that take longer than a configurable threshold (long_query_time). Such queries typically don't use indices properly and are important to address. But only slow queries end up there, and many performance problems aren't caused by slow queries but by queries that are run very often (for example, for-loops in the code instead of JOINs). You could use complete server query logging, which logs every statement to file, but it's not recommended, as it takes too much disk I/O.
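Enabling it is a small my.cnf change; for example (a sketch — the option spellings differ between server versions; 5.0-era servers use log-slow-queries, while newer ones use slow_query_log and slow_query_log_file):

```ini
[mysqld]
# MySQL 5.0-era option name; newer servers: slow_query_log = 1
# and slow_query_log_file = /var/log/mysql/slow.log
log-slow-queries = /var/log/mysql/slow.log
long_query_time  = 1   # in seconds; queries slower than this get logged
```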

SHOW PROFILES

The SHOW PROFILES command by Jeremy Cole (available from MySQL 5.0.37) can help you a great deal. Once enabled, it will gather CPU execution times and other important information on up to 100 queries. You can compare the results and see which query is most expensive. This article explains more.
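A typical session looks roughly like this (a sketch; the query used here is just an example, and the profile stages and output vary by server version):

```sql
SET profiling = 1;                 -- enable profiling for this session
SELECT COUNT(*) FROM bid WHERE price > 50000;
SHOW PROFILES;                     -- list profiled queries with durations
SHOW PROFILE FOR QUERY 1;          -- per-stage breakdown for query 1
SET profiling = 0;
```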

SHOW USER / INDEX / TABLE STATISTICS

The SHOW USER / INDEX / TABLE STATISTICS patches from Google add functionality to trace cpu time and much more per user, index and table. No query information is shown though, you will have to figure that out based on the tables. The patches are not part of the main MySQL distribution, but Percona provides prebuilt MySQL versions.

MySQL Enterprise Monitor's Query Analyzer

MySQL Enterprise Monitor's Query Analyzer is great for finding the heaviest queries. Using MySQL Proxy (man-in-the-middle software), it collects all queries sent to the server and ranks them by execution time. You can also see the number of times each query was run and the number of rows typically returned. Arguably the best tool of them all, it also costs money, and it requires an additional software/hardware setup for the MySQL Proxy.

SHOW PROCESSLIST

Another option is the SHOW PROCESSLIST command. It shows a list of currently running processes (queries), somewhat similar to the Unix ps command or Windows Task Manager. For every query, you can see how long it has run, its user, IP and state. If you run this command just once, you get a snapshot of what queries the server is busy doing right now. Aggregating multiple snapshots over time can give you a good view of what queries the server is typically busy performing. An advantage of SHOW PROCESSLIST is that it works on all MySQL versions without modifications and requires no additional software / hardware. On the downside, you don't necessarily catch all queries, just the ones that happen to be running during each snapshot. If you collect many snapshots over a long time, this is less of a problem.
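The aggregation idea can be sketched in a few lines of Python (illustrative only; in practice each snapshot would come from running SHOW PROCESSLIST against the server, say once per second):

```python
from collections import Counter

def aggregate(samples):
    """Count how often each query shows up across processlist snapshots.

    samples: a list of snapshots; each snapshot is a list of Info strings
    (the query column of SHOW PROCESSLIST), with None for idle threads.
    """
    counts = Counter()
    for snapshot in samples:
        for info in snapshot:
            if info:                      # skip idle / sleeping threads
                counts[info] += 1
    return counts.most_common()

# Three hypothetical snapshots taken one second apart:
samples = [
    ["SELECT * FROM bid WHERE item_id = 1", None],
    ["SELECT * FROM bid WHERE item_id = 1", "SHOW PROCESSLIST"],
    [None, "SELECT * FROM bid WHERE item_id = 1"],
]
print(aggregate(samples))
# The query seen in all three snapshots ranks first.
```

Queries caught in many snapshots are either slow or frequent — exactly the ones worth looking at.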

Issues

You might not have the option to go for MySQL Enterprise, or to install MySQL Proxy in front of your database servers. Or you might not be able to switch to the MySQL server versions containing Google's patches or SHOW PROFILES. Another problem is that even if you do get profiling information, it is typically presented in raw text form and might not be easy to browse through. Good tools to visualize the profiling information and navigate through the data are just as important as getting the information in the first place.

A new profiling tool

(Screenshot: Top Tables)
We're currently in the process of developing a tool based on the SHOW PROCESSLIST technique above. The tool, called Jet Profiler for MySQL, collects process list information and stores it in an internal database where it is analyzed, ranked and presented. After normalizing the queries, top lists are created of:

  • most frequent queries
  • most frequent users
  • most frequent tables
  • most frequent schemas

By looking at the state of the processes, additional information can be extracted, such as lock-prone queries (if you're using MyISAM tables), or queries which create a lot of temp tables on disk. The tool is a client desktop app so no server changes are required. It works on all MySQL versions (3.23 - 6.0).
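The normalization step that makes these top lists possible can be sketched roughly like this (illustrative Python; Jet Profiler's actual normalizer is of course more involved):

```python
import re

def normalize(query):
    """Collapse literals so structurally identical queries group together."""
    q = re.sub(r"'[^']*'", "?", query)   # string literals -> ?
    q = re.sub(r"\b\d+\b", "?", q)       # numeric literals -> ?
    return re.sub(r"\s+", " ", q).strip()

print(normalize("SELECT * FROM bid WHERE item_id = 42"))
print(normalize("SELECT * FROM bid WHERE item_id = 17"))
# Both normalize to: SELECT * FROM bid WHERE item_id = ?
```

After normalization, counting identical strings is enough to build the top lists.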

Better visualization and usability

To provide good visualization, the information is presented in a line chart over time. You can zoom in on spikes and other interesting time intervals, and see the top queries / users / etc for that particular time frame. Every top list is accompanied by a pie chart to make it easier to compare the impact of different items. An experimental EXPLAIN visualization feature is underway, helping you understand the queries better.

Beta testers are welcome

We still have lots of adjustments and features to go before a public release, but we are looking for beta testers. If you are interested, click here.

More information

Best way to visualize EXPLAIN?

Posted November 5th by Bjorn, Jet Profiler team
Interpreting the output of the MySQL EXPLAIN command can be tricky. Of all the information you get, some of the most important signals are:

  • Full table scans
  • High join size product
  • Using filesorts
  • Using temporary tables


This can be hard to see in the raw output. Example query:

EXPLAIN SELECT b.item_id, b.bidder_id, u.username, MAX(price) AS price FROM bid b JOIN user u ON (b.bidder_id = u.user_id) WHERE b.item_id = '1' GROUP BY b.bidder_id ORDER BY price DESC

The explain outputs:

+----+-------------+-------+--------+---------------+---------+---------+-------------------------+-------+----------------------------------------------+
| id | select_type | table | type   | possible_keys | key     | key_len | ref                     | rows  | Extra                                        |
+----+-------------+-------+--------+---------------+---------+---------+-------------------------+-------+----------------------------------------------+
|  1 | SIMPLE      | b     | ALL    | null          | null    | 0       | null                    | 22660 | Using where; Using temporary; Using filesort |
|  1 | SIMPLE      | u     | eq_ref | PRIMARY       | PRIMARY | 4       | heavybidder.b.bidder_id |     1 |                                              |
+----+-------------+-------+--------+---------------+---------+---------+-------------------------+-------+----------------------------------------------+
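One useful number to derive from this output is the join size product: multiply the rows estimates of the tables in the plan. A quick sketch using the numbers above:

```python
# rows estimates from the EXPLAIN output: table b (full scan), table u (eq_ref)
rows_estimates = [22660, 1]

join_size = 1
for r in rows_estimates:
    join_size *= r
print(join_size)  # 22660 row combinations examined in the worst case
```

Here the eq_ref lookup keeps the product down; a second full scan would multiply it by thousands instead of by 1.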

We've been experimenting with how to visualize the output of the EXPLAIN command: using boxes for tables, calculating the join size product and highlighting important information. What do you think of the result so far?

Screenshot: Explain visualizer

Ideas, comments?