I have an AWS EC2 instance that hosts a production Rails app along with its MySQL database. I also have another instance running a staging server with an almost identical environment. With Capistrano in place, I was successfully deploying to both environments with the same recipe, using the multistage feature. Then one day, I tried deploying to the production server and encountered an error that said:

Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

The deployment was aborted. That was unusual, because I had been deploying with the same script and the same public key to both staging and production without any prior issue. Furthermore, the staging deployment still worked without a hitch. So what could be the problem here?

I logged into the production server to make sure the public keys were in place. I wanted to see what was in one of the files, so I ran cat on it and got an error message:

cat authorized
-bash: cannot create temp file for here-document: No space left on device

Ah! That gave me a definite clue: “No space left on device”. I ran df -h to see how much disk space was still free. Lo and behold, the disk was at almost 100% usage. How could that be? Both my staging and production servers have the same setup and disk size, and the databases hold the same data.

More investigation was needed to find out where the disk space had gone. First I cd'ed to the root directory and ran du -h --max-depth=1. This told me that /var was using a lot of space. Running the same command in /var showed that /var/lib was next, and eventually I found that /var/lib/mysql was using 90%+ of the disk space.
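The drill-down looked roughly like this (a sketch of the commands; the culprit directories will of course vary):

df -h                         # confirm the disk really is nearly full
cd /
du -h --max-depth=1           # /var stands out
du -h --max-depth=1 /var      # /var/lib stands out
du -h --max-depth=1 /var/lib  # /var/lib/mysql is the culprit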

Listing the mysql directory showed ibdata1 eating up disk space big time. Obviously there was a big question mark as to why this file was so large compared with the staging server holding the same data. The only explanation I could think of was that while I was copying a table to the production MySQL database, the copy aborted after running out of space. My guess is that ibdata1 kept that data even though the new table never showed up in the database.
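A long listing makes the offender obvious (a sketch; the exact sizes will vary):

ls -lh /var/lib/mysql
# ibdata1 dwarfs everything else in the directory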

So, how do I shrink or reset the ibdata1 file? InnoDB never gives that space back to the OS on its own, so there is no quick fix, but it is still possible. There's a good writeup explaining this at http://dba.stackexchange.com/questions/16747/mysql-clean-ibdata1 which describes step by step how to “reset” the file. In summary:

STEP 01) MySQLDump all databases into a SQL text file (call it SQLData.sql)
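For step 1, something along these lines should do (credentials are placeholders):

mysqldump -u root -p --all-databases --routines --triggers > SQLData.sql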
STEP 02) Drop all databases (except mysql schema)
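And for step 2, drop each application database (the database name here is a placeholder):

mysql -u root -p -e "DROP DATABASE myapp_production;"   # repeat for every database except the mysql schema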
STEP 03) service mysql stop
STEP 04) Add the following lines to /etc/mysql/my.cnf

[mysqld]
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=4G

(Sidenote: Whatever you set for innodb_buffer_pool_size, make sure innodb_log_file_size is 25% of innodb_buffer_pool_size.)

STEP 05) rm -f /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1

At this point, there should only be the mysql schema in /var/lib/mysql
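A quick check (on newer MySQL versions you may also see a performance_schema directory left behind):

ls /var/lib/mysql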

STEP 06) service mysql start

This will recreate ibdata1 at 10MB, and ib_logfile0 and ib_logfile1 at 1G each

STEP 07) Reload SQLData.sql into mysql
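Reloading is just the reverse of step 1 (credentials again are placeholders):

mysql -u root -p < SQLData.sql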

I followed the steps except for step 4, since I am not familiar with how those settings would affect my database setup, and at least for now I want to keep it the same as the staging server. My ibdata1 file shrank significantly, and I recovered my disk space.

Subsequent deployments to the production server now work without a hitch. Sometimes the error message we see on the screen is only a symptom of a different problem. In my case, what appeared to be a public key access rights issue turned out to be merely running out of disk space.