I18n and Devise combination gotcha

Using the Rails Internationalization (I18n) API

The Ruby I18n gem provides an easy-to-use and extensible framework for adding multiple-language support to your Rails application. Setting up the I18n gem is quite straightforward.

The gem has shipped with Ruby on Rails since Rails 2.2, so there is no need to include it in the Gemfile. The Rails guide gives good instructions on how to set it up, and they work pretty well.
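
The basic configuration amounts to a couple of lines (a minimal sketch; the en and fr locales are simply the ones that appear in the example below):

#config/application.rb
config.i18n.available_locales = [:en, :fr]
config.i18n.default_locale = :en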

There is, however, one hiccup I encountered using I18n together with the Devise gem. I am not sure if this is a bug or a configuration issue with Devise. The issue is this: when you sign in successfully at the login page, you get redirected to the home page using the last locale.

How to reproduce the issue:

While logged in, you change the language to a different one (for example, fr) and then log out. The login page now shows the French version. You then select another language, such as English. This changes the locale to en as expected, and the login page is now the English version. However, after a successful login, Devise redirects to the root page using the previous locale, which is fr. In this case you get redirected to /fr.

After studying the Devise code, I found that by default Devise first tries to find a valid resource_return_to key in the session, then falls back to the resource's signed-in root path, and otherwise uses the root_path, as the after-sign-in path.

Using logger.debug, I could see that when I changed the locale on the login page, even though the locale did change, the session key resource_return_to continued to point to the last signed_in_root_path. Since Devise checks the resource_return_to key first, it redirects to that path before it ever falls back on the root_path.
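
A quick way to see this for yourself (a sketch; for a User resource the session key is user_return_to, as in the fix below):

logger.debug "locale=#{I18n.locale}, user_return_to=#{session['user_return_to'].inspect}"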

The way to overcome this is to update the resource_return_to session key each time the locale is changed on the login page. We do this in the set_locale method in the ApplicationController:

def set_locale
  I18n.locale = params[:locale] || I18n.default_locale
  session["user_return_to"] = signed_in_root_path(User)
end

After this, Devise redirects to the root_path for the correct locale.
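
One thing the snippet assumes is that set_locale actually runs on every request. The wiring is the usual before_action (before_filter on Rails versions before 4):

#app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_action :set_locale
end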

I used this little hack to resolve the issue I encountered. I am not sure whether this occurs only in my Rails app environment or more generally. If someone out there is pulling their hair out trying to figure out the same problem, hopefully this will help you find an answer.

Exim and ActionMailer in Rails Application

For a mail server on Linux, one of the more popular setups is the combination of Postfix and Dovecot. Together they provide both incoming and outgoing mail services, and you can also set up a webmail interface for mail users. Sometimes, however, all you need is a simple mail server that provides just outgoing mail service, for example for mail sent out by your Rails app using ActionMailer. Even though Dovecot and Postfix would no doubt more than meet your needs in this situation, a lightweight mail server called Exim is more suitable. It is easy to set up and meets the simple need of providing an outgoing mail service.

In Rails, the default delivery method is sendmail. The environment configuration is set up that way by default and looks like this:

#config/environments/production.rb
config.action_mailer.delivery_method = :sendmail
# Defaults to:
# config.action_mailer.sendmail_settings = {
#   :location => '/usr/sbin/sendmail',
#   :arguments => '-i -t'
# }

Using Exim with Rails requires only a simple change to this default setting, since Exim behaves much like sendmail. It looks like this:

config.action_mailer.delivery_method = :sendmail
config.action_mailer.sendmail_settings = {
  :location => '/usr/sbin/sendmail',
  :arguments => '-i'
}

The difference is the "-t" option. Exim has a different usage for the -t option, so leaving it in as part of the sendmail defaults will cause Exim to fail to send email: mail sent via ActionMailer will never go out. To override the default setting, simply uncomment the default settings and remove the -t option.

I learned this from personal experience, where I had to scratch my head to figure out why, in spite of setting up everything correctly, I still did not receive the mail sent by ActionMailer.

For more information on Exim, visit the Exim website.

Shrinking the MySQL ibdata1 file

I have an AWS EC2 instance which hosts a production Rails app together with its MySQL database. I also have another instance running a staging server with an almost identical environment. With Capistrano in place, I had been successfully deploying to both environments with the same recipe, using the multistage feature. Then one day I tried deploying to the production server and encountered an error that said:

Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

The deployment was aborted. That was unusual because I had been deploying with the same script and the same public key for both staging and production without any prior issues. Furthermore, the staging deployment still worked without a hitch. So what could be the problem?

I logged into the production server to make sure the public keys were appropriately available. I wanted to see what was in one of the files, so I ran the cat command on it and got an error message:

cat authorized
-bash: cannot create temp file for here-document: No space left on device

Ah! That gave me a definite clue: "No space left on device". I ran df -h to see how much disk space was still free. Lo and behold, it was already at almost 100% usage. How could that be? Both my staging and production servers have the same setup and disk size, and the databases hold the same data.

More investigation was needed to find where the disk space had gone. First I cd'd to the root directory and ran du -h --max-depth=1. It told me that /var was using a lot of space. Running the same command in /var showed that /var/lib was the culprit, and eventually I found that /var/lib/mysql was using 90%+ of the disk space.

Listing the mysql directory showed ibdata1 using disk space big time. Obviously there was a big question mark over why this file was so large compared with the staging server, which holds the same data. The only possible explanation I could think of was that while I was copying a table to the production MySQL database, the copy aborted after running out of space. My guess is that ibdata1 became filled with that data even though the new table never appeared in the database.

So, how do I shrink or reset the ibdata1 file? There is no quick fix to shrink ibdata1, but it is still possible. There is a good writeup explaining this at http://dba.stackexchange.com/questions/16747/mysql-clean-ibdata1 which describes step by step how to "reset" the file. In summary:

STEP 01) MySQLDump all databases into a SQL text file (call it SQLData.sql)
STEP 02) Drop all databases (except mysql schema)
STEP 03) service mysql stop
STEP 04) Add the following lines to /etc/mysql/my.cnf

[mysqld]
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=4G

(Sidenote: whatever you set innodb_buffer_pool_size to, make sure innodb_log_file_size is 25% of innodb_buffer_pool_size.)

STEP 05) rm -f /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1

At this point, there should only be the mysql schema in /var/lib/mysql

STEP 06) service mysql start

This will recreate ibdata1 at 10MB, ib_logfile0 and ib_logfile1 at 1G each

STEP 07) Reload SQLData.sql into mysql

I followed the steps except for step 4, since I am not familiar with how those settings would affect my database setup, and at least for now I want to keep it the same as the staging server. My ibdata1 file shrank significantly, and I recovered my disk space.

Subsequent deployments to the production server now work without a hitch. Sometimes the error message we see on the screen is only a symptom of a different problem. In my case, what appeared to be a public key access rights issue turned out to be merely running out of disk space.

Capistrano Deployment Hiccup

I have been using Capistrano for deployment for a few years. In general, when it is properly set up, one usually has no issues: just run a simple one-line command and watch it do its magic. I use it for Rails deployment, but you can use it for other deployments as well.

On this day I had an unusual situation where I had to restart my AWS EC2 instance. I was running out of disk space and needed to create a larger volume, which required stopping the instance before I could detach the old volume and attach a new one. This is a pretty straightforward procedure, except for one little inconvenience: whenever you stop an instance and restart it, you are given a new public IP address. I'm sure there is a way to get a fixed IP address, but I'm not on a plan that provides that option. Or maybe I just did not dig deep enough to find out how. You know how it is with busy programmers.

A new IP address for the server certainly threw things off with my Capistrano deployment script. That should be a simple fix: just update the IP address. As a side note, I should have used a domain name in place of the IP address; that way I would only need to update my DNS record and never touch the script. But for a number of reasons, which I will not dwell upon here, I used the IP address to specify the server to deploy to.

Upon running the deployment script, I encountered a deployment failure with the following error message:

SSHKit::Runner::ExecuteError: Exception while executing on host 54.173.xx.xxx: git exit status: 1
git stdout: Nothing written
git stderr: ssh: connect to host 54.210.yy.yyy port 22: Connection timed out
error: Could not fetch origin

54.173.xx.xxx is the new IP address of my server, while 54.210.yy.yyy was my old IP address. For hours I could not figure out where the old IP address was coming from; I went through all my cap scripts over and over without finding it. I googled and hunted to see if the IP might have been cached somewhere.

After hours of searching I finally found the culprit. It was in the repo config file on the server itself. In the application root directory you will find a repo folder containing a file called config. This file is part of Git's configuration, and it was still using the old IP.

I don't know the deployment flow in detail, but apparently during deployment Capistrano fetches from the origin, and it uses the server address in this config file on the server for the SSH connection. And of course, the connection timed out.
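
Once found, the fix is simply to point that cached repo at the new address. You can edit the config file by hand on the server, or script it; here is a hypothetical Capistrano 3 task (a sketch only, assuming the standard repo_path layout and that :repo_url has already been updated to the new address):

#lib/capistrano/tasks/fix_repo.rake (hypothetical)
namespace :deploy do
  desc 'Re-point the cached bare repo at the current :repo_url'
  task :update_repo_remote do
    on roles(:all) do
      within repo_path do
        execute :git, :remote, 'set-url', :origin, fetch(:repo_url)
      end
    end
  end
end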

Anyhow, I am noting it down here so that if anyone encounters the same problem, you will not have to spend hours trying to track it down.

Configuring Monit to watch over your processes

I use a couple of useful gems (actually more than just a couple) in my Rails app, namely Delayed_job and Thinking_Sphinx, and both of these gems run background jobs as daemons. One of the issues with running background daemons is having to make sure that they are always up and running. I use Thinking_Sphinx to enhance the search functionality over the data in the database, and if for whatever reason the process gets killed without my knowledge, it has an adverse effect on the results that come back from a search. In fact, the Rails app will throw an error if Thinking Sphinx (searchd) is not running when a user tries to search.

So, as useful as they are, these background workers need to be watched frequently to make sure they are still alive and kicking. You certainly do not want to be woken at 3am to be told that the web server is down or that certain emails are not going out, and lose your precious sleep. This is where Monit comes to the rescue. Monit is a program that also runs in the background and does a good job of watching over your various processes and services to make sure they are up and running. If it detects that a process is down (not running as it should), it will attempt to revive it, and if that fails for whatever reason, it will shoot you an email or message to inform you. There are many ways you can configure Monit to work for you.

As useful as it may be, configuring Monit for your environment is a different ball game. Troubleshooting problems with Monit is frustrating, as I found out for myself. It took me hours (almost a couple of days) of googling for answers to finally get the configuration working right for my environment.

Here are a few gotchas to be aware of if you have gathered enough courage to venture into this territory.

  1. You need to watch your $PATH.

By default Monit starts up with a plain "spartan" path:

/bin:/usr/bin:/sbin:/usr/sbin

Monit also does not define a $HOME environment variable. This means that if you are using bundler, it will not be able to locate your gems or your Ruby path, and soon you will be pulling your hair out trying to figure out why a perfectly written script isn't working. Monit is not vocal about the cause either. So make sure to include your $HOME environment variable and path in the script. The standard script example I followed religiously goes like this:

check process delayed_job with pidfile /var/www/my_app/shared/pids/delayed_job.pid
  start program = "/usr/bin/env RAILS_ENV=production /var/www/my_app/current/script/delayed_job start"
  stop program = "/usr/bin/env RAILS_ENV=production /var/www/my_app/current/script/delayed_job stop"

Unfortunately it took me hours to figure out that you need to include your environment PATH within the script to make this work. The final script that works looks like this:

check process delayed_job with pidfile /var/www/my_app/shared/pids/delayed_job.pid
  start program = "/usr/bin/env HOME=/home/deployer PATH=/usr/local/bin:/usr/bin:/bin:/home/deployer/.rbenv/shims:$PATH RAILS_ENV=production /var/www/my_app/current/script/delayed_job start"
  stop program = "/usr/bin/env HOME=/home/deployer PATH=/usr/local/bin:/usr/bin:/bin:/home/deployer/.rbenv/shims:$PATH RAILS_ENV=production /var/www/my_app/current/script/delayed_job stop"

The path /home/deployer/.rbenv/shims is where my Ruby binary resides, and it definitely needs to be specified since I am running a Ruby script here.

So if you are struggling with getting your monitrc script to work, make sure your PATH is included.

2. Which user account should I run Monit as?

I struggled to get this right, and not getting it right means you keep getting permission errors, whether you are trying to start a monit job or deploying with Capistrano.

The Monit daemon can be started by a regular user or by a privileged user such as root. Which user to choose depends on your app setup; in my case, the question was whether I would be starting and stopping monit during deployment. Since my deployment runs as a regular user called deployer, and I need to be able to start/stop monit as that user, I chose to start Monit as deployer to begin with. To do that I just needed to change the ownership of the monitrc file to the deployer account.

Why not just start/stop monit during deployment using sudo, while keeping monit started by the root account? That sounds reasonable, but as I found out later, there would still be permission issues involved. After hours of trials and testing, I came to the conclusion that the real issue is not whether to run Monit as deployer or as root, because in either case I still had permission issues. How so, you may ask.

I have three processes for Monit to watch: Delayed_job, Sphinx, and Nginx. Both Delayed_job and Sphinx are started as the deployer user, because I need to be able to do that during deployment. Nginx is started as root by default, I think, and I just kept it that way. So I have two sets of processes that need to be started with different user permissions. (There is probably a better way to do this, but it is the best I could come up with for the moment.)

If I start Monit with sudo, then when the time comes to restart Delayed_job during deployment, I get a permission error, because somewhere along the way Monit may have restarted Delayed_job (in the event that the process stopped for whatever reason). During deployment, when I try to stop the Delayed_job process, it gives a permission error because that process is now owned by a privileged user (root), and that stops my deployment in its tracks.

What if I start Monit as the regular deployer user? Then Monit will have problems restarting Nginx in the event it goes down, because Nginx needs to be started as a privileged user, and Monit is now running as a regular user. I tried using as uid root gid root, but that directive can only be used if Monit itself is started by a privileged user like root.

So either way I get permission issues. The solution I came up with, after hours of laborious trying, was to start each process within Monit as its respective user. Easier said than done.

The final solution I found was to use a wrapper script to start Nginx:

      1. First create a start script file /usr/local/bin/startNginxServer.sh containing the following:

        #!/bin/sh
        /etc/init.d/nginx start

      2. Similarly create a stop script file /usr/local/bin/stopNginxServer.sh containing:

        #!/bin/sh
        /etc/init.d/nginx stop

      3. Change execution permission:
        chmod a+x /usr/local/bin/startNginxServer.sh
        chmod a+x /usr/local/bin/stopNginxServer.sh

      4. In /etc/sudoers add the following line:
        deployer ALL=NOPASSWD: /usr/local/bin/startNginxServer.sh, /usr/local/bin/stopNginxServer.sh

      5. In the monitrc file, the entries to start and stop Nginx look like this (adapt to your own path environment; essentially you are starting/stopping Nginx with the startNginxServer.sh / stopNginxServer.sh scripts):

        check process nginx with pidfile /opt/nginx/logs/nginx.pid
        start program = "/usr/bin/sudo /usr/local/bin/startNginxServer.sh"
        stop program  = "/usr/bin/sudo /usr/local/bin/stopNginxServer.sh"

What this essentially does is allow Monit to run the Nginx scripts with root permission while the rest run as the deployer user.

Hopefully you will find this information useful, especially if you are struggling to get Monit up and running.

Gotcha to watch out for when using Delayed_job

In this particular project I was using the Delayed_job gem to process jobs in the background. After installing and testing it out, it worked great.

Then I noticed some strange behavior that took me more than an hour to track down. I was using another useful gem called letter_opener, by Ryan Bates. It is especially great for testing mailers when you do not want mail to actually go out but you want to see the output and know that the mailer is doing its job. Any mail you send opens in a browser with the content displayed, as though you were reading it in a mail client. That saves you the trouble of always having to check the mail in your mail client. Another great thing about letter_opener is that you can use fake email addresses without any problem, since the mail never actually goes out to a mail server.

Anyway, I have gone off on a tangent; back to my issue. After using letter_opener for a while, it was time for me to test the actual SMTP sending. That should be easy, in theory: all I needed to do was change the mailer configuration in config/environments/development.rb from:

config.action_mailer.delivery_method = :letter_opener

to

config.action_mailer.delivery_method = :smtp
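
Switching to :smtp assumes you also have smtp_settings configured; a typical block, with placeholder values, looks something like this:

#config/environments/development.rb (placeholder values)
config.action_mailer.smtp_settings = {
  :address => 'smtp.example.com',
  :port => 587,
  :user_name => 'username',
  :password => 'password',
  :authentication => :plain
}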

And Rails should use SMTP from then on, right? Well, it didn't behave as expected. It was still displaying the result with letter_opener. That was rather strange behavior. I thought restarting the web server might resolve the issue. Nope, same result. In the end I even uninstalled the letter_opener gem, and the Rails mailer was still trying to send to letter_opener, except now nothing happened at all; it was as if the mail went into never-never land. It just didn't make sense until I noticed that the mailer job was actually queuing in the delayed_job table, and there was an error message inside the delayed_job record: it was unable to find the letter_opener gem in the specified gem path.

All of a sudden a light turned on in my head. It must have something to do with Delayed_job caching the previous setting somewhere. I don't know how delayed_job works internally (no time at this point to check its code), but it is definitely caching something somewhere. So I did the next logical thing, which was to restart the delayed_job process running in the background. And voilà! It finally recognized the change I had made in development.rb.

One more thing to watch out for: I made a change in the mailer file, and the change did not take effect until I restarted delayed_job.

So the conclusion is: if you are using delayed_job, be aware that whenever you make changes to the configuration and things still do not work as you expect, try restarting the delayed_job process. You can do that with the command script/delayed_job restart. It will save you a lot of troubleshooting frustration.

Issue displaying the remote time zone during Capistrano deployment

I was working on a project hosted in a different time zone from where I am: the server is located in Asia, while I reside in a US time zone. As part of the Capistrano deployment process, I use deploy:web:disable to disable the site with a maintenance message while the deployment is running. What deploy:web:disable does is copy a maintenance.html file across to a specified location on the server. The web server has previously been configured to detect this file and, if found, serve it to users while the Rails site remains temporarily disabled during deployment.

This works great except for one small but significant detail. Before the maintenance.html file is copied over to the server, it is parsed and the current time is inserted into the text, to inform users of the time the server went down for maintenance. The issue was that the inserted time was always in the EDT time zone (where I am), even though my Rails app is configured for the server's actual location. The reason is simple: maintenance.html.erb was parsed on my local machine, so my local time was inserted. This can be confusing to users who see a time in EDT while they are in an Asian time zone.

In the Rails console, I could just do Time.zone.now and it gives me the actual remote time of the server, since my Rails app is already configured for that time zone. So I inserted the same code into the maintenance template: <%= Time.zone.now.strftime("%H %M") %>. However, during deployment I got an error message about an unrecognized zone method for Time. After hours of testing and googling, I came to understand the cause: the code works in the Rails console because the Rails environment is loaded there, but that is not the case during deployment. Apparently Rake does not load the Rails environment and runs in a plain Ruby environment.

To overcome this, I had to install the tzinfo gem: include it in the Gemfile as part of the development group, run bundle install, and then require it in the deploy.rb file. As a quick sketch of those two pieces:
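
#Gemfile -- tzinfo only needs to be available where Capistrano runs
group :development do
  gem 'tzinfo'
end

#config/deploy.rb
require 'tzinfo'

Finally, in my maintenance.html.erb template I include the following lines to display the local time: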

<% tz = TZInfo::Timezone.get('Asia/City_Name') %>
<%= tz.now.strftime("%b %d, %Y - %H:%M") %>

Problem solved. The time displayed in the maintenance.html file is now the remote server time, which is more meaningful to the local users.

Issue running Rbenv-provided Rubies and Gems from within a Sudo session

I had the task of setting up a production server on a VPS running Ubuntu 12.04 LTS. There are quite a number of articles on how to go about doing it, thankfully all with detailed instructions. The frustration comes when you follow the step-by-step instructions exactly and still end up with an error message that makes no sense. I understand this is probably due to differences in OS environment or even OS version. Nevertheless, it is a tedious process going back and forth to check my steps to make sure I have not missed anything along the way.

In summary, the process involves installing rbenv first, followed by Passenger and then Nginx. This article gives step-by-step instructions for this. They are quite straightforward, except that they threw up a persistent message that did not make much sense at the time.

After installing rbenv successfully and verifying that it was working, the next step was to run the following commands to install Passenger and Nginx:

      sudo gem install passenger
      sudo passenger-install-nginx-module

These are the same steps outlined on the Phusion Passenger site.

For some reason, it threw the following error message:

sudo: gem: command not found

I was certain that I had rbenv installed and could access the gem command as a regular user, but the sudo command apparently did not recognize the path. I did a lot of googling before I came upon this sudo behavior, which apparently is not an issue at all but a security measure that is part of some OSes: sudo replaces the caller's $PATH environment variable with a predefined one that is considered safe.

The reason this does not work out-of-the-box is that most operating systems distribute sudo compiled with the secure_path option enabled. This throws away the caller’s $PATH environment variable and replaces it with a predefined list of search paths that are to be considered safe.

You can read more about this in this article by Dan Carley.

So the sudo command does not see the rbenv path and hence complains that RubyGems is not installed. I found it surprising that this issue was not mentioned in the many articles on setting up an Ubuntu VPS with rbenv, Passenger and Nginx. Again, I assume that may be because they were using a different version of Ubuntu that does not have this sudo security 'feature' enabled.

Anyhow, the good news is that Dan Carley not only highlighted the issue but also created a plugin called rbenv-sudo that allows you to run rbenv-provided Rubies and Gems from within a sudo session. Kudos to Dan for this. After installing the plugin I was able to complete the installation of Passenger and Nginx without further problems.

Send email using different Postmark servers within an application

In one of the Rails 2 web applications I was working on, we use Postmark, an email delivery service for web apps, to handle our email delivery. The Rails gem used in this case is postmark-rails, which is a drop-in plugin for ActionMailer to send emails via Postmark.

The postmark-rails gem works great when you have a single default server for your application. However, when you have multiple Postmark servers and want to selectively choose which of your 'servers' to use for different email categories, there is no easy way to do that, at the time of this writing. I spent hours looking for a solution on Google with no success.

For a single server, you set the api_key in the environment.rb file. Theoretically, to use a different server, one just needs to change the api_key before sending the email. With some testing, I found that could be done as follows:

Postmark.api_key = "new_api_key"

However, it didn't quite work the way I thought it would. If this is the very first attempt to send an email, setting a new api_key works just fine. But subsequent changes to the api_key have no effect: the api_key used for the first email is used every time.

I checked to make sure that the Postmark.api_key attribute did change as I intended, and it did have the new value. It seemed like the API key was cached somewhere, so that subsequent changes had no effect.

After digging through the gem source code I finally found the reason. According to the Postmark documentation, in order to authenticate yourself to the Postmark service, you need to send the correct HTTP header with the API key of your server. That header is:

X-Postmark-Server-Token: your-api-key-here

The postmark-rails gem takes care of that for you by using the value you set in the environment.rb file. The way it does that is with this line of code:

@headers ||= HEADERS.merge({ "X-Postmark-Server-Token" => Postmark.api_key.to_s })

(which is found in postmark-gem/lib/postmark/http_client.rb)

So the first time you change the api_key using Postmark.api_key = "new_api_key", it works fine, because @headers at that point is nil; the gem merges the api_key into the HEADERS constant and assigns the result to the @headers instance variable.

Changing the api_key after that no longer works, because the @headers instance variable has already been set and will always be returned as the header.

So the workaround is to set the @headers instance variable to nil before changing the api_key value. I created a simple method to switch servers:

def switch_postmark_server(api_key)
  Postmark::HttpClient.instance_variable_set(:@headers, nil)
  Postmark.api_key = api_key
end

You call this method just before you execute the mailer delivery. If you generally use a default server, you then switch back to the default api_key after the mailer delivery.
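
In use it looks something like this (a sketch; the mailer and key names are made up for illustration, and the delivery call follows the Rails 2 style):

switch_postmark_server(BILLING_API_KEY)  # hypothetical key for this email category
InvoiceMailer.deliver_receipt(user)      # hypothetical Rails 2 style mailer call
switch_postmark_server(DEFAULT_API_KEY)  # back to the default server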

There is probably a better way to go about it; in any case, it is a quick hack that accomplishes what I needed.

Chick-fil-A controversy…

The more I read about this controversy in the news, the more I think how absurd the reactions are from a rational point of view. And reading the comments on the various blogs made me cringe; they reveal the shallowness of thought among the general readership: calls to boycott Chick-fil-A, derogatory comments about the CEO, etc. Because of this, the controversy continues to be blown out of proportion from what it really was about. What exactly did CEO Dan Cathy say in the interview in the Baptist Press? If you have not read that interview, I would highly encourage you to read the whole thing to get the context of his comments and better understand the issue. I am sure any clear-thinking adult would agree that to pass judgment without first understanding an issue is immature and irresponsible.

First, consider his background, his upbringing, and his religious beliefs and convictions. It should not be a surprise to hear him make a statement like this:

“We are very much supportive of the family—the biblical definition of the family unit,” he said. “We are a family-owned business, a family-led business, and we are married to our first wives.…We want to do anything we possibly can to strengthen families.”

Let's reason for a minute and see what the issue is that sparked the controversy. Did Dan Cathy call for a boycott of gay groups? Did he say that Chick-fil-A will not serve gays? Or did he say anything about not hiring gays? I did not see any of those expressed in company policy. And for the record, the words "gay" and "lesbian" are not even mentioned anywhere in that interview article. Surprised? Do not take my word for it; go read it for yourself.

So how did he get branded a gay hater from this interview? All he was advocating was that he is "very much supportive of the family—the biblical definition of the family unit". Does that deserve to be classified as a hate crime? Just because the biblical definition of the family unit contradicts gay marriage, can we honestly call him a gay hater, as has been implied by the reactions in hundreds of articles on the internet? Does he not have the freedom to voice his own religious beliefs and convictions to a religious news outlet? Has the US Constitution changed recently without my being aware of it? If not, why the controversy?

I believe this has become a heated controversy for a simple reason: people make it a controversy. They flame the issue and keep putting fuel on the fire to make it hotter. The press took the bait and added more fuel. To make it even more absurd, politicians are jumping in to cash in on it. The mayor of Boston, for one, used the "bully pulpit" to discourage Chick-fil-A from coming to Boston, writing a scathing letter to Cathy urging him to keep his restaurant out of the city while Chick-fil-A was in the process of searching for a site there. Is that not an abuse of administrative power? Since when has it become a crime to express your personal beliefs about something (in this case, support for the traditional family unit) such that you get barred from a city? Boston Mayor Thomas Menino later issued a statement clarifying that "he would not deny the restaurant the necessary city permits to open in the city." A typical opportunist politician, from my point of view.

Obviously the traditional biblical definition of a family unit has been on a collision course with the LGBT community, and it is no surprise that the statement by the Chick-fil-A CEO is viewed as a threat to their agenda, simply because Chick-fil-A is a high-profile company in America. Hence the kind of reactions that followed, and the manner in which his words were distorted. Don't be a puppet that acts the way the puppeteer pulls the strings. Is calling for a boycott of this company based on unfounded claims the proper way to express your displeasure? Maybe for some readers it is; you are surely entitled to your belief system. By the way, at the time of this writing, the Amazon CEO has just pledged $2.5 million in support of the gay marriage referendum in Washington state. Should I now go out and boycott Amazon because I disagree with his stand and personal beliefs, even though I am a regular Amazon customer? Should I label him a "biblical traditional family unit" hater? No! That man has a right to his beliefs, and as long as it is not criminal in nature, he has every right to put his money where his mouth is. So why is this same right not applicable to the CEO of Chick-fil-A? Why instead do we label him an LGBT hater, as many have come forward to do?

Wars and riots have been known to start based solely on rumors, and the consequence is always that the innocent have to pay. We need to think for ourselves and get the facts before we rush into following the piper. Boycotting Chick-fil-A may be your choice of action. That is your right and your decision, but it has consequences. One of them is that innocent people will lose their jobs in this already struggling economy, which is what may happen if Chick-fil-A loses business and has to cut back on employees out of necessity.

If you have not already picked this up from reading this article, I hold the same view as Dan Cathy and I support his conviction. I will make it a point to eat at Chick-fil-A more often than I have in the past, if for no better reason than simply to play my part in saving the jobs of their employees caught in between. But most of all, I will do it in support of the free speech of an individual in the USA.