November 19, 2016
If you've chosen not to go with Heroku and instead deploy your Rails app onto a VPS you operate yourself, there's a relatively good chance you're doing it to save some money every month. A valid choice: costs add up, and if your app needs a few more dynos, your Heroku bill can rapidly climb well past the $20 a month it costs me to host this blog.
That choice brings a new set of problems, though. These small machines can be stretched thin for RAM at times. As your Passenger application runs, its memory footprint grows as more code and data is loaded into the Ruby VM, and it can grow quite aggressively if you happen to have a memory leak in your application. Worse still, the actual cause of a leak can be very tough to track down and fix, and in the meantime you'd probably like to keep running your application and taking requests, doing all the things it was made to do.
All is not lost. You can keep receiving requests and running your app while you figure out the problem, and stop your Passenger instances from devouring your free RAM in its entirety, even if the app is a little rough around the edges and carrying some memory bloat.
So we have a server running a Rails app under Passenger, and as it takes requests its memory footprint grows, with the potential to eat all the available RAM and stop the server from operating. We don't want this.
If you are running the Enterprise version of Passenger, you can set a memory limit for each instance, and if you are so inclined, that's an excellent choice. It's not the only way to take care of this, though. Another good way to keep your Passengers in check is to use Monit to keep an eye on them and restart them when required.
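For reference, here's a minimal sketch of the Enterprise approach, assuming Passenger Enterprise with the Nginx integration; the 150 MB value is just an example, mirroring the limit we'll give Monit below:

# Nginx config, Passenger Enterprise only:
# restart any application process that grows beyond 150 MB
passenger_memory_limit 150;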
Obviously, I am all about automation and repeatability, so we are going to install and manage Monit with our Ansible playbooks, which can get it running for you super easily.
In our Ansible playbook we can add a role for monitoring Passenger with Monit, some variables for it to use, and then a pretty simple main task file:
In our main playbook:
passenger_app_root_for_monit: /home/deploy/{{app_name}}/current/public
passenger_app_restart_file: /home/deploy/{{app_name}}/current/tmp/restart.txt
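For context, those variables live alongside the role in the playbook itself. Here's a rough sketch of the wiring, where the host group, vars file, and role name are my assumptions, since only the variables are shown above:

- hosts: webservers
  vars_files:
    - vars/passenger_monit.yml   # hypothetical file holding the two variables above
  roles:
    - passenger_monit            # hypothetical name for the role described below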
./tasks/main.yml
- name: Install Monit
  become: yes
  become_method: sudo
  apt: name=monit state=latest
  tags:
    - install_passenger_monit

- name: Copy the Passenger restart command file into place
  become: yes
  become_method: sudo
  template:
    src: restart_passenger.j2
    dest: /etc/monit/restart_passenger

- name: Make restart file executable
  become: yes
  become_method: sudo
  raw: chmod a+x /etc/monit/restart_passenger
  notify: restart monit

- name: Enable monit status
  become: yes
  become_method: sudo
  blockinfile:
    dest: /etc/monit/monitrc
    block: |
      set httpd port 2812
        use address localhost
        allow localhost
        allow admin:monit
  notify: restart monit

- name: Copy Passenger monit config into place
  become: yes
  become_method: sudo
  template:
    src: passenger.j2
    dest: /etc/monit/conf.d/passenger
  notify: restart monit
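Several of those tasks notify a restart monit handler that isn't shown here; a minimal sketch of what the role's handlers file could contain, matching the handler name the tasks notify:

./handlers/main.yml

- name: restart monit
  become: yes
  become_method: sudo
  service: name=monit state=restarted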
When we run this, start to finish, it'll install Monit and make sure we're running the latest version. It then copies over a file holding the command needed to restart our Passenger server when it exceeds our criteria, and makes that file executable:
./templates/restart_passenger.j2
#!/bin/bash
sudo -u deploy -H sh -c "touch {{ passenger_app_restart_file }}"
Touching that restart.txt file is Passenger's standard signal to restart the application on its next request. Next, the playbook opens up Monit's HTTP interface, a process that can be accessed locally to give us some status information. It then copies the config for monitoring the Passenger software into place; the 150 MB memory limit is arbitrary, I just chose a number:
./templates/passenger.j2
check process passenger matching "Passenger RubyApp: {{ passenger_app_root_for_monit }}"
  if memory usage > 150.0 MB then exec /etc/monit/restart_passenger
Finally, it restarts Monit, and we are ready to go.
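If you'd like to sanity-check the result by hand first, Monit can validate its control files before you rely on them. These commands aren't part of the playbook, just standard Monit usage:

# check the syntax of /etc/monit/monitrc and everything it includes
sudo monit -t

# pick up the new configuration without a full restart
sudo monit reload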
After that, if we're on the server and run the monit status command, we get back some information about the software we are monitoring.
$ sudo monit status
/etc/monit/monitrc:290: Include failed -- Success '/etc/monit/conf-enabled/*'
The Monit daemon 5.16 uptime: 0m

Process 'passenger'
  status                            Resource limit matched
  monitoring status                 Monitored
  pid                               15341
  parent pid                        1
  uid                               1000
  effective uid                     1000
  gid                               1000
  uptime                            2d 16h 59m
  threads                           5
  children                          0
  memory                            102.7 MB
  memory total                      102.7 MB
  memory percent                    5.1%
  memory percent total              5.1%
  cpu percent                       0.0%
  cpu percent total                 0.0%
  data collected                    Wed, 16 Nov 2016 10:16:06

System 'ubuntu.members.linode.com'
  status                            Running
  monitoring status                 Monitored
  load average                      0.01 [0.00]
  cpu                               0.0%us 0.0%sy 0.0%wa
  memory usage                      220.1 MB [11.0%]
  swap usage                        4 kB [0.0%]
  data collected                    Wed, 16 Nov 2016 10:16:06
If, as in the example above, the resource limit has been matched, our command is executed, the app restarts, and the memory consumed can drop back down. Restarting Passenger whenever the resource limit you've set is reached stops the application server from growing to a size that causes a problem.
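You don't have to wait for Monit to trip the limit to see this work. The passenger-memory-stats tool that ships with Passenger shows per-process memory use, and running the restart script by hand exercises the same path Monit uses:

# see how much memory each Passenger process is using right now
sudo passenger-memory-stats

# trigger a restart exactly the way Monit would
sudo /etc/monit/restart_passenger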
In the longer term, you will want to isolate the causes of any memory leaks, and fine-tune the number of Passenger instances you run so it's optimal for the server you're running on.
If the application is restarting too often, or that approach feels too aggressive, then reducing the number of Passenger instances, upgrading your server's capacity, or even moving to a setup with multiple application servers could well be a better solution.
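As a sketch of that first option, assuming you're on Passenger's Nginx integration, the number of instances is capped with passenger_max_pool_size in the http block:

# /etc/nginx/nginx.conf, inside the http block
# cap the total number of Passenger application processes
passenger_max_pool_size 2;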
You don't want to stop taking requests and doing business while you figure all this out, and you don't want a new feature to go rogue in a way you don't expect and grind your application to a halt. Let Monit keep an eye on things for you and stop them getting out of control.
Did this help you? Have any thoughts or feedback? Leave a comment, or send me an email; I'd love to hear from you.