SNAP on SmartOS

After more than a decade of use in the open-source web-development world, Linux, Apache, MySQL and PHP (LAMP) stacks remain a staple of the web-development environment, and show little sign of fading from general use.  These stacks have driven the development of countless startups and large-scale production projects alike, and while they definitely still have a place on the Internet at large, I've personally moved away from them for new development in several ways:

However, there are still several legacy PHP applications which I am obligated to support.  Fortunately, I can easily provide this support using SmartOS, Nginx, and PHP, in something I'm calling a SmartOS Nginx And PHP (SNAP) Stack.  Today we will be documenting how to set one of these up to be fully contained within a single SmartOS zone.

Note: Pretty much everything in here is optional; a few sections depend on others to work, but I'll try to make it obvious what needs what.

SmartOS Zone Configuration

We will be using the following zone configuration in the example used throughout this guide.

{
  "alias": "snap",
  "hostname": "snap",
  "brand": "joyent",
  "image_uuid": "088b97b0-e1a1-11e5-b895-9baa2086eb33",
  "max_physical_memory": 256,
  "cpu_cap": 100,
  "quota": 10,
  "nics": [{
    "nic_tag": "admin",
    "ips": [ "192.168.0.5/24", "addrconf" ],
    "gateway": "192.168.0.1"
  }],
  "resolvers": [ "8.8.8.8", "8.8.4.4" ]
}
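If you're following along, the configuration above can be fed directly to vmadm from the global zone.  The filename snap.json is just an arbitrary choice here, and the image UUID must already have been imported with imgadm:

```
# vmadm create -f snap.json
# zlogin $(vmadm lookup alias=snap)
```

Unless otherwise noted, the remaining commands in this guide are run from inside the zone.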

IP Addresses & Domain Names

We will be visiting this server with a web-browser to ensure that we've successfully configured our server at every step.  For this example, we will be using the IPv4 address of 192.168.0.5, the IPv6 address of fe80::700d:fff:fef4:9d9c, and a domain name of foo.bar with an A record pointing to our IPv4 address and an AAAA record pointing to our IPv6 address.

Remember to use your own IP addresses and domain names if following along with this guide.

Configure Nginx

Prerequisites: SmartOS Zone

First of all, install Nginx

# pkgin in nginx

I prefer to clear out the excess example nginx configuration in /opt/local/etc/nginx, as a leaner directory is much easier to read and debug if need be.  We'll start by removing the excess files.

# rm /opt/local/etc/nginx/{fastcgi.conf,fastcgi_params,koi-utf,koi-win,naxsi_core.rules,uwsgi_params,win-utf}

And now we'll simplify the main nginx configuration file:

/opt/local/etc/nginx/nginx.conf:

user www www;
worker_processes 1;

events { worker_connections 1024; }

http {
  include mime.types;
  default_type application/octet-stream;

  tcp_nopush on;
  sendfile on;
  gzip on;

  index index.html;

  server {
    listen 80 default_server;
    server_name localhost;

    root share/examples/nginx/html;
  }

  include vhosts/*.enabled;
}

Notice:  The above configuration enables sendfile, tcp_nopush and tcp_nodelay (on by default) at the same time.  This might seem like a bit of a mixed signal, but it isn't: tcp_nopush ensures that packets are full before being sent to the client, and on the last packet Nginx removes tcp_nopush, at which point tcp_nodelay forces the socket to send the final partial packet immediately, shaving up to 200ms off each request.  SmartOS should support TCP_CORK; the nginx source notes discuss this behavior.

This will enable a catch-all server (first server definition) that will grab any requests that don't map to a defined server_name.  We can now start nginx.

# svcadm enable nginx

Visit http://192.168.0.5/ with your favorite web browser to verify that it's serving the default "Welcome to nginx!" page.
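We'll be editing this configuration repeatedly from here on, so it's worth knowing that nginx can syntax-check its own configuration before you reload it.  The paths below assume the pkgsrc layout, where nginx reads /opt/local/etc/nginx/nginx.conf by default:

```
# nginx -t
nginx: the configuration file /opt/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /opt/local/etc/nginx/nginx.conf test is successful
```

If the test fails, fix the reported line before refreshing the service.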

PHP

Prerequisites: Nginx

Start by installing the PHP FastCGI Process Manager, which will also install PHP.

# pkgin in nginx php70-fpm

We will be configuring Nginx to communicate with php-fpm, which normally serves requests via localhost tcp/9000.  We will do this by creating a backend directory within the Nginx configuration directory that contains a file called php, which can be included in any virtual host to enable server-side processing of PHP.

/opt/local/etc/nginx/backend/php:

index index.html index.php;
fastcgi_index index.php;

location ~ \.php$ {
  try_files $uri =404;
  fastcgi_pass localhost:9000;

  fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;
  fastcgi_param  QUERY_STRING       $query_string;
  fastcgi_param  REQUEST_METHOD     $request_method;
  fastcgi_param  CONTENT_TYPE       $content_type;
  fastcgi_param  CONTENT_LENGTH     $content_length;

  fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
  fastcgi_param  REQUEST_URI        $request_uri;
  fastcgi_param  DOCUMENT_URI       $document_uri;
  fastcgi_param  DOCUMENT_ROOT      $document_root;
  fastcgi_param  SERVER_PROTOCOL    $server_protocol;
  fastcgi_param  REQUEST_SCHEME     $scheme;
  fastcgi_param  HTTPS              $https if_not_empty;

  fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
  fastcgi_param  SERVER_SOFTWARE    nginx/$nginx_version;

  fastcgi_param  REMOTE_ADDR        $remote_addr;
  fastcgi_param  REMOTE_PORT        $remote_port;
  fastcgi_param  SERVER_ADDR        $server_addr;
  fastcgi_param  SERVER_PORT        $server_port;
  fastcgi_param  SERVER_NAME        $server_name;

  fastcgi_param  REDIRECT_STATUS    200;
}

Next, we will add include backend/php; to our default server to verify that server-side PHP processing is indeed working.

/opt/local/etc/nginx/nginx.conf:

...
  server {
    listen 80 default_server;
    server_name localhost;

    root share/examples/nginx/html;
    include backend/php;
  }
...

Enable php-fpm and refresh nginx.

# svcadm enable php-fpm
# svcadm refresh nginx

Create a phpinfo.php file under our default vhost to test server-side PHP.

/opt/local/share/examples/nginx/html/phpinfo.php:

<?php
    phpinfo();
?>

Visit http://server_ip/phpinfo.php in your favorite web browser and verify that you're seeing a PHP Info page to confirm that everything's working correctly.  If it is, congratulations, you have successfully built a basic SNAP server.
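If you'd rather verify from a shell than from a browser, curl against the same URL works too.  This check assumes expose_php is enabled in php.ini (the default), which makes PHP add an X-Powered-By header to responses it renders:

```
# curl -sI http://192.168.0.5/phpinfo.php | grep -i '^X-Powered-By'
```

An X-Powered-By: PHP/... line in the output confirms that the request was rendered by php-fpm rather than served as a static file.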

You can either stop here and just use /opt/local/share/examples/nginx/html to host your web root, or you can continue with this guide.  Everything from here on out is optional.

PHP via UNIX Socket

Prerequisites: Nginx and PHP (installed on the same host)

If you're going to run both nginx and php-fpm on the same host, it really makes sense to use a UNIX socket for the communication between them.  This is slightly more secure than a TCP socket (nothing else on the network can reach it), matches the default configuration of other operating systems (namely Debian), and is fairly simple to set up.

First, we will modify the nginx PHP backend configuration to communicate via UNIX domain socket.  This is done by changing the fastcgi_pass configuration parameter to refer to a UNIX socket:

/opt/local/etc/nginx/backend/php:

location ~ \.php$ {
...
  fastcgi_pass unix:/var/run/php-fpm.www.socket;
...

Next, we reconfigure php-fpm to communicate via UNIX socket by editing its configuration file and changing the listen configuration parameter to refer to the same UNIX socket:

/opt/local/etc/php-fpm.d/www.conf:

listen = /var/run/php-fpm.www.socket

Refresh nginx and restart php-fpm and verify that phpinfo still renders properly.

# svcadm refresh nginx
# svcadm restart php-fpm
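If the page stops rendering after this change, the first things to check are that both daemons came back up and that they agree on the socket path (file names below assume the configuration above):

```
# svcs nginx php-fpm
# ls -l /var/run/php-fpm.www.socket
```

Both services should report online, and the socket should exist and be writable by the www user the nginx workers run as.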

Ruby on Rails

Prerequisites: Nginx

Since we're already going to all of the work to describe how to install and connect PHP to Nginx, we might as well do the same thing with Ruby on Rails.

Notice: The later part of this section about connecting Rails to Nginx is valid for ANY web application that speaks HTTP and wants to reverse-proxy through Nginx, not just Rails.

We'll start by installing ruby and its dependencies on the system.  GCC and gmake are required so that RubyGems can compile and build native extensions.

# pkgin in ruby gcc49 gmake

I prefer to use RubyGems to install rails locally for each user, as it reduces system cruft (at the expense of possibly installing duplicate libraries) and allows users to install updated libraries without needing superuser privileges.

If you haven't already set up dedicated ZFS datasets for each user, the following command block will perform the necessary actions.  This step is optional but tends to keep things clean and organized within the zone.

# mv /home /home_tmp
# zfs create -o mountpoint=/home zones/`sysinfo | json UUID`/data/home
# zfs create zones/`sysinfo | json UUID`/data/home/admin
# chown admin:staff /home/admin
# mv /home_tmp/admin/.??* /home/admin
# rmdir -p /home_tmp/admin
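You can confirm the new layout with zfs list.  The dataset names below assume the zone has a delegated data dataset (delegate_dataset set to true at creation time), which the zfs create commands above also rely on:

```
# zfs list -r -o name,mountpoint zones/$(sysinfo | json UUID)/data/home
```

Both home and home/admin should appear, with home mounted at /home.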

Now you can use the -z parameter with useradd to create dedicated datasets along with users.

# useradd -mz brian

Switch into this user to continue.

# su - brian

Locally installing ruby gems is easy with the --user-install parameter to gem.

$ gem install rails --user-install

To run gem executables conveniently, you will need to extend your PATH environment variable.  Just extend the (colon-separated) variable with one more entry:

~/.profile

PATH=...:~/.gem/ruby/2.3.0/bin

Exit and re-enter the shell to verify that your PATH is configured properly.  You should be able to run the rails command.  Do that to actually create your rails application:

$ rails new ~/sites/dev.example.com -B

The -B flag will prevent rails from attempting to run bundle install.  We use this flag so that we can install our gem dependencies to the local user directory, instead of the system directory:

$ cd ~/sites/dev.example.com
$ bundle install --path ~/.gem

Notice: this path could also be set to vendor/bundle for application-only use, but that would lead to a lot of potential duplication (on the plus side, it means your dependencies could be managed directly by git).

Let's start the server listening on all interfaces, just so we can feel good about ourselves.

$ rails s -b 0.0.0.0

Check it out with a web-browser.

The Rails version of "Hello World"

Awesome!

Next, let's set up an SMF manifest so we don't have to worry about ensuring it's always running!
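The original manifest isn't reproduced here, so treat the following as a sketch.  The service name (pkgsrc/rails), instance name, and the options property group are chosen to line up with the svccfg commands used later in this guide; the user, working directory, and default property values are assumptions you'll want to adjust for your environment.  Save it as rails-smf.xml:

```
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="manifest" name="rails">
  <service name="pkgsrc/rails" type="service" version="1">
    <instance name="example-dev" enabled="false">
      <!-- Run as the unprivileged user, from the app's own directory. -->
      <method_context working_directory="/home/brian/sites/dev.example.com">
        <method_credential user="brian" group="staff" />
        <method_environment>
          <envvar name="PATH" value="/opt/local/bin:/opt/local/sbin:/usr/bin:/usr/sbin" />
          <envvar name="HOME" value="/home/brian" />
        </method_environment>
      </method_context>
      <!-- %{options/...} is expanded by SMF from the property group below. -->
      <exec_method type="method" name="start"
          exec="bundle exec rails s -e %{options/environment} -p %{options/port}"
          timeout_seconds="60" />
      <exec_method type="method" name="stop" exec=":kill" timeout_seconds="60" />
      <property_group name="options" type="application">
        <propval name="environment" type="astring" value="production" />
        <propval name="port" type="astring" value="3001" />
      </property_group>
    </instance>
  </service>
</service_bundle>
```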

Configure the above SMF manifest to match your environment, save it as rails-smf.xml, then import and enable it.

# svccfg import rails-smf.xml
# svcadm enable rails:example-dev

Now we can connect it to Nginx!

Add the following either to a vhost server directive (recommended), or to the default server directive in the nginx config.  By specifying root, Nginx can serve static content without involving Rails at all.  The try_files directive lets us check for static files before forwarding requests to Rails.  The @rails location sets forwarding headers and proxies requests to our Rails app.

We can optionally include extended error handling pages that will present nice messaging instead of a cold Nginx HTTP 502 message.  We can set one up for planned maintenance too.

server {
  listen      80;
  server_name dev.example.com;
  root        /home/brian/sites/dev.example.com/public;

  # Nicely formed HTTP 502 error page. Will display if @rails is unavailable.
  error_page 502 /errors/502.html;

  # Place a file at /under_maintenace.html to block access to Rails.
  try_files $uri /under_maintenance.html @rails;
  #try_files $uri @rails;

  location @rails {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $server_name;
    proxy_pass http://127.0.0.1:3001;
  }
}

Refresh Nginx to continue.

# svcadm refresh nginx
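Until dev.example.com resolves publicly, you can exercise the new vhost from any machine by pinning the Host header, which is what nginx uses to pick the server block:

```
# curl -s -H 'Host: dev.example.com' http://192.168.0.5/ | head
```

You should see the beginning of the same Rails welcome page the browser showed.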

Rails via UNIX Socket

Prerequisites: Nginx and Rails (running on the same host)

Just as with PHP, Ruby on Rails can be connected via UNIX domain socket rather than via TCP socket.  It yields the same benefits (more secure, no TCP overhead or latency) and drawbacks (no cross-host communication) as it does with PHP.

While the rails server command, unfortunately, cannot be configured to listen on a UNIX domain socket (it throws an error), puma, now the default HTTP server used by Rails, can.  We will need to reconfigure SMF to call puma directly instead of calling rails server.

Note: This will affect all instances of rails running on this server.

First, change the exec method to call puma directly:

# svccfg -s pkgsrc/rails setprop \
start/exec = '"bundle exec puma -de %{options/environment} -b %{options/socket}"'

Next, get a list of all instances under rails (ignore :properties).

# svccfg -s pkgsrc/rails list
:properties
example-dev

Add the socket property and (optionally) remove the port property in each instance.

# svccfg -s pkgsrc/rails:example-dev setprop \
options/socket = astring: "unix://tmp/sockets/app.sock"
# svccfg -s pkgsrc/rails:example-dev delprop options/port

Finally, refresh each instance to push its configuration out to the admin layer of SMF.  This should automatically shut down the old command and restart with the new command.

# svccfg -s pkgsrc/rails:example-dev refresh

We also need to reference our UNIX domain socket from within Nginx's configuration.

Change this:

...
proxy_pass http://127.0.0.1:3001;
...

Into this:

...
proxy_pass http://unix:/home/brian/sites/dev.example.com/tmp/sockets/app.sock;
...

Refresh Nginx and verify it's working.

# svcadm refresh nginx

Listen on IPv6

Prerequisites: Nginx, IPv6

By default, Nginx will only listen on IPv4 sockets for incoming connections.  This can be changed by configuring Nginx to listen on any IPv6 address configured on the system, or on the IPv6 wildcard address [::].  In our example, we're going to update our default server to listen on IPv6.

/opt/local/etc/nginx/nginx.conf:

...
    listen [::] default_server ipv6only=off;
...

If, instead, you want Nginx to listen on IPv6 exclusively, you can remove the ipv6only=off option (it defaults to on).

...
    listen [::] default_server;
...

Notice: These listen directives need to be set for each vhost you want to have listening on IPv6 interfaces (see the section below).

When you're done, restart Nginx (a refresh won't work here, since we're listening on new sockets).

# svcadm restart nginx

Confirm that you can access the server via IPv6 by visiting http://[fe80::700d:fff:fef4:9d9c]/ with your favorite web browser (if your browser is local).  If you kept IPv4 enabled, verify that http://192.168.0.5/ still works.

Create HTTP vhosts

Prerequisites: Nginx

HTTP allows the client to tell the server which hostname it is attempting to access, allowing multiple host names to be handled by a single HTTP server.  This is known as virtual hosting, vhosting, or vhosts.  My main Nginx configuration file is set up to pull virtual host configurations from /opt/local/etc/nginx/vhosts/*.enabled, that is, from any file within that directory whose name ends in .enabled.  Let's create one of those now.

/opt/local/etc/nginx/vhosts/foo.bar.enabled:

server {
  listen 80;
  server_name www.foo.bar;
  return 302 http://foo.bar$request_uri;
}

server {
  listen 80;
  server_name foo.bar;

  root /home/foo/sites/foo.bar;
  include backend/php;
}

This vhost file specifies two virtual HTTP servers.  The first server definition responds to requests for www.foo.bar and redirects all requests to foo.bar with a 302 (Found) response code.  The second server definition responds to requests for foo.bar and maps them to files located at /home/foo/sites/foo.bar, additionally enabling server-side PHP rendering for files ending in .php (due to how backend/php was written.)

You should test with a 302 (Found) response code before upgrading to a 301 (Moved Permanently) response code, as browsers cache 301s aggressively, making them effectively permanent.

While using the $scheme variable is popular in Nginx configuration files when dealing with redirections, it does add an additional level of indirection, slowing down the process.  Since this server only listens on port 80, we can reasonably assume that $scheme would resolve to http and write it out literally.

There can be only one default_server option across the listen directives of all locally hosted vhosts.  Additionally, ipv6only=off should only be specified once, which is most easily done on the same server that default_server is specified on.

When you're done, refresh nginx.

# svcadm refresh nginx

Visit http://www.foo.bar/ with your favorite web browser.  You should be redirected to http://foo.bar/.
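curl will show the redirect without following it:

```
# curl -sI http://www.foo.bar/ | grep -E '^(HTTP|Location)'
HTTP/1.1 302 Moved Temporarily
Location: http://foo.bar/
```

Adding -L would make curl follow the redirect and fetch the final page.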

Default server redirection

Prerequisites: Nginx

When hosting a well-known website, it's usually best to ensure that your clients access the website through a uniform and consistent hostname.  In short, you want the following request paths:

  • Requests to your website's primary domain to be handled normally.
  • Requests to your website's well known secondary domains to be permanently redirected (HTTP 301) to your primary domain. (ie: http://www.foo.bar/ -> http://foo.bar/)
  • Requests to your website's IP address(es) to be temporarily redirected (HTTP 302) to your primary domain. (ie: http://1.2.3.4/ -> http://foo.bar/)

Fortunately, this is easy to do with the default_server we specified in Nginx's primary configuration file.  Instead of specifying a root path, or even a location, specify a return directive pointing at the primary domain you would like this nginx server to host:

/opt/local/etc/nginx/nginx.conf:

...
  server {
    listen 80 default_server;
    server_name localhost;
    return 302 http://foo.bar$request_uri;
  }
...

This is fully compatible with multiple vhosts: if nginx is presented with a hostname it does not recognize, the request falls through to this default server, and the client is simply temporarily redirected (HTTP 302) to the primary domain.

As always, refresh nginx to enable this new configuration.

# svcadm refresh nginx

Visit http://192.168.0.5/ with your favorite web browser.  You should be redirected to http://foo.bar/.  If you have IPv6 enabled, visit http://[fe80::700d:fff:fef4:9d9c]/.  You should also be redirected to http://foo.bar/.

Enable HTTPS

Prerequisites: Nginx, OpenSSL

HTTPS, or HTTP over Transport Layer Security (TLS), is the very common practice of encrypting HTTP traffic with the TLS wrapper protocol, protecting it from unauthorized access or modification.  This is usually accomplished by having the client and server establish a shared symmetric key using a secure key-exchange protocol (such as Diffie-Hellman), with an asymmetric cryptosystem typically layered on top for authentication.

I won't bore you with theory here as I've already done that plenty.  We will instead focus on the practical elements of configuring Nginx for handling HTTPS connections.

First, we will adjust some of the default Nginx behaviors surrounding HTTPS, which will affect all HTTPS virtual hosts handled by this instance of Nginx:

/opt/local/etc/nginx/nginx.conf:

http {
...
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
#  ssl_ciphers HIGH:!aNULL:!MD5:!3DES:!CAMELLIA:!AES128; # HTTP/1
  ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH; # HTTP/2
  ssl_prefer_server_ciphers on;
  ssl_session_cache shared:SSL:10m;
  ssl_dhparam private/dhparam.pem;

  ssl_stapling on;
  ssl_stapling_verify on;
...
}

In short, we're preventing nginx from using insecure SSL/TLS versions as well as insecure ciphers.  We're enabling a shared session cache (session tickets remain on by default).  We're also specifying a location for our dhparam.pem file, which will hold large Diffie-Hellman parameters for DHE ciphers.

The two cipher lists correspond to whether or not you intend to use HTTP/2.  HTTP/2 blacklists a large number of cipher suites and effectively requires AEAD (GCM) ciphers, and the HTTP/2-compatible list permits 128-bit suites, so the HTTP/1 set is the stricter of the two.  Use the HTTP/1 list unless you intend to enable HTTP/2 or have problems with certain browsers connecting.

The ssl_stapling parameter is a performance boon if your CA chain supports OCSP for revocation checking.  It shouldn't hurt to turn this on.

Additionally, we will need to specify a certificate and key for each HTTPS virtual server we host.  Below is our new virtual host configuration file for foo.bar:

/opt/local/etc/nginx/vhosts/foo.bar.enabled:

server {
  listen 80;
  server_name www.foo.bar foo.bar;
  return 302 https://foo.bar$request_uri;
}

server {
  listen 443 ssl;
#  listen [::]:443 ssl ipv6only=off;
  server_name foo.bar;

  ssl_certificate private/foo.bar/ecc.crt;
  ssl_certificate_key private/foo.bar/ecc.key;

  ssl_certificate private/foo.bar/rsa.crt;
  ssl_certificate_key private/foo.bar/rsa.key;

  root /home/foo/sites/foo.bar;
  include backend/php;
}

Breaking this down, we specify an HTTP server which responds to requests for both foo.bar and www.foo.bar, redirecting them (HTTP 302) to https://foo.bar$request_uri, the $request_uri variable representing the request URI passed by the client (ie: /.)  We also specify an HTTPS server which is listening on tcp/443 (optionally, via IPv6.)  This HTTPS server will serve content from the filesystem at /home/foo/sites/foo.bar and has server-side PHP rendering enabled.

The parameters ssl_certificate and ssl_certificate_key can be repeated multiple times to allow for multiple crypto-schemes to be used on a single vhost.

Before we continue, we will need to create a place to store our keys and other sensitive parameters.

# mkdir -p /opt/local/etc/nginx/private/foo.bar
# cd /opt/local/etc/nginx/private
# chmod 700 .

Generate some large Diffie-Hellman parameters.  Since generating 4096-bit DH parameters can take hours to complete, you may want to generate 2048-bit values instead.

# openssl dhparam -out dhparam.pem 2048

Generate your keys.

# openssl genrsa -out foo.bar/rsa.key 2048
# openssl ecparam -name prime256v1 -genkey -out foo.bar/ecc.key

... And CSRs for passing off to your CA.

# openssl req -new -key foo.bar/rsa.key -out foo.bar/rsa.csr \
-subj "/C=US/O=Foo Organization/CN=foo.bar"
# openssl req -new -key foo.bar/ecc.key -out foo.bar/ecc.csr \
-subj "/C=US/O=Foo Organization/CN=foo.bar"

Or you could simply self-sign the certificates.

# openssl req -new -x509 -days 365 -key foo.bar/rsa.key -out foo.bar/rsa.crt \
-subj "/C=US/O=Foo Organization/CN=foo.bar"
# openssl req -new -x509 -days 365 -key foo.bar/ecc.key -out foo.bar/ecc.crt \
-subj "/C=US/O=Foo Organization/CN=foo.bar"
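Before pointing nginx at the new key-pairs, it's worth confirming that each certificate actually matches its private key.  Comparing public-key digests does this for both RSA and ECC; the sketch below generates a throwaway pair in a temporary directory so it can be run anywhere, but the same two openssl pipelines work against the files under private/foo.bar:

```shell
# Create a throwaway RSA key and self-signed certificate, mirroring the
# commands used above, then compare the public key embedded in the
# certificate with the one derived from the private key.
tmp=$(mktemp -d)
openssl genrsa -out "$tmp/rsa.key" 2048 2>/dev/null
openssl req -new -x509 -days 1 -key "$tmp/rsa.key" -out "$tmp/rsa.crt" \
    -subj "/C=US/O=Foo Organization/CN=foo.bar"
crt_digest=$(openssl x509 -noout -pubkey -in "$tmp/rsa.crt" | openssl md5)
key_digest=$(openssl pkey -pubout -in "$tmp/rsa.key" | openssl md5)
# The two digests must be identical, or nginx will refuse the pair.
[ "$crt_digest" = "$key_digest" ] && echo "certificate matches key"
rm -r "$tmp"
```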

After your key-pairs are in place, refresh nginx.

# svcadm refresh nginx

Browse with your favorite web browser to http://foo.bar/ and http://www.foo.bar/.  You should be redirected to https://foo.bar/.  This self-signed certificate should be replaced by a proper certificate before being used in a production setting.

Multiple SSL Virtual Hosts

Prerequisites: SSL

Nginx supports SNI (Server Name Indication) out of the box, which solves the chicken-and-egg problem presented by HTTP being encapsulated within TLS.  Simply add another server definition with its own key and certificate, and it just works.

HTTP Strict Transport Security (HSTS)

Prerequisites: SSL

HSTS is a security policy mechanism which helps to protect websites from protocol downgrade attacks.  It allows servers to declare that a website should only be accessed using secure HTTPS connections, and never through an insecure protocol such as HTTP.

This is achieved by sending an HTTP response header named "Strict-Transport-Security" to the client over HTTPS.  This header specifies an interval of time during which the client should only access the server via HTTPS.

We can do this in nginx via the add_header directive, which should be added to the block of an HTTPS server definition, and repeated in any enclosed location block that contains other add_header directives (nginx does not inherit add_header directives from an outer block once a block declares its own):

server {
    # Better to put it here
    add_header Strict-Transport-Security "max-age=60; includeSubDomains" always;

    location / {
        # It should be repeated here if there are other add_header directives in this block
        add_header Strict-Transport-Security "max-age=60; includeSubDomains" always;
    }
}
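Once nginx has been refreshed, confirm that the header is actually being emitted; the -k flag is needed while you're still on a self-signed certificate:

```
# curl -skI https://foo.bar/ | grep -i '^Strict-Transport-Security'
Strict-Transport-Security: max-age=60; includeSubDomains
```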

Notice: Once you start using HSTS, the only possible way to revert is to wait for the max-age to expire.  Be sure you are able to use HTTPS before setting high max-age values or submitting your websites to Google's HSTS preload list.

The max-age is in seconds (so 60 is 1 minute), and should start out small for testing purposes.

Once you know nothing has broken horribly, you can gently step this up to 3600 (1 hour), 86400 (1 day), 604800 (7 days), 2592000 (30 days), and 31536000 (365 days).  Values of anywhere from 180 to 720 days are commonly considered reasonable for long-term production environments.

The includeSubDomains part is optional and applies your HTTPS only policy to all sub-domains.

In addition, Google maintains a preload list of websites that use HSTS and have submitted their names to https://hstspreload.appspot.com/.

If you would like to add your site to Google's preload list, I recommend reading their deployment recommendations.

Enable HTTP/2 (or SPDY)

Prerequisites: SSL

HTTP/2 is the first major upgrade to HTTP in a very long time.  Based on the work of Google's SPDY protocol, HTTP/2 decreases latency and improves page-loading speed by several methods, including compression of HTTP headers, a server push mechanism, and the multiplexing of multiple requests over a single TCP connection, which fixes the head-of-line blocking problem of HTTP/1.x pipelining.

While HTTP/2 is supported by all major browsers, no major browser supports HTTP/2 outside of a TLS connection, so HTTPS is a prerequisite in all practical situations.

HTTP/2 officially became available in nginx starting with version 1.9.5, which means it's available to us on SmartOS.  If you are using a version of nginx prior to 1.9.5, you can use the SPDY protocol, the experimental precursor to HTTP/2.  Simply replace http2 with spdy in all configuration directives.

To enable HTTP/2, simply append it to any SSL enabled listen directives.

/opt/local/etc/nginx/vhosts/foo.bar.enabled:

listen 443 ssl http2;

Or for IPv6 listeners.

listen [::]:443 ssl http2;

Refresh nginx to enable this new configuration.

# svcadm refresh nginx

Notice: It's not immediately apparent whether you're connecting via HTTP/2 or HTTP/1.  In both Chrome and Opera, under the Network tab of the developer tools, the columns can be extended to include Protocol, which shows 'http/1.1' or 'h2' for HTTP/1.1 and HTTP/2 respectively.  There is also a browser extension for Google Chrome, as well as online tools, that can confirm HTTP/2 connectivity for you.
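If your curl is recent enough (the %{http_version} write-out variable arrived in 7.50), it can also report the negotiated protocol from the command line:

```
# curl -skI --http2 -o /dev/null -w '%{http_version}\n' https://foo.bar/
```

A successful negotiation prints 2; a result of 1.1 means the connection fell back to HTTP/1.1.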

Testing HTTPS

I recommend testing your HTTPS configuration through Qualys' SSL Labs tool.

If you're the type that gets competitive over things like Qualys SSL scores, I recommend reading their Server Rating Guide which describes their methodology for scoring HTTPS servers.

Assuming you've followed everything thus far, the short guide for obtaining the highest possible score is:

  • Support only TLS 1.2. (ssl_protocols TLSv1.2)
  • Generate 4096-bit Diffie-Hellman parameters (for RSA) & Use the secp384r1 curve (for ECC).
  • Use only 256-bit AES ciphers. (The HTTP/1 ciphers as mentioned above)
  • An HSTS max-age of at least 180 days (15552000 seconds).

Please note that some of these points will severely impact your server's compatibility with common browsers.

Some more good documentation on security from Qualys.

Conclusion

This guide was intended to be simple, and as such, barely scratches the surface of what can be done with Nginx.  As always, if you find yourself making more than light use of Nginx, Rails and/or PHP, I recommend you thoroughly read their documentation, links available below.

If you will be extending your Nginx installation well beyond what I've outlined here and have very little time to read, I would recommend that you read, at minimum, the Nginx wiki article on common nginx pitfalls.  I even had to rewrite several of the examples in this guide to avoid them.