Fast, Scalable and Secure Web Hosting for Entrepreneurs
Wim Bervoets
* * * * *
This is a Leanpub book. Leanpub empowers authors and publishers with the Lean
Publishing process. Lean Publishing is the act of publishing an in-progress ebook using
lightweight tools and many iterations to get reader feedback, pivot until you have the right
book and build traction once you do.
* * * * *
© 2015 - 2016 Wim Bervoets
Table of Contents
Preface
Who Is This Book For?
Why Do You Need This Book?
Introduction
Choosing Your Website Hosting
Types Of Hosting
Shared Hosting
VPS Hosting
Dedicated Hosting
Cloud Hosting
Unmanaged vs Managed Hosting
Our Recommendation
Checklist For Choosing Your Optimal Webhost Provider
Performance Based Considerations
Benchmarking web hosts
VPS Virtualization methods considerations
IPv4 / IPv6 Support
Example ordering process
RamNode Ordering
Performance Baseline
Serverbear
Testing Ping Speed
Tuning KVM Virtualization Settings
VPS Control Panel Settings
Configuring the CPU Model Exposed to KVM Instances
Tuning Kernel Parameters
Improving Network speeds (TCP/IP settings)
Disabling TCP/IP Slow start after idle
Other Network and TCP/IP Settings
File Handle Settings
Setup the number of file handles and open files
Improving SSD Speeds
Kernel settings
Scheduler Settings
Reducing writes on your SSD drive
Enable SSD TRIM
Other kernel settings
Installing OpenSSL
Installing OpenSSL 1.0.2d
Upgrading OpenSSL to a Future Release
Check Intel AES Instructions Are Used By OpenSSL
Securing your Server
Installing CSF (ConfigServer Security and Firewall)
Configuring the ports to open
Configuring and Enabling CSF
Login Failure Daemon (Lfd)
Ordering a Domain Name For Your Website
Choosing a Domain Name and a Top Level Domain
Ordering a Domain With EuroDNS
Configuring a Name Server
Ordering the DNSMadeEasy DNS Service
Installing MariaDB 10, a MySQL Database Alternative
Download & Install MariaDB
Securing MariaDB
Starting and Stopping MariaDB
Upgrade MariaDB To a New Version
Tuning the MariaDB configuration
Kernel Parameters
Storage Engines
Using MySQLTuner
Enabling HugePages
System Variables
Enabling Logrotation for MariaDB
Diagnosing MySQL/MariaDB Startup Failures
Installing PHP
Which PHP Version?
What is an OpCode Cache?
Zend OpCache
Compile PHP From Source?
Install Dependencies for PHP Compilation
Downloading PHP
Compile and Install PHP
Compilation flags
Testing the PHP Install
Tuning php.ini settings
Set the System Timezone
Set maximum execution time
Set duration of realpath information cache
Set maximum size of an uploaded file.
Set maximum amount of memory a script is allowed to allocate
Set maximum size of HTTP POST data allowed
Don’t expose to the world that PHP is installed
Disable functions for security reasons
Disable X-PHP-Originating-Script header in mails sent by PHP
Sets the max nesting depth of HTTP input variables
Sets the max nesting depth of input variables in HTTP GET, POST (eg. $_GET, $_POST.. in PHP)
Enable Zend Opcache
Installing PHP-FPM
Configuring PHP-FPM
Configuring a PHP-FPM Pool
How to start PHP-FPM
nginx FPM Configuration
Viewing the PHP FPM Status Page
Viewing statistics from the Zend OpCode cache
Starting PHP-FPM Automatically At Bootup
Log File Management For PHP-FPM
Installing memcached
Comparison of caches
Downloading memcached And Extensions
Installing Libevent
memcached Server
Installing libmemcached
Installing igbinary
Install pecl/memcache extension for PHP
Install pecl/memcached extension for PHP
Testing The memcached Server Installation
Setup the memcached Service
Installing a Memcached GUI
Testing the scalability of Memcached server with twemperf and mcperf
Updating PHP To a New Version
Download the New PHP Version
Installing ImageMagick for PHP
Install ImageMagick
Installing PHPMyAdmin
Installing phpMyAdmin
Installing Java
Check the installed version Java
Downloading Java SE JDK 8 update 66
Installing Java SE JDK8 Update 66
Making Java SE JDK8 the default
Installing Jetty
Download Jetty 9.3.x
Creating a Jetty user
Jetty startup script
Configuring a Jetty Base directory
Overriding the directory Jetty monitors for your Java webapp
Adding the Jetty port to the CSF Firewall
Autostart Jetty at bootup
Enabling Large Page/HugePage support for Java / Jetty
Adding the jetty user to our hugepage group
Update memlock in /etc/security/limits.conf
Add UseLargePages parameter to the Jetty startup script
Forwarding requests from Nginx to our Jetty server
Improving Jetty Server performance
Installing Visual VM on your PC
Enabling Remote monitoring of the Jetty Server Java VM
Garbage Collection in Java
Optimising Java connections to the MariaDB database
Download and install the MariaDB JDBC4 driver
Download and install the Oracle MySQL JDBC driver
Download and install HikariCP - a database connection pool
Configuring Jetty logging and enabling logfile rotation
Using a CDN
Choosing a CDN
Analysing performance before and after enabling a CDN
Configuring a CDN service
Create a New Zone.
Using a cdn subdomain like cdn.mywebsite.com
Tuning the performance of your CDN
HTTPS everywhere
Do you need a secure website?
Buying a certificate for your site
Standard certificate
Wildcard certificate.
Public and private key length
Extended Validation certificates and the green bar in the browsers
Buying the certificate
Generate a Certificate Signing request
Ordering a certificate
Configuring nginx for SSL
Getting an A+ grade on SSLLabs.com
Enabling SSL on a CDN
Enabling SPDY or HTTP/2 on a CDN
Installing Wordpress
Downloading Wordpress
Enabling PHP In Your Nginx Server Configuration
Creating a Database For Your Wordpress Installation
Installing Wordpress
Enabling auto-updates for Wordpress (core, plugins and themes)
Installing Wordpress plugins
Installing Yoast SEO
Appendix: Resources
TCP IP
Ubuntu / Linux
KVM
SSD
OpenSSL
Nginx
PHP
MariaDB
Jetty
CDN
HTTPS
Fast, Scalable And Secure Web Hosting For Entrepreneurs
Copyright 2015-2016 Wim Bervoets - Revision 1.0 Published by Wim Bervoets
www.fastwebhostingsecrets.com
License Notes
This book is licensed for your personal enjoyment only. This book may not be re-sold or
given away to other people. If you would like to share this book with another person,
please purchase an additional copy for each person. Thank you for respecting the hard
work of this author.
Disclosure of Material Connection
Some of the links in the book are affiliate links. This means if you click on the link and
purchase the item, I will receive an affiliate commission. Regardless, I only recommend
products or services I use personally and believe will add value to my readers.
Preface
Thanks for buying the book Fast, Scalable And Secure Web Hosting For Entrepreneurs.
This book was born out of the need for a comprehensive and up-to-date overview of how
to configure your website hosting in the most optimal way, from start to finish. The book
is packed with practical, real-world knowledge.
We will also focus on tuning the OS for performance and scalability. Making your server
and website secure (via HTTPS) is a hot topic since Google announced it can positively
impact your Search Engine Rankings. We’ll describe how to enable HTTPS and secure
connections to your website.
We’ll also explain technologies like SPDY, HTTP/2 and CDN and how they can help you
to make your site faster for your visitors from all around the world.
Introduction
So why is a fast site so important?
Put simply: your users will always love a fast site. Actually, they'll take it for granted. If
your site becomes too slow, they'll quickly get frustrated by the slowness. And this
can have a lot of consequences:
Mobile users are even quicker to abandon your site when it loads slowly on their tablet or
smartphone on a 3G or 4G network. And we all know that mobile usage is increasing very
fast!
If you’re dependent on search traffic from Google or Bing, that’s another big reason to
make your site as fast as possible.
These days there is a lot of competition between websites. One of the factors Google uses to
rank websites is their site speed. As Google wants the best possible experience for their
users, they’ll give an edge to a faster site (all other things being equal). And higher
rankings lead to more traffic.
Increased website performance can also reduce your bandwidth costs as some of the
performance optimizations will reduce your bandwidth usage.
So how do you make your site fast?
There are a lot of factors influencing the speed of your site. The first decision (and an
important one) you’ll have to make is which web hosting company you want to use to host
your server / site. Let’s get started!
Choosing Your Website Hosting
Getting great website speeds starts with choosing the right website hosting. In this chapter
we will explain the different types of hosting and their pros and cons.
We will give you a checklist with performance indicators which will allow you to analyze
and compare different hosting packages.
Types Of Hosting
To run a website, you’ll need to install the necessary web server software on a server
machine. Web hosts generally offer different types of hosting. We’ll explain the
pros and cons of each.
Shared Hosting
Shared hosting is the cheapest kind of hosting. Shared hosting means that the web host has
set up a (physical) server and hosts a lot of different sites from different customers on
that server, often into the hundreds.
The cost of the server is thus spread to all customers with a shared hosting plan on that
server.
Most people start with this kind of hosting because it is cheap. There are some downsides
though:
When your site gets busy, your site could get slower (because there are a few hundred
sites from other people also waiting to be served)
Your web hosting company could force you to move to a more expensive server
because you’re eating up too many resources from the physical server - negatively
impacting the availability or performance of other sites which are also hosted on that
server.
You could be affected by security issues (eg. due to out of date server software)
created by the other websites you’re sharing the server with.
Your website performance could suffer because other sites misbehave
Most of the time, there is almost no possibility of configuring and tuning the
server yourself, which leaves you at the mercy of how well the web host has configured it.
For the above reasons, we will not further mention this type of hosting in this book, as it is
not a good choice to get the best possible stability and repeatable performance out of your
server.
VPS Hosting
VPS stands for Virtual Private Server. Virtualization is a technique that makes it possible
to create a number of virtual servers on one physical server. The big difference with shared
hosting is that the virtual servers are completely separated from each other:
Each virtual server can host its own operating system (which can be completely
different from the others)
Each virtual server gets an agreed upon slice of the physical server hardware (cpu,
disk, memory)
The main advantages are that other virtual servers cannot negatively affect the
performance of your machine, and that your installation is more secure.
The physical server typically runs a hypervisor which is tasked with creating, releasing,
and managing the resources of “guest” operating systems, or virtual machines.
Most web hosts will limit the number of virtual private servers running on one physical
server to be able to give you the advertised features of the VPS. (eg. 512MB RAM, 1Ghz
CPU and so on)
Well known VPS web host providers include:
Ramnode
MediaTemple
BlueHost
Dedicated Hosting
A dedicated server is a server which is completely under your control. You get the full
power of the hardware as you are the only one running website(s) on it.
The extra layer of virtualization (which has some performance impact) is not present here,
unless you decide to virtualize the server yourself into separate virtual servers. This means
you generally get the best possible performance from the hardware.
Cloud Hosting
Cloud servers are in general VPS servers, but with the following differences:
The size of your cloud server (amount of RAM, CPU and so on) can be enlarged or
downsized dynamically. Eg. during peak traffic times you could enlarge the instance
to be able to keep up with the load. When traffic diminishes again, you can downsize
the instance again.
Bigger instances will always cost more money than small instances. Being able to
change this dynamically can reduce your bills.
Cloud-based servers can even be moved to other hardware while the server keeps
running. They can also span multiple physical servers (this is called horizontal
scaling).
Cloud-based servers allow your site(s) to grow more easily to really high-traffic
websites.
Our Recommendation
To create a stable and performant website, we recommend to choose between VPS or
Cloud-based servers.
Price-wise VPS based servers are generally cheaper than Cloud-based servers. For most
website servers which don’t have extreme volumes of traffic, a good VPS will be able to
handle the load just fine.
If you hope to create the next Twitter or Pinterest, cloud servers will give you the ability
to manage the growth of your traffic more easily. For example a sudden, large traffic
increase could instantly overload your server. With a cloud server you’ll be able to ‘scale
your server’ and get additional computing power in minutes vs. creating from scratch a
new instance that could take a few hours.
Before we start with our performance based considerations we need to explain two key
concepts related to website performance: latency and bandwidth.
The following diagram shows how bandwidth and latency relate to each other (diagram
adapted from Ilya Grigorik's excellent book High Performance Browser Networking).
Bandwidth and Latency
Latency is the amount of time it takes for a request from a device (such as a computer,
phone or tablet) to arrive at the server. Latency is expressed in milliseconds.
You can see in the picture that the bits and bytes are routed over different networks, some
of which are yours (eg. your WIFI network), some belong to the Internet provider, and
some belong to the web hosting provider.
All of these different networks have a different maximum throughput or bandwidth. You
can compare it with the number of lanes on a highway vs an ordinary street.
In the example above the Cable infrastructure has the least bandwidth available. This can
be due to physical limitations or by arbitrary limits set by your ISP.
The latency is affected by the distance between the client and the server. If the two are
very far away, the propagation time will be bigger, as light travels at a constant speed over
fiber network links.
The latency is also affected by the available bandwidth. For example, if you put a big
movie (eg. 100MB) on the wire and the wire can transfer at most 10MB per second, it'll
take 10 seconds to put everything on the wire. There are other kinds of delays, but for
our discussion the above represents what we need to know.
Location Of The Site
Because the latency is affected by the distance between the physical location of your
visitor and the server it is very important to take a look at your target audience.
If your target audience is Europe then having a VPS on the west coast of the USA isn’t a
great idea.
Target Audience Of Your Site
To know the target audience of your site there are two options:
If you already have an established website, chances are high that you are already
analyzing your traffic via Google Analytics or similar tools. This makes it easy to
find out your top visitor locations.
If your site is new, you'll need to establish who your target customer will be and
where they are located (eg. for a Dutch site this could be Belgium and the Netherlands).
As you can see in the example above, our Top5 visitor locations are:
1. United States
2. Germany
3. India
4. United Kingdom
5. Brazil
The United States leads with a large margin in our example, that’s why we’ll drill down in
Google Analytics to a State level:
1. California
2. Texas
3. New York
4. Florida
5. Illinois
Geo Location of your visitors
In the United States report, California wins by a pretty big margin. As we have a lot of
visitors from both the West coast of the US and Europe, in our case the ideal location for
our server would be in a datacenter location on the US East Coast.
This combines the best response times for both Europe and the US.
Bandwidth Considerations
The amount of bandwidth used will depend on the popularity of your site and also the type
of content (hosting audio or video files for download will result in higher bandwidth
needs). Most hosts have a generous allocation of available bandwidth (100GB+ per
month).
The rate at which you might use your bandwidth will be determined by the network
performance and port speed of the host you’ve selected.
If you’re seeing speeds of ~10 MB/s then your host has a 100Mbit port speed, if you see
speeds of 100 MB/s then the host has a 1Gbit port.
We recommend choosing a host which uses Gigabit ports from/to the datacenter where
your server is located. With 100Mbit/s ports there is a much higher chance for congestion.
Tools For Testing Latency And Bandwidth
Ping latency checks from around the globe can be tested for free at
http://cloudmonitor.ca.com/en/ping.php
To use the tool you’ll need to specify the hostname or IP address of your server (in the
case you already have bought it). If you’re still researching the perfect host, then you’ll
need to find a test ping server from the web host company.
Google ‘<name of the web host> ping test url’ or ‘<name of the web host> looking glass’
to see if there is a public test URL. For example, RamNode hosting provider has Ping
server at IP address 23.226.235.3 for their Atlanta US based server. If there is no test url
available try to contact their pre sales support.
When executing the Ping latency check you can see the minimum round trip time, the
average and the maximum round trip time. With round trip time we mean the time it takes
for the request to reach the server at your web host and the time it takes to get back the
response to the client.
For client locations very near your server the round trip time will be very low (eg. 25ms),
while clients from India or Australia, which are probably much further away from your
server, will be in the 200ms range. For a client in Europe and a server on the east coast of
the USA, the round trip time cannot be much lower than 110ms (because of the speed of
light).
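If you prefer to test from your own machine rather than via a web tool, you can run the standard ping utility yourself; a minimal sketch (the IP address is RamNode's Atlanta test server mentioned above, and the timings shown are purely illustrative):
~~~~~~~~
$ ping -c 5 23.226.235.3
...
5 packets transmitted, 5 received, 0% packet loss
rtt min/avg/max/mdev = 24.812/26.034/31.187/2.401 ms
~~~~~~~~
The min/avg/max values on the last line are the round trip times discussed above.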
To test the maximum download speeds of your web host (bandwidth), search Google for
'<name of the web host> bandwidth test url' or '<name of the web host> looking glass'.
Eg. a RamNode server in Atlanta can be tested by going to http://lg.atl.ramnode.com/
You can also run a website check via Alertra
CPU Performance
When comparing web hosting plans, it’s important to compare the CPUs with the
following checks:
the number of cores available for your VPS. If you have more than one core, work can be
parallelized by a modern OS and server software
the clock speed of the CPU (512MHz to 3GHz)
the actual Intel or AMD CPU model used.
Later in this chapter we will show you how to run some CPU benchmarks.
I/O Performance
Writing and reading files from disk can quickly become a bottleneck when you have a lot
of simultaneous visitors to your site.
Here are a few examples where your site needs to read files from disk: serving static files
such as images, CSS and JavaScript, loading the PHP scripts of your site, and reading data
stored in the database.
Writing to the disk will happen less often, but will also be slower than the read speeds;
some examples are writing log files, storing file uploads, writing cache files and database
writes.
For Wordpress blogs, with performant Disk I/O you'll notice that your pages cache and
serve much faster.
Webhost servers are generally equipped with either:
hard drive based disks
SSD (Solid State Drives) based disks
A combination of both (SSD cached)
A solid state drive uses memory chips to store the data persistently, while hard drives
contain a disk and a drive motor to spin the disk.
Solid state drives have much better access times because there are no moving parts. They
also have much higher read and write speeds (measured in MB per second).
There are a few downsides though:
At this moment SSD drives are still more expensive than hard drive based disks, thus
your web host provider could also price these offerings to be more expensive.
The second downside is that SSD drive sizes are smaller than the biggest hard disk
sizes available (gigabytes vs terabytes). This could be a problem if you want to
store terabytes of video.
The third option, SSD Cached, tries to solve the second downside: the amount of available
disk space.
SSD Cached VPSs come with more space. The most frequently used data is served from
SSDs; while the less frequently accessed data is stored on HDDs. The whole process is
automated by the web host by using high performance hardware RAID cards.
Our recommendation is to go with an SSD based web hosting plan.
The following web host providers have plans with SSD disks:
Ramnode
MediaTemple
RAM
You’ll want 256MB or more for a WordPress based blog, preferably 512MB if you can
afford it.
When running a Java based server we recommend to have at least 1GB of RAM available.
Bandwidth
You should check how much bandwidth is included in your hosting package per month.
For every piece of data that gets transferred to and from your site (eg ssh connections,
html, images, videos and so on) it’ll be counted towards your bandwidth allowance per
month.
If not enough gigabytes or terabytes are included, you risk having to pay extra $$ at the
end of the month.
Some hosts will advertise unmetered or unlimited bandwidth. This is almost never the
case, they will have a fair use policy and could terminate your account if you go over the
fair use limits.
The rate at which you might use your bandwidth will be determined by the network
performance and port speed of the host you've selected. If you want a quick indication of
speed you can run the following from your Linux server:
~~~~~~~~
wget http://cachefly.cachefly.net/100mb.test
~~~~~~~~
If you’re seeing speeds of ~10 MB/s then your host has a 100Mbit port speed. If you see
speeds of 100 MB/s then the host has a 1Gbit port.
We recommend Gigabit links from/to the datacenter.
Obviously if you can get a host with 1Gbit port speeds then that’s ideal but you also want
an indication of what speeds you’ll get around the globe. This can easily be tested with the
Network performance benchmark in ServerBear. (see next section)
ServerBear.com is a great site where you can see the benchmark results of a lot of web
hosting plans by different companies. As the ServerBear benchmarks results are
comparable with each other and they are shared by real users of the web hosting plans; it’s
possible to get an indication of the performance you’ll receive.
There are a lot of options to filter the search results: eg. on price, type of hosting (shared,
VPS, cloud), SSD or HDD storage and so on.
ServerBear benchmarks CPU performance, IO performance and Bandwidth performance
(from various parts of the world).
You can also run the ServerBear benchmark on your machine (if you already have
hosting) and optionally share it with the world at ServerBear.com
Once the benchmark is complete ServerBear sends you a report that shows your current
plan in comparison to similar plans. For example if you’re on a VPS ServerBear might
show you similar VPS plans with better performance and some lower tier dedicated
servers.
Running the benchmarking script on your server is a completely secure process. It’s
launched on the command line (via a Secure Shell login to the server). The exact
command can be generated by going to ServerBear.com and clicking on the green button
“Benchmark Your Server”
The two most common virtualization methods offered by VPS providers are:
OpenVZ
KVM
OpenVZ
OpenVZ has a low performance overhead as it doesn’t provide full virtualization. Instead
OpenVZ runs containers within the host OS, using an OpenVZ enabled OS kernel.
As such your VPSs are not that isolated from other VPSs running on the same server. We
don't recommend using OpenVZ for the following reasons: the kernel is shared with the host,
so you cannot upgrade or tune it yourself, and the isolation between containers is weaker
than with full virtualization.
You'll see in later sections that a recent and tuned Linux kernel is necessary to get the best
possible performance and scalability.
KVM
Virtualization technology needs to perform many of the same tasks as operating systems,
such as managing processes and memory and supporting hardware. Because KVM is built
on top of Linux, it can take advantage of the strengths of the Linux kernel to benefit
virtualized environments. This approach lets KVM exploit Linux for core functions,
taking advantage of built-in Linux performance, scalability, and security. At the same
time, systemic feature, performance, and security improvements to Linux benefit
virtualized environments, giving KVM a significant feature velocity that other solutions
simply cannot match.
Virtual machines are Linux processes that can run either Linux or Microsoft Windows as a
guest operating system. As a result, KVM can effectively and efficiently run both
Microsoft Windows and Linux workloads in virtual machines, alongside native Linux
applications as required. QEMU is provided for I/O device emulation inside the virtual
machine.
Key advantages are:
Full isolation
Very good performance
Upgrading the kernel is possible
Tweaking the kernel is possible
We recommend using a web host where you can order a KVM based VPS.
IPv4 / IPv6 Support
IP stands for Internet Protocol. It is a standard which enables computers and servers to
communicate with each other over a network.
IP can be compared with the postal system. An IP address allows you to send a packet of
information to a destination IP address by dropping it in the system. There is no direct
connection between the source and receiver.
An IP packet is a small bit of information; it consists of a header and a payload (the actual
information you want to submit). The header contains routing information (eg. the source
IP address, the ultimate destination address and so on)
When you want to eg. download a file from a server, the file will be broken up into
multiple IP packets, because one packet is only small in size and the file won’t fit in it.
Although those IP packets share the same source and destination IP address, the route they
follow over the internet may differ. It’s also possible that the destination receives the
different packets in a different order than the order in which the source sent them.
TCP (Transmission Control Protocol) is a higher-level protocol. It establishes a connection
between a source and a destination and it is maintained until the source and destination
application have finished exchanging messages.
TCP provides flow control (how fast you can send and receive packets before the
source or destination is overwhelmed), handles retransmission of dropped packets and sorts
IP packets back into the correct order.
TCP is highly tunable for performance, which we will see in later chapters.
IPv4 or IP version 4 allows computers to be identified by an IP address such as the following:
192.168.1.100
In total there are around 4 billion IPv4 addresses available for the whole internet. While
that may sound like a lot, it is less than the number of people on the planet. As more and
more devices and people come online, IPv4 addresses are rapidly becoming scarce.
IPv6 or IP version 6 aims to solve this problem. In this addressing scheme there are 2^128
different IP addresses possible. Here is an example:
3ffe:6a88:85a3:0000:1319:8a2e:0370:7344
RamNode Ordering
You’ll need to surf to Ramnode via your browser and click on the View Plans and Pricing.
Then click on the KVM Full Virtualization option.
RamNode features
Massive: uses SSD Cached, which gives you more storage for a cheaper price
Standard: uses Intel E5 v1/v2 CPUs with a minimum per core speed of 2.3GHz. It
comes with less bandwidth than the Premium KVM SSDs. They are also not
available in Europe (NL - Netherlands).
Premium: uses Intel E3 v2/v3 CPUs with a minimum per core speed of 3.3GHz.
They are available in the Netherlands, New York, Atlanta and Seattle.
When clicking on the orange order link you'll be taken to the shopping cart where you can
configure a few settings:
billing cycle
the hostname for your server (you should choose a good name identifying the server,
eg. server.<mydomainname>.<com/co.uk/…>)
You can order an extra IPv4 address for 1.50 USD
You can order an extra Distributed Denial of Service attack filtered IPv4 address for
5 USD (more information here)
RamNode shopping cart
On the next page you can finish the ordering process by entering personal and payment
details.
After completing the ordering process you should receive a confirmation about the order.
After a few hours you’ll receive an email to notify the VPS has been setup by RamNode.
At this point you can access a web based control panel at RamNode (SolusVM) where you
can install the operating system on your VPS.
This is the first step to complete to get your website up and running. We’ll give you some
recommendations and a step by step guide on how to install the OS in the next chapter.
If you decide to use another host like MediaTemple, the process will be pretty similar.
Installing the Operating System
Most hosting providers allow you to install either a Windows Server OS or a Linux based
flavor.
In our guide we focus solely on Linux because of the following reasons:
Support for TCP Fast Open (TFO), a mechanism that aims to reduce the latency
penalty imposed on new TCP connections (Available since Linux kernel 3.7+)
the TCP Initial congestion window setting was increased (with broadband
connections the default setting limited performance) (Available since Linux kernel
2.6.39)
Default algorithm for recovering from packet loss changed to Proportional Rate
Reduction for TCP since Linux Kernel 3.2
Don't underestimate the impact of a kernel upgrade; here is what Ilya Grigorik from the
Web Fast Team at Google says about kernel upgrades:
(http://chimera.labs.oreilly.com/books/1230000000545/ch02.html#OPTIMIZING_TCP)
As a starting point, prior to tuning any specific values for each buffer and timeout
variable in TCP, of which there are dozens, you are much better off simply upgrading your
hosts to their latest system versions. TCP best practices and underlying algorithms that
govern its performance continue to evolve, and most of these changes are only available
in the latest kernels.
Our recommendation is to choose a stable version of a Linux distribution which comes
with the most recent kernel possible.
The latest stable version is now 15.10 which will be supported for the next nine months.
By that time a newer stable version will be out which you can upgrade to.
With Ubuntu a Long term support version is generally released every two years.
Our recommendation is to use the latest stable version. These releases are generally more
than stable enough, and will allow you to use newer kernels and functionality more
rapidly.
We'll be installing Ubuntu on our RamNode server. Looking at the current list of Linux
flavors available at RamNode we recommend installing Ubuntu 15.04 as it comes with
kernel 3.19.
Note that while we were writing this ebook, the latest version of Ubuntu was 15.04 which
we’ll show you how to install below.
In the next chapter we’ll also explain how to upgrade to Ubuntu 15.10.
Selecting the ISO template in the Ramnode Control Panel and Booting from
CDROM
To start the install we need to select the Ubuntu 15.04 ISO template from the Ramnode
control panel.
Go to https://vpscp.ramnode.com/control.php and login to your Control Panel.
Click on the CDRom tab and mount the latest Ubuntu x86_64 Server ISO file.
Click on the Settings tab and change the boot order to (1) CDROM (2) Hard Disk
Now reboot your server
We recommend using the HTML5 SSL VNC Viewer. If it doesn't work for some reason,
you can use the Java VNC client, provided you have installed Java on your machine.
All the VNC clients on RamNode control panel will automatically connect to the host
address and port you can see on the VNC tab.
Alternatively you can also install Tight VNC Viewer on Windows/Mac and connect to the
address/port and use the password provided.
Because you have rebooted the VPS and launched the CDROM with the Ubuntu install
you should now see the Ubuntu installer in the VNC window.
Select the location of your server, to correctly set the time zone and system locale of your
Linux installation. This may be different from the country you're living in if you are
hosting your site in another country.
Select your location
Now it is time to configure the hostname for your server. This name will later be used
when setting up your web server. You can choose eg. server.<mydomain>.com or
something similar.
Configure the network
Now Ubuntu asks you to set up a user account to log in with. This is an account for non-
administrative activities. When you need to do admin work, you will have to elevate your
privileges via a superuser command.
Enter your full name and on the second screen enter your first name (in lowercase) as the
username.
Setup users and passwords
Setup users and passwords
To gain a little extra performance we didn’t let Ubuntu encrypt our home directory. If you
will allow multiple users to access your server it’s recommended to enable this setting
though.
Encrypt home directory
The next step is partitioning the disk where you want to install Linux on. Of course you
should make sure there is no data on the disk which you want to keep as the partitioning
process will remove all the data.
Choose the Guided - use entire disk and set up LVM option.
Partitioning
On our system we only have one SSD, identified as a Virtio Block Device which we will
partition.
Virtio Block Device
Choose Yes to write the changes to disk and configure the LVM. (Logical Volume
Manager)
Configuring the Logical Volume Manager
The final step in the partitioning process: the installer shows that it'll create a root
partition and a swap partition on Virtual disk 1, the disk where we will install Ubuntu
Linux.
Partition disks
Now the package manager “apt” (used to install and update software packages on Ubuntu)
needs to be configured. You can leave the HTTP proxy line empty unless you’re using a
proxy server to connect to the Internet.
Setting internet proxy for APT package manager
Software selection
Software selection
Software selection
The SSH service is now installed and running. If you ever need to restart it you can use the
following command:
$ sudo service ssh restart
If you want to change the default settings, such as the default port to connect to, edit the
/etc/ssh/sshd_config file:
$ sudo nano /etc/ssh/sshd_config
The default port is 22; it can be useful to change this to another port number to enhance
the security of your server, as people will then have to guess where the SSH service is running.
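For example, to move SSH to another port you would change the Port directive and restart the service; a minimal sketch (the port number 2222 is just an illustration, and remember to open the new port in your firewall later):
~~~~~~~~
# In /etc/ssh/sshd_config, change the listening port:
Port 2222

# Then restart the SSH service so the change takes effect:
$ sudo service ssh restart
~~~~~~~~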
If you want you can let Putty remember the details via Save.
Putty configuration
The first time Putty will ask you to save a fingerprint of the server. This is useful because
if the IP address were ever hijacked and pointed to another server, the fingerprint would change.
Putty will then issue a warning so you don't accidentally connect to a compromised server.
To upload files securely we recommend FileZilla as it supports SFTP, the secure version
of File Transfer Protocol (FTP).
You can download and install FileZilla from https://filezilla-project.org/
Open the FileZilla SiteManager and enter the same server details as for SSH login with
Putty.
Make sure to use the SFTP protocol and login Type Normal to be able to login via
username and password.
For many, that’s as secure as it gets, and this is mostly because the /tmp directory is just
that: a directory, not its own filesystem.
By default the /tmp directory lives on the root partition (/ partition) and as such has the
same mount options defined.
In this section we will make the temp directory a bit more hackerproof. Because hackers
could try to place executable programs in the /tmp directory we’ll set some additional
restrictions.
We will do that by setting /tmp on its own partition, so that it can be mounted independently
of the root / partition and have more restrictive options set. More specifically we will set the
following options:
* nosuid: no suid programs are permitted on this partition
* noexec: nothing can be executed from this partition
* nodev: no device files may exist on this partition
You could then remove the /var/tmp directory and create a symlink pointing to /tmp so that
the temporary files in /var/tmp also make use of these restrictive mount options.
Here are the commands (a sketch: we assume the separate /tmp filesystem is provided by a
loopback-mounted file, a common approach when you cannot repartition the disk; adjust the
size to your needs):
$ sudo dd if=/dev/zero of=/usr/tmpdisk bs=1M count=1024
$ sudo mkfs.ext4 -F /usr/tmpdisk
$ sudo mount -o loop,rw,nosuid,noexec,nodev /usr/tmpdisk /tmp
$ sudo chmod 1777 /tmp
$ echo "/usr/tmpdisk /tmp ext4 loop,rw,nosuid,noexec,nodev 0 0" | sudo tee -a /etc/fstab
$ sudo rm -rf /var/tmp && sudo ln -s /tmp /var/tmp
You can verify the new mount and check its disk usage with:
$ df -h /tmp
NTP is a TCP/IP protocol for synchronising time over a network. Basically a client
requests the current time from a server, and uses it to set its own clock.
To install ntpd, from a terminal prompt enter:
$ sudo apt-get install ntp
ntpdate
Ubuntu comes with ntpdate as standard, and will run it once at boot time to set up your
time according to Ubuntu’s NTP server.
You can also run it manually at any time:
$ sudo ntpdate ntp.ubuntu.com
ntpq
To view that the NTP daemon is running and which servers it is synchronising with:
$ ntpq -p
     remote           refid      st t  when poll reach   delay   offset  jitter
==============================================================================
(list of NTP peer servers with their stratum, polling interval, reachability and timing statistics)
Enabling IPv6 Support
As we explained in our IPv4 / IPv6 support section, the use of IPv6 is on the rise.
In this section we’ll make sure that our Ubuntu Linux is correctly configured so that our
server is accessible via an IPv6 address.
By default, after the install of Ubuntu, IPv6 is not enabled yet. You can test if IPv6 support
is enabled by pinging an IPv6 server of Google from your VPS server:
$ ping6 ipv6.google.com
It’ll return unknown host if your server is not yet IPv6 enabled.
When you got your VPS, you were given 1 or more IPv4 addresses and IPv6 addresses.
Look them up, as we’ll need them shortly. With RamNode you can look them up via the
SolusVM Control panel at https://vpscp.ramnode.com/login.php
To have network access our VPS has a (Gigabit) network card. Each network card is
available in Linux under a name like eg. eth0, eth1, …
Let’s run ifconfig to see what we have in our VPS:
$ ifconfig
The eth0 is the Linux interface to our Ethernet network card. The inet addr tells us that an
IPv4 address is configured on this interface/network card.
We don't see any 'inet6 addr'; that'll be added below for our IPv6 addresses.
Open the network configuration file to add it:
$ sudo nano /etc/network/interfaces
Add the following configuration. The individual pieces are explained below and a complete
example file is shown at the end of this section.
For the loopback network interface we add the following line in bold if it is not already
present:
The following line auto starts the eth0 network interface at bootup:
For Ramnode the netmask is 255.255.255.0. The gateway server should also be
documented by your VPS host. For Ramnode you can find the information here.
We will use the Google DNS nameservers 8.8.8.8 and 8.8.4.4, so our server can resolve
domain names and hostnames.
We have a second IPv4 address, which we want to bind to the same physical network card
(eth0).
The concept of creating or configuring multiple IP addresses on a single network card is
called IP aliasing. The main advantage of using IP aliasing is, you don’t need to have a
physical network card attached to each IP, but instead you can create multiple or many
virtual interfaces (aliases) to a single physical card.
These lines actually configure the IP aliasing; you can see them in the complete example below.
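Since the individual lines are easiest to understand in context, here is a sketch of how the complete /etc/network/interfaces file might look; the IPv4 addresses, gateway and IPv6 address shown are placeholders that you must replace with the values supplied by your VPS provider:
~~~~~~~~
# The loopback network interface
auto lo
iface lo inet loopback

# Auto start the primary network interface (eth0) at bootup.
# Replace the addresses and gateway with the values from your VPS provider.
auto eth0
iface eth0 inet static
    address 203.0.113.10
    netmask 255.255.255.0
    gateway 203.0.113.1
    dns-nameservers 8.8.8.8 8.8.4.4

# IP alias: bind the second IPv4 address to the same physical card
auto eth0:0
iface eth0:0 inet static
    address 203.0.113.11
    netmask 255.255.255.0

# IPv6 address on the same interface (placeholder address and prefix length)
iface eth0 inet6 static
    address 2001:db8::10
    netmask 64
~~~~~~~~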
This makes sure that the DNS nameservers we have specified are added to the file
/etc/resolv.conf:
$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
After booting, check the output of ifconfig again; it should now list a 'Scope:Global' IPv6
address in the eth0 section:
$ ifconfig
Now try to ping Google via the IPv6 enabled ping app:
$ ping6 ipv6.google.com
PING ipv6.google.com(...) 56 data bytes
64 bytes from ...: icmp_seq=1 ttl=57 time=... ms
64 bytes from ...: icmp_seq=2 ttl=57 time=... ms
Congratulations, your server is now reachable via IPv4 and IPv6 addresses!
A good way to check how much RAM is being used on your server is using the command
free -m
Make sure you look at the free RAM on the -/+ buffers/cache line, because the first line
includes the memory used for disk caching, which can make it seem like you have no
RAM left.
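A sketch of what the output can look like on a 4GB server (the numbers are purely illustrative):
~~~~~~~~
$ free -m
             total       used       free     shared    buffers     cached
Mem:          3953       3631        322         12        210       2281
-/+ buffers/cache:       1140       2813
Swap:         1021          0       1021
~~~~~~~~
Here the first line suggests only 322MB is free, but after subtracting the buffers and cache the server still has 2813MB available.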
To be able to use the newest kernels we recommend non-LTS versions, which means
you’ll upgrade more regularly.
In the following sections we'll give some important pre-upgrade checks you have to do
before starting the upgrade.
If the file /etc/update-manager/release-upgrades contains Prompt=lts, replace it
by Prompt=normal so non-LTS releases are offered, then start the upgrade:
$ sudo do-release-upgrade
If the upgrade asks to overwrite locally changed files choose the option to keep your
version of the files.
Reboot when asked.
Performance Baseline
Before we start tweaking the basis install, we want to run some benchmarks to form a
performance baseline which we can use to compare before/after tweaking results.
Serverbear
Go to http://serverbear.com/ and choose Benchmark my server. Choose your hosting
provider and which plan you have bought. Enter your email to get the results of the
benchmarks when they are ready.
At the bottom you can copy paste a command that you can use within your SSH session.
When the benchmark finishes you will receive an email with a link to your results.
Here are some example results:
The ServerBear benchmark tests the CPU (UnixBench score), the performance of the I/O
subsystem, and the download speed from various locations to your server.
The latency should approach the physical minimum in milliseconds (determined by the speed of
light and the distance between your location and the server).
Tuning KVM Virtualization Settings
There are a few settings in the VPS Control panel which will provide optimal performance
when using the KVM virtualization.
Network card: Intel PRO/1000. This makes sure we're not limited to 100Mbit/sec
bandwidth on the network card (but instead have the full 1000Mbit/sec available).
Disk driver: virtio - this can give better IO performance
After this change we can run the ServerBear benchmark again with the following results:
Configuring the CPU Model Exposed to KVM Instances
When using KVM, the CPU model used inside the server is not exposed directly to our
Linux installation by default. Instead a QEMU Virtual CPU version will be exposed.
You can view the processor information with the following command:
$ cat /proc/cpuinfo
processor : 1
...
From the output you can see we have a multi core CPU (processor: 0, processor: 1). The
model name is as you can see QEMU Virtual CPU version. In the flags section you can
see what kind of special instruction sets the CPU supports. Eg. SSE2
In reality our CPU supports a lot more instruction sets, eg. SSE4. But these are not visible
yet in /proc/cpuinfo, which means Linux applications, and software you compile yourself,
may not use those instructions either.
To be sure we maximize performance we can enable the CPU Passthrough mode so that
all CPU features are available.
Actually, as we are running inside a VPS, the KVM installation was done by RamNode, which
means the CPU Passthrough mode can only be enabled by RamNode support.
After the change the /proc/cpuinfo changed to the below information:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel(R) Xeon(R) CPU E3-1240 V2 @ 3.40GHz
stepping : 9
microcode : 0x1
cpu MHz : 3400.022
cache size : 4096 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov \
pat pse36 clflush mmx fxsr sse sse2 ss syscall nx lm constant_tsc arch_perfmon n\
opl eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadl\
ine_timer aes xsave avx f16c rdrand hypervisor lahf_lm xsaveopt fsgsbase smep
bogomips : 6800.04
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
Now we can see we're running on an Intel(R) Xeon(R) CPU E3-1240 V2 @ 3.40GHz. The
flags line also has a lot more instruction sets available now.
Tuning Kernel Parameters
In this chapter we will tune performance related Kernel parameters. These parameters can
be grouped into different categories:
Tuning the Linux kernel can be done with the sysctl program or by editing the
/etc/sysctl.conf file.
After editing the sysctl.conf file you can run sudo sysctl -p to reflect the changes without
rebooting your server.
This setting reduces the keepalive timeout of a TCP connection to 30 minutes. This way we
have less memory usage.
This setting increases the maximum number of remembered connection requests, which
still did not receive an acknowledgment from the connecting client. You may need to
lower this number if you have a memory constrained VPS. The default is 1024.
The number of packets that can be queued should be increased from the default of 1000 to
2500
With a web server you will see a lot of TCP connections in the TIME-WAIT state.
TIME_WAIT is when the socket is waiting after close to handle packets still in the
network. This setting should be high enough to prevent simple Denial of Service attacks.
This setting defines the local port range that is used by TCP traffic. You will see in the
parameters of this file two numbers: The first number is the first local port allowed for
TCP on the server, the second is the last local port number allowed. For high traffic sites
the range can be increased, so that more local ports are available for use concurrently by
the web server.
An orphan socket is a socket that isn’t associated to a file descriptor. For instance, after
you close() a socket, you no longer hold a file descriptor to reference it, but it still exists
because the kernel has to keep it around for a bit more until TCP is done with it. If this
number is exceeded, the orphaned connections are reset immediately and warning is
printed. Each orphan eats up to 64K of unswappable memory.
You can view the number of orphan sockets here:
$ cat /proc/net/sockstat
TCP: inuse 42 orphan 2 tw 12 alloc 77 mem ...
The tcp_mem variable defines how the TCP stack should behave when it comes to
memory usage; when the third number is reached, then packets are dropped. Configuring
this option depends on how much memory your server has.
The number is not in bytes but in number of pages (where most of the time 1 page = 4096
bytes)
You can view your page size with:
$ getconf PAGESIZE
4096
So if we take the total memory of our server (4GB = 4,000,000,000 bytes) we can do the
math: 4,000,000,000 / 4,096 ≈ 976,562 pages in total. The three tcp_mem values are
expressed in these pages, and the maximum should stay well below this total.
The first value in this variable tells the minimum TCP receive buffer space available for a
single TCP socket. Unlike the tcp_mem setting, this one is in bytes. The second value is
the default size. The third value is the maximum size.
The first value in this variable tells the minimum TCP send buffer space available for a
single TCP socket. Unlike the tcp_mem setting, this one is in bytes. The second value is
the default size. The third value is the maximum size. 16MB per socket sounds like a lot, but
most sockets won't use anywhere near this much (and it is nice to be able to expand if
necessary).
This is a TCP option (tcp_timestamps) that can be used to calculate the Round Trip Time
measurement in a better way than the retransmission timeout method can.
This specifies how we can scale TCP windows if we are sending them over large
bandwidth networks. When sending TCP packets over these large pipes, we experience
heavy bandwidth loss due to the channels not being fully filled while waiting for ACKs
for our previous TCP windows.
Enabling tcp_window_scaling enables a special TCP option which makes it possible to
scale these windows to a larger size, and hence reduces bandwidth losses due to not
utilizing the whole connection.
Linux 3.5 kernel and later implement TCP Early Retransmit with some safeguards for
connections that have a small amount of packet reordering. This allows connections, under
certain conditions, to trigger fast retransmit and bypass the costly Retransmission Timeout
(RTO). By default it is enabled in the failsafe mode tcp_early_retrans=2.
The maximum number of "backlogged sockets" (net.core.somaxconn). The default is 128. This is
only needed on very loaded servers. You're effectively letting clients wait instead of
returning a connection abort.
The maximum OS send buffer size in bytes for all types of connections
The maximum OS receive buffer size in bytes for all types of connections
The default OS receive buffer size in bytes for all types of connections.
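To make the above concrete, here is a sketch of how these settings could look in /etc/sysctl.conf. The parameter names are the standard Linux ones, but the values are examples for a 4GB KVM VPS and should be adapted to your own RAM and traffic; apply them with sudo sysctl -p as described earlier.
~~~~~~~~
# Keepalive timeout of 30 minutes (1800 seconds)
net.ipv4.tcp_keepalive_time = 1800

# Remembered connection requests not yet acknowledged (default 1024)
net.ipv4.tcp_max_syn_backlog = 2048

# Packets queued on the network interface (default 1000)
net.core.netdev_max_backlog = 2500

# Maximum number of sockets in TIME-WAIT
net.ipv4.tcp_max_tw_buckets = 400000

# Local port range available to TCP
net.ipv4.ip_local_port_range = 1024 65535

# Maximum number of orphan sockets
net.ipv4.tcp_max_orphans = 60000

# TCP memory in pages (min, pressure, max), kept well below the total page count
net.ipv4.tcp_mem = 196608 262144 393216

# Receive and send buffers per socket in bytes: min, default, max (16MB max)
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# Better RTT measurement, window scaling and early retransmit
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_early_retrans = 2

# Backlogged sockets (default 128)
net.core.somaxconn = 1024

# OS-wide send/receive buffer limits in bytes
net.core.wmem_max = 16777216
net.core.rmem_max = 16777216
net.core.rmem_default = 65536
~~~~~~~~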
When you’re serving a lot of html, stylesheets, etc; it is usually the case that the web
server will open a lot of local files simultaneously. The kernel limits the number of files a
process can open.
Raising these limits is sometimes needed.
$ ulimit -Hn
$ ulimit -Sn
-Hn shows us the hard limit, while -Sn shows the soft limit.
If you want to increase this limit then edit the limits.conf:
$ sudo nano /etc/security/limits.conf
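For example, lines like the following raise both the soft and hard open-file limits for all users; the value 65535 is only an illustration, and the syntax is the standard limits.conf one (domain, type, item, value):
~~~~~~~~
*    soft    nofile    65535
*    hard    nofile    65535
~~~~~~~~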
These settings are optimized for SSD disks. Do not apply these if you don’t have an SSD
in your server.
With this setting you limit the use of the swap partition (the virtual memory on the SSD).
Ubuntu's inclination to use the swap is determined by the vm.swappiness setting.
On a scale of 0-100, the default setting is 60. We set it to 1 so that swap is not used (= less
I/O traffic) unless the server gets a severe RAM shortage.
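A sketch of the corresponding line in /etc/sysctl.conf (apply it with sudo sysctl -p, or set it immediately with sudo sysctl vm.swappiness=1):
~~~~~~~~
vm.swappiness = 1
~~~~~~~~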
Scheduler Settings
I/O performance, or the read/write latency of a web server can seriously impact the overall
page load times of your server. Making a simple change to the IO scheduler that’s built
into the kernel can decrease your IO latency by 20% or more.
Scheduling algorithms attempt to reorder disk access patterns to mitigate the shortcomings
of traditional HDDs. They work best with I/O devices that have reasonable transfer speeds
and slow seek times.
Depending on the Linux distribution and version you have installed, the I/O Scheduler can
be different. There are a few possible Linux schedulers:
Deadline
CFQ
NoOp
None
By default, Ubuntu 14.04 and later uses the I/O scheduler Deadline, which is working well
for both SSDs and conventional hard disks.
When running Ubuntu 14.04 inside a KVM based VPS, you cannot choose the I/O
scheduler as you are accessing a virtual device. The I/O request is passed straight down
the stack to the real disk. The scheduler used will be “none”. This is due to kernel changes
in Kernel 3.13 and later.
By default Ubuntu 12.04 uses the CFQ scheduler which is good for conventional hard
disks but not for SSDs. CFQ, Completely Fair Queuing, tries to balance I/O among all the
processes fairly. This isn’t the best option for web servers.
The NoOp (no operation) scheduler is the recommended choice if your server only has
SSDs inside. It’ll effectively completely let the scheduling be done by the SSD hardware
controller, assuming it will do a better job.
Here is how to view the current and available schedulers on your Linux system:
You may need to substitute the vda portion of the command with your disk devices, which
may be sda, sdb, sdc or hda, hdb, etc.
$ cat /sys/block/vda/queue/scheduler
noop [deadline] cfq
It says the deadline scheduler is the current IO scheduler. cfq and noop are also available.
Our recommendation is to use the noop scheduler, which will allow the host to determine
the order of the read/write requests from your VPS instead of the VPS re-ordering and
potentially slowing things down with the cfq scheduler.
Here is how you can do this on your Ubuntu 13.x install:
$ sudo nano /etc/rc.local
Add the following lines before the final 'exit 0' line (substitute vda with your own device name):
echo noop > /sys/block/vda/queue/scheduler
echo 0 > /sys/block/vda/queue/rotational
The last line tells the kernel whether the device is of a rotational or non-rotational type. For
SSD disks this should be set to 0, because seek times are much lower than on traditional
HDDs (where the scheduler spends extra CPU cycles to minimize head movement).
After saving the file execute the rc.local:
$ sudo sh /etc/rc.local
Setting the correct scheduler on the KVM host is something that your VPS provider has to
do. Ideally they will use the noop scheduler for SSD based VPSs and deadline if the
server also has HDDs in the mix.
Here is the output on Ubuntu 14.x on a KVM VPS: the scheduler is simply reported as none.
In this case no changes are necessary inside the KVM VPS. Just check, or ask your VPS
provider, that the deadline or noop scheduler is set on the KVM host.
Now add ‘noatime’ in /etc/fstab for all your partitions on the SSD.
There is also a ‘nodiratime’ which is a similar option for directories.
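For example, an fstab entry with noatime added could look like this; the UUID and mount point are placeholders, and you should only add the option to partitions that live on the SSD:
~~~~~~~~
# <file system>                             <mount point>  <type>  <options>                  <dump>  <pass>
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /              ext4    noatime,errors=remount-ro  0       1
~~~~~~~~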
You can check the currently installed OpenSSL version with:
$ openssl version
Here we see we're not running the latest version. 1.0.2d was released in July 2015 and
fixes 12 security bugs. You can find the newest versions at https://www.openssl.org/
Here is the procedure to install OpenSSL 1.0.2d:
$ cd ~
$ wget https://www.openssl.org/source/openssl-1.0.2d.tar.gz
$ tar -xzf openssl-1.0.2d.tar.gz
$ cd openssl-1.0.2d
$ ./config
$ make
$ sudo make install
All OpenSSL files, including binaries and manual pages are installed under the directory
/usr/local/ssl. To ensure users use this version of OpenSSL instead of the previous version
which comes by default on Ubuntu, you must update the paths for the manual pages
(documentation) and the executable binaries.
Edit the file /etc/manpath.config, adding the following line before the first
MANPATH_MAP:
$ sudo nano /etc/manpath.config
MANPATH_MAP     /usr/local/ssl/bin      /usr/local/ssl/man
Edit the file /etc/environment and insert the path for the new OpenSSL version
(/usr/local/ssl/bin) before the path for Ubuntu’s version of OpenSSL (/usr/bin).
$ sudo nano /etc/environment
PATH="/usr/local/sbin:/usr/local/bin:/usr/local/ssl/bin:/usr/sbin:/usr/bin:/sbin\
:/bin:/usr/games"
After reboot check whether executing openssl displays the version you’ve just upgraded
to:
$ openssl version
OpenSSL 1.0.2d 9 Jul 2015
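One way to verify this (a sketch; the exact check used in the book may differ) is to confirm the CPU exposes the aes flag and to compare OpenSSL's speed with and without the AES-NI accelerated EVP code path:
~~~~~~~~
# Check that the CPU advertises the AES-NI instruction set
$ grep -m1 -o aes /proc/cpuinfo
aes

# Benchmark AES-128-CBC through the EVP interface (uses AES-NI when available)
$ openssl speed -elapsed -evp aes-128-cbc

# Benchmark the plain software implementation for comparison
$ openssl speed -elapsed aes-128-cbc
~~~~~~~~
If the EVP figures are several times higher than the plain ones, the hardware AES instructions are in use.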
You can see that AES is enabled and used by OpenSSL on our system.
Make sure you’re exposing the AES instructions to your KVM VPS by Configuring the
CPU model exposed to KVM instances, which we explained previously.
Securing your Server
In this chapter we will make our server more secure by installing CSF (ConfigServer
Security and Firewall). CSF is one of the best firewall/intrusion detection and prevention
tools out there for Linux. It has the following features:
A firewall
Checks login authentication failures for SSH, FTP, …
Excessive connection blocking
Syn flood protection
Ping of death protection
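A sketch of the usual download-and-install procedure for CSF; the URL below is ConfigServer's standard distribution location, but double-check it on configserver.com before running:
~~~~~~~~
$ cd ~
$ wget https://download.configserver.com/csf.tgz
$ tar -xzf csf.tgz
$ cd csf
$ sudo sh install.sh
~~~~~~~~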
CSF is autostarted on bootup of your server. It reads its configuration settings from the file
/etc/csf/csf.conf
You can test the CSF installation by running:
$ sudo perl /usr/local/csf/bin/csftest.pl
RESULT: csf should function on this server
If you receive the result ‘csf should function on this server’ everything is OK!
Configuring the ports to open
Inside the csf.conf there are 8 parameters which let you whitelist inbound and outbound
ports (and the ability to connect on them).
As you can see the last 4 are used for IPv6 connections. Incoming connections are
connections coming from the outside world to your server. (these could be HTTP
connections, SSH login, …). Outgoing connections are connections created by your server
to the outside world.
You should only add the ports for services you are really using; the ports which are not
whitelisted are closed by default, reducing the possible attack surface for hackers.
Below you can find an example configuration:
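A sketch of what such a configuration could look like in /etc/csf/csf.conf; the parameter names are CSF's standard ones, and the port lists are examples (SSH, DNS, HTTP/HTTPS) that you should adapt to the services you actually run:
~~~~~~~~
# Allow incoming TCP ports (IPv4)
TCP_IN = "22,53,80,443"

# Allow outgoing TCP ports (IPv4)
TCP_OUT = "22,25,53,80,443"

# Allow incoming/outgoing UDP ports (IPv4)
UDP_IN = "53"
UDP_OUT = "53,113,123"

# The same four settings for IPv6 connections
TCP6_IN = "22,53,80,443"
TCP6_OUT = "22,25,53,80,443"
UDP6_IN = "53"
UDP6_OUT = "53,113,123"
~~~~~~~~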
At the top of the file make sure that the Testing mode is enabled for now (TESTING = "1"). This
will make sure we don't accidentally lock ourselves out of our own server with an incorrect
configuration.
No errors should be returned. To check if you can still login via SSH, open a second SSH
connection to your server after enabling CSF. You should be able to login correctly. If so
you can disable the testing mode in CSF by setting TESTING = “0” in the /etc/csf/csf.conf
file. Afterwards restart csf by running:
$ sudo csf -r
The CSF firewall can also allow or whitelist IP addresses with the following command:
$ sudo csf -a <ip address>
The first level is the .com part. This is either a top level domain (tld) such as .gov,
.com, .edu or a country code top level domain (.be, .fr,…)
The second level is the google part.
The third level is the www part.
To create your fully qualified domain name you'll generally use something like
www.<yourdomainname>.com.
After signup or login you can enable the option Domain Privacy if needed. Enabling
Domain Privacy will hide your name, address, email and phone number from ‘whois’
lookups via your domain name.
You also need to specify the name server to use. A name server is a computer server that
connects your domain name to any services you may use (e.g. email, web hosting). We’ll
go into more detail on how to configure this in the next section. For now you can use the
EuroDNS default name server.
Now click on ‘Review and Payment’ to enter the credit card details and finish the ordering
process.
You’ll receive an email from EuroDNS as soon as everything is ready.
Simplicity: You don’t have to worry about setting up and maintaining your own DNS
server. Management of DNS records is also easier because providers enable this
using a simple browser-based GUI or API
Performance: Providers that specialize in DNS have often invested significant time
and capital setting up global networks of servers that can respond quickly to DNS
queries regardless of a user’s location
Availability: Managed DNS providers employ dedicated staff to monitor and
maintain highly available DNS services and are often better equipped to handle
service anomalies like Distributed denial of service attacks.
Advanced Features: Managed DNS providers often offer features that are not part of
the standard DNS stack such as integrated monitoring and failover and geographic
load balancing
After logging in to the control panel, you can add a domain this way:
1. Select the “DNS” menu and select “Managed DNS”
2. Click “Add Domains” on the right
3. Enter a domain name and click “Ok”
Add a domain at DNSMadeEasy
A Records
AAAA Records
CNAME Records
MX Records
TXT Records
System NS Records
For each we will discuss why they are used and how to configure them.
A Records
An A or Address record is used to map a host name prefix to an IPv4 address. Generally
you’ll have at least two such records; one for the www version of your site and one for the
non-www version. Eg. www.mysite.com and mysite.com
An example can make this clear:
Add an A Record at DNSMadeEasy
Name: leave it blank for the first A record and fill in www for the second one
IP address: the IPv4 address of your server
Dynamic DNS: off
TTL: The Time To Live in seconds. This means that DNS servers all over the world
can cache the IP address for x amount of seconds before they have to ask the
authoritative server again. A commonly used TTL value is 86400 (= 24h). If you lower
this value to e.g. 1800 (30 minutes), a change of IP address/server will have less
impact for your visitors. Setting a lower value will increase the number of hits to
DNSMadeEasy (your authoritative DNS) - make sure to find the right balance so
you’re not exceeding the maximum number of included queries in the small business
plan.
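To illustrate, the two records would conceptually look like this in classic zone-file notation (203.0.113.10 is a documentation/example IP address, not a real server):
mysite.com.       86400  IN  A  203.0.113.10
www.mysite.com.   86400  IN  A  203.0.113.10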
AAAA Records
This is very similar to A Records; except that the AAAA records will map to IPv6
addresses of your server.
CNAME Records
CNAME Records, or Canonical Name records specify an alias for another domain, the
‘canonical’ domain.
One such example where you’ll need this is when using a CDN service (Content Delivery
Network) for optimizing latency and download speeds of your website resources (eg.
Images, html, css,… )
We will explain this in more detail in our CDN chapter.
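As a hypothetical sketch, a CNAME record that points a cdn prefix at a CDN provider’s hostname could look like this (cdn.provider-example.net is a made-up target):
cdn.mysite.com.   86400  IN  CNAME  cdn.provider-example.net.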
MX Records
MX or a Mail exchange record maps a domain name to a mail transfer agent (MTA) or
mail relay. The mail relay is software which takes care of sending email messages from
one computer to another via SMTP (Simple Mail transfer Protocol).
More simply explained; if you want to send an email from [email protected] you’ll need to
provide MX records. We’ll discuss this further in our Setting up mails chapter.
TXT Records
TXT Records are used for multiple purposes, one of which is for email anti-spam
measures. We’ll discuss this further in our Setting up mails chapter.
Using the DNSMadeEasy nameservers at your DNS registrar EuroDNS
The final step you need to take, to get everything working is to specify the DNSMadeEasy
nameservers at your DNS registrar EuroDNS, where you ordered your domain name.
This will make sure that DNS servers around the world will find all the configuration
you’ve applied in the previous section.
Here are the steps to complete:
You can attach this profile to your domain name via the following steps:
There are some handy free website tools which can verify whether your DNS configuration is
set up correctly. Some tools also test the performance of the DNS server or service you are
using.
Slow DNS performance is often overlooked when trying to speed up a website.
At http://bokehman.com/dns_test.php you can perform a DNS latency test for your
domain.
The following sites also analyze your DNS configuration and performance:
https://cloudmonitor.ca.com/en/dnstool.php
Installing MariaDB 10, a MySQL Database Alternative
Most of the popular web software used these days needs a database to store its information.
Examples include popular blog and CMS platforms like WordPress and Drupal, forums like phpBB3 and
many more.
In this chapter we will install MariaDB, which is a drop-in replacement for the well
known MySQL database.
MariaDB is a community-developed fork of the MySQL database which should remain
free. It is led by the original developers of MySQL, who started it after the acquisition of
MySQL by Oracle.
Because MariaDB is a drop-in replacement, it is very easy to switch from MySQL to
MariaDB. This means that for most cases, you can just use MariaDB where you would use MySQL.
If your website software doesn’t explicitly support MariaDB, but it does support MySQL,
it should work out-of-the-box with MariaDB.
Here is a short list of reasons why we choose MariaDB over MySQL:
Better performance
By default uses the XtraDB storage engine which is a performance enhanced fork of
the MySQL InnoDB storage engine
Better tested and fewer bugs
All code is open source
You can find the exact steps for Ubuntu 15.x via the configuration guide at
https://downloads.mariadb.org/mariadb/repositories
The above steps only need to be performed once on a given server. The apt-key command
enables apt to verify the integrity of the packages it downloads.
$ sudo apt-get update
$ sudo apt-get install mariadb-server
During installation MariaDB will ask for a password for the root user.
Choose a good password, because the root MariaDB user is an administrative user that has
access to all databases!
That’s it, MariaDB is now installed on your server!
To auto start it at system startup execute the following command:
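A sketch, assuming a systemd based Ubuntu release (15.04 or later); on older releases using sysvinit/upstart, running update-rc.d mysql defaults achieves the same:
$ sudo systemctl enable mysql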
Securing MariaDB
By default the MariaDB installation still has a test user, database, and anonymous login
which should be disabled on a production server.
You can run the following script to make your installation fully secure:
$ mysql_secure_installation
It’ll make the following changes to your MariaDB installation:
Set a password for the root user
Remove anonymous users
Disallow root login remotely
Remove the test database and access to it
You should run mysql_upgrade after upgrading from one major MySQL/MariaDB release
to another, such as from MySQL 5.0 to MariaDB 5.1 or MariaDB 10.0 to MariaDB 10.1.
It is also recommended that you run mysql_upgrade after upgrading from a minor version,
like MariaDB 5.5.40 to MariaDB 5.5.41, or even after a direct “horizontal” migration
from MySQL 5.5.40 to MariaDB 5.5.40. If calling mysql_upgrade was not necessary, it
does nothing.
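A typical invocation (assuming the root database user) looks like this:
$ mysql_upgrade -u root -p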
Symptoms you may receive when you didn’t run mysql_upgrade include:
Errors in the error log that some system tables don’t have all needed columns.
Updates or searches may not find all the records.
Executing CHECKSUM TABLE may report the wrong checksum for MyISAM or
Aria tables.
Amount of memory available: when MariaDB has more memory available, larger key
and table caches can be stored in memory. This reduces disk access which is of
course much slower.
Disk access: fast disk access is critical, as ultimately the database data is stored on
disks. The key figure is the disk seek time, a measurement of how fast the physical
disk can move to access the data. Because we’re using an SSD, seek times are very
fast :-)
Kernel Parameters
Swapping
$ sudo sysctl -w vm.swappiness=1
To make this permanent across reboots, also add the line vm.swappiness = 1 to /etc/sysctl.conf.
With the action above, you limit the use of the swap partition (the virtual memory on the
SSD). Ubuntu’s inclination to use swap is determined by the vm.swappiness setting.
On a scale of 0-100, the default setting is 60. We set it to 1 so that swap is not used unless
the server is under severe RAM shortage.
This improves MariaDB’s performance because:
MariaDB’s internal algorithms assume that memory is not swap, and are highly
inefficient if it is.
Swap increases IO over just using disk in the first place as pages are actively
swapped in and out of swap.
Database locks are particularly inefficient in swap. They are designed to be obtained
and released often and quickly, and pausing to perform disk IO will have a serious
impact on their usability.
Storage Engines
MariaDB comes with quite a few storage engines. They all store your data and they all
have their pros and cons depending on the usage scenario.
The most used storage engines you’ll come across frequently are:
MyISAM: this is the oldest storage engine from MySQL; and is not transactional
Aria: a modern improved version of MyISAM
InnoDB: a transactional general purpose storage engine.
XtraDB: a performance improved version of InnoDB. It is meant to be near 100%
compatible with InnoDB. From MariaDB 10.0.15 on it is the default storage
engine.
Using MySQLTuner
MySQLTuner is a free Perl script that can be run on your database which will give you a
list of recommendations to execute to improve performance.
It uses statistics available in MySQL/MariaDB to give reliable recommendations.
It is advised to run this script when your database has been up for at least a day or longer.
Otherwise the recommendations may be inaccurate.
Here is how to run the MySQLTuner script:
$ cd ~
$ wget http://mysqltuner.pl/ -O mysqltuner.pl
$ perl mysqltuner.pl
MySQLTuner
For our database it advises, for example, to defragment our tables and to add more INDEXes when
we are joining tables together. Here is how you can defragment (optimize) all databases with
MySQL/MariaDB:
$ mysqlcheck -o --all-databases -u root -p
To lookup slow queries (possibly due to missing indexes), take a look at the queries
logged in:
$
It’ll enable you or your application developers to analyze what is wrong and how to make
the queries more performant.
Also take a look at the “highest usage of available connections”. If this is much lower than
your max_connections setting, then you’ll be wasting memory which is never used. For
example on our server the max_connections setting is 300 simultaneous connections
while the highest number of concurrent sessions is only 30. This makes it possible to
reduce the max_connections setting to e.g. 150.
Enabling HugePages
What are huge pages or Large Pages?
When a process uses some memory, the CPU is marking the RAM as used by that process.
For efficiency, the CPU allocates RAM by chunks of 4K bytes (it’s the default value on
many platforms). Those chunks are named pages. Those pages can be swapped to disk,
etc.
The CPU and the operating system have to remember which page belongs to which
process, and where it is stored. Obviously, the more pages you have, the more time it takes
to find where the memory is mapped.
A Translation-Lookaside Buffer (TLB) is a page translation cache inside the CPU that
holds the most-recently used virtual-to-physical memory address translations.
The TLB is a scarce system resource. A TLB miss can be costly as the processor must
then read from the hierarchical page table, which may require multiple memory accesses.
By using a bigger page size, a single TLB entry can represent a larger memory range and
as such there could be reduced Translation Lookaside Buffer (TLB) cache misses
improving performance.
Most current CPU architectures support bigger pages than the default 4K in Linux. Linux
supports these bigger pages via the Huge Pages functionality since Linux kernel 2.6. By
default this support is turned off.
Note that enabling Large Pages can reduce performance if your VPS doesn’t have ample
RAM available. This is because when a large amount of memory is reserved by an
application, it may create a shortage of regular memory and cause excessive paging in
other applications and slow down the entire system.
We recommend enabling it only when you have enough RAM in your VPS (2GB or
more).
In this section we will enable MariaDB to use HugePages. In later chapters we will also
enable this for Memcached and Java.
How do I verify that my kernel supports hugepage?
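You can check by looking at /proc/meminfo, for example:
$ grep Huge /proc/meminfo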
A kernel with Hugepage support should give output similar to the example below:
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
You can see a single hugepage is 2048KB (2MB) in size on our system. Support for
Hugepage is also one of the reasons to use a recent kernel. The total number of HugePages
is zero because we have not yet enabled HugePages support.
Enabling HugePages
Open /etc/sysctl.conf and add:
vm.nr_hugepages = 512
We reserve 1GB of RAM (2048KB * 512 pages) for Large Pages use. You should adjust this setting
based on how much memory your VPS has. We have 4GB of RAM available which means
we reserved 25%. You have to take into account this RAM is not available to processes
that are not using Huge Pages.
We also need to give MariaDB access to these HugePages. The setting
vm.hugetlb_shm_group in /etc/sysctl.conf tells the Kernel which Linux group can access
the Large Pages. Effectively this means we have to create a group called for example
‘hugepage’ and make the MySQL/MariaDB user part of that group.
Because we want to allow more than one process to access the HugePages we will create a
group ‘hugepage’. Every user who needs access can then add this group to its list of
groups.
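A sketch of the commands involved (the group name ‘hugepage’ follows the example above; after creating the group, set its numeric group id as the value of vm.hugetlb_shm_group in /etc/sysctl.conf):
$ sudo groupadd hugepage
$ sudo usermod -a -G hugepage mysql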
For example, for user mysql we have the following groups attached now:
$ id mysql
uid=…(mysql) gid=…(mysql) groups=…(mysql),…(hugepage)
Now we need to update the Linux shared memory kernel parameters SHMMAX and
SHMALL inside /etc/sysctl.conf
Shared Memory is a type of memory management in the Linux kernel. It is a memory
region which can be shared between different processes.
SHMMAX is the maximum size of a single shared memory segment. Its size is expressed
in bytes.
It should be higher than the amount of memory in bytes allocated for Large Pages. As we
reserved 1GB of RAM this means: 1 * 1024 * 1024 * 1024 + 1 = 1073741825 bytes. Now
we want to specify that the max total shared memory (SHMALL) may not exceed 2GB
(50% of the available RAM):
2GB = 2 * 1024 * 1024 * 1024 = 2147483648 bytes
Now SHMALL is not measured in bytes but in pages, so in our example where 1 page is
4096 bytes in size, we need to divide by the page size:
2147483648 / 4096 = 524288
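Putting the two values together, the lines to add to /etc/sysctl.conf look like this (a sketch based on the calculations above):
kernel.shmmax = 1073741825
kernel.shmall = 524288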
The Linux OS can enforce memory limits a process/user can consume. We adjust the
memlock parameter to set no limit for the mysql user. We also set that the mysql user can
open max. 65536 files at the same time. In the next section we will set the open-files-limit
MariaDB parameter to the same 65536 value. (MariaDB can’t set its open_files_limit to
anything higher than what was specified for user mysql in limits.conf.)
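The corresponding entries in /etc/security/limits.conf would look roughly like this (a sketch):
mysql soft memlock unlimited
mysql hard memlock unlimited
mysql soft nofile 65536
mysql hard nofile 65536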
Now reboot the server:
$ sudo reboot
The ipcs -lm command shows information about the shared memory: maximum total size, max. segment size and more.
You can use it to verify the settings you’ve made in the sysctl.conf file after the reboot.
System Variables
Tuning a database server is a vast subject which could span a whole book on its own. We
will only scratch the surface in this and the following sections. We will optimize the
configuration of the MyISAM and InnoDB/XtraDB storage engines.
We will give you some recommendations and tools. Of course your settings could be
different due to having more or less memory or applications which use the database
differently.
The MariaDB/MySQL settings are located at: /etc/mysql/my.cnf.
We have based this configuration file on https://github.com/Fleshgrinder/mysql-mariadb-
configuration/blob/master/my.cnf
It is targeted at a VPS server with 2GB RAM and limited computing power.
The configuration file is split into three parts:
[client]: configuration settings that are read by all client programs (eg. PHP accessing the database)
[mysqld_safe]: configuration settings for the mysqld_safe daemon (see below)
[mysqld]: general MySQL/MariaDB configuration settings.
mysqld_safe starts mysqld, the MySQL daemon. mysqld_safe will check for an exit code. If
mysqld did not end due to a system shutdown or a normal service mysql stop, mysqld_safe
will attempt to restart mysqld.
$
# ----------------------------------------------------------------------
# CLIENT CONFIGURATION
# ----------------------------------------------------------------------
[client]
# As always, all charsets default to utf8.
default_character_set = utf8
[mysqld_safe]
# mysqld_safe is the recommended way to start a mysqld server on Unix.
# It adds safety features such as restarting the server when an error occurs.
# Write the error log to the given file.
#
# DEFAULT: syslog
=
=
# The process priority (nice). Enter a value between -19 and 20; where
# -19 means highest priority.
#
# SEE: man nice
# DEFAULT: 0
nice =
# The Unix socket file that the server should use when listening for
# local connections
socket =
# The number of file descriptors available to mysqld. Increase if you are getting "Too many open files" errors.
# Port
port =
# The number of file descriptors available to mysqld. Increase if you are getting "Too many open files" errors.
# If you run multiple servers that use the same database directory (not recommended)
= true
# This limit is also defined by the operating system. You should not set this to\
if \
#
# The maximum value on Linux is directed by tcp_max_syn_backlog sysctl parameter
# net.ipv4.tcp_max_syn_backlog = 10240
#
# When the connection is in the back_log, the client will have to wait until the\
#
# SEE:
# http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_ba\
# http://www.mysqlperformanceblog.com/2012/01/06/mysql-high-number-connections-\
#
# DEFAULT: OS value
back_log =
# The MySQL installation base directory. Relative path names for other
# variables usually are resolved to the base directory.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
# DEFAULT:
basedir =
# ALL character set related options are set to UTF-8. We do not support
# any other character set unless explicitly stated by the user who's
# working with our database.
character_set_server =
# SEE: http://dev.mysql.com/doc/refman/5.5/en/concurrent-inserts.html
# DEFAULT: 1
concurrent_insert =
# http://major.io/2007/08/03/obscure-mysql-variable-explained-max_seeks_for_key/
max_seeks_for_key =
# The number of seconds that mysqld server waits for a connect packet
# before responding with "Bad Handshake". The default value is 10 sec
# as of MySQL 5.0.52 and 5 seconds before that. Increasing the
# connect_timeout value might help if clients frequently encounter
# errors of the form "Lost connection to MySQL server at 'XXX', system
# error: errno".
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
# DEFAULT: 10
connect_timeout =
# DEFAULT:
datadir =
# DEFAULT: 0
# expire_logs_days = 10
# The minimum size of the buffer that is used for plain index scans,
# range index scans, and joins that do not use indexes and thus perform
# full table scans. High values do not mean high performance. You should
# not set this to a very large amount globally. Instead stick to a
# small value and increase it only in sessions that are doing large
# joins. Drupal is performing a lot of joins, so we set this to a
# reasonable value.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
# http://serverfault.com/questions/399518/join-buffer-size-4-m-is-not-advised
# DEFAULT: ?
join_buffer_size =
# Index blocks for MyISAM tables are buffered and shared by all threads.
# The key_buffer_size is the size of the buffer used for index blocks.
# The key buffer is also known as the key cache.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
# http://www.ewhathow.com/2013/09/what-is-the-recommended-value-of\
# DEFAULT: 8388608
key_buffer_size =
# Number of open tables for all threads. See Optimizing table_open_cache for suggestions:
# https://mariadb.com/kb/en/optimizing-table_open_cache/
table_open_cache =
# Whether large page support is enabled. You must ensure that your
# server has large page support and that it is configured properly. This
# can have a huge performance gain, so you might want to take care of
# this.
#
# You MUST have enough hugepages size for all buffers you defined.
# Otherwise you'll see errno 12 or errno 22 in your error logs!
# Hugepages can give you a hell of a headache if numbers aren't calc-
# ulated wisely, but it's totally worth it as you gain a lot of
# performance if you're handling huge data.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/large-page-support.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
# DEFAULT: 0
large_pages = true
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
# DEFAULT: OFF
# log_bin = /var/log/mysql/mariadb-bin
# The index file for binary log file names. See Section 5.2.4, The
# Binary Log. If you omit the file name, and if you did not specify one
# with --log-bin, MySQL uses host_name-bin.index as the file name.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/replication-options-binary-log.htm\
# DEFAULT: OFF
# log_bin_index = /var/log/mysql/mariadb-bin.index
log_slow_rate_limit =
# Queries that don't use an index, or that perform a full index scan where the index
# doesn't limit the number of rows, will be logged to the slow query log.
# The slow query log needs to be enabled for this to have an effect.
log_queries_not_using_indexes = true
# Specifies how much information to include in your slow log. The value
# is a comma-delimited string, and can contain any combination of the
# following values:
#
# - microtime: Log queries with microsecond precision (mandatory).
# - query_plan: Log information about the query``s execution plan (optional).
# - innodb: Log InnoDB statistics (optional).
# - full: Equivalent to all other values OR``ed together.
# - profiling: Enables profiling of all queries in all connections.
# - profiling_use_getrusage: Enables usage of the getrusage function.
#
# Values are OR``ed together.
#
# For example, to enable microsecond query timing and InnoDB statistics,
# set this option to microtime,innodb. To turn all options on, set the
# option to full.
#
# SEE: http://www.percona.com/doc/percona-server/5.5/diagnostics/slow_extended_55.html#log_slow_verbosity
log_slow_verbosity = query_plan
# Print out warnings such as Aborted connection… to the error log.
# Enabling this option is recommended, for example, if you use
# replication (you get more information about what is happening, such as
# messages about network failures and reconnections). This option is
# enabled (1) by default, and the default level value if omitted is 1.
# To disable this option, use --log-warnings=0. If the value is greater
# than 1, aborted connections are written to the error log, and access-
# denied errors for new connection attempts are written.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-options.html#option_mysqld_log-warnings
# SEE: http://dev.mysql.com/doc/refman/5.5/en/communication-errors.html
# DEFAULT: 1
log_warnings = 2
# If a query takes longer than this many seconds, the server increments
# the Slow_queries status variable. If the slow query log is enabled,
# the query is logged to the slow query log file. This value is measured
# in real time, not CPU time, so a query that is under the threshold on
# a lightly loaded system might be above the threshold on a heavily
# loaded one. The minimum and default values of long_query_time are 0
# and 10, respectively. The value can be specified to a resolution of
# microseconds. For logging to a file, times are written including the
# microseconds part. For logging to tables, only integer times
# are written; the microseconds part is ignored.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_long_query_time
# SEE: http://dev.mysql.com/doc/refman/5.5/en/slow-query-log.html
# DEFAULT: 10
long_query_time = 1
# The maximum size of one packet or any generated/intermediate string.
# The packet message buffer is initialized to net_buffer_length bytes,
# but can grow up to max_allowed_packet bytes when needed. This value by
# default is small, to catch large (possibly incorrect) packets. You
# must increase this value if you are using large BLOB columns or long
# strings. It should be as big as the largest BLOB you want to use. The
# protocol limit for max_allowed_packet is 1GB. The value should be a
# multiple of 1024; nonmultiples are rounded down to the nearest
# multiple. When you change the message buffer size by changing the
# value of the max_allowed_packet variable, you should also change the
# buffer size on the client side if your client program permits it. On
# the client side, max_allowed_packet has a default of 1GB. Some
# programs such as mysql and mysqldump enable you to change the client-
# side value by setting max_allowed_packet on the command line or in an
# option file. The session value of this variable is read only.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_max_allowed_packet
# DEFAULT: ?
max_allowed_packet = 16M
# If a write to the binary log causes the current log file size to
# exceed the value of this variable, the server rotates the binary logs
# (closes the current file and opens the next one). The minimum value is
# 4096 bytes. The maximum and default value is 1GB. A transaction is
# written in one chunk to the binary log, so it is never split between
# several binary logs. Therefore, if you have big transactions, you
# might see binary log files larger than max_binlog_size. If
# max_relay_log_size is 0, the value of max_binlog_size applies to relay
# logs as well.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/replication-options-binary-log.html#sysvar_max_binlog_size
# DEFAULT: 1073741824
# max_binlog_size = 100M
# The maximum permitted number of simultaneous client connections.
# Increasing this value increases the number of file descriptors that
# mysqld requires.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/too-many-connections.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/table-cache.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_max_connections
# DEFAULT: 151
max_connections = 300
# The size of the buffer that is allocated when sorting MyISAM indexes
# during a REPAIR TABLE or when creating indexes with CREATE INDEX or
# ALTER TABLE.
#
# The maximum permissible setting for myisam_sort_buffer_size is 4GB.
# Values larger than 4GB are permitted for 64-bit platforms (except
# 64-bit Windows, for which large values are truncated to 4GB with a
# warning).
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_myisam_sort_buffer_size
# DEFAULT: 8388608
myisam_sort_buffer_size = 512M
# Do not cache results that are larger than this number of bytes. The
# default value is 1MB.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_query_cache_limit
# DEFAULT: 1048576
query_cache_limit = 512K
# The amount of memory allocated for caching query results. The permiss-
# ible values are multiples of 1024; other values are rounded down to
# the nearest multiple. The query cache needs a minimum size of about
# 40KB to allocate its structures.
#
# 256 MB for every 4GB of RAM
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_query_cache_size
# DEFAULT: 0
query_cache_size = 128M
# Sets the global query cache type. There are three possible enumeration
# values:
# 0 = Off
# 1 = Everything will be cached; except for SELECT SQL_NO_CACHE
# 2 = Only SELECT SQL_CACHE queries will be cached
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_query_cache_type
# DEFAULT: 1
# http://major.io/2007/08/08/mysqls-query-cache-explained/
query_cache_type = 1
# Each thread that does a sequential scan for a MyISAM table allocates
# a buffer of this size (in bytes) for each table it scans. If you do
# many sequential scans, you might want to increase this value.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/memory-use.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_read_buffer_size
# http://www.mysqlperformanceblog.com/2007/09/17/mysql-what-read_buffer_size-value-is-optimal/
# DEFAULT: 131072
read_buffer_size = 128K
# When reading rows from a MyISAM table in sorted order following a key-
# sorting operation, the rows are read through this buffer to avoid disk
# seeks. Setting the variable to a large value can improve ORDER BY
# performance by a lot. However, this is a buffer allocated for each
# client, so you should not set the global variable to a large value.
# Instead, change the session variable only from within those clients
# that need to run large queries.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/order-by-optimization.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/memory-use.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_read_rnd_buffer_size
# DEFAULT: 262144
read_rnd_buffer_size = 256K
# Only use IP numbers and all Host columns values in the grant tables
# must be IP addresses or localhost.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/host-cache.html
# DEFAULT: false
skip_name_resolve = true
# If set to 1, the slow query log is enabled. See log_output to see how log files are written
slow_query_log=1
# The absolute path to the Unix socket where MySQL is listening for
# incoming client requests.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_socket
# DEFAULT: /tmp/mysql.sock
socket = /var/run/mysqld/mysqld.sock
# Size in bytes of the per-thread cache tree used to speed up bulk inserts into MyISAM and Aria tables. A value of 0 disables the cache tree
bulk_insert_buffer_size = 16M
# SEE: http://dev.mysql.com/doc/refman/5.5/en/table-cache.html
# SEE: http://www.mysqlperformanceblog.com/2009/11/16/table_cache-negative-scalability/
# SEE: http://www.mysqlperformanceblog.com/2009/11/26/more-on-table_cache/
# DEFAULT: ?
table_cache = 400
# Should be the same as table_cache
table_definition_cache = 400
# How many threads the server should cache for reuse. When a client
# disconnects, the client's threads are put in the cache if there are
# fewer than thread_cache_size threads there. Requests for threads are
# satisfied by reusing threads taken from the cache if possible, and
# only when the cache is empty is a new thread created. This variable
# can be increased to improve performance if you have a lot of new
# connections. Normally, this does not provide a notable performance
# improvement if you have a good thread implementation. However, if
# your server sees hundreds of connections per second, set this high enough
# so that most new connections use cached threads. By
# examining the difference between the connections and threads created
# status variables, you can see how efficient the thread cache is.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-status-variables.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysva\
# http://serverfault.com/questions/408845/what-value-of-thread-cache-size-should\
# DEFAULT: 0
thread_cache_size =
# SEE: http://dev.mysql.com/doc/refman/5.5/en/internal-temporary-tables.html
# SEE: http://www.mysqlperformanceblog.com/2007/01/19/tmp_table_size-and-max_hea\
# DEFAULT: OS value
tmp_table_size =
# DEFAULT: ?
tmpdir =
# DEFAULT: 28800
wait_timeout =
# Run the mysqld server as the user having the name user_name or the
# numeric user ID user_id
#
# DEFAULT: root
user =
# ----------------------------------------------------------------------
# INNODB / XTRADB CONFIGURATION
# ----------------------------------------------------------------------
# The size of the memory buffer InnoDB / XtraDB uses to cache data and
# indexes of its tables. The larger this value, the less disk I/O is
# needed to access data in tables. A safe value is 50% of the available
# operating system memory.
#
# total_size_databases + (total_size_databases * 0.1) = innodb_buffer_pool_size
#
# SEE: http://www.mysqlperformanceblog.com/2007/11/03/choosing-innodb_buffer_poo\
# SEE: http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_inno\
# DEFAULT: 128M
innodb_buffer_pool_size =
# If enabled, InnoDB / XtraDB creates each new table using its own .idb
# file for storing data and indexes, rather than in the system table-
# space. Table compression only works for tables stored in separate
# tablespaces.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/innodb-multiple-tablespaces.html
# SEE: http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_inno\
# DEFAULT: FALSE
innodb_file_per_table =
# SEE: http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_inno\
# DEFAULT: fdatasync
innodb_flush_method =
# "InnoDB pages are organized in blocks of 64 pages. When the check-
# pointing algorithm has picked a dirty page to be written to disk, it
# checks if there are more dirty pages in the block and if yes, writes
# all those pages at once. The rationale is, that with rotating disks
# the most expensive part of a write operation is head movement. Once
# the head is over the right track, it does not make much difference if
# we write 10 or 100 sectors."
# ~~ Axel Schwenke
#
# Use none if you are on an SSD drive!
#
# SEE: https://mariadb.com/blog/how-tune-mariadb-write-performance
# DEFAULT: area
# innodb_flush_neighbor_pages = none
# renamed to innodb_flush_neighbors in MariaDB 10!
innodb_flush_neighbors=
# An upper limit on the I/O activity performed by the InnoDB background
# tasks, such as flushing pages from the buffer pool and merging data
# from the insert buffer.
#
# Refer to the manual of your drive to find out the IOPS.
#
# You can monitor the IOPS with e.g. iostat (package sysstat on Debian).
#
# SEE: http://blog.mariadb.org/how-to-tune-mariadb-write-performance/
# SEE: http://dev.mysql.com/doc/refman/5.5/en/innodb-performance-thread_io_rate.\
# SEE: http://dev.mysql.com/doc/refman/5.5/en/optimizing-innodb-diskio.html
# DEFAULT: 200
#
# Some commonly used SSD drives.
#innodb_io_capacity = 400 # Simple SLC SSD
#innodb_io_capacity = 5000 # Intel X25-E
#innodb_io_capacity = 20000 # G-Skill Phoenix Pro
#innodb_io_capacity = 60000 # OCZ Vertex 3
#innodb_io_capacity = 120000 # OCZ Vertex 4
#innodb_io_capacity = 200000 # OCZ RevoDrive 3 X2
#innodb_io_capacity = 100000 # Samsung 840 Pro
#
# Only really fast SAS drives (15,000 rpm) are capable of reaching 200
# IOPS. You might consider lowering the value if you are using a slower
# drive.
#innodb_io_capacity = 100 # 7,200 rpm
#innodb_io_capacity = 150 # 10,000 rpm
#innodb_io_capacity = 200 # 15,000 rpm default
#
# I have an SAS RAID.
# http://www.mysqlplus.net/2013/01/07/play-innodb_io_capacity/
# the InnoDB write threads will throw more data at the disks every second that \
innodb_io_capacity =
# Default values
innodb_read_io_threads =
innodb_write_io_threads =
# If set to 1, the default, to improve fault tolerance InnoDB first stores data to a
# doublewrite buffer before writing it to the data files.
innodb_doublewrite =
# The size in bytes of the buffer that InnoDB uses to write to the log
# files on disk. A large log buffer enables large transactions to run
# without a need to write the log to disk before the transaction commit.
# Thus, if you have big transactions, making the log buffer larger saves
# disk I/O.
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_inno\
# DEFAULT: 8388608
innodb_log_buffer_size =
# DEFAULT: 300
innodb_open_files =
# http://www.mysqlperformanceblog.com/2008/11/21/how-to-calculate-a-good-innodb-\
innodb_log_file_size =
# If set to 1, the default, the log buffer is written to the log file and a flush
# to disk is performed at every transaction commit.
innodb_flush_log_at_trx_commit =
# Once this number of threads is reached (excluding threads waiting for locks),
# new threads will wait before entering InnoDB.
innodb_thread_concurrency =
# Maximum length in bytes of the returned result for a GROUP_CONCAT() function
group_concat_max_len =
# If the extra columns used for the modified filesort algorithm would contain more
# than this many bytes, the original filesort algorithm is used instead.
max_length_for_sort_data =
# The starting size, in bytes, for the connection and thread buffers for each client.
# The session value is read-only. Can be set to the expected length of client statements
# if memory is a concern.
net_buffer_length = 16384
# Limit to the number of successive failed connects from a host before the host
# is blocked from making further connections. The count for a host is reset to zero
# if they successfully connect. To unblock, flush the host cache with a FLUSH HOSTS statement.
max_connect_errors = 10
# Minimum size in bytes of the blocks allocated for query cache results.
# http://dba.stackexchange.com/questions/42993/mysql-settings-for-query-cache-min-res-unit
query_cache_min_res_unit = 2K
# Size in bytes of the persistent buffer for query parsing and execution, allocated
# on connect and freed on disconnect. Increasing may be useful if complex queries
# are being run, as this will reduce the need for more memory allocations during queries.
query_prealloc_size = 262144
# Size in bytes of the extra blocks allocated during query parsing and execution
# (after query_prealloc_size is used up).
query_alloc_block_size = 65536
# Size in bytes to increase the memory pool available to each transaction when the
# available pool is not large enough
transaction_alloc_block_size = 8192
# Initial size of a memory pool available to each transaction for various memory
# allocations. If the memory pool is not large enough for an allocation, it is increased
# by transaction_alloc_block_size bytes, and truncated back to transaction_prealloc_size bytes when the transaction ends.
transaction_prealloc_size = 4096
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem
thread_handling=pool-of-threads
# ----------------------------------------------------------------------
# MYSQLDUMP CONFIGURATION
#
# SEE: http://dev.mysql.com/doc/refman/5.5/en/mysqldump.html
# ----------------------------------------------------------------------
[mysqldump]
# DEFAULT:
quote_names = true
[ ]
# SEE: mysqld.key_buffer
key_buffer =
sort_buffer =
read_buffer =
write_buffer =
[ ]
# Default is Latin1, if you need UTF-8 set all this (also in client section)
#
=
=
character_set_server =
collation_server =
userstat =
# https://mariadb.com/kb/en/segmented-key-cache/
# For all practical purposes setting key_cache_segments = 1 should be slower than
# leaving the key cache unsegmented (key_cache_segments = 0).
key_cache_segments =
aria_log_file_size =
aria_log_purge_type =
# The size of the buffer used for index blocks for Aria tables. Increase this to get
# better index handling (for all reads and multiple writes).
aria_pagecache_buffer_size =
# The buffer that is allocated when sorting the index when doing a REPAIR or when
# creating indexes with CREATE INDEX or ALTER TABLE.
aria_sort_buffer_size =
[ ]
# If set to 1 (0 is default), the server will strip any comments from the query
# before searching for it in the query cache.
query_cache_strip_comments =
# http://www.mysqlperformanceblog.com/2010/12/21/mysql-5-5-8-and-percona-server-\
innodb_read_ahead =
# ----------------------------------------------------------------------
# Location from which mysqld might load additional configuration files.
# Note that these configuration files might override values set in this
# configuration file!
# !includedir /etc/mysql/conf.d/
renaming (rotating) & compressing log files when they reach a certain size
keeping compressed backups for log files (with limits)
In the end the options make sure that the size taken by the log files will be constant.
In this section we will logrotate the MariaDB log files, so that they’ll not fill the entire
disk. MariaDB has a few different log files:
The error log: it contains information about errors that occur while the MariaDB
server is running
The general query log: it contains general information about the queries. In our
configuration we haven’t enabled this log
The slow query log: it consists of slow queries. This is very useful to find SQL
queries which need to be optimized.
Let’s check which log files we have specified in our my.cnf MariaDB configuration:
$ grep "\.log" /etc/mysql/my.cnf
=
=
=
slow_query_log_file =
640
test || exit
daily
max 7 backups, before log files are deleted
size taken by log files is max 100MB
when rotating the log files, compress them
do a daily compress
do a postrotate when MariaDB is restarted
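Pulled together, a logrotate configuration matching these options could look roughly like this (a sketch: the log file path depends on your my.cnf, and mysqladmin flush-logs is used in the postrotate step instead of a full restart; mysqladmin needs valid credentials, e.g. from a ~/.my.cnf file):
/var/log/mysql/*.log {
        daily
        rotate 7
        size 100M
        compress
        missingok
        create 640 mysql adm
        postrotate
                # tell MariaDB to close and reopen its log files
                test -x /usr/bin/mysqladmin || exit 0
                /usr/bin/mysqladmin flush-logs
        endscript
}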
This means that the HugePages configuration was not properly set up. Please reread the
Enabling HugePages section to see if you missed any details.
InnoDB: Error: log file ./ib_logfile0 is of different size <x> <y> bytes
Why nginx ?
nginx, pronounced ‘engine X’, has been gaining a lot of users in the Linux web server
market. Although Apache HTTP Server is still the market share leader as of December 2014,
nginx is the new cool kid on the block.
So why is nginx so popular these days?
nginx has an event driven design which can make better use of the available hardware than
Apache’s process driven design. nginx can thus serve more concurrent clients with higher
throughput than Apache on the same hardware.
Another benefit is that configuring nginx is easier than Apache in my opinion.
nginx gets very regular releases that fix bugs and add new features (like the SPDY and
HTTP/2 support for enhancing the performance of https websites).
For new servers we thus recommend installing nginx instead of Apache.
Installing nginx
As nginx doesn’t support adding libraries dynamically we will compile nginx and its extra
libraries from source.
There may be some precompiled packages available, but most of the time they don’t have
the extra libraries we want (eg. nginx Google Pagespeed plugin) or come with old versions
of nginx/plugins.
Because we want to use the newest versions we will compile everything from source. This
is actually pretty simple and not that hard.
Download nginx
First we need to download the latest nginx version sources. These can be downloaded
from: http://nginx.org/en/download.html
From the commandline on your server:
$ cd
$
$
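A sketch of these commands, assuming nginx 1.9.5 (pick the latest stable version listed on the download page) and /usr/local/src as the working directory:
$ cd /usr/local/src
$ wget http://nginx.org/download/nginx-1.9.5.tar.gz
$ tar -xzf nginx-1.9.5.tar.gz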
Download ngx_pagespeed
$ cd
$ NPS_VERSION=
$ ${NPS_VERSION}\
${NPS_VERSION}
$ ${NPS_VERSION}
$ cd ${NPS_VERSION}
$ ${NPS_VERSION}
$ ${NPS_VERSION} # extracts to psol/
make -j4 means we use 4 cores for the compilation process (as our VPS has 4 cores
available). Note this may not always work.
Compile nginx
$ cd
$ cd
$ = = \
= \
\
\
= = \
= =enable \
= = \
= = \
= =
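For reference, pieced together from the options explained below, the configure step looks roughly like this. Treat it as a sketch: the prefix and sbin path are assumptions chosen to match the init script used later in this chapter, the ngx_pagespeed directory name follows the usual release naming, and the other module directory names should match the versions you actually downloaded:
$ ./configure \
    --prefix=/usr/local/nginx \
    --sbin-path=/usr/local/sbin/nginx \
    --pid-path=/var/run/nginx.pid \
    --error-log-path=/var/log/nginx/error.log \
    --http-log-path=/var/log/nginx/access.log \
    --with-http_ssl_module \
    --with-http_v2_module \
    --with-http_gzip_static_module \
    --with-http_stub_status_module \
    --with-http_realip_module \
    --with-pcre=../pcre-8.37 \
    --with-pcre-jit \
    --add-module=../ngx_pagespeed-release-${NPS_VERSION}-beta \
    --add-module=../ngx_cache_purge-2.3 \
    --add-module=../ngx_devel_kit-master \
    --add-module=../set-misc-nginx-module-0.29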
$ make -j4
$ make install
We’ll go over the options in the configure command one by one to explain why they are
used.
--add-module=../ngx_pagespeed-release-${NPS_VERSION}-beta
This option adds the Google Pagespeed module for nginx. We’ll configure this plugin in a
later chapter to enhance the performance of your site.
--with-http_spdy_module (deprecated since 1.9.5; see below for the HTTP/2 module)
This option enables the SPDY module. SPDY is a Google specification which manipulates
HTTP traffic, with the goal to reduce web page load latency. It uses compression and
prioritizes and multiplexes the transfer of a web page so that only one connection per
client is required. (eg. Getting the html, images, stylesheets and javascript files all happens
with a connection that is kept open).
SPDY will form the basis of the next generation standardized HTTP v2 protocol.
SPDY requires the use of TLS (Transport Layer security) encryption (eg. https) for
security and better compatibility across proxy servers.
--with-http_v2_module
The ngx_http_v2_module module (since nginx 1.9.5) provides support for HTTP/2 and
supersedes the ngx_http_spdy_module module. Note that accepting HTTP/2 connections
over TLS (https) requires the ‘Application-Layer Protocol Negotiation’ (ALPN) TLS
extension support, which is available only since OpenSSL version 1.0.2; which we have
installed in our OpenSSL chapter.
--with-http_ssl_module
Enables SSL / TLS support (eg. To run your website over https)
In 2014 Google launched the HTTPS everywhere initiative; which tries to make secure
communications to your website the default (eg. Https everywhere).
This YouTube video explains their reasonings:
Google HTTPS initiative
Reasons:
you want to protect the privacy of the visitors coming to your site
your visitors know that they are talking to your site (and not a malicious server)
your visitors can be sure that the content of your site was not altered in transit. (some
ISPs have added Google Adsense banners in the past)
nobody can eavesdrop the communcation between the visitor and your site. (eg. think
NSA or someone eavesdropping on an unencrypted Wifi channel at your local
coffeeshop)
This only works when your site is fully on https; not just the admin sections or shopping
cart.
To use https, you’ll also need to buy an SSL/TLS certificate from a certificate provider to
prove you’re the owner of your website domain. We’ll explain this whole process in later
chapters.
--with-http_gzip_static_module
Enables nginx to compress the html, css and javascript resources with gzip before sending
it to the browser. This will reduce the amount of data sent to the browser and increase the
speed. This module also allows to send precompressed .gz files if they are available on the
server.
--with-http_stub_status_module
This module enables you to view basic status information of your nginx server on a self
chosen URL.
Here is an example of the status output:
Here you can see there are 291 active connections; for 6 of them nginx is reading the
request, for 179 nginx is writing the response and 106 requests are waiting to be handled.
More info can be found at HTTP Stub Status module
When we create the nginx configuration we will show you how to define the URL where
this information will be visible.
--with-http_flv_module
This module is useful when you’re hosting Flash Video files on your site (FLV files).
This module provides pseudo-streaming server-side support for Flash Video (FLV) files. It
handles requests with the start argument in the request URI’s query string specially, by
sending back the contents of a file starting from the requested byte offset and with the
prepended FLV header.
--with-http_realip_module
This module is used to get the real IP address of the client. When nginx is behind a proxy
server or load balancer; the IP address of the client will sometimes be the IP address of the
proxy server or load balancer; not the IP address of your visitor. To make sure the correct
IP is available for use, you can enable this module. This can also be useful when you are
trying to determine the country of the visitor from the IP address. (eg. via the Geo IP
module of nginx).
There is good tutorial at CloudFront which explains how to use this (advanced) module.
--with-pcre=../pcre-8.37
Directory where the PCRE library sources are located. It adds support for regular
expressions in the nginx configuration files (as explained previously)
--with-pcre-jit
Enables the Just In Time compiler for regular expressions. Improves the performance if
you have a lot of regular expressions in your nginx configuration files.
--add-module=../ngx_cache_purge-2.3
This is an important module which enables the TLS extensions. One of such extension is
SNI or Server Name Indication.
This is used in the context of https sites. Normally only one site (https domain) can be
hosted on the same IP address and port number on the server.
SNI allows a server to present multiple https certificates on the same IP address and port
number to the browser. This makes it possible for your VPS with eg. only 1 IPv4 address
to host multiple https websites/domains on the standard https port without having them to
use all the same certificate. (which would give certificate warnings in the browsers as a
https certificate is generally valid for one domain)
SNI needs support in the web browser; luckily all modern browsers have support built in.
The only browser with a little bit of market share that does not support SNI is IE6. Users
will still be able to browse your site, but will receive certificate warnings. We advise you
to check in your Google Analytics tool to know the market share of each browser.
--add-module=../ngx_devel_kit-master and --add-module=../set-misc-nginx-module-0.29
These modules add miscellaneous options for use in the nginx rewrite module (for HTTP
redirects, URL rewriting, …)
--error-log-path=/var/log/nginx/error.log and --http-log-path=/var/log/nginx/access.log
The PID path is the path to the nginx PID file. But what is a PID file?
pid files are written by some Unix programs to record their process ID while they are
starting. This has multiple purposes:
It’s a signal to other processes and users of the system that that particular program is
running, or at least started successfully.
You can write a script to check if a certain program is running or not.
It’s a cheap way for a program to see if a previously running instance of the program
did not exit successfully.
The nginx.pid file is thus used to see if nginx is running or not.
Startup/shutdown/restart scripts for nginx which we will use in later chapters depend
upon the presence of the nginx.pid file.
Lock files can be used to serialize access to a shared resource (eg. a file, a memory
location) when two concurrent processes request it at the same time. On most systems
the locks nginx takes are implemented using atomic operations; on others nginx will use
a lock file at the location given by the lock-path option.
nginx Releases
Each nginx release comes with a changelog file describing the fixes and enhancements in
each version. You can find those on the download page of nginx
Here we list some recent changes that improve performance:
v1.7.8 fixes a 200ms delay when using SPDY
v1.7.6 has some security fixes
v1.7.4 has bugfixes in the SPDY module and improved SNI support
v1.9.5 introduces HTTP/2 support
nginx Configuration
Creating a nginx System User
When we launch nginx, we need to do that with a certain OS user. For security reasons it is
best to create a dedicated user ‘nginx’ to launch the web server. This way you can limit
what the ‘nginx’ user has access to, and what is forbidden.
This is a good way to reduce the attack possibilities for hackers, should they manage to
find a security hole in nginx.
Here is how we create the nginx user:
$ sudo mkdir /home/nginx
$ sudo groupadd nginx
$ sudo useradd -d /home/nginx -s /usr/sbin/nologin -g nginx nginx
First we create a directory /home/nginx which we will use as home directory for the user
nginx.
Secondly we create a group ‘nginx’
Thirdly, we create the user ‘nginx’ and specify the home directory (/home/nginx) via the -
d option.
We also specify the login shell via -s. The login shell /usr/sbin/nologin actually makes sure
that we can not remotely login (SSH) via the nginx user. We do this for added security.
If you now take a look in your home directory you’ll see something like this:
$ cd /home
$ ls -l
drwxr-xr-x 2 nginx nginx 4096 … nginx
Next, create the startup script /etc/init.d/nginx and add the following content:
#!/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/local/sbin/nginx
DAEMON_OPTS="-c /usr/local/nginx/conf/nginx.conf"
NAME=nginx
DESC=nginx
PID_FILE=/var/run/nginx.pid
set -e
case "$1" in
start)
echo -n "Starting $DESC: "
start-stop-daemon --start --quiet --pidfile $PID_FILE \
--exec $DAEMON -- $DAEMON_OPTS
echo "$NAME."
;;
stop)
echo -n "Stopping $DESC: "
start-stop-daemon --stop --quiet --pidfile $PID_FILE \
--exec $DAEMON
echo "$NAME."
;;
restart|force-reload)
echo -n "Restarting $DESC: "
start-stop-daemon --stop --quiet --pidfile \
$PID_FILE --exec $DAEMON
sleep 1
start-stop-daemon --start --quiet --pidfile \
$PID_FILE --exec $DAEMON -- $DAEMON_OPTS
echo "$NAME."
;;
reload)
echo -n "Reloading $DESC configuration: "
start-stop-daemon --stop --signal HUP --quiet --pidfile $PID_FILE \
--exec $DAEMON
echo "$NAME."
;;
*)
N=/etc/init.d/$NAME
echo "Usage: $N {start|stop|restart|reload|force-reload}" >&2
exit 1
;;
esac
exit 0
Now we need to make the script executable with the chmod +x command:
$ sudo chmod +x /etc/init.d/nginx
To register the script to start nginx during boot we need to run update-rc.d with the name
of the startup script.
$ sudo update-rc.d nginx defaults
To restart nginx:
$ sudo /etc/init.d/nginx restart
To stop nginx:
$ sudo /etc/init.d/nginx stop
Some settings only make sense inside a { … } block; eg. in a certain context. If you put
parameters in the wrong location, nginx will refuse to startup, telling you where the
problem is located in your configuration file(s).
At http://nginx.org/en/docs/ngx_core_module.html you can find all the information of
every setting possible available in the core nginx functionality. They also describe where
the setting makes sense. (eg for example in a http { … } block).
You can test your configuration for errors with:
$ sudo /usr/local/sbin/nginx -t
If there are errors, you can correct them first; while your currently running instance of
nginx is not impacted. Below we will start with the settings at the root level of the nginx
configuration file.
Updating worker_processes
The worker_processes directive is responsible for letting nginx know how many processes
it should spawn. It is common practice to run 1 worker process per CPU core.
To view the number of available cores in your system, run the following command:
$ grep processor /proc/cpuinfo | wc -l
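For a 4-core VPS like the one used in this book, the directive then looks like this:
worker_processes 4;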
Updating worker_priority
This defines the scheduling priority for nginx worker processes. A negative number means
higher priority (acceptable range: -20 to +20).
$
;
Updating timer_resolution
$
;
Reduces timer resolution in worker processes, thus reducing the number of gettimeofday()
system calls made. By default, gettimeofday() is called each time a kernel event is
received. With reduced resolution, gettimeofday() is only called once per specified
interval.
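An illustrative value (the exact interval is a matter of taste):
timer_resolution 100ms;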
The 0666 permission settings mean all users can read and write to this directory but no
files may be executed here; again to harden security.
$
;
PCRE JIT can speed up processing of regular expressions in the nginx configuration files
significantly.
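Enabling it is a one-liner at the top level of nginx.conf:
pcre_jit on;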
The access log logs all requests that came in for the nginx server. We will use the default
log format provided by nginx. This log format is called ‘combined’ and we will reference
it in the log file location parameter.
The default log format ‘combined’ displays the following information:
'$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"'
Logging every request takes CPU and I/O cycles; that’s why we define a buffer
(buffer=32k). This causes nginx to buffer a series of log entries and write them to the file
together, instead of performing a separate write operation for each.
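Combining the format name and the buffer, the directive looks roughly like this (the log path matches the --http-log-path used at compile time):
access_log /var/log/nginx/access.log combined buffer=32k;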
Defines a cache that stores the file descriptors of frequently used logs whose names
contain variables. The directive has the following parameters:
max: sets the maximum number of descriptors in a cache; if the cache becomes full
the least recently used descriptors are closed
inactive: sets the time after which the cached descriptor is closed if there were no
access during this time; by default, 10 seconds
min_uses: sets the minimum number of file uses during the time defined by the
inactive parameter to let the descriptor stay open in a cache; by default, 1
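An illustrative example with the three parameters described above (the values are just a sketch):
open_log_file_cache max=1000 inactive=20s min_uses=2;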
Enabling HTTP Status output
The following configuration enables nginx HTTP status output on the URL /nginx-info
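A minimal sketch of such a configuration, placed inside your server { … } block:
location /nginx-info {
    stub_status on;
    access_log off;
}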
As in the example output shown earlier, this page reports the number of active connections
and how many of them nginx is currently reading from, writing to or keeping waiting.
HTTP Performance Optimizations
The Unix sendfile system call allows data to be transferred from one file descriptor to another directly in
kernel space. This saves a lot of resources and is very fast. When you have a lot of
static content to be served, sendfile will speed up the serving significantly.
When you have dynamic content (eg. Java, PHP, …) this setting will not be used by nginx;
and you won’t see performance differences.
These two settings only have effect when ‘sendfile on’ is also specified.
tcp_nopush ensures that the TCP packets are full before being sent to the client. This
greatly reduces network overhead and speeds up the way files are sent.
When the last packet is sent (which is probably not full), the tcp_nodelay forces the socket
to send the data, saving up to 0.2 seconds per file.
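Inside the http { … } block this boils down to three lines:
sendfile on;
tcp_nopush on;
tcp_nodelay on;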
Security optimizations
Disabling server_tokens makes sure that no nginx version information is added to the
response headers of an http request.
Disabling server_name_in_redirect makes sure that the server name (specified in
server_name) is not used in redirects. It’ll instead use the name specified in the Host
request header field.
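The corresponding directives:
server_tokens off;
server_name_in_redirect off;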
Timeouts
The keepalive_timeout assigns the timeout for keep-alive connections with the client.
Connections who are kept alive, for the duration of the timeout, can be reused by the client
(faster because the client doesn’t need to setup an entirely new TCP connection to the
server; with all the overhead associated).
Internet Explorer will disregard the timeout setting, and will auto close them by itself after
60 seconds.
Disables keep-alive connections with misbehaving browsers. The value msie6 disables
keep-alive connections with Internet Explorer 6
Allow the server to close the connection after a client stops responding. Frees up socket-
associated memory.
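A sketch of these timeout related directives; the 30 second keep-alive value is an example choice, and the last line assumes the setting described above is reset_timedout_connection:

keepalive_timeout 30s;
keepalive_disable msie6;
reset_timedout_connection on;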
Gzip Compression
Gzip can help reduce the amount of network transfer by reducing the size of html, css, and
javascript.
Here is how to enable it:
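A typical gzip configuration sketch (the compression level and MIME types are our own example choices):

gzip on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_types text/plain text/css application/javascript application/json application/xml;
gzip_vary on;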
Buffers
Buffer sizes should be big enough that nginx doesn’t need to write to temporary files,
which would cause disk I/O.
The buffer size for HTTP POST actions (eg. Form submissions).
Sets the maximum allowed size of the client request body, specified in the “Content-
Length” request header field. If the size in a request exceeds the configured value, the 413
(Request Entity Too Large) error is returned to the client
The maximum number and size of buffers for large client headers
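Example values for the three buffer related directives described above (the sizes are illustrative; tune them to your traffic):

client_body_buffer_size 128k;
client_max_body_size 10m;
large_client_header_buffers 4 8k;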
Miscellaneous
Defines a directory for storing temporary files with data received from proxied servers.
Caches for Frequently Accessed Files
You can add more if needed. Just define a new type {…} block inside the http block:
Create a system user for each website domain which can login via SFTP (secure file
transfer protocol) - this way you can upload your website to the server
Create a home directory for each website domain where you will put the html files,
… of your website
Create a nginx config file for each website domain (so nginx actually listens for that
domain).
web: where we will place the html, css, … files of our website
log: where specific logs for this website will be logged
When SFTPing to the server with the user mywebsite you’ll not be able to step outside of
the root directory /home/mywebsite.
$
The above command will add a user mywebsite, that is part of the nginx group. The home
directory is /home/mywebsite and we don’t provide any login shell via SSH for this user.
Remark: /usr/sbin/nologin is correct in Ubuntu; in other Linux distributions this can also
be /sbin/nologin !
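A sketch of such a useradd invocation; the flags mirror the description above, and the nologin path is the Ubuntu one (adjust it for your distribution):

$ sudo useradd --create-home --home-dir /home/mywebsite --gid nginx --shell /usr/sbin/nologin mywebsite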
Now we will generate a good password for the user mywebsite with openssl:
$ 8
yDIv39Eycn8=
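The generator used here is openssl’s rand command: 8 is the number of random bytes, and the output is base64 encoded (hence the 12 character result above). You can then assign the generated password to the user with passwd:

$ openssl rand -base64 8
$ sudo passwd mywebsite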
Change the ownership to root for this directory (this is needed for the remote SFTP to
work correctly)
$
Make sure that /home/mywebsite is only writable by the root user. (read only for the rest
of the world):
$
Setting directories g+s with chmod makes all new files created in said directory have their
group set to the directory’s group by default. (eg. in this case our group is nginx). This
makes sure that nginx is correctly able to read the html files etc. for our website.
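Putting the ownership and permission steps together, a sketch could look like this (the web and log subdirectories are the ones introduced earlier in this chapter):

$ sudo chown root:root /home/mywebsite
$ sudo chmod 755 /home/mywebsite
$ sudo mkdir -p /home/mywebsite/web /home/mywebsite/log
$ sudo chown mywebsite:nginx /home/mywebsite/web /home/mywebsite/log
$ sudo chmod g+s /home/mywebsite/web /home/mywebsite/log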
Configuring Remote SFTP access For mywebsite.com
In this section we will configure remote SFTP access by configuring the ssh daemon.
$
We will change the Subsystem sftp from sftp-server to internal-sftp. We do this because
we want to enforce that the SFTP user can not get outside of its home directory for
security reasons. Only internal-sftp supports this.
Thus change
with
Match User mywebsite indicates that the lines that follow only apply for the user
mywebsite.
The ChrootDirectory is the root directory (/) the user will see after the user is
authenticated.
“%h” is a placeholder that gets replaced at run-time with the home folder path of that user
(eg. /home/mywebsite in our case).
ForceCommand internal-sftp - This forces the execution of the internal-sftp and ignores
any commands that are mentioned in the ~/.ssh/rc file.
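A sketch of the relevant sshd_config lines; the two forwarding directives at the end are extra hardening we usually add and are not required by this chapter:

Subsystem sftp internal-sftp

Match User mywebsite
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no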
You also need to add /usr/sbin/nologin to /etc/shells, or the sftp user will not be able to
login:
$
add
at the end
Restart ssh with:
$
You can now try out to login via SFTP (eg. by using FileZilla on Windows). If the process
fails it is best to check in the authentication log of the ssh daemon:
$
In this log file you could also find login attempts of (would-be) hackers trying to break in
into your site.
Create a nginx config file for each website domain
First we’ll create a separate nginx configuration file for our domain. We’ll then include it
in the main nginx.conf. This way, if we add more sites we have everything cleanly
separated.
$
A server block defines a ‘virtual host’ in nginx. It maps a server_name to an IPv4 and/or
an IPv6 address. You can see we’re listening on the default HTTP port 80. The root tag
specifies in which directory the html files are present.
access_log and error_log respectively log the requests that arrive for www.mywebsite.com
and the errors.
You can create multiple server { … } blocks that map different server names to the same
IPv4 address. This allows you to host multiple websites using only 1 IPv4 or IPv6 address.
These are also sometimes called ‘virtual hosts’. (Most probably you’ll only have one IPv4
address available for your VPS because they are becoming more scarce.)
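A sketch of such a server block for www.mywebsite.com, reusing the directory layout and log settings from this chapter (replace the addresses with your own):

server {
    server_name www.mywebsite.com;
    listen <your IPv4 address>:80;
    listen [<your IPv6 address>]:80;
    root /home/mywebsite/web;
    access_log /home/mywebsite/log/access.log combined buffer=32k;
    error_log /home/mywebsite/log/error.log;
}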
Now we’ll add the mywebsite.com variant (without the www) in a second server { … }
block.
server {
server_name mywebsite.com;
listen <your Ipv4 address>:80; # Listen on IPv4 address
listen [<your Ipv6 address>]:80; # Listen on IPv6 address
return 301 http://www.mywebsite.com$request_uri;
}
The last line tells nginx to issue an HTTP 301 Redirect response to the browser. This will
make sure that everyone that types in mywebsite.com will be redirected to
www.mywebsite.com. We do this because we want our website to be available on a single
address for Search Engine Optimization reasons. Google, Bing and other search engines
don’t like content that is not unique, eg. when it is available on more than one website.
The $request_uri is a variable, and contains whatever the user typed in the browser after
mywebsite.com
Configuring a Default Server
When you issue a request to www.mywebsite.com with your browser it’ll normally
include a Host: www.mywebsite.com parameter in the HTTP request.
nginx uses this “Host” parameter to determine which virtual server the request should be
routed to. (as multiple hosts can be available on one IP address)
If the value doesn’t match any server name or if the Host parameter is missing, then nginx
will route the request to the default server (eg. Port 80 for http).
If you haven’t defined a default server, nginx will take the first server { … } block as the
default.
In the config below we will explicitly set which server should be default, with the
default_server parameter.
We return an HTTP 444 response, which returns no information to the client and closes the
connection.
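A sketch of such an explicit default server that simply drops requests with an unknown or missing Host header:

server {
    listen <your IPv4 address>:80 default_server;
    listen [<your IPv6 address>]:80 default_server;
    server_name _;
    return 444;
}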
Setting Up Log Rotation For nginx
Logs from nginx can quickly take up a lot of diskspace when your site(s) have a lot of
visitors.
We’ll setup the Linux ‘logrotate’ application to solve this problem automatically.
‘logrotate’ provides us with a lot of options:
renaming (rotating) & compressing log files when they reach a certain size
keeping compressed backups for log files (with limits)
In the end the options make sure that the size taken by the log files will be constant.
Here are the options we’ve chosen:
Here is how we want the logs of nginx to be rotated:
rotate daily
keep max 10 backups before old log files are deleted
the size taken by the log files is max 100MB
compress the log files when rotating them (a daily compress)
run a postrotate script in which nginx is restarted
We’ll create the following configuration file for the above settings:
$
$
We use the /etc/logrotate.d directory because logrotate will look there by default (default
settings are in /etc/logrotate.conf)
The postrotate block specifies that nginx will be restarted after the rotation of the log files took place.
By using sharedscripts we are ensuring that the post rotate script doesn’t run on every
rotated log file but only once.
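A sketch of an /etc/logrotate.d/nginx file implementing the options listed above; the log paths are assumptions, and instead of a full restart we signal nginx with USR1 so it reopens its log files:

/usr/local/nginx/logs/*.log /home/mywebsite/log/*.log {
    daily
    rotate 10
    maxsize 100M
    compress
    missingok
    notifempty
    sharedscripts
    postrotate
        [ -f /usr/local/nginx/logs/nginx.pid ] && kill -USR1 `cat /usr/local/nginx/logs/nginx.pid`
    endscript
}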
You can view the status of the logrotation via the command:
$
You’ll see that log files will have been compressed and renamed as <filename of log
file>.1 (1 = first backup)
Disabling nginx Request Logs and Not Found Errors
By default nginx will log all requests for html files, images, … to a log file called the
access log. The information recorded contains the IP address, the date of visit, the page
visited, the HTTP response code and more.
If you don’t need this information, eg. if you have added Google Analytics code to your
site, you can safely disable this logging and lessen the load on your I/O subsystem a bit.
Here is how: inside a http or server block you can specify the access_log and error_log
variables:
In the above example we completely turn off the access log. We still log errors to the
error_log, but we disable the HTTP 404 not found logs. We recommend to use Google
Webmaster tools to keep track of bad links on your site.
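Inside the server block this looks like the following sketch (the error log path is an assumption):

access_log off;
error_log /home/mywebsite/log/error.log;
log_not_found off;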
Updating nginx
nginx regularly comes out with new versions. As we have compiled nginx from source,
we will need to recompile the new nginx version ourselves.
Here is our guide on how to do this. (very similar to a clean install)
Backup the nginx config files
We start by taking a backup of our nginx config files which are located in
/usr/local/nginx/conf/
$
Like previously explained download the sources of the new nginx version. Also check
whether there are new versions of the modules we’re using. (eg. Google Pagespeed nginx
plugin, OpenSSL).
If there are new versions of the modules; also download these and update the ./configure
command used for building nginx with the new paths.
Start the compilation:
$
$
Before we can do ‘sudo make install’ we need to stop the currently running server:
$
$
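A sketch of the stop / install / start sequence, assuming nginx was installed under the default /usr/local/nginx prefix:

$ sudo /usr/local/nginx/sbin/nginx -s stop
$ sudo make install
$ sudo /usr/local/nginx/sbin/nginx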
Zend OpCache
Since PHP5.5 a default OpCode cache has been included: Zend OpCache.
We recommend using Zend OpCache instead of the older APC (Alternative PHP Cache),
because APC was not always 100% stable with the latest PHP versions and
Zend OpCache appears to be more performant.
$
$ ( for $ \
)
$
$
$
$
$
$
$
$
$
$
$
Downloading PHP
We will now download and unzip the latest version of the PHP sources. You can find the
download links at http://php.net/downloads.php
$ cd
$
$
Compilation flags
–enable-opcache
The enable-opcache option will make sure the OpCache is available.
–enable-cgi
Builds the CGI version of the PHP binary next to the FPM one.
–with-mcrypt
This is an interface to the mcrypt library, which supports a wide variety of block
algorithms such as DES, TripleDES, Blowfish (default), 3-WAY, SAFER-SK64, SAFER-
SK128, TWOFISH, TEA, RC2 and GOST in CBC, OFB, CFB and ECB cipher modes.
This is used by eg. the Magento shopping cart and other PHP frameworks.
–with-zlib
PHP Zlib module allows you to transparently read and write gzip compressed files. Thus it
is used for serving faster content to the end users by compressing the data stream.
Some applications like Pligg require zlib compression enabled by default in the PHP
engine
–with-gettext
Enables the gettext extension, used for translating (localizing) application messages.
–enable-exif
With the exif extension you are able to work with image meta data. For example, you may
use exif functions to read meta data of pictures taken from digital cameras by working
with information stored in the headers of the JPEG and TIFF images.
–enable-zip
Enables reading and writing of ZIP compressed archives from PHP.
–with-bz2
The bzip2 functions are used to transparently read and write bzip2 (.bz2) compressed files.
–enable-soap
Enables the SOAP extension, used for writing and consuming SOAP web services.
–enable-sysvsem, –enable-sysvshm, –enable-sysvmsg
These modules provide wrappers for the System V IPC family of functions. It includes
semaphores, shared memory and inter-process messaging (IPC).
–enable-shmop
Shmop is an easy to use set of functions that allows PHP to read, write, create and delete
UNIX shared memory segments
–with-pear
PEAR is short for “PHP Extension and Application Repository” and is pronounced just
like the fruit. The purpose of PEAR is to provide a structured library of open-source PHP
code and a system for distributing and maintaining code packages.
–enable-mbstring
mbstring provides multibyte specific string functions that help programmers deal with
multibyte encodings in PHP. In addition to that, mbstring handles character encoding
conversion between the possible encoding pairs. mbstring is designed to handle Unicode-
based encodings such as UTF-8 and UCS-2 and many single-byte encodings for
convenience.
–with-openssl
This option needs to be enabled if you want to work with certificates and
verify/encrypt/decrypt functions in PHP
–with-mysql=mysqlnd
Enables the use of the MySQL native driver for PHP which is highly optimized for and
tightly integrated into PHP.
More information at http://dev.mysql.com/downloads/connector/php-mysqlnd/
–with-mysqli=mysqlnd
Enables the use of the MySQL native driver with mysqli, the improved MySQL interface
API which is used by a lot of PHP frameworks.
–with-mysql-sock=/var/run/mysqld/mysqld.sock
Sets the path of the MySQL unix socket pointer (used by all PHP MySQL extensions)
–with-curl
Enables the PHP support for cURL (a tool to transfer data from or to a server)
–with-gd
PHP can not only output HTML to a browser. By enabling the GD extension PHP can
output image streams directly to a browser. (eg. JPG, PNG, WebP, …)
–enable-gd-native-ttf
Enables native TrueType font support in the GD image extension.
–enable-bcmath
Enables the use of arbitrary precision mathematics in PHP via the Binary Calculator
extension. Supports numbers of any size and precision, represented as strings.
–enable-calendar
Enables the calendar extension for converting between different calendar formats.
–enable-ftp
A PHP script can use this extension to access an FTP server providing a wide range of
control to the executing script.
–enable-pdo, –with-pdo-mysql=mysqlnd
Enables the PHP Data Objects (PDO) API and the use of the MySQL native driver with the PDO interface.
–enable-inline-optimization
Inlining is a way to optimize a program by replacing function calls with the actual body of
the function being called at compile-time.
It reduces some of the overhead associated with function calls and returns.
Enabling this configuration option will result in potentially faster php scripts that have a
larger file size.
–with-imap, –with-imap-ssl, –with-kerberos
Adds support for the IMAP protocol (used by email servers) and related libraries
–with-fpm-user=nginx, –with-fpm-group=nginx
When enabling the FastCGI Process Manager (FPM), we set the user and group to our
nginx web server user.
To verify the PHP installation, let’s create a test php file which we will execute using the
php command line tool:
$ cd ~
$ nano phpinfo.php
<?php
phpinfo();
?>
As our server is located in the New York timezone we selected this timezone. You can find
a list of possible timezones at http://www.php.net/manual/en/timezones.php. Choose the
continent/city which is nearest your server data center.
Set maximum execution time
This sets the maximum time in seconds a script is allowed to run before it is terminated by
the parser. This helps prevent poorly written scripts from tying up the server. The default
setting is 30. When running PHP from the command line the default setting is 0.
Your web server can have other timeout configurations that may also interrupt PHP
execution.
Duration of time (in seconds) for which to cache realpath information for a given file or
directory. For systems with rarely changing files, consider increasing the value.
Sets the maximum filesize that can be uploaded by a PHP script. Eg. If your Wordpress
blog system complains that you can not upload a big image, this can be caused by this
setting.
This sets the maximum amount of memory in bytes that a script is allowed to allocate.
This helps prevent poorly written scripts from eating up all available memory on a server.
Note that to have no memory limit, set this directive to -1.
Sets max size of HTTP POST data allowed. This setting also affects file upload. To upload
large files, this value must be larger than upload_max_filesize. If memory limit is enabled
by your configure script, memory_limit also affects file uploading. Generally speaking,
memory_limit should be larger than post_max_size.
This is a security setting: you don’t want to expose to the world
that PHP is installed on the server.
This directive allows you to disable certain PHP functions for security reasons. It takes on
a comma-delimited list of function names.
Only internal functions can be disabled using this directive. User-defined functions are
unaffected.
Don’t add a X-PHP-Originating-Script HTTP header that will include the UID (Unique
identifier) of the script followed by the filename.
Sets the max nesting depth of input variables in HTTP GET, POST (eg.
$_GET, $_POST.. in PHP)
Set the maximum amount of input HTTP variables
The maximum number of HTTP input variables that may be accepted (this limit is applied
to $_GET, $_POST and $_COOKIE separately).
Using this directive mitigates the possibility of denial of service attacks which use hash
collisions. If there are more input variables than specified by this directive, further input
variables are truncated from the request.
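Gathering the settings discussed in this section into one php.ini sketch (all values are example choices of our own; tune them to your applications):

date.timezone = "America/New_York"
max_execution_time = 30
realpath_cache_ttl = 300
upload_max_filesize = 32M
memory_limit = 256M
post_max_size = 48M
expose_php = Off
disable_functions = exec,passthru,shell_exec,system
mail.add_x_header = Off
max_input_nesting_level = 64
max_input_vars = 1000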
Configuring PHP-FPM
To configure PHP-FPM we’ll create the following php-fpm configuration file.
$
First we’ll add the path to the PID file for PHP-FPM. We’ll use this later in our
startup/shutdown script of PHP-FPM.
The above 3 settings configure that if 10 PHP-FPM child processes exit with an error
within 1 minute then PHP-FPM restarts automatically. This configuration also sets a 10
seconds time limit for child processes to wait for a reaction on signals from the master
PHP-FPM process.
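The corresponding global php-fpm.conf entries could look like this sketch (the pid path assumes a /usr/local/php install prefix):

pid = /usr/local/php/var/run/php-fpm.pid
emergency_restart_threshold = 10
emergency_restart_interval = 1m
process_control_timeout = 10s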
[ ]
user =
group =
The [www] defines a new pool with the name “www”. We’ll launch PHP with our nginx
user which makes sure that our PHP interpreter will only be able to read/write
files/directories that are owned by nginx. (in our case that would be our websites and
nothing else).
The listen configuration is the IP address and port where PHP-FPM will listen for
incoming requests (in our case nginx forwarding requests for PHP files).
The allowed_clients setting will limit from where the clients can access PHP-FPM. We
specify 127.0.0.1 or the local host; this makes sure that no one is able to access the PHP-
FPM from the outside (evil) world. Only applications also running on the same server can
communicate with PHP-FPM. In our case nginx will be able to communicate with the
PHP-FPM server.
You can choose how the PHP-FPM process manager will control the number of child
processes. Possible values are static, ondemand and dynamic.
Ondemand spawns the processes on demand; as such we don’t have any PHP-FPM
child processes lingering around (being idle in the worst case). Only when processes are
needed, they will be started, until the maximum of max_children is reached. Because the
processes need to be started, this option may not be the most performant. It’s a trade-off
between memory consumption and performance.
The number of seconds after which an idle process will be killed with the ondemand
process manager.
You could also go for the following dynamic configuration which preforks 15 processes
(pm.start_servers):
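A sketch of such a [www] pool combining the settings discussed above; the ondemand variant is active and a dynamic alternative is shown commented out (max_children and the other numbers are example values):

[www]
user = nginx
group = nginx
listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1
pm = ondemand
pm.max_children = 50
pm.process_idle_timeout = 10s
; dynamic alternative:
; pm = dynamic
; pm.max_children = 50
; pm.start_servers = 15
; pm.min_spare_servers = 5
; pm.max_spare_servers = 25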
This defines the path where we can view the PHP FPM status page. The status page is very
useful to see if we need to tweak the FPM settings further.
To view this information on your site via your browser you’ll need to add a location to the
nginx configuration too. We’ll add this later in this chapter when we configure nginx to
pass PHP requests to PHP-FPM.
This setting disables the creation of core dumps when PHP FPM would crash. This way
we save on disk space.
This log file will contain all PHP files which executed very slowly. (this could for example
be due to a slow database call in the PHP script).
The ping URI to call the monitoring page of FPM. This could be used to test from outside
that FPM is alive and responding (eg. with Pingdom).
Sets the PHP error_log location. Because we use php_admin_value the log location cannot
be overridden in a user’s php.ini file.
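These pool-level settings could be sketched as follows (the paths and the slow log timeout are our own example choices):

pm.status_path = /fpmstatus
ping.path = /ping
rlimit_core = 0
request_slowlog_timeout = 5s
slowlog = /usr/local/php/var/log/www.slow.log
php_admin_value[error_log] = /usr/local/php/var/log/www.error.log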
Now edit the file and check whether the following paths are correct:
$
prefix=
exec_prefix=${prefix}
php_fpm_BIN=${exec_prefix}
php_fpm_CONF=${prefix}
php_fpm_PID=
Stopping php-fpm:
$
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $request_filename;
fastcgi_connect_timeout 60;
fastcgi_send_timeout 180;
fastcgi_read_timeout 180;
fastcgi_buffer_size 40k;
fastcgi_buffers 290 40k;
fastcgi_busy_buffers_size 512k;
fastcgi_temp_file_write_size 512k;
fastcgi_intercept_errors on;
The fastcgi_buffers and fastcgi_buffer_size settings can be tweaked after your site is
receiving traffic. Then it is possible to compute the average and maximum response sizes
of the HTML files via the access logs. Here is the command to compute the average
response size for all requests in an access.log file.
$ echo $(( `awk '($9 ~ /200/)' access.log | awk '{print $10}' | awk '{s+=$1} END \
{print s}'` / `awk '($9 ~ /200/)' access.log | wc -l` ))
Set the fastcgi_buffer_size accordingly (eg. a little bit higher than the average response
size). This makes sure that most requests will fit into 1 buffer, possibly optimizing
performance.
To compute the maximum response size use:
$ awk '($9 ~ /200/)' access.log | awk '{print $10}' | sort -n | tail -1
Then divide this number (in bytes) by 1000 (to get kB) and then by 40 (the size in kB of
one buffer) to get the number of fastcgi_buffers needed.
Now opcache.php is available in the directory where your nginx is looking for its files.
Going to the opcache.php file via your browser should display the following information:
(The page shows the Zend OpCache status: memory usage, hit rate statistics and the list of
cached scripts.)
Installing memcached
Memcached is a free, high-performance memory object caching system. It is intended
to speed up dynamic websites by reducing the load on the database.
Memcached stores small chunks of arbitrary data that come from results of database
calls, API calls or page rendering. It thus reduces calls made to your database, which will
increase the speed of regularly accessed dynamic webpages. It’ll also improve the
scalability of your infrastructure.
Multiple client APIs exist, written in different languages like PHP and Java. In this guide
we’ll focus on the PHP integration, which will make it possible to let Wordpress, phpBB
and other PHP frameworks use the memcached server.
Comparison of caches
At this moment we have already configured an OpCode cache in PHP. Why would we
need another kind of caching?
The OpCode cache in PHP stores the compiled PHP code in memory. Running PHP code
will speed up significantly due to this.
Memcache can be used as a temporary data store for your application to reduce calls to
your database. Applications which support Memcache include PHPBB3 forum software,
Wordpress with the W3 Total Cache plugin and more.
Two PHP APIs for accessing memcached exist:
pecl/memcache
pecl/memcached (newer)
We will install both, because some PHP webapp frameworks may still use the older
API.
memcached Server
You can find the latest release of the memcached server at
http://memcached.org/downloads
Download the release as follows:
$ cd
$
$
$ cd
$ && &&
Now we’ll make sure that Memcached server is able to find the libevent library:
$
Add
/usr/local/lib is the directory where libevent.so is located. With sudo ldconfig we make the
configuration change active.
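A sketch of this step; the file name under /etc/ld.so.conf.d is our own choice:

$ echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/libevent.conf
$ sudo ldconfig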
You should now be able to start Memcached manually via:
$
Press Ctrl-C to stop the memcached server, as we will add a startup script that’ll start
memcached at boot up of our server.
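A sketch of such a manual start; the binary path, memory size and user are example values:

$ /usr/local/bin/memcached -u nginx -m 64 -p 11211 -l 127.0.0.1 -vv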
Installing libmemcached
libMemcached is an open source C/C++ client library for the memcached server. It has
been designed to be light on memory usage, thread safe, and provide full access to server
side methods.
It can be used by the pecl/memcached PHP extension as a way to communicate with the
Memcached server. Because it is written in C it is also very performant.
Here is how to install it:
$ cd
$ \
$
$ cd
$
$
Installing igbinary
Igbinary is a PHP extension which provides binary serialization for PHP objects and data.
It’s a drop-in replacement for PHP’s built-in serializer.
The default PHP serializer uses a textual representation of data and objects. Igbinary stores
data in a compact binary format which reduces the memory footprint and performs
operations faster.
Why is this important? Because the pecl/memcached PHP extension will use the igbinary
serializer when saving and getting data from the memcached server. The memcached
server cache will then contain the more compact binary data and use less memory.
$ cd
$
$
$ cd
$
for
$ CFLAGS="-O2 -g"
$
$
The igbinary.so library is now installed in the default extension directory. (in our case
/usr/local/lib/php/extensions/no-debug-zts-20131226/)
Now add igbinary to your php.ini configuration:
$
Add:
Now restart PHP-FPM to reload the changed php.ini configuration:
$
You can view whether the igbinary module is now available in php via:
$
Add
Add
Now memcached is running as a daemon. On the command line we can now check if it is
listening on the default port 11211:
$ |
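For example:

$ sudo netstat -tulpn | grep 11211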
Now let’s create a Memcached server startup/stop script that is automatically run on boot
up of our server.
In the startup script we’ll also put in some Memcached configuration settings.
# Usage:
# /etc/init.d/memcached start
# /etc/init.d/memcached stop
PATH=
DAEMON=
DAEMONNAME=
DESC=
# ?
CON=
# nr of threads; It is typically not useful to set this higher than the number o\
THREADS=
# ?
MINSP=
# ?
CHUNKF=
# Port to listen on
PORT1=
# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default
# Note that the daemon will grow to this size, but does not start out holding th\
# memory
MEMSIZE=
# Server IP
SERVERIP='127.0.0.1'
# Experimental options
OPTIONS='-o slab_reassign,slab_automove,lru_crawler,lru_maintainer,maxconns_fast\
,hash_algorithm=murmur3'
set -e
case "$1" in
start)
        echo "Starting $DESC: "
        $DAEMON $MEMSIZE $SERVERIP $PORT1 $CON $THREADS $MINSP \
                $CHUNKF $USER $OPTIONS
        ;;
stop)
        echo "Stopping $DESC: "
        $DAEMON
        ;;
*)
        N=/etc/init.d/$DAEMONNAME
        echo "Usage: $N {start|stop}" >&2
        exit 1
        ;;
esac
exit 0
Save the file and then make the script executable via:
$
After rebooting memcached should be running. You can check this via the following
command:
$ |
2430 1 0 \
4 11211 1024 4 72 \
=
$
$
$
$ cd
$
$
$
$
Now you can run a test like:
It could be that due to the upgrade it will complain about being unable to initialize the module:
PHP Warning: PHP Startup: memcache: Unable to initialize module
Module compiled with module API=20121212
PHP compiled with module API=20131226
or
Install ImageMagick
$ cd
$
If ImageMagick configured and compiled without complaint, you are ready to install it on
your system. Administrator privileges are required to install. To install, type
$
Add
Installing PHPMyAdmin
phpMyAdmin is a free software tool written in PHP, intended to handle the administration
of MySQL over the Web. phpMyAdmin supports a wide range of operations on MySQL
and MariaDB.
Frequently used operations (managing databases, tables, columns, relations, indexes,
users, permissions, etc) can be performed via the user interface, while you still have the
ability to directly execute any SQL statement.
Installing phpMyAdmin
We’ll download the latest version of phpMyAdmin and add it to our Nginx website root:
$ cd
$ \
Now we will use the browser based setup pages to set up phpMyAdmin. We’ll configure
the host, port, username and password with which our database can be accessed.
To setup phpMyAdmin via the browser, you must manually create a folder “config” in the
phpMyAdmin directory. This is a security measure. On a Linux system you can use the
following commands:
$ # create directory for saving
$ 774 # give it world writable permissions
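A sketch of these two steps, assuming phpMyAdmin was unpacked into /home/mywebsite/web/dbadmin (the location used later in this section):

$ cd /home/mywebsite/web/dbadmin
$ mkdir config      # directory where the setup script saves the generated configuration
$ chmod 774 config  # writable by the owner and the nginx group, so PHP can write to it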
In the Authentication tab, you can enter the user and password phpMyAdmin should use
to connect to the database. We previously showed you how you can create users to limit
access to databases. When you use the root MySQL user you’ll have access to everything
which could be a security risk.
Now press Apply; we’re back in the main config screen where we can download the
generated configuration file. (config.inc.php).
Upload this file to /home/<mywebsite>/web/dbadmin and delete the config directory.
Now you should be able to browse to http://<yourwebsite.com>/dbAdmin/index.php,
login and see the databases available on this server.
In the https / SSL support chapter we will enable secure access to our PHP My Admin
installation.
Installing Java
In this chapter we will install the most recent version of the Java virtual machine software
on our server. Just like PHP, it is regularly used for creating dynamic websites.
In the next section we will install a Java based web application server Jetty, which scales
very well and is very performant.
The latest major release of Java is Java 8. Each release sees new features and optimized
performance (eg. better garbage collection, …). We’ll describe how to install or update
your Java VM to the latest version below.
This adds all the java executables to the system path and defines the JAVA_HOME
variable to /usr/local/java/jdk1.8.0_66
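The lines added to /etc/profile typically look like this sketch (the JDK path matches the version used in this chapter):

JAVA_HOME=/usr/local/java/jdk1.8.0_66
PATH=$PATH:$JAVA_HOME/bin
export JAVA_HOME
export PATH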
Now we need to tell the Linux OS that the Oracle Java version is available for use. We
execute 3 commands, one for the java executable, one for the javac executable (java
compiler) and one for javaws (java webstart):
$ "/usr/bin/java" "java" "/usr/local/java/jdk\
1.8.0_66/bin/java"
$ "/usr/bin/javac" "javac" "/usr/local/java/j\
dk1.8.0_66/bin/javac"
$ "/usr/bin/javaws" "javaws" "/usr/local/java\
/jdk1.8.0_66/bin/javaws"
The following commands tell the OS to use our Java8 version as the default Java:
$
$
$
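As a sketch, the register-and-select sequence with update-alternatives looks like the following; the priority value 1 at the end of the --install commands is an arbitrary choice of ours:

$ sudo update-alternatives --install "/usr/bin/java" "java" "/usr/local/java/jdk1.8.0_66/bin/java" 1
$ sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/local/java/jdk1.8.0_66/bin/javac" 1
$ sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/local/java/jdk1.8.0_66/bin/javaws" 1
$ sudo update-alternatives --set java /usr/local/java/jdk1.8.0_66/bin/java
$ sudo update-alternatives --set javac /usr/local/java/jdk1.8.0_66/bin/javac
$ sudo update-alternatives --set javaws /usr/local/java/jdk1.8.0_66/bin/javaws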
In the last step we will reload the system wide PATH /etc/profile by typing the following
command:
$
It is compatible with Java 1.8 and above and uses Java 1.7/1.8 features. Jetty will
have better performance when run on recent Java versions.
It supports the latest HTTP/2 protocol which speeds up websites
It supports the latest Servlet programming API (useful for Java developers)
The following commands will make the jetty user the owner of the jetty installation
directory. We will also create a directory /var/log/jetty where Jetty can put its logs (which
is also owned by the jetty user):
$
$
$
JETTY_ARGS= =
JETTY_HOME=
JETTY_LOGS=
JETTY_PID=
We’re setting the path where Jetty has been installed, the log files location and the process
id file respectively.
Let’s make the jetty startup script executable:
$
If you want more logging when stopping and starting Jetty you can edit /etc/init.d/jetty and
change DEBUG=0 to DEBUG=1 in the file.
You’ll then receive similar output like below from the start script:
You can also view the status of a Jetty instance by executing:
$
START_INI =
JETTY_HOME =
JETTY_BASE =
JETTY_CONF =
JETTY_PID =
JETTY_START =
JETTY_ARGS = =8080 = \
JAVA_OPTIONS = = = \
6 = = \
JAVA =
RUN_CMD =
START_INI =
JETTY_HOME =
JETTY_BASE =
JETTY_CONF =
JETTY_PID =
JETTY_START =
JETTY_LOGS =
JETTY_STATE =
CLASSPATH =
JAVA =
JAVA_OPTIONS = = = \
6 = = \
JETTY_ARGS = =8080 = \
RUN_CMD = = = \
= = \
=8080 \
=
To create a default set of configuration files you can run the following command from our
jetty-conf directory:
$ = \
Let’s make sure that the configuration files are all owned by our jetty user:
$
$
We need to update our Jetty launch script to specify our Jetty Base directory:
$
You should now be able to stop and start the Jetty server via sudo /etc/init.d/jetty stop/start
with the configuration inside the JETTY_BASE directory.
You can also view the configuration details of the Jetty server and Jetty base configuration
via:
$ cd
$
Jetty Environment:
-----------------
This command will also list the Jetty Server classpath in case you would come across
some classpath or jar file issues.
You can of course modify the generated configuration. We’ll not cover this in detail but
will give an example we have used in production systems:
A start.ini file has been created which we can modify below to e.g. change the listening
port, the max. number of threads and more:
$
=
=
=
# If this module is activated, then all jar files found in the lib/ext/ paths wi\
=
# minimum number of threads
=
# Dump the state of the Jetty server, components, and webapps after startup
=false
=
=
=
=
=
Here we have copied a default jetty-deploy.xml from the unzipped Jetty download and
added it to our jetty-conf directory.
Here is an example configuration:
<Configure id="Server" class="org.eclipse.jetty.server.Server">
<Call name="addBean">
<Arg>
<New id="DeploymentManager" class="org.eclipse.jetty.deploy.DeploymentMana\
ger">
<Set name="contexts">
<Ref refid="Contexts" />
</Set>
<Call name="setContextAttribute">
<Arg> </Arg>
<Arg> </Arg>
</Call>
</New>
</Set>
</New>
</Arg>
</Call>
</New>
</Arg>
</Call>
</Configure>
Jetty can use the monitoredDirName to find the directory where your Java webapp is
located. When starting Jetty, you’ll see Jetty deploying this directory.
Another important parameter is the scanInterval. This setting defines the number of
seconds between scans of the provided monitoredDirName.
A value of 0 disables the continuous hot deployment scan, Web Applications will be
deployed on startup only.
For production it is recommended to disable the scan for performance reasons and restart
the server when you have done webapp changes. For development you can use a 1 second
interval to see your changes immediately.
You can view the groups of which the jetty user is a member via:
$
receives the real IP address of the user visiting the site (X-Real-IP)
receives the URL that is in the users webbrowser bar (Host)
knows whether the website was accessed on https or http (X-Forwarded-Proto)
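These headers are set in the nginx location block that proxies requests to Jetty; a minimal sketch (the backend address 127.0.0.1:8080 follows the Jetty port used in this chapter):

location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}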
Now let’s add the port and host where the JMX information will be made available:
$
Add the following after the JMX section:
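The JMX related Java options typically look like the following sketch; 36806 is the example port chosen below, and the authenticate/ssl values are our own simplest-possible choices, so make sure access to the port is restricted by your firewall:

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=36806
-Dcom.sun.management.jmxremote.rmi.port=36806
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=yourhostname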
yourhostname should be the hostname you’ve chosen when you installed the Linux
distribution. In case you forgot you can find it via:
$
Please choose a jmxrmiport of your liking. In our example we chose 36806. We’ll need to
edit our firewall configuration so that this port is not blocked for incoming connections:
$
TCP_IN = "22,25,53,80,110,143,443,465,587,993,995,11211,3306,9000,8080,36806"
TCP6_IN = "22,25,53,80,110,143,443,465,587,993,995,11211,3306,9000,8080,36806"
We’re not yet done. Let’s create a jetty.conf file in our etc jetty base directory; this file is
automatically read by the /etc/init.d/jetty startup script we’re using.
$
Uncomment the following section and add the IP Address of your server here:
<Call class="java.lang.System" name="setProperty">
  <Arg>java.rmi.server.hostname</Arg>
  <Arg><Your Server IP></Arg>
</Call>
That’s it, now you need to restart your Jetty so the settings take effect.
$
Now let’s get back to the VisualVM we started on our PC/Mac. Right click on the Remote
option and choose Add Remote host. Enter your server’s IP Address here and click OK.
Now right click on the Remote host you’ve just added, and choose Add JMX Connection.
Add the JMX port after the colon in the Connection textfield and press OK. You should
now be able to connect to the remotely running Jetty server and monitor its status
<HOST> <PORT>
There is one other G1 collector setting which we kept at its default value, and thus
haven’t had to include:
Sets a target for the maximum GC pause time. This is a soft goal, and the JVM will make
its best effort to achieve it. Therefore, the pause time goal will sometimes not be met. The
default value is 200 milliseconds.
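The corresponding flag, should you want to set it explicitly, is:

-XX:MaxGCPauseMillis=200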
You can also see that we’ve added “-XX:+UseStringDeduplication” “-
XX:+PrintStringDeduplicationStatistics”
Here is why:
String objects generally consume a large amount of memory in an average application. It
is very much possible that there are multiple instances of the same string in memory.
The String deduplication algorithm can be enabled for the G1 GC collector since Java 8
update 20. If it can find two strings with the same content, it’ll make sure there is only one
underlying character array instead of two character arrays; cutting the memory
consumption in half this way.
-XX:+PrintStringDeduplicationStatistics enables string deduplication logging in the
Garbage collection log.
$
$ cd
$
$ cd
$
$
Configuring Jetty to use a HikariCP connection pool with the MariaDB JDBC driver
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://j\
etty.mortbay.org/configure.dtd">
<Configure id="<yourwebsitehost>" class="org.eclipse.jetty.webapp.WebAppContext">
<Set name="virtualHosts">
<Array type="java.lang.String">
<Item> <yourwebsitehost> </Item>
</Array>
</Set>
<New id="HikariConfig" class="com.zaxxer.hikari.HikariConfig">
<Set name="maximumPoolSize"> </Set>
<Set name="dataSourceClassName"> </Set>
<Call name="addDataSourceProperty">
<Arg> </Arg>
<Arg> <your_database_name></Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg> </Arg>
<Arg><your_database_user></Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg> </Arg>
<Arg><your_database_password></Arg>
</Call>
</New>
Alternatively we can also configure the Oracle MySQL Datasource and JDBC URL:
$
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://j\
etty.mortbay.org/configure.dtd">
<Configure id="wimsbios" class="org.eclipse.jetty.webapp.WebAppContext">
<Set name="virtualHosts">
<Array type="java.lang.String">
<Item> <yoursite> </Item>
</Array>
</Set>
<New id="HikariConfig" class="com.zaxxer.hikari.HikariConfig">
<Set name="maximumPoolSize"> </Set>
<Set name="dataSourceClassName">
</Set>
<Call name="addDataSourceProperty">
<Arg> </Arg>
<Arg> <your_database_\
name></Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg> </Arg>
<Arg><your_database_user></Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg> </Arg>
<Arg><your_database_password></Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg> </Arg>
<Arg> </Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg> </Arg>
<Arg> </Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg> </Arg>
<Arg> </Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg> </Arg>
<Arg> </Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg> </Arg>
<Arg> </Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg> </Arg>
<Arg> </Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg> </Arg>
<Arg> </Arg>
</Call>
<Call name="addDataSourceProperty">
<Arg> </Arg>
<Arg> </Arg>
</Call>
</New>
prepStmtCacheSize: 500 - sets the number of prepared statements that the MySQL
JDBC driver will cache per connection
prepStmtCacheSqlLimit: 2048 - This is the maximum length of a prepared SQL
statement that the driver will cache
cachePrepStmts: true - enable the prepared statement cache
useServerPrepStmts: true - enable server-side prepared statements
cacheServerConfiguration: true - caches the Maria DB server configuration in the
JDBC driver
useLocalTransactionState: true
rewriteBatchedStatements: true
maintainTimeStats: false
Add
Add:
Now we will configure the logrotate configuration for Jetty
$
{
size=
The postrotate specifies that it will restart jetty after the rotation of log files took place.
By using sharedscripts we are ensuring that the post rotate script doesn’t run on every
rotated log file but only once.
You can view the status of the log rotation via the command:
$
offloads traffic from your server (less load on your own server)
the static resources will be fetched from the CDN server closest to the user, which
will probably be closer than your web hosting server, because a CDN has many
servers located around the world. There will be less latency for these browser
requests for static resources.
Choosing a CDN
There are quite a few CDN providers in the market place. A good overview is listed at
http://www.cdnplanet.com/. Pricing can differ quite a lot, so we advise you to take your
time to choose a CDN.
One term you’ll come across is POP, which stands for Point of Presence. Depending on
where your site visitors are coming from (Europe, USA, Asia, Australia, …) you should
investigate whether the CDN provider has one or more POPs in that region.
For example, if your target market is China, you should use one of the CDNs which has a
POP in China.
There are two different types of CDNs: Push and Pull based. With most CDN providers
you can configure both.
With a Push CDN, the user manually uploads all the resources to the CDN server. He/She
then links to these resources from eg. the webpages on his site/server.
A pull CDN takes another approach. Here the resources (images, javascript, …) stay on
the users server. The user links to the resources via the Pull CDN URL. When the Pull
CDN URL is asked for a file, it’ll fetch this file from the original server (‘pulling’ the file)
and serve it to the client.
The Pull CDN will then cache the resource until it expires. It can cause some extra traffic
on the original server if the resources would expire too soon (eg. before being changed).
Also, the first person asking for a certain resource will have a slower response because the
file is not cached yet (or has expired).
Which type should you use? The Pull CDN is easier to use as you don’t have to upload
new resources manually. If the resources remain static or if your site has a minimal
amount of traffic, you could choose a Push CDN, because there the content stays
available and is never re-pulled from the original server.
Here is a list of features we think you should take into consideration when choosing a
CDN:
Support for https - if you have a secure website, the static resources should also be
loaded over https so that everything is secure. This means the CDN must also support
this
Support for SPDY 3.1 protocol - SPDY is a Google protocol to achieve higher
transmission speeds
Support for HTTP/2 protocol - HTTP/2 the next generation HTTP standard is
available since 2015 and supersedes the SPDY protocol.
Support for Gzip compression - compresses javascript and stylesheets before sending
it to the browser; reducing the size of the response significantly
Support for using your custom DNS CNAMEs (eg. cdn.<mywebsite>.com should
redirect traffic to the CDN servers)
Support for Push and Pull Zones
Easy to use Dashboard for configuring your CDN
Pricing: check what the bandwidth charges are (eg. x amount of GB costs x USD
cents/eurocents)
Performance
Our recommendations for stable, good performing and cheap CDNs are:
KeyCDN
MaxCDN
They both have all of the above features available at a very attractive price point.
The Origin URL is the URL to your server where the CDN will find your website
resources (eg. CSS, Javascript, …)
For example if your website is hosted at www.mywebsite.com then you should fill in
http://www.mywebsite.com (or https://www.mywebsite.com if you have configured https -
which we will cover in our HTTPS chapter)
For now leave the other settings at their default values. We’ll cover them in the tuning
section. When you save the zone an URL will be created of the form: <name of
zone>*.kxcdn.com. For example let’s suppose this is mywebsite-93x.kxcdn.com.
Then your website resources will be downloadable via that URL; eg. a request to
http://mywebsite-93x.kxcdn.com/images/logo.png will pull the resource from your Origin
server at http://www.mywebsite.com/images/logo.png and cache it at the CDN server
(mywebsite-93x.kxcdn.com)
Now you have enabled your Pull CDN zone, you can start using it on your site. You’ll
need to replace all URLs to static resources (images, css, javascript) that reference your
www.mywebsite.com with new URLs that start with mywebsite-93x.kxcdn.com.
If you’re using a blogging platform like Wordpress, you can use the W3 Total Cache
plugin to automate this process.
Fill in the following values:
Name: cdn
Alias to: mywebsite-93x.kxcdn.com. (the dot at the end is needed!)
Leave the Time to live (TTL) at its default value. A higher value means that DNS
servers will cache this alias longer, which will result in fewer queries to
DNSMadeEasy (which you’re paying for). A higher value will also result in changes
propagating more slowly.
With KeyCDN it is possible to compress the resources that are hosted on your CDN with
Gzip. You can easily enable this setting in the advanced options of your zone.
Login at KeyCDN
Go to Zones
Click on the Managed - Edit button of your zone
Click on Show Advanced Features
Set Gzip to Enabled
Set expiration of resources
By default KeyCDN will not modify or set any cache response headers for instructing
browsers to expire resources (images, …). This means that whatever cache response
headers set by the Origin server stay intact. (in our case this would be what nginx sets).
You can also override this behavior via the KeyCDN management dashboard in the Show
Advanced Features section of your zone.
We recommend setting the value to the maximum allowed (1 year) if possible for your
website. If you have resources which change a lot, you can get into problems that visitors
of your site keep seeing the cached version in their browser cache. To circumvent this
problem the image(s) should be served from a different URL in those cases. Via the
Google PageSpeed plugin which we will cover later, this can be automated.
Setting the canonical URL Location
Configuring robots.txt
In the same Advanced Features section you can also enable a robots.txt file instructing
Google and other search engines not to crawl any content at cdn.wimsbios.com. We don’t
recommend to enable this unless you know very well what you’re doing. Eg. For example
if you have enabled the Canonical Header option in the previous section, you shouldn’t
enable a robots.txt which disallows Google to fetch the images and read the canonical
URL.
HTTPS everywhere
In this chapter we’ll first explain the reasons why it is a good idea to secure your site.
Then we will show you the exact steps on how to make your site secure, starting from
ordering a certificate to testing your https website.
HTTP is an insecure protocol, which means everything that is sent between the
browser and your server is in plain text and readable by anyone tapping the internet
connection. This could be a government agency (eg. NSA, …) or someone using the
same unencrypted free WIFI hotspot as your user.
HTTPS on the other hand encrypts the communication between the browser and the
server. As such nobody can listen to your users “conversations”. A https certificate
for a website also proves that users communicate with the intended website and not a
fake website run by malicious people.
Since the summer of 2014, Google has publicly said that having a https site can give
a small ranking boost in the search engine results.
It is also vital that you secure all parts of your website. This includes all pages, all
resources (images, javascript, css, …), and all resources hosted on a CDN.
If you would only use https for e.g. a forum login page or a credit card details
page, your website is still ‘leaking’ sensitive information hackers can use.
More in detail this could be a session identifier or cookies which are typically set after a
login. The hacker could reuse this information to hijack the user’s session and be
effectively logged in without knowing any password.
In October 2010 the Firesheep plugin for the Firefox browser was released which
intercepted unencrypted cookies from Twitter and Facebook, forcing them to go https
everywhere.
We also recommend to only offer an https version of your site and redirect any users
accessing the http version to the https version. We’ll explain how to do this technically in
the next sections.
Standard certificate
A standard certificate can be used for a single website domain. Eg. if all your content is
hosted on www.mywebsite.com, you could buy a standard certificate which is valid for
www.mywebsite.com. Note this doesn’t include any subdomains which you may also use.
For example cdn.mywebsite.com is not included. Browsers will issue warnings to the user
if you try to use a certificate which is not valid. You could buy a second certificate for the
subdomain cdn.mywebsite.com to solve this problem.
Wildcard certificate.
A wildcard certificate is still only valid for one top domain (eg. mywebsite.com), but it
also supports all subdomains (*.mywebsite.com); hence the name wildcard certificate.
This kind of certificate is usually a little bit more expensive than a standard certificate.
Depending on the price and on the number of subdomains you’re going to use you’ll need
to decide between a standard and wildcard certificate.
Other types of certificates exist (eg. Multidomain), but are usually pretty expensive; so
we’ll not cover them here.
An EV certificate could be interesting for an ecommerce site because it gives your user a
greater trust in your site which could lead to more sales.
There are some restrictions with EV certificates though: only companies can order an EV
certificate, individuals cannot. EV certificates are always for one domain only; there are
no wildcard EV certificates at this moment.
GlobalSign
Network Solutions
Symantec
Thawte
Trustwave
Comodo
You can view daily updated reports of the market shares of the leading Certificate
Authorities at http://w3techs.com/technologies/overview/ssl_certificate/all
Because of better pricing we have chosen to buy a certificate from Comodo. They also
support generating 2048 bit certificates for better security.
Many companies resell certificates from the above Certificate Authorities. They are the
exact same certificates, but come with a reduced price tag. We recommend you to shop
around.
One such reseller we recommend and use is the SSLStore which we will use in the
example ordering process below.
Generate a Certificate Signing request
When ordering a Certificate from a Certificate Authority you’ll need to create a Certificate
Signing request. (CSR)
A Certificate Signing request is file with encrypted text that is generated on the server
where the certificate will be used on. It contains various details like your organization
name, the common name (=domain name), email address, locality and country. It also
contains your public key; which the Certificate Authority will put into your certificate.
When we create the Certificate Signing request below we will also generate a private key.
The Certificate Signing request will only work with the private key that was generated
with it. The private key will be needed for the certificate you’ll buy, to work.
Here is how you can create the Certificate Signing request on your server:
$ \
req: activates the part of openssl that deals with certificate requests signing
-nodes: no des, stores the private key without protecting it with a passphrase. While
this is not considered to be best practice, many people do not set a passphrase or later
remove it, since services with pass phrase protected keys can not be auto-restarted
without typing in the passphrase
-newkey: generate a new private key
rsa:2048: 1024 is the default bit length of the private key; we will use 2048-bit keys
because our Certificate Authority supports this and it is required for certificates which
expire after October 2013
-sha256: used by certificate authorities to generate a SHA-2 certificate (which is
more secure than SHA-1)
-keyout myprivatekey.key: store the private key in a file called myprivatekey.key (in
PEM format)
-out certificate-signing-request.csr: store the certificate request in a file called
certificate-signing-request.csr
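Putting the flags described above together, the full command looks like this sketch (the file names are the ones used in this list):

$ openssl req -nodes -newkey rsa:2048 -sha256 -keyout myprivatekey.key -out certificate-signing-request.csr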
When launching the above command you’ll be asked to enter information that will be
incorporated into your certificate request.
There are quite a few fields but you can leave some blank. For some fields there will be a
default value (displayed in […] brackets). If you enter ‘.’, the field will be left blank.
Country Name (2 letter code) [AU]: <2 letter country code> eg. BE for Belgium
State or Province Name (full name) [Some-State]
Locality Name (eg. city) []
Organization Name (eg. company) [Internet Widgits Pty Ltd]: Wim Bervoets
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []: this is an important setting
which we will discuss below.
Email Address []: email address which will be in the certificate and used by the
Certificate Authority to verify your request . Make sure this email is valid & you
have access to it. The email address should also match with the email address in the
DNS contact emails used for the particular domain you’re requesting a certificate for.
The Common Name should be the domain name you’re requesting a certificate for. Eg.
www.mywebsite.com
This should include the www or the subdomain you’re requesting a certificate for.
If you want to order a wildcard certificate which is valid for all subdomains you should
specify this with a star; eg. *.mywebsite.com
OpenSSL will now ask you for a few ‘extra’ attributes to be sent with your certificate
request:
Now we can download the freshly generated csr file and use it when ordering our SSL
certificate at the SSLStore.
Ordering a certificate
Let’s suppose we want a Comodo Wildcard certificate. Go to
https://www.thesslstore.com/wildcardssl-certificates.aspx?aid=52910623 and click on the
Add To cart button next to ‘Comodo EssentialSSL Wildcard certificate’.
Next you’ll be asked for your billing details and credit card information. After completing
these steps an email will be sent with a link to the Configure SSL service of Comodo
(together with a PIN)
You may wonder why there are so many different certificates included and what you need
to do with it.
To explain this, we’ll need to cover what SSL Certificate chains are.
Browsers and devices connecting to secure sites have a fixed list of Certificate Authorities
they trust - the so called root CAs. The other kind of CAs are the intermediate Certificate
Authorities.
If the certificate of the intermediate CA is not trusted by the browser or device, the
browser will check if the certificate of the intermediate CA was issued by a trusted CA
(this goes on until a trusted (root) CA is found).
This chain of SSL certificates from the root CA certificate, over the intermediate CA
certificates to the end-user certificate for your domain represents the SSL certificate chain.
You can view the chain in all popular browsers, for example in Chrome you can click on
the padlock item of a secure site, choose Connection and then Certificate data to view the
chain:
HTTPS Certificate chain
In the next section we’ll make use of the certificates as we install them in our nginx
configuration.
the browser will receive the full certificate chain. (except for the root certificate but
the browser already has this one builtin).
Some browsers will display warnings when they can not find a trusted CA certificate
in the chain. This can happen if the chain is not complete.
Other browsers will try to download the intermediary CA certificates; this is not good
for the performance of your website because it slows down setting up a secure
connection. If we combine all the certificates and configure nginx properly this will
be much faster.
Note: in general, a combined SSL certificate with fewer intermediary CAs will still perform
a little bit better.
You can combine the certificates on your server, after you have uploaded all the certificate
.crt files with the following command:
$ \
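A sketch of the concatenation, with hypothetical file names; the order matters: your own domain certificate first, then the intermediate CA certificates:

$ cat yourdomain.crt intermediate1.crt intermediate2.crt > yourdomain.chained.crt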
server {
server_name www.mywebsite.com;
# SSL config
listen <ipv4 address>:443 default_server ssl http2;
listen [ipv6 address]:443 default_server ssl http2;
ssl_certificate /usr/local/nginx/conf/<yourdomain.chained.crt>;
ssl_certificate_key /usr/local/nginx/conf/<yourprivate.key>;
...
}
In this configuration we tell nginx to listen on an IPv4 and IPv6 address on the default
HTTPS port 443. We enable ssl and http2.
HTTP/2 is the next generation standardized HTTP v2 protocol. It is based on the SPDY
Google specification which manipulates HTTP traffic, with the goal to reduce web page
load latency. It uses compression and prioritizes and multiplexes the transfer of a web
page so that only one connection per client is required. (eg. Getting the html, images,
stylesheets and javascript files all happens with a connection that is kept open).
You can check an example of what kind of performance improvements are possible with
HTTP/2 on the Akamai HTTP/2 test page.
HTTP/2 is best used with TLS (Transport Layer security) encryption (eg. https) for
security and better compatibility across proxy servers.
Now restart the nginx server. Your site should now be accessible via https.
We recommend you to now run an SSL analyzer. You’ll get a security score and a detailed
report of your SSL configuration:
HTTPS Certificate chain
To get an A+ score the default nginx SSL configuration shown above is not enough. More
likely you’ll receive one or more of the following warnings:
HTTPS Certificate chain
To make your users use the https version of your site by default, you’ll need to redirect all
http traffic to the https protocol. Here is an example server nginx configuration which does
this:
server {
server_name www.yourwebsite.com;
listen <ip_address>:80; # Listen on the HTTP port
listen [<ip_address>]:80; # Listen on IPv6 address and HTTP 80 port
return 301 https://www.yourwebsite.com$request_uri;
}
Actually you should not be receiving this error, as we previously combined all the
intermediate certificates with our domain certificate. If the SSLLabs test still reports this
error, then you should revisit the previous section.
If you don’t fix this error users may receive strong browser warnings and experience slow
performance.
Sometimes security issues are found in the security protocols or ciphers used for securing
websites. Some of these issues get an official name like BEAST or POODLE attacks.
By using the latest version of OpenSSL and properly configuring the nginx SSL settings
you can mitigate most of these issues.
https://wiki.mozilla.org/Security/Server_Side_TLS has an up-to-date list of configuration settings to be used on your server. There are three sets of configurations: Modern, Intermediate and Old.
We recommend using at least the Intermediate or the Modern configuration, as they give you higher levels of security between browsers/clients and your server. The Modern configuration is the most secure, but doesn't work well with old browsers.
Here are the minimum versions supported for the Modern & Intermediate configuration.
Modern: Firefox 27, Chrome 22, IE 11, Opera 14, Safari 7, Android 4.4, Java 8
Intermediate: Firefox 1, Chrome 1, IE 7, Opera 5, Safari 1, Windows XP IE8,
Android 2.3, Java 7
We’ll use the online tool at https://mozilla.github.io/server-side-tls/ssl-config-generator/ to
generate an ‘Intermediate’ configuration.
Choose nginx, Intermediate and fill in the nginx version & OpenSSL version. The nginx
configuration generated should be like the one in the screenshot above.
In summary these settings will:
Disable the SSL 3.0 protocol (even IE6 on Windows XP supports its successor TLSv1 via Windows Update), so clients are forced to use at least TLSv1.
Order the SSL cipher suites nginx/OpenSSL supports on the server side with the most secure at the beginning of the list. This makes sure client and server negotiate the most secure options they both support.
Specify that server ciphers should be preferred over client ciphers when using the TLS protocols (to mitigate the BEAST SSL attack).
Enable OCSP stapling (explained in the next chapter).
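As a rough sketch of what the generated Intermediate configuration looks like (the directive names are standard nginx directives; the exact cipher list comes from the Mozilla generator and is not reproduced here):
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # SSLv3 is no longer offered
ssl_ciphers '<cipher list from the Mozilla SSL configuration generator>';
ssl_prefer_server_ciphers on; # prefer the server's cipher order over the client's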
For Diffie-Hellman based ciphersuites an extra parameter is needed:
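The parameter itself is not shown in the original listing. A commonly used approach, assuming you keep the file alongside your other certificates, is to generate a Diffie-Hellman parameter file with OpenSSL and point nginx at it:
$ openssl dhparam -out /usr/local/nginx/conf/dhparam.pem 2048
ssl_dhparam /usr/local/nginx/conf/dhparam.pem;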
OCSP stands for Online Certificate Status Protocol. Let’s explain the context a bit.
Certificates issued by a Certificate Authority can be revoked by the CA, for example because the customer's private key was lost or stolen, or because the domain was transferred to a new owner.
The Online Certificate Status Protocol (OCSP) is one method for obtaining certificate
revocation information. When presented with a certificate, the browser asks the issuing
CA if there are any problems with it. If the certificate is fine, the CA can respond with a
signed assertion that the certificate is still valid. If it has been revoked, however, the CA
can say so by the same mechanism.
OCSP has a few drawbacks:
It slows down new HTTPS connections: when the browser encounters a new certificate, it has to make an additional request to a server operated by the CA.
Additionally, if the browser cannot connect to the CA, it must choose between two undesirable options:
1. It can terminate the connection on the assumption that something is wrong, which decreases usability.
2. It can continue the connection, which defeats the purpose of doing this kind of revocation checking.
OCSP stapling solves these problems by having the site itself periodically ask the CA for a
signed assertion of status and sending that statement in the handshake at the beginning of
new HTTPS connections.
To enable OCSP stapling in nginx, add the following options:
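The directives from the original listing are not reproduced here; a minimal sketch using the standard nginx stapling directives (the resolver address is an example) looks like this:
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /usr/local/nginx/conf/ca.root.crt;
resolver 8.8.8.8; # DNS resolver nginx uses to reach the CA's OCSP responder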
The ssl_trusted_certificate file should only contain the root Certificate Authority
certificate. In our case, we created this file like this:
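The original command is not shown; as a sketch, assuming your CA's root certificate was delivered as a file named root.crt (the file name is a placeholder), you simply copy it into place:
$ cp root.crt /usr/local/nginx/conf/ca.root.crt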
When nginx asks the CA for the revocation status of your certificate, it uses the root CA certificate (ca.root.crt in our case) to verify the CA's signed response.
To validate that OCSP stapling is working, query your server with the OpenSSL client and inspect the OCSP response section of the output. A successful OCSP response means stapling is active; "OCSP response: no response sent" means it is not active yet.
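As a sketch of such a check (the host name is a placeholder for your own domain):
$ echo QUIT | openssl s_client -connect www.mywebsite.com:443 -servername www.mywebsite.com -status 2>/dev/null | grep -i "OCSP response"
When stapling works, the output contains an OCSP Response Status of "successful".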
You may need to rerun this command a few times if you just recently started nginx.
If OCSP is not working correctly nginx will also issue the following warning in its error
log file (/var/log/nginx/error.log)
Suppose a user types the URL of your website into a browser without specifying the https or http protocol. The browser will then likely load the site via http (the insecure version). Even if you have configured your server to redirect all http requests to https, the user will still talk to the non-encrypted version of the site before being redirected.
This opens up the potential for a man-in-the-middle attack, where the redirect could be
exploited to direct a user to a malicious site instead of the secure version of the original
page.
The HTTP Strict Transport Security (HSTS) feature lets a website inform the browser that it should never try to load the site over HTTP, and that it should automatically convert all attempts to access the site over HTTP into HTTPS requests.
In your nginx configuration you’ll need to add the following line:
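The line from the original listing is not reproduced here; a commonly used form of the header (the max-age of one year is an example value) is:
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";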
To further optimize the SSL performance of nginx we can enable some caches.
ssl_session_cache shared:SSL:20m;
ssl_session_timeout 10m;
The ssl_session_cache directive creates a cache shared between all the nginx worker processes. We have reserved 20MB for storing SSL sessions, and sessions are kept for 10 minutes (ssl_session_timeout). According to the nginx documentation 1MB can store about 4000 sessions. You can reduce or increase the size of the cache based on the traffic you're expecting.
Enabling SSL on a CDN
When serving your site over https, you need to make sure that all resources used by your
HTML are also served via HTTPS. (eg. Images, javascript, stylesheets).
When you’re using a CDN to host your resources, you’ll need to configure the SSL
settings in your CDN Account.
We’re going to show you how you can enable HTTPS on a KeyCDN server. The process
will be similar for eg. MaxCDN.
For setting up a CDN, take a look at our Chapter Using a CDN.
SSL
Custom SSL certificate
Custom SSL Private key
Force SSL
You’ll also need to provide your private key in the Custom SSL Private Key section. This
key is available at /usr/local/nginx/conf/<yourprivate.key>
Make sure to use a https URL for your Origin URL too (eg. https://www.yourwebsite.com)
Please note that most CDNs that support SSL implement it via Server Name Indication (SNI), which means multiple certificates can be presented to the browser on a single IP address. This reduces their need for dedicated IP addresses per customer, which lowers the cost significantly. The only (small) downside of SNI is that it isn't supported by IE6 on Windows XP, meaning those users will see a certificate warning.
Depending on usage there is still a free option available (Zoho Mail), which we’ll
configure in the remainder of the chapter. If you choose another provider, the setup details
may differ a bit, but the general concepts should remain the same.
Next, specify your existing company or business domain name for which you want to
setup a youremail@yourdomain email address.
Now provide your first name, last name, password and the email address you would like to create. Also provide a contact email address (eg. a Gmail or Hotmail address you're already using).
Zoho Mail will now configure your account and within a few seconds come back to
congratulate you:
Now click on the link ‘Setup <your domain> in Zoho’. You’ll need to complete the
following steps:
Zoho Email Steps
Verify Domain
You need to start by verifying that you own the domain you want to use in your email
address. Select your domain DNS Hosting provider from the list. If you’re using
DNSMadeEasy then choose Other.
You can validate your domain via three methods:
1. CNAME Method
2. TXT Method
3. HTML Method
The CNAME and TXT method both need configuration changes inside your DNS Service
Provider Control Panel for your domain. You’ll either need to add a CNAME record or a
TXT record in eg. your DNSMadeEasy control panel. Please check our ‘Ordering a
domain name’ chapter for more information on how to setup DNSMadeEasy.
Once you have added either record you can click on the green ‘Verify by TXT’ or ‘Verify
by CNAME’ button.
The HTML method lets you upload an HTML file to your website at
http://yourdomain.com/zohoverify/verifyforzoho.html.
Add Users
Now you can provide a desired username to create your domain based email account.
You’re also given the possibility to add other email accounts for your domain.
Zoho Add Users
Add Groups
Groups are shared email accounts that serve as common email addresses for a team of users in your organization. For example, you can create [email protected] as a Group account, a shared account with all HR staff as members of the group.
Zoho Groups
You must remove (delete) any MX records other than the two Zoho MX records.
Zoho MX Records
Your Zoho Mail account can be read in any email client that supports the POP or IMAP
protocol. The links contain the configuration instructions.
You're then redirected to your Zoho Mail Inbox! Try to send and receive a few emails to verify that everything is working correctly!
Zoho Inbox
Installing Wordpress
Wordpress is one of the world's most used blogging platforms. It allows you to easily publish a weblog on the internet.
Wordpress includes the possibility to use themes to customize the look and feel of your
website. Most people will choose one of the free themes available or buy a professionally
designed theme template.
The Wordpress plugin system makes the Wordpress core functionality very extensible.
There are free and paid plugins for almost every feature or functionality you’ll need.
These include but are not limited to:
SEO plugins: optimize your articles for better search engine visibility
Performance plugins: optimize the performance of your Wordpress website
Tracking plugins: integrate Google Analytics tracking
Landing page, sales page and membership portal plugins (eg. OptimizePress)
In this chapter we’ll focus on how to install Wordpress, and install the Yoast SEO plugin.
Downloading Wordpress
You can download the latest version of Wordpress with the following commands:
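The original commands are not reproduced here. A minimal sketch, assuming your web root is /home/<yourwebsite>/www (the path used elsewhere in this book) and that you want the latest release from wordpress.org:
$ cd /home/<yourwebsite>/www
$ wget https://wordpress.org/latest.tar.gz
$ tar -xzf latest.tar.gz
$ cp -R wordpress/. . # copy the extracted files into the web root
$ rm -rf wordpress latest.tar.gz # clean up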
In this case you have installed Wordpress in the root directory of your site. (eg. it’ll be
available at http://www.<yourwebsitedomain>.com)
Add the following to the server block for your website in the nginx configuration:
index index.php;
location / {
# $uri/ needed because otherwise the Wordpress Administration panel
# doesn't work well
try_files $uri $uri/ /index.php;
}
'index index.php' specifies that when a directory is requested, index.php should be served by default. (This makes sure that http://www.<yourwebsitedomain>.com will execute index.php, and thus your Wordpress blog.)
We added a location /, because in our example Wordpress is available via http://www.
<yourwebsitedomain>.com
The 'try_files' directive will try three different URLs: the URL itself, the URL with a slash appended and the URL with /index.php appended. It only tries the next possibility when the previous one was not found.
For example, if a request for http://www.yourwebsitedomain.com/my-newest-blog-post is
handled by Nginx, it’ll try to fetch http://www.yourwebsitedomain.com/my-newest-blog-
post, if not found then http://www.yourwebsitedomain.com/my-newest-blog-post/, if not
found then http://www.yourwebsitedomain.com/my-newest-blog-post/index.php. If the
file is still not found Nginx will return the HTTP 404 error to the browser.
We need these three different URLs to make nginx's forwarding to PHP-FPM work with the kind of URLs Wordpress creates.
Save the configuration file and then restart nginx:
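The exact command is not shown in the original; assuming nginx was installed from source under /usr/local/nginx as elsewhere in this book, reloading the configuration looks like this:
$ /usr/local/nginx/sbin/nginx -s reload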
Before running the Wordpress installer, create a database and a database user for Wordpress in PHPMyAdmin. In this screen you need to choose a username and a password. To generate a good database password you can use the Generate button.
In the Host field, select 'local' from the dropdown, so that the database cannot be accessed from remote locations but only from your server (which is where your Wordpress installation is also running).
Now you need to assign the necessary rights to this user so the user can insert, update and delete records in your newly created database (eg. 'wordpress_site').
Edit the rights of the user and then click on the rounded subtab Database. Choose the
database you have created previously. (eg. ‘wordpress_site’)
In the next screen, choose “Select all”
Installing Wordpress
The Wordpress installation is web based. Go to http://www.<yourwebsitedomain>.com/ to
install Wordpress.
Installing Wordpress
Clicking on the 'Let's go' button takes you to the database details configuration screen:
Configuring the Wordpress database
Database name: the name of the database you have created previously (eg.
‘wordpress_site’)
Username / password: the database user and password you have created previously
with PHPMyAdmin
Database host: leave it on localhost
Table prefix: leave it on wp_
Wordpress will now test the database connection. If successful, Wordpress will try to write a wp-config.php file with its configuration.
Due to file permissions it is possible that Wordpress is not yet able to write this file
automatically. In this case you’ll see the following screen where you can copy-paste the
contents in a file you create manually on your server:
Configuring the Wordpress database
Here is how you can do this via the SSH command line:
Make sure you’re in the root directory of your Wordpress installation.
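As a sketch (the path is the web root assumed in earlier chapters):
$ cd /home/<yourwebsite>/www
$ vi wp-config.php # paste the contents shown by the installer and save the file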
Now click on the ‘Run the install’ button on the Wordpress installation screen.
Wordpress will now ask you for the following details:
Site title: you can put the name of your blog here
Username: choose a username to log in to your Wordpress administration pages (eg. where you create new posts and so on). We recommend that you do not choose 'admin', as that makes it easier for attackers to break into your site.
Password: Use a strong password
Your email: Use an email address you own
Enable the flag ‘Allow search engines to index this site’
Wordpress is now installed on your host. You should be able to access the administration
screen via http://www.<yourwebsitedomain>.com/wp-admin
This effectively disables the Wordpress screen where it asks for FTP/SFTP details to
upload updated files to your server.
Next you'll need to make sure that the files are readable and writable by the user and group (which should be nginx) that owns the files:
$ find . -type f -exec chmod 664 {} \;
The -type f option specifies that you're searching for files, and for every file found the command chmod 664 is executed.
You'll also make the plugins and themes directories readable, writable and executable for the user and group:
$ find wp-content/plugins -type d -exec chmod \
775 {} \;
$ find wp-content/themes -type d -exec chmod \
775 {} \;
The -type d option specifies that you're searching for directories, and for every directory found the command chmod 775 is executed.
You also need to make sure that all Wordpress files are owned by the OS user and nginx group you created for your <yourwebsitedomain> website:
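As a sketch (the user name is a placeholder for the OS user you created for the site):
$ cd /home/<yourwebsite>/www
$ chown -R <yourwebsiteuser>:nginx .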
The first thing we'll fix is the URLs Wordpress generates for your pages and blog posts. Each blog post is uniquely accessible by a URL; in Wordpress this is called a 'Permalink'.
By default WordPress uses web URLs which have question marks and lots of numbers in
them; however, WordPress offers you the ability to create a custom URL structure for your
permalinks and archives.
Go to Settings -> Permalinks to change the URL structure.
For SEO reasons it is best to use the option ‘Post name’:
Configuring Permalinks Structure
Choosing a Tagline
After installing the Yoast SEO plugin, you'll see a new sidebar tab 'SEO' in your administration panel. Click on the 'SEO' -> General item, then click on the tab 'Your info'. Choose whether you're a person or a company.
We recommend you to verify your site with at least the Google Search Console and the
Bing Webmaster tools. Click on any of the links and follow the instructions that are
provided.
Now go to SEO -> Social.
On this page you can add all the social profiles for your website. (eg. Twitter, Facebook,
Instagram, Pinterest, Google+ and so on).
If you have not yet created social profiles for your website, we recommend doing so, as all these profiles can direct traffic to your own site.
Sitemap files are XML files that list all the URLs available on a given site. They include information like the last update date, how often the page changes and so on.
The URL location of the sitemaps can be submitted to the search engines' webmaster tools, which will use it to crawl your site more thoroughly, increasing the number of visitors.
The Yoast SEO plugin will automatically generate a sitemap (index) xml file. You can
enable it via the SEO -> XML Sitemaps menu.
If you click on the XML Sitemap button you’ll see that the XML Sitemap file is available
at the URL http://www.<yourwebsite>.com/sitemap_index.xml
Adding URL Rewrites for Yoast SEO Sitemap XML Files
We need some extra URL rewrite rules in the nginx configuration so that the XML Sitemap URL does not return an HTTP 404:
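The rules from the original listing are not reproduced here. The commonly published rewrite rules for the Yoast SEO sitemaps (an assumption; adjust them to your Yoast version) go inside your site's server block:
# Yoast SEO sitemap rewrites
rewrite ^/sitemap_index\.xml$ /index.php?sitemap=1 last;
rewrite ^/([^/]+?)-sitemap([0-9]+)?\.xml$ /index.php?sitemap=$1&sitemap_n=$2 last;
Reload nginx after saving the configuration.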
We recommend enabling Breadcrumbs for your blog via the SEO -> Advanced section.
Breadcrumbs have an important SEO value because they help the search engine (and your
visitors!) to understand the structure of your site.
For the breadcrumbs to show up on your Wordpress blog, you may need to edit your Wordpress theme. You can learn how to do this by reading this Yoast SEO KB article.
Optimizing Wordpress performance
In this chapter we will optimize the performance of your Wordpress blog. We recommend
to execute some page speed tests before doing any optimizations. This way you can
compare the before and after performance when changing one or more settings.
The following websites analyse your site’s speed and give advice on how to make it faster:
GTMetrix
PageSpeed Insights
The results are sorted in order of impact upon score; thus optimizing rules at the top of the
list can greatly improve your overall score.
Below we will install some plugins which will help to achieve a faster site and improve
your score:
The EWWW Image Optimizer plugin uses the PHP exec() function to run its image optimization tools. If you listed exec in the disable_functions setting of your php.ini for security reasons (disable_functions=exec), you'll need to remove it from that list for the plugin to work.
After enabling the EWWW Image Optimizer plugin, go to the Settings page to configure it optimally:
Now click on the Install button to continue the installation. Click on the Activate Plugin
link.
Now go the Plugins overview page and click on the W3 Total Cache Settings link.
In the General Settings tab we want to enable the Memcache integration for the following types of caches:
1. Page Cache
2. Database Cache
3. Object Cache
If you have setup a CDN at KeyCDN as explained in our CDN chapter, then you can
enable the CDN settings to let Wordpress serve all resource files from the CDN location.
CDN Settings
Make sure to fill in the setting ‘Replace site’s hostname with’ with the URL for your CDN
Pull zone. (eg. cdn.mysite.com)
CDN Configuration
Notice we didn't enable any minification of our JavaScript and stylesheets via Performance -> Minify. We will use Google's PageSpeed plugin for nginx to minify our resources (and a lot more) in our Google PageSpeed plugin chapter.
The Browser cache settings are also disabled, because we’ll also use Google Pagespeed
for this.
Speed up your site with Google PageSpeed nginx
plugin
The Google PageSpeed module for nginx optimizes your site for better end-user performance. It directly complements the Google PageSpeed Insights tool, which analyses your site's speed and gives advice on how to make it faster.
Many of the suggestions given by the Pagespeed Insight analysis can be fixed by using
and configuring the Google Pagespeed plugin for nginx.
Pagespeed Insights
In our nginx chapter, we compiled nginx with the Google PageSpeed plugin included, but we haven't enabled it in our configuration yet. Let's do that now!
# Often PageSpeed needs to request URLs referenced from other files in order to
# optimize them. To do this it uses a fetcher. By default ngx_pagespeed uses the
# same fetcher mod_pagespeed does, serf, but it also has an experimental fetcher
# that avoids the need for a separate thread by using native Nginx events. In
# initial testing this fetcher is about 10% faster.
pagespeed UseNativeFetcher on;
resolver 8.8.8.8;

# The PageSpeed Console reports various problems your installation has that can
# lead to sub-optimal performance.
pagespeed Statistics on;
pagespeed StatisticsLogging on;
pagespeed LogDir /var/log/pagespeed;
Previous versions of the PageSpeed plugin would rewrite relative URLs into absolute
URLs. This wastes bytes and can cause problems for sites that run over HTTPS. This
setting makes sure this rewriting doesn’t take place.
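The directive itself is not shown in the original; the ngx_pagespeed option that preserves relative URLs is:
pagespeed PreserveUrlRelativity on;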
The admin path specifies the URL where you can browse the admin pages. The admin
pages provide visibility into the operation of the PageSpeed optimization plugin. Please
make sure that you create your own URL which cannot be easily guessed. (and/or protect
the URL with authentication).
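The original directives are not reproduced here; a sketch using the standard admin-path option (the path is a placeholder you should change to something hard to guess):
pagespeed AdminPath /my-hard-to-guess-pagespeed-admin;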
By default the Pagespeed plugin enables a set of core filters. By setting the RewriteLevel
on PassThrough, no filters are enabled by default. We will manually enable the filters we
need below.
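Based on the description above, the corresponding directive looks like this:
pagespeed RewriteLevel PassThrough;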
Most important, first-time site visitors can benefit from browser caching, since they
may have visited other sites making use of the same service to obtain the libraries.
The JavaScript hosting service acts as a content delivery network (CDN) for the
hosted files, reducing load on the server and improving browser load times.
There are no charges for the resulting use of bandwidth by site visitors.
The hosted versions of library code are generally optimized with third-party
minification tools. These optimizations can make use of library-specific annotations
or minification settings that aren’t portable to arbitrary JavaScript code, so the
libraries benefit from more aggressive optimization than can be provided by
PageSpeed.
Here is how you should generate the pagespeed_libraries.conf which should be located in
/usr/local/nginx/conf:
In Nginx you need to convert pagespeed_libraries.conf from Apache-format to Nginx
format:
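The original commands are not reproduced here. One commonly documented approach (an assumption; the script name and paths may differ for your ngx_pagespeed version) is to run the generator script shipped with the ngx_pagespeed sources and write its output to the nginx conf directory:
$ cd <path to the ngx_pagespeed sources>/scripts
$ ./pagespeed_libraries_generator.sh > /usr/local/nginx/conf/pagespeed_libraries.conf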
The extend_cache filter improves the cacheability of a web page's resources without compromising the ability of site owners to change the resources and have those changes propagate to users' browsers. By default this filter will cache the resources for 1 year. It'll also improve the cacheability of image references from within CSS files if the rewrite_css filter is enabled.
The Combine CSS filter seeks to reduce the number of HTTP requests made by a browser during a page load by replacing multiple distinct CSS files with a single CSS file.
The Rewrite CSS filter parses linked and inline CSS (in the HTML file), rewrites the images found and minifies the stylesheet.
This flag inserts width= and height= attributes into <img> HTML tags that lack them and
sets them to the image’s width and height. The effect on performance is minimal,
especially on modern browsers.
The “Inline JavaScript” filter reduces the number of requests made by a web page by
inserting the contents of small external JavaScript resources directly into the HTML
document.
Deferring the execution of JavaScript code can often dramatically improve the rendering speed of a site. Use this filter with caution as it may not work on all sites.
This filter improves the page render times by identifying CSS rules from your CSS
stylesheet that are needed to render the visible part of the page, inlining those critical rules
and deferring the load of the full CSS resources.
This filter will remove whitespace from your HTML, further reducing the size of the HTML.
The ‘Combine JavaScript’ rule seeks to reduce the number of HTTP requests made by a
browser during a page refresh by replacing multiple distinct JavaScript files with a single
one.
The rewrite_images filter enables the following image optimizations if the optimized version is actually smaller than the original:
inline images: this optimization replaces references to small images by an inline data:
URL, eliminating the need to initiate another connection to fetch the image data.
recompress images: this filter attempts to recompress image data and strip
unnecessary metadata such as thumbnails. This is a group filter, and is equivalent to
enabling: convert_gif_to_png, convert_jpeg_to_progressive, convert_jpeg_to_webp,
jpeg_subsampling, recompress_jpeg, recompress_png, recompress_webp,
strip_image_color_profile, and strip_image_meta_data
convert png images to jpeg: Enabling convert_png_to_jpeg allows a gif or png image
to be converted to jpeg if it does not have transparent pixels and if the Pagespeed
plugin considers that it is not sensitive to jpeg compression noise. The conversion is
lossy, but the resulting jpeg is generally substantially smaller than the corresponding
gif or png.
resize images: this attempts to resize any image that is larger than the size called for by the width= and height= attributes on the <img> tag.
The purpose of this filter is to reduce the number of HTTP round-trips by combining multiple CSS resources into one. It parses linked and inline CSS and flattens it by replacing all @import rules with the contents of the imported file, repeating the process recursively for each imported file.
Inlining a CSS will insert the contents of small external CSS resources directly into the
HTML document. This can reduce the time it takes to display content to the user,
especially in older browsers.
The CSS parser cannot parse some CSS3 or proprietary CSS extensions. If
fallback_rewrite_css_urls is not enabled, these CSS files will not be rewritten at all. If the
fallback_rewrite_css_urls filter is enabled, a fallback method will attempt to rewrite the
URLs in the CSS file, even if the CSS cannot be successfully parsed and minified.
The “Inline @import to Link” filter converts a <style> tag consisting of only @import
statements into the corresponding <link> tags. This conversion does not itself result in any
significant optimization, rather its value lies in that it enables optimization of the linked-to
CSS files by later filters, in particular the combine_css, rewrite_css, inline_css, and
extend_cache filters.
When a server returns a response to the browser, the HTML can contain meta tags like the
following:
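The example from the original is not reproduced here; a typical http-equiv meta tag of the kind meant here is:
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">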
Certain http-equiv meta tags, specifically those that specify content-type, require a
browser to reparse the html document if they do not match the headers. By ensuring that
the headers match the meta tags, these reparsing delays are avoided.
The "Rewrite Style Attributes" filter rewrites the CSS inside elements' style attributes to enable CSS minification, image rewriting, image recompression and cache extension, if enabled. It is enabled only for style attributes that contain the text 'url(' as these image references are generally the source of the greatest improvement.
DNS resolution time varies from <1ms for locally cached results, to hundreds of
milliseconds due to the cascading nature of DNS. This can contribute significantly
towards total page load time. This filter reduces DNS lookup time by providing hints to
the browser at the beginning of the HTML, which allows the browser to pre-resolve DNS
for resources on the page.
# Disable filters when an HTTP/2 connection is made
set $disable_filters "";
if ($http2) {
set $disable_filters "combine_javascript,combine_css,sprite_images";
}
pagespeed DisableFilters "$disable_filters";
Clients that use an HTTP/2 connection make some of the Google PageSpeed filters unnecessary. These include all filters which combine resources like JavaScript files and CSS files into one file.
With HTTP/1 connections there was a lot of overhead for all these extra requests. HTTP/2 allows multiple concurrent exchanges for these resources on the same connection, so combining everything into one big file would actually hurt performance because the different resources can no longer be downloaded in parallel.
The above code selectively disables the combine filters for JavaScript and CSS for HTTP/2 connections. It leaves them enabled for visitors whose browsers don't yet support HTTP/2.
Cache lookups reported as expired mean that although these resources were found in the cache, they were not rewritten because they were older than their max-age. max-age is an HTTP Cache-Control header sent by your HTTP server when PageSpeed fetches these resources.
If you notice that you have a lot of cache lookups that were expired, you can tell
Pagespeed to load the files straight from disk rather than through HTTP:
Inside /usr/local/nginx/conf/pagespeed.conf add:
pagespeed LoadFromFile "http://www.yourwebsite.com/" "/home/<yourwebsite>/www/";
pagespeed LoadFromFile "https://www.yourwebsite.com/" "/home/<yourwebsite>/www/";
Note that we have also enabled https support in the PageSpeed plugin by using LoadFromFile with an https URL and specifying where the resources are located on your web server.
Locate the MEMSIZE=64 option and increase it to 128M or 256M depending on the
amount of RAM available.
Appendix: Resources
TCP IP
http://blog.tsunanet.net/2011/03/out-of-socket-memory.html
https://ticketing.nforce.com/index.php?/Knowledgebase/Article/View/40/11/sysctl-settings-which-can-have-a-negative-affect-on-the-network-speed
http://blog.cloudflare.com/optimizing-the-linux-stack-for-mobile-web-per
Ubuntu / Linux
http://www.debuntu.org/how-to-managing-services-with-update-rc-d/
KVM
http://en.wikipedia.org/wiki/Hypervisor
http://s152758605.onlinehome.us/wp-content/uploads/2012/02/slide33.png
https://software.intel.com/sites/default/files/OVM_KVM_wp_Final7.pdf
SSD
https://sites.google.com/site/easylinuxtipsproject/ssd
https://rudd-o.com/linux-and-free-software/tales-from-responsivenessland-why-linux-feels-slow-and-how-to-fix-that
OpenSSL
http://sandilands.info/sgordon/upgrade-latest-version-openssl-on-ubuntu
Nginx
http://news.netcraft.com/archives/2014/12/18/december-2014-web-server-survey.html
https://wordpress.org/plugins/nginx-helper/
https://rtcamp.com/wordpress-nginx/tutorials/single-site/fastcgi-cache-with-purging/
http://unix.stackexchange.com/questions/86839/nginx-with-ngx-pagespeed-ubuntu
https://www.digitalocean.com/community/tutorials/how-to-optimize-nginx-configuration
http://nginx.com/blog/tuning-nginx/
http://linuxers.org/howto/howto-use-logrotate-manage-log-files
PHP
https://support.cloud.engineyard.com/entries/26902267-PHP-Performance-I-Everything-You-Need-to-Know-About-OpCode-Caches
https://www.erianna.com/enable-zend-opcache-in-php-5-5
https://rtcamp.com/tutorials/php/fpm-status-page/
http://nitschinger.at/Benchmarking-Cache-Transcoders-in-PHP
MariaDB
https://wiki.debian.org/Hugepages
http://time.to.pullthepl.ug/blog/2008/11/18/MySQL-Large-Pages-errors/
http://dino.ciuffetti.info/2011/07/howto-java-huge-pages-linux/
http://www.cyberciti.biz/tips/linux-hugetlbfs-and-mysql-performance.html
http://matthiashoys.wordpress.com/tag/nr_hugepages/
http://blog.yannickjaquier.com/linux/linux-hugepages-and-virtual-memory-vm-tuning.html
https://mariadb.com/blog/how-tune-mariadb-write-performance/
https://snipt.net/fevangelou/optimised-mycnf-configuration/
http://www.percona.com/files/presentations/MySQL_Query_Cache.pdf
Jetty
http://www.eclipse.org/jetty/documentation/current/quickstart-running-jetty.html
http://dino.ciuffetti.info/2011/07/howto-java-huge-pages-linux/
http://greenash.net.au/thoughts/2011/02/solr-jetty-and-daemons-debugging-jettysh/
http://java-performance.info/java-string-deduplication/
http://dev.mysql.com/doc/connector-j/en/connector-j-reference-configuration-properties.html
http://assets.en.oreilly.com/1/event/21/Connector_J%20Performance%20Gems%20Presentation.p
https://github.com/brettwooldridge/HikariCP/wiki/MySQL-Configuration
https://mariadb.com/kb/en/mariadb/about-the-mariadb-java-client/
CDN
https://www.maxcdn.com/blog/manage-seo-with-cdn/
HTTPS
https://support.comodo.com/index.php?/Default/Knowledgebase/Article/View/1/19/csr-generation-using-openssl-apache-wmod_ssl-nginx-os-x
https://www.wormly.com/help/ssl-tests/intermediate-cert-chain
https://blog.hasgeek.com/2013/https-everywhere-at-hasgeek
https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx
http://www.nginxtips.com/hardening-nginx-ssl-tsl-configuration/
https://bjornjohansen.no/optimizing-https-nginx
http://security.stackexchange.com/questions/54639/nginx-recommended-ssl-ciphers-for-security-compatibility-with-pfs
https://www.mare-system.de/guide-to-nginx-ssl-spdy-hsts/#choosing-the-right-cipher-suites-perfect-forward-security-pfs
http://blog.mozilla.org/security/2013/07/29/ocsp-stapling-in-firefox/
https://blog.kempkens.io/posts/ocsp-stapling-with-nginx
https://gist.github.com/plentz/6737338