This guide covers setting up and hosting a website using the Apache web server, and then securing and encrypting it with a free SSL certificate obtained from Let's Encrypt. Along the way, we will cover some intermediate Linux commands as well as an introduction to DNS.
While much of the information in this guide can be used in a variety of environments, it was written specifically for users with the following: a remote, cloud-hosted Ubuntu server (such as a Digital Ocean droplet) and a Windows PC used to connect to it over SSH.
For a detailed guide on setting up and securing a remote server, please see my previous article: Setting up and securing a Linux server. Also, while this guide was written from the perspective of a Windows user connecting to a remote Ubuntu server, nearly all of the information will apply to those connecting from other operating systems to any Debian based Linux server. Some of the configuration details and file locations will be different for those using Fedora or Red Hat.
The terms web host and web server have a number of meanings depending on the context. At a general level, the terms often refer to the name of a company or service that is hosting a website and serving it to anyone with an Internet connection and web browser. At this level, a web host is synonymous with the web server that contains the files and pages for a website. However, a server that is publicly accessible from the Internet can provide many services in addition to hosting a website. These services are provided by running various software components on the server, and these components are often combined together in a cohesive manner to serve a more complex site. At the core of these components is the software responsible for intercepting HTTP requests and providing (serving) the requested files. It is this software that is the real web server and is what this guide will focus on.
On Linux, the two most widely used web server applications are Apache httpd and NGINX. Apache leads in terms of overall usage and will be the focus of this guide, but those interested in a server designed to easily scale to handle huge numbers of requests should check out NGINX.
The Apache Software Foundation creates and maintains dozens of software packages, including multiple HTTP related servers. Somewhat confusingly, the core web server is simply titled “Apache HTTP Server” and many people simply refer to it as Apache (even more confusingly, the actual name of the installed software is Apache httpd). Unless otherwise specified, I will simply use the term Apache to refer to this core service. Other Apache web servers include Apache Tomcat (for Java Servlets and Java Server Pages), and Apache Axis (tools for creating Web Services).
The Apache HTTP Server is very modular and can be configured to be as simple or as complex as required. It can be used to host a single, static website consisting of only a few pages, a complex data-driven site for an online retailer, or even a single-page web application backed by other server-side technologies such as Node.js. The aim of this guide is to set up a virtual host within Apache that will serve as a good starting point for more advanced web projects. We will be using Apache Server 2.4, although almost all of the instructions and examples are compatible with any 2.x release. Apache provides extensive documentation consisting of both reference and tutorial sections. The documentation for version 2.4 can be found at https://httpd.apache.org/docs/2.4/.
Almost all websites consist of both a name (e.g. google.com) and an associated IP address (172.217.11.238). DNS is the protocol and software that translates one into the other. While it is entirely possible to host a website without ever having to register a domain name, both you and any visitors would always have to refer to the site only by IP address. In addition, not using a domain name will make it more difficult to secure your site using SSL. For these reasons, we will register a domain name and incorporate it into our configuration as necessary.
While your cloud hosting provider is always responsible for assigning your server an IP address, they may or may not also offer domain registration. Digital Ocean, for instance, does not provide domain registration services at this time. Instead, you will need to use a third party. Even if you are using a cloud provider that offers both server hosting and domain registration, the two areas will be separate in terms of cost and configuration. There are numerous domain registrars available, and for the most part, all of them will provide the same sets of features and configurations. All will charge a yearly fee for registering the domain. The amount of the fee will vary based primarily on the chosen top-level domain (.com, .net, .me, .tv, etc.). Most registrars will also offer private registration for an additional fee. The Internet Corporation for Assigned Names and Numbers (ICANN) is the entity ultimately responsible for issuing and maintaining domain names and top-level domains. While it has decentralized much of its authority over the past few years, it is still the primary governing body for policies on domain registration and serves as the accreditor for all of the other domain registrars. Two of the larger domain registrars are Network Solutions and 1&1. Depending on the domain and current price, I tend to prefer a company called Directnic for their simplified website and control interface. Regardless of which registrar you choose, the process will be similar for each.
First, you will need to perform a search for your chosen domain name. While all of the registrars will allow you to search from their site, it can also be helpful to use a service such as https://namechk.com/ which will check if your chosen name is available not only as a domain but also as a username for services like Facebook, Twitter and many more. Once you have selected the domain, the registration process is simple. One thing to decide is whether or not to use private registration. All domain registrations are required to contain publicly available contact information on the owner. The information includes name, email address, phone number, and physical address. Private registration will allow you to retain ownership and control over the domain but will use the domain registrar's information when filling out these fields. In case of any dispute or legal action, another party would first have to contact the domain registrar, who would then contact you. If you choose not to use private registration, then you will need to provide accurate ownership information. The provided name and address are the primary means you have for later proving that you are the legitimate owner of the domain in case there is some dispute.
Once the domain is registered, it will likely point to a simple static webpage with a link back to the registrar's website. Once you are able to see this page by navigating to your domain, you will know that the initial DNS records have been updated. The Internet's DNS system is composed of a hierarchical network of name servers. When a name record is changed at one server, it will eventually propagate to other servers as needed. The full propagation time can vary widely but generally occurs within 24 hours. Some DNS servers tend to receive updates quicker than others, and it can be useful to change your client PC to use one of these servers. Google, for instance, has DNS servers available for the public to use. They generally receive record updates quickly, often within minutes of a change occurring. The IP addresses of Google's DNS servers are 8.8.8.8 (primary) and 8.8.4.4 (secondary). DNS servers can be changed at both the router and operating system levels. For detailed instructions take a look at this guide.
When working with DNS records, a very useful tool is the nslookup utility. Available on both Windows and *nix systems, this command line tool allows you to query and interact directly with DNS servers. There are numerous options for the program, but one of the simplest is to use the form nslookup [host] [dns_server]. For example, after registering the domain example.com you could use the following command to see if the DNS record had propagated to Google's name servers:
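    nslookup example.com 8.8.8.8

If the record has propagated, the response should include the IP address of your cloud server; otherwise the lookup will fail or return the registrar's placeholder address.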
When making DNS record changes it will be helpful to first test that the change has propagated to the domain registrar’s DNS server, and then to either Google’s server or another DNS server that you are using.
There are a number of different types of DNS records, but for our purposes, we only need two: the A and CNAME records. An A record is used to directly link a canonical domain name to an associated IP address. The CNAME record is used to link either a domain name or an alias to a canonical name. Although it is possible to configure a small website using only A records, CNAME records make managing virtual and sub-domains much easier. The inputs needed for an A record are the registered domain name and the IP address of your cloud server. When creating CNAME records you will use a sub-domain and registered domain as inputs.
Let's say that you have registered the domain name example.com, but that you intend to create three different websites under the name, all using the same cloud server. For instance, you might have store.example.com, news.example.com and www.example.com. Each of these three is a sub-domain of example.com, and so you would then set up your DNS records as follows:
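A sketch of the records (your registrar's interface may label the columns differently):

    Type     Name      Value
    A        @         203.0.113.10
    CNAME    store     example.com
    CNAME    news      example.com
    CNAME    www       example.com

Here @ (or a blank name field, depending on the registrar) stands for the bare example.com domain, and 203.0.113.10 stands in for your server's real IP address.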
You can also use a * wildcard as the name for a CNAME record, which will direct all sub-domain requests to the provided domain. Using a wildcard can be helpful if you want to have a catch-all website to direct visitors to in case they enter a typo or other address mistake. However, when combining regular CNAME records with wildcard CNAME records, you will need to also set the preference or priority field. This is a numeric field that controls which record is used when there is a conflict, with a lower value having a greater priority. Make sure to set the priority value of the wildcard record to a higher number than any of the other CNAME and A records.
The majority of websites use the www prefix for the main site, and then other prefixes for sub-domains. Thus we generally associate www as being a required portion of a domain name. However, on a technical level www is just one of any number of possible prefixes for a domain name, and that is why it is given a CNAME record along with store and news.
Once you have your DNS records configured, use nslookup to confirm that the IP address of your server is being returned for each of the domains and sub-domains you want. Once these updates have fully propagated, you will be able to reach your server from any web browser by using the domain name. However, unless your cloud provider enables a web server by default, no web page will be sent back, likely leading to a timeout error in your browser.
According to W3 Techs, Apache accounts for nearly 50% of all web servers worldwide. It is used to host websites both small and massive and is relatively easy to set up. In fact, getting the "hello world" equivalent of Apache running takes only a few minutes. The easiest way to install the software is with apt install apache2. Many cloud providers include Apache pre-installed in their Ubuntu images, but it is always a good idea to run the command regardless, which will also update the software if it is already present. Once complete, use which apache2 to confirm that the binary is installed.
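For example (apt-get works equally well on older Ubuntu releases):

    sudo apt update
    sudo apt install apache2
    which apache2    # should print something like /usr/sbin/apache2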
The default Apache installation includes a basic configuration that will monitor port 80 for any incoming request and serve a basic web page stating that the server is running the Apache service and providing links to documentation. Note the use of the word service. Unlike other Linux utilities such as SSH and Nano, the Apache web server will be running in the background. Interacting with Linux services requires a different set of commands than interacting with traditional programs. The most common of these are the service and systemctl commands. Here is a list of some of the most common commands you will use for interacting with the Apache service (some of these commands may need to be run as sudo):
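    sudo service apache2 status    # show whether the service is running and what resources it is using
    sudo service apache2 start     # start the service
    sudo service apache2 stop      # stop the service
    sudo service apache2 restart   # stop and then start the service
    sudo service apache2 reload    # re-read the configuration files without a full restart
    sudo systemctl enable apache2  # start the service automatically at boot

The systemctl form works just as well for the first five; it simply reverses the argument order (e.g. sudo systemctl status apache2).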
Go ahead and issue a status command to check if the service is running, and start the service if it is not. Once started, service apache2 status will provide information about how long the service has been running, how much memory and CPU time it is consuming, and the process or processes that the service has spawned along with their PID values.
Apache is now hosting a default website. If your firewall has port 80 open then you can now navigate to your server using any browser to view a webpage showing the version of the Apache server installed, along with some useful links to Apache guides and documentation.
Websites primarily use the HTTP protocol for delivering content, and this protocol has two well-known ports: port 80 for normal HTTP traffic, and port 443 for encrypted HTTPS traffic. Both your cloud provider's external firewall and your internal Ubuntu firewall need to allow incoming and outgoing traffic on both of these ports. Use your cloud provider's control panel for configuring the external firewall. For the internal firewall, this guide assumes that you have configured and are using Ubuntu's firewall, UFW. You can open the needed ports either manually, or by using Apache's ufw profile:
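    # open the ports manually
    sudo ufw allow 80/tcp
    sudo ufw allow 443/tcp

    # or use the application profile registered by the apache2 package, which opens both ports
    sudo ufw allow 'Apache Full'
    sudo ufw reload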
For a more detailed guide on configuring internal and external firewalls, see the Changing the SSH port number section in my previous article. Once the new rules have been added, reload the firewall, then test the connection to your server using a web browser. By default, Apache is only configured to monitor HTTP requests on port 80. We will configure HTTPS requests in a later section.
If you are unable to reach the default webpage, try the following:
- Check that the port is reachable by scanning it with nmap (e.g. nmap example.com -p 80). If the result is 'filtered' then the firewall(s) isn't configured properly. A result of 'closed' means the Apache service isn't running.

There are numerous strategies for configuring websites under Apache, but one of the most common is to create virtual hosts for each website that you want to serve. This enables you to fully isolate all of the files and configuration settings for each website and allows you to enable or disable a virtual site with a single command. All configuration in Apache is done by way of configuration text files. These files use a straightforward syntax created by Apache. When working with these files, it is very helpful to have the Apache2 documentation available as a reference.
Below are the locations of the Apache files and directories that we will be using:
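    /etc/apache2/apache2.conf        # the global Apache configuration file
    /etc/apache2/sites-available/    # one .conf file per virtual host, whether enabled or not
    /etc/apache2/sites-enabled/      # links to the virtual host configurations that are currently enabled
    /var/www/                        # the traditional location for website content

(This is the default Ubuntu layout; other distributions place these files elsewhere.)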
The general structure for hosting a new site in Apache 2 is:
1. Create a new directory such as /var/www/new-site and fill it with website assets (scripts, HTML, CSS, etc).
2. Create a .conf virtual host configuration and save it to /etc/apache2/sites-available.
3. Use the a2ensite command to enable the virtual host specified by the file.

The first step to setting up a new site is to create a new directory. The location of the directory can be anywhere, but traditionally /var/www is used. This directory will contain everything required by your website. This includes HTML pages, style sheets, libraries, images, scripts, etc. You are free to organize these assets however you wish, but a typical setup involves creating two sub-directories; one that is publicly accessible and contains all of the assets you need to share with others, and another directory that contains assets still related to the site but that you don't want to be served or made available to outside connections.
The following is an example directory structure for two different websites, each a sub-domain of example.com. Once you have the below (or similar) structure in place, go ahead and edit the index.html file to create a simple webpage that you can use for testing.
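For example (a sketch; the public/private split is just one convention):

    /var/www/store.example.com/public/index.html
    /var/www/store.example.com/private/
    /var/www/news.example.com/public/index.html
    /var/www/news.example.com/private/

The directories can be created in one step with sudo mkdir -p /var/www/store.example.com/{public,private} (and likewise for news.example.com).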
Depending on the permission settings of your server, you likely had to use sudo when making the new directories and/or files. This means that the user root is the owner, and unless we change this, Apache will be unable to modify any of the website assets. First, we want to change ownership of the public directory to the current, non-root user with a chown [user] /var/www/store.example.com/public command. This is followed by chmod -R +r /var/www/store.example.com/public, which will recursively grant read access to all files and folders inside public. If the current user is part of the www-data group, then a slightly safer alternative is to grant read access only to that group instead of to everyone on the server. This can be done by changing the second command to chmod -R g+r /var/www/store.example.com/public.
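Put together, and using $USER to stand for the current account, the sequence looks something like this:

    # make the current (non-root) user the owner of the public directory
    sudo chown $USER /var/www/store.example.com/public
    # recursively grant read access (use g+r instead of +r to limit access to the www-data group)
    sudo chmod -R +r /var/www/store.example.com/public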
The next step is to create a new .conf virtual host configuration file that will tell Apache where the website files are located and how to respond to HTTP requests. Apache comes pre-loaded with a default configuration that you can use as a starting point. The file is located at /etc/apache2/sites-available/000-default.conf. Copy the file to a new path within the same directory. In general, it is a good idea to name the file after the site it will be configuring. Using the above example, the command would be:
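    sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/store.example.com.conf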
Open the copied .conf file in your editor of choice. You should see something like this:
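    # comments trimmed; the stock file includes several explanatory comments
    <VirtualHost *:80>
        #ServerName www.example.com
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www/html
        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
    </VirtualHost>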
Apache configuration files are very modular and are often broken into multiple sections. Each section starts with angle brackets along with a name and attributes. This file has a single VirtualHost section which contains only a few directives and some comments (prefixed with '#'). The *:80 string inside the angle brackets tells Apache that this virtual host should listen to port 80 on all IP addresses assigned to the server.
The ServerName directive tells Apache what the domain name of the virtual host is. In our example, we will un-comment this line and change the value to store.example.com. Next, DocumentRoot is the directive that tells Apache the location of the base directory for the site's public content. If you are segregating your site assets into public and private directories, then it is very important that you set this correctly, since Apache will allow access to everything stored in this location and any sub-directories underneath.
The ServerAdmin directive sets the default email address that will appear in some public error messages. It is seldom used within Apache any longer, but it can be useful to keep the directive for compatibility reasons. You can set it to an actual email address or simply leave it as is. The two directives related to log files don't need to be changed unless you want to store the log files in another location.
We are going to add one additional directive to our configuration: ServerAlias. This directive allows the virtual host to respond to requests from another specified domain as if it were being directed to the value specified in ServerName. For instance, most websites are configured to serve the exact same content whether or not you use the www prefix.
The new store.example.com.conf configuration file:
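    <VirtualHost *:80>
        ServerName store.example.com
        # the alias assumes a matching CNAME record exists for www.store.example.com
        ServerAlias www.store.example.com
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www/store.example.com/public
        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
    </VirtualHost>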
Although simple, this is all that is needed for a basic virtual host configuration. However, there are many more types of directives and several other accepted ways for configuring multiple sites on a single server. Consult the Apache Virtual Host documentation for more information. Once you are finished editing the file, write your changes and return to the shell.
Now that we have created the content structure and configuration file for our new host, we are almost ready to enable the site for serving. First, however, let's run a quick test with apachectl configtest to make sure that all of our configuration files are valid. The utility will look for syntax errors, missing directives, conflicting settings, and other common configuration pitfalls. If any problems are found, the utility will output warning or error messages. Any errors must be corrected before the site can be enabled. Warnings will not prevent the site from being served but should be eliminated if possible.
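If everything is valid, the only output should be a short confirmation:

    sudo apachectl configtest
    # Syntax OK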
One common warning is "AH00558: apache2: Could not reliably determine the server's fully qualified domain name". This indicates that there is no global domain name configured for the server. Since we are using a complete virtual host configuration, this isn't really an issue, but the warning is easy to correct regardless. To fix the warning, open the global Apache configuration file (/etc/apache2/apache2.conf) and add ServerName example.com somewhere within the file (replacing example.com with your registered root-level domain name).
Once you have corrected any errors, enable your site and reload the Apache service.
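Using the store.example.com example:

    sudo a2ensite store.example.com.conf
    sudo service apache2 reload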
After a few seconds Apache will begin monitoring HTTP requests on port 80, and if the domain name of those requests matches any of the active virtual hosts, will serve the appropriate page (with index.html as a default). At this point you can point your browser to store.example.com and test that index.html is being returned.
Once everything has been correctly configured, we can enable the second virtual host very quickly. First, copy store.example.com.conf to news.example.com.conf. Next, open the file and update the ServerName, ServerAlias, and DocumentRoot directives to point to the name and location of the second website. Save the file and add any needed website assets to /var/www/news.example.com/public. Finally, enable the virtual host using a2ensite and test the site in a web browser.
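Condensed into commands, the process is roughly:

    cd /etc/apache2/sites-available
    sudo cp store.example.com.conf news.example.com.conf
    sudo nano news.example.com.conf        # update ServerName, ServerAlias, and DocumentRoot
    sudo a2ensite news.example.com.conf
    sudo service apache2 reload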
This guide is written from the perspective that a single person or group is using the private server for web hosting. As such, all of the created files are accessible to the sudo account that is being used. This also means that the two virtual hosts are owned and controlled by a single account, meaning that in theory, they could access each other's content. In a single-entity environment this really isn't much of a security risk, as Apache does a very good job of preventing public access to any content outside of the DocumentRoot. Even if an external attacker were able to compromise the website, say by finding a flaw in one of the JS scripts, it would be very difficult to escalate the attack and break out of the website's public directory.
However, if you are planning on allowing multiple entities to access the web server (e.g. each virtual host is administered by a different user), then there is a security risk in allowing one site to access another. In such a multi-user environment, you will want to change the ownership and ACL rights related to the two sites. One common approach is to have all files inside each /var/www/[site_x] directory owned by a different user. Each of these users will, in turn, need to be a member of the www-data group so that Apache has access to the sites. Additionally, write access of the www-data group should be as limited as possible. Start with a default of having no group permissions, and no other read or write permissions for any files in the web directory. Then, add individual group write permissions to files and folders as needed. For instance, if your website contained a blog that allowed replies, you would need to enable write access to the directory containing the files or database where the replies were stored.
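As a rough sketch, assuming a hypothetical user alice who administers the store site, and a hypothetical replies directory that needs to be writable:

    # alice owns the site; the www-data group (which Apache runs as) can read it
    sudo usermod -aG www-data alice
    sudo chown -R alice:www-data /var/www/store.example.com
    # no permissions for "other" users, read-only for the group
    sudo chmod -R o-rwx /var/www/store.example.com
    sudo chmod -R g-w /var/www/store.example.com
    sudo chmod -R g+r /var/www/store.example.com
    # then grant group write access only where the site actually needs it
    sudo chmod -R g+w /var/www/store.example.com/public/replies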
Additionally, whether you are running a single or multiple entity configuration, it may be helpful to change ownership of the [site_x].conf file. Since this file is located in a protected directory, by default only those with sudo privileges can edit it. However, when running Apache with only virtual hosts, it is safe to change ownership of this file to other users so that they can modify it without first having sudo permissions. This is useful if you want to use Bitvise's (or another) SFTP client to open the configuration file natively in whatever operating system you are SSHing in from. As long as ownership is changed only for the specific .conf file, the web server will remain secure.
In previous sections we installed Apache and configured it to virtually host two websites: store.example.com and news.example.com. Our base configuration has Apache listening to regular HTTP requests on port 80. These requests are transmitted in clear, human-readable strings over the network. Any routers, servers, or other entities that sit between a computer requesting the website and our server can easily read, or even modify, these requests without the server ever knowing. The HTTPS protocol was designed to eliminate both of these risks. When using HTTPS and SSL, all traffic between the host and server is encrypted. The traffic is also digitally signed so that both the server and the client can guarantee that a message has not been altered or tampered with. Traditionally, HTTPS/SSL encryption was only used when transmitting sensitive information, such as a password or credit card number. However, due in part to recent FCC changes which allow ISPs to monitor and even alter all Internet traffic, most websites now use the HTTPS protocol for ALL web traffic. Modern implementations of the HTTPS stack are very resource efficient, and even websites with heavy traffic can be fully encrypted. Some websites offer both HTTP and HTTPS access to the same content, while many others offer HTTPS only. Requiring HTTPS for all connections is by far the most secure configuration and it is the one that we will be implementing in this section.
When a client machine first connects to a web server over HTTPS, the client asks for the server's public encryption key by way of a public certificate. However, since the client doesn't actually know if the server it is talking to is really who it claims to be, the client must use another resource to verify that the certificate it receives is authentic. A real-world analogy is corresponding with your bank via mail. You might write a letter to your bank, place it in an envelope, then look up the mailing address, fill out the envelope, and mail it off. A few weeks later you check your mail and see a letter addressed to you that has the return address of your bank listed on the envelope. You open the envelope and find a letter written on your bank's stationery. By all appearances, the letter is authentic and was written by a bank employee in response to your request. However, you can't know this for sure. It's possible that a rogue postal worker intercepted your letter, opened it up and read it, then wrote a response using a copy of your bank's stationery and sent it to you, using your bank's return address instead of his own. Encrypting your letter to the bank would help, but to do this, you must first get your bank's public encryption key. But since you can't trust what the return address on the envelope says, you can't know for sure if the encryption key you receive is really from the bank, or if it is from the rogue postal worker. The solution is to use a third party that you already trust to look at the envelope with the bank's public encryption key and confirm that it really is from the bank.
In the realm of SSL encryption, this third party is known as a certificate authority. These authorities are entities that browsers have been pre-programmed to trust. When a browser receives a public certificate from a web server, the browser checks to see if that certificate has been signed (validated) by one of the certificate authorities. If it has not, then the browser rejects the certificate and does not allow you to visit the page. Until recently, if you were a website operator, you had to pay one of these certificate authorities a monthly or yearly fee to have your website's certificate validated. You also had to submit paperwork and other information to prove to the authority that you were who you said you were. The cost of this verification was typically $500 or more per year, making it unattractive for small websites. Then, in 2016, a new certificate authority was launched. Named Let's Encrypt, this authority used new security protocols to drastically simplify and automate the verification process to the point where they could verify SSL certificates at no cost to most websites (see the Let's Encrypt Wikipedia page for more information on their history).
Obtaining a verified certificate from Let’s Encrypt is a straightforward process thanks to the excellent CertBot utility provided by the EFF. (See https://letsencrypt.org/getting-started/ for more information and a list of alternate guides). The first thing we need to do is install the version of CertBot designed to work with Apache. The utility was written in Python and requires that Python3 has been installed on your system. Most versions of Ubuntu come with Python preinstalled. You can use the following commands to check:
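    python3 --version
    python --version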
One (or both) of these should return a version number of 3.x.x. If not, then you can install Python 3 with apt-get install python3. Next, we need to install some common software components that are shared by many Python scripts with an apt-get install software-properties-common command.
Once Python and the common properties are installed, we are ready to install CertBot. However, since the repository for CertBot is not known to Ubuntu, we have to first add it. Once added, we can install CertBot as we would any other package.
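At the time of writing the commands looked like this (the Certbot project has since changed its recommended installation method more than once, so check their site for the current instructions):

    sudo add-apt-repository ppa:certbot/certbot
    sudo apt-get update
    sudo apt-get install python-certbot-apache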
Before using CertBot's automated process, it is very important that we ensure that all of our Apache virtual host configuration files have properly configured ServerName and ServerAlias directives. CertBot scans these directives to determine which domains to generate the needed certificates for. It is a good idea to manually test each of the ServerAliases being used and ensure they are able to reach your website.
When ready, start the issuing process with certbot --apache (or certbot --authenticator webroot --installer apache if the first command doesn't work). Certbot will present you with some information about how the verification process works and will then list all of the domain names it found and ask you to select which ones you want a certificate issued for. Since we want to encrypt all traffic to our website, just press Enter to select all domains.
CertBot will now present you with some options for proving that you are the website owner, and therefore have authority to issue certificates. There are several options, but the simplest is to choose to use webroot authentication. This works by having Certbot place a unique file on your server, then telling Let's Encrypt to try and access that file using a public URL. If the file can be accessed, then Let's Encrypt knows that you are who you say you are (or at least that you are someone who has full access to the web server) and that any certificates you issue can be trusted.
Once control over the website has been verified, Certbot will ask you if you want to automatically configure Apache for HTTPS only, HTTP and HTTPS access, or to leave the configuration alone and just generate the certificates. For now, choose the HTTPS-only option. This will automate some of the configuration work for us, which we will complete in the next step.
Certbot will now generate and sign the appropriate certificates then display some information about them. The issued certificate is actually composed of two pieces: one which contains the private key used to decrypt communication, and a certificate chain that contains the public key both for our site and for Let’s Encrypt itself. It is this chain that is sent to clients when they connect to the web server. The files are stored in:
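    /etc/letsencrypt/live/store.example.com/privkey.pem      # the private key used for decryption
    /etc/letsencrypt/live/store.example.com/fullchain.pem    # the certificate chain sent to clients

(The directory under /etc/letsencrypt/live/ is named after the first domain on the certificate, so yours may differ.)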
Certbot also provides a link to test whether your website is now configured for HTTPS access, but we need to make some additional configuration changes first.
Navigate to /etc/apache2/sites-available and list the contents of the directory. Notice that CertBot created some new virtual host configuration files: one called default-ssl.conf and one or more that are named after existing configurations, but with -ssl in the name. While these configuration files are valid, the structure used by CertBot makes working with multiple websites a little more difficult, since each virtual host is now broken into two different sites, one for HTTP and one for HTTPS. Since we are requiring all traffic to use HTTPS, it doesn't make sense to have to enable and disable two different configurations for each virtual host.
We are going to combine the HTTP and HTTPS (SSL) configurations into a single file, then modify the HTTP portion to automatically redirect to HTTPS. This will force all visitors to use only secure HTTPS access. This example will cover store.example.com, but the process is identical for any virtual host. First, open store.example.com-ssl.conf using nano or another text editor. Note how the SSL configuration has wrapped the original and added some new directives:
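It should look something like this (a sketch limited to the directives discussed below; the exact contents generated by CertBot may differ slightly):

    <IfModule mod_ssl.c>
    <VirtualHost *:443>
        ServerName store.example.com
        ServerAlias www.store.example.com
        DocumentRoot /var/www/store.example.com/public
        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
        Include /etc/letsencrypt/options-ssl-apache.conf
        SSLCertificateFile /etc/letsencrypt/live/store.example.com/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/store.example.com/privkey.pem
    </VirtualHost>
    </IfModule>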
- <IfModule mod_ssl.c>: The configuration file is now wrapped in an If directive. This tells Apache to only process everything inside the brackets if the SSL module has been loaded. One of the things that CertBot installed was a C library for Apache that enables HTTPS and SSL encryption.
- *:443: The port number that Apache listens on has been changed from 80 to 443.
- ServerName...: None of the ServerName, ServerAlias, or DocumentRoot directives have been changed.
- Include /etc/.../options-ssl-apache.conf: This tells Apache to also process any directives in the global SSL configuration file created by CertBot.
- SSLCertificateFile and SSLCertificateKeyFile: The locations of the actual certificate and key used for the virtual host.

Once you are familiar with the changes, copy the entire contents of the file to the clipboard, then close and exit. Next, open store.example.com.conf. Notice that this file too has been changed by CertBot. There are now several Rewrite directives at the end. These directives use Apache's RewriteEngine to alter the content of incoming HTTP requests before they are further processed. CertBot uses these commands to redirect all HTTP requests to HTTPS instead. The RewriteEngine in Apache is very powerful and allows for all sorts of customized behavior. However, for our needs, it is a bit of overkill and introduces a small performance penalty. We can achieve the same result with a single, simpler directive.
Disable all of the Rewrite directives by inserting a # at the start of each line, turning it into a comment. Now add the following directive on a new line: Redirect / https://store.example.com (the exact location doesn't matter, as long as it is inside the VirtualHost section). The Redirect directive does exactly what it says. The / tells it to redirect all traffic from the root of the virtual host (store.example.com) to a new location (https://...) by way of the HTTP protocol's built-in redirect mechanism. Also, while not required, it is a good idea to remove (or comment out) the DocumentRoot directive in the non-secure HTTP section. This makes it harder for an undiscovered flaw or exploit in the SSL plugin to revert back to non-secure communication.
Next, move to the end of the file and paste the contents of the -ssl.conf file on the line after the closing /VirtualHost tag. The file should now have the following structure:
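Roughly (details trimmed; the port 443 block is the one pasted in from the -ssl.conf file):

    <VirtualHost *:80>
        ServerName store.example.com
        ServerAlias www.store.example.com
        # DocumentRoot removed; all HTTP traffic is redirected to HTTPS
        Redirect / https://store.example.com
        # RewriteEngine directives added by CertBot, now commented out
    </VirtualHost>

    <IfModule mod_ssl.c>
    <VirtualHost *:443>
        ServerName store.example.com
        ServerAlias www.store.example.com
        DocumentRoot /var/www/store.example.com/public
        Include /etc/letsencrypt/options-ssl-apache.conf
        SSLCertificateFile /etc/letsencrypt/live/store.example.com/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/store.example.com/privkey.pem
    </VirtualHost>
    </IfModule>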
Double check the file, write it to disk, then exit your editor. When we ran CertBot it likely enabled both the HTTP and HTTPS configurations for your hosts. To find out, navigate to /etc/apache2/sites-enabled/ and list the contents of the directory. Since we have combined the SSL and non-SSL sections into a single file, we can disable any .conf files that have -ssl in their name. For instance, if you see the file store.example.com-ssl.conf listed, then issue an a2dissite store.example.com-ssl.conf command.
Once the correct configurations are enabled, run service apache2 reload to load the changes. Next, have SSL Labs run an external security check against your site by visiting www.ssllabs.com/ssltest/analyze.html?d=store.example.com. This will generate a detailed report on the certificates and ciphers being used, as well as provide a letter-grade rating of how secure your site is. Most of the sections of the report provide links to more information on specific security concerns.
In addition to testing your site with SSL Labs, you should also confirm that it is not possible to access anything on your site via non-secure HTTP. Try forcing your browser to request the non-secure version of your site using port 80 by navigating to http://[store.example.com]:80, as well as any sub-domain aliases such as http://www.store.example.com:80. All of these should transparently redirect your browser to the secure HTTPS site on port 443.
One final step you might want to take is to set up a task that will automatically renew your certificates. As a security precaution, Let's Encrypt sets your certificates to expire after 90 days. You can renew your certificates any time you like with a certbot renew command. To set up automated renewal, we can use the built-in task scheduler cron. To edit the file with the current cron jobs, run crontab -e. Now add a new line at the end of the file that contains:
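    0 0 * * 0 certbot renew --quiet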
This will instruct cron to run the certbot renew command every week at midnight on Sunday. The --quiet option means that no output will be printed to the console.
You now have a secure website and an Apache configuration that will easily allow you to create additional virtual hosts. The sites and configuration created in this guide can serve as an excellent starting point for both traditional websites as well as installing other technologies such as Node.js or WordPress. Please let me know if you have any comments or corrections. Happy coding.