Static website & web server in a Docker image
Posted on September 10, 2023 • 7 min read • 1,347 words

How do you pack a Hugo website together with an HTTP web server into a Docker image?
Although this article was written in the context of the previous “Generate website with Hugo without installation”, the principle applies to any project consisting of static HTML pages: from under-construction and coming-soon landing pages to entire (Hugo) blog projects.
And how do you get the static websites served? 🧐
The easiest way to do this is to put the website together with a web server in a Docker image;
but please automate it!
Since the pages have already been generated, all we need is a script and a few config files in order to create a Docker image that contains the actual pages as well as a web server for delivery.
Creating the image only takes a few minutes and is fully automated.
Safety Note
The generated image serves the pages via plain http on port 80. This is not state of the art and is not recommended for production use without further measures.
However, the missing secure protocol (https) can easily be added in the form of a reverse proxy. In a later article, I will show you how to use Traefik Proxy and Let’s Encrypt as a middleman to make a simple http endpoint, such as our web server container, fit for https.
All tools are completely free and open source.
I assume you already have Docker installed. If not, there are good tutorials out there on containerisation and the easiest way to get started (e.g. Docker Docs ).
In detail, you need the generated static pages, which in a Hugo project end up in ./public.
The Docker image is essentially created with the command docker image build and two parameters:
-f points to the build config file (the dockerfile) and
-t sets an optional Docker tag of your choice.
To be able to conveniently move the image to another computer later, we export it immediately via docker save <docker tag> as a .tar.gz file.
Anyone who has read the article “Generate website with Hugo without installation” will already have guessed that I save the script in the Hugo project folder under ./tools. Of course, you can change this at any time, as long as you adjust the relevant paths.
#!/bin/bash
set -e
set -o pipefail
# Folder of this script (./tools), regardless of where it is called from
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# Build context is the project root (one level above ./tools),
# which must contain ./public and ./tools
docker image build -f "${DIR}/dockerfile" -t hugo-nginx "${DIR}/.."
docker save hugo-nginx | gzip > "${DIR}/hugo-nginx.tar.gz"
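The DIR line deserves a brief explanation, since all helper scripts in this article use it: it resolves the folder the script itself lives in, so the scripts can be called from any working directory. A small self-contained demonstration (the throwaway script name is made up for illustration):

```shell
# Create a throwaway script in a temp folder that uses the DIR idiom
# to print its own location.
TMP="$(mktemp -d)"
cat > "${TMP}/where-am-i.sh" <<'EOF'
#!/bin/bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
echo "${DIR}"
EOF
chmod +x "${TMP}/where-am-i.sh"
# Call the script from a completely different working directory:
# it still reports the temp folder it lives in, not "/"
OUT="$(cd / && "${TMP}/where-am-i.sh")"
echo "${OUT}"
```

This is why the build script can reference its dockerfile via "${DIR}/dockerfile" no matter where you invoke it from.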
In order to build the Docker image, a blueprint is still missing; it is described in the so-called dockerfile. A minimal image (nginx:alpine) is referenced as the basis. To configure the web server, the default configuration file is replaced with a customised one.
The final published Hugo project from the local folder ./public is also copied into the image, into the web root of Nginx (/usr/share/nginx/html).
# Minimal Nginx image as basis
FROM nginx:alpine
# Delete Nginx default config file
RUN rm /etc/nginx/conf.d/default.conf
# Copy new Nginx config file
COPY ./tools/nginx.conf /etc/nginx/nginx.conf
# Copy generated Hugo project
# into root folder of the Nginx web server
COPY ./public /usr/share/nginx/html
For test purposes, the web server also runs reasonably well without its own configuration file. However, if you want to go a step further and put the image on the Internet behind an https termination (such as Traefik Proxy with Let’s Encrypt), you will immediately run into cross-origin problems. So let’s do it properly right away.
When copying the configuration file, please make sure that your future domain is entered in three places!
In a Hugo project, this is entered in config.toml like this:
baseURL = "https://FrankSchmidt-Bruecken.com/"
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    include mime.types;

    map $http_origin $allow_origin {
        default "*";
        "~^https?://(frankschmidt-bruecken\.de|localhost:8080)$" "$http_origin"; # <<< replace domain (no www.)
    }

    map $request_method $cors_method {
        default "allowed";
        "OPTIONS" "preflight";
    }

    map $cors_method $cors_max_age {
        default "";
        "preflight" 3600;
    }

    map $cors_method $cors_allow_methods {
        default "";
        "preflight" "GET, POST, OPTIONS";
    }

    map $cors_method $cors_allow_headers {
        default "";
        "preflight" "Authorization,Content-Type,Accept,Origin,User-Agent,DNT,Cache-Control,X-Mx-ReqToken,Keep-Alive,X-Requested-With,If-Modified-Since";
    }

    map $cors_method $cors_content_length {
        default $initial_content_length;
        "preflight" 0;
    }

    map $cors_method $cors_content_type {
        default $initial_content_type;
        "preflight" "text/plain charset=UTF-8";
    }

    server {
        gzip on;
        gzip_vary on;
        gzip_proxied any;
        gzip_comp_level 6;
        gzip_types text/plain text/css text/xml application/json application/javascript application/rss+xml application/atom+xml image/svg+xml;

        listen 80;
        #listen [::]:80;
        server_name frankschmidt-bruecken.com; # <<< replace domain (no www.)

        #access_log /var/log/nginx/host.access.log main;

        add_header Access-Control-Allow-Origin $allow_origin;
        add_header Access-Control-Allow-Credentials 'true';
        add_header Access-Control-Max-Age $cors_max_age;
        add_header Access-Control-Allow-Methods $cors_allow_methods;
        add_header Access-Control-Allow-Headers $cors_allow_headers;

        set $initial_content_length $sent_http_content_length;
        add_header 'Content-Length' "";
        add_header 'Content-Length' $cors_content_length;

        set $initial_content_type $sent_http_content_type;
        add_header Content-Type "";
        add_header Content-Type $cors_content_type;

        if ($request_method = 'OPTIONS') {
            return 204;
        }

        location / {
            add_header Access-Control-Allow-Origin https://www.frankschmidt-bruecken.com; # <<< replace www.domain
            add_header Cache-Control "public, max-age=3600";
            root /usr/share/nginx/html;
            index index.html index.htm;
        }

        error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
After the script has been made executable once,

chmod +x ./tools/create-docker-image.sh

the final image can now be created:
./tools/create-docker-image.sh
If everything went correctly, the generated Docker image (hugo-nginx.tar.gz) of our project can be found in the ./tools folder.
For local testing of the generated image, it is important that the parameter

baseURL = "http://localhost/"

is set before the image is created. If the production domain is set here, the static files cannot be tested locally, as the links would point to the production domain.
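If Hugo is installed locally, you can alternatively override the baseURL on the command line for the test build via Hugo's --baseURL flag, instead of editing config.toml back and forth; a sketch, guarded so it is a no-op on machines without hugo:

```shell
# Override baseURL only for the local test build; config.toml stays
# untouched. Only runs if a local hugo binary is available.
BASEURL="http://localhost/"
if command -v hugo >/dev/null 2>&1; then
    hugo --baseURL "${BASEURL}"
fi
```

Afterwards, ./public contains links pointing at localhost, ready to be baked into the test image.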
#!/bin/bash
set -e
set -o pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
IMAGE="hugo-nginx"
echo "The web server is available at http://localhost/"
# Map container port 80 to local port 80; --rm removes the container on exit
docker run --rm \
    -p 80:80 \
    "${IMAGE}"
This script must also be made executable once:

chmod +x ./tools/test-image-lokal.sh
As the container is now listening on port 80, simply start the local browser with http://localhost/
.
The container can be closed again with CTRL+C.
Often you want to copy the final image to another server after creating it locally. Here is an example of how this can be done with scp over ssh. I like to have the name of the target system visible in the script name itself, to avoid mix-ups.
Within the script, the variables still have to be adapted to your own requirements: A description of the image, the name of the image file, the address of the target system, the user name for the SSH transfer and the folder in which the image is to be stored there.
The password is then requested during execution (unless you have transferred the local ssh key to the target system).
#!/bin/bash
set -e
set -o pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
IMAGEDESCRIPTION="MyHugoBlog"
IMAGEFILE="${DIR}/hugo-nginx.tar.gz"
TARGETSYSTEM="my-domain.com"   # <<< domain or IP of the target system
USERNAME="fritzchen"
TARGETFOLDER="/srv/dockerimages/"
echo ">>> Copy ${IMAGEDESCRIPTION} to ${TARGETSYSTEM}"
# IMAGEFILE already contains the full path to the archive
scp "${IMAGEFILE}" "${USERNAME}@${TARGETSYSTEM}:${TARGETFOLDER}"
echo -e "\n>>> Done"
echo ">>> The image was stored on ${TARGETSYSTEM} in the ${TARGETFOLDER} folder."
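On the target system, the transferred archive can then be imported again; docker load handles gzip-compressed archives directly, so no manual unpacking is needed. A sketch, assuming the archive landed in /srv/dockerimages/ as in the script above, and guarded so it is a no-op on machines without Docker:

```shell
# Import the transferred image on the target system and start it.
IMAGEFILE="/srv/dockerimages/hugo-nginx.tar.gz"
if command -v docker >/dev/null 2>&1 && [ -f "${IMAGEFILE}" ]; then
    docker load -i "${IMAGEFILE}"      # docker load understands .tar.gz
    docker run -d --rm -p 80:80 hugo-nginx
fi
```

In a setup with a reverse proxy in front (see next steps), you would typically not publish port 80 directly but attach the container to the proxy's Docker network instead.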
Next, log in to the target system and start a container of the image or restart an existing one. How this is done essentially depends on what type of reverse proxy is used there (see next steps).
In just a few steps, static websites can be packed into a Docker image together with a web server and copied to the target system.
In my opinion, it is always helpful to invest a little time in the creation of little helpers in the form of scripts at the start of a project. These not only relieve you of repetitive and often mindless work, but above all prevent careless errors.
In order to bring the web project securely online, we still need to ensure that the pages can be delivered securely via HTTPS. There are several ways to do this, e.g.