HashiCorp Packer: Automating Docker Image Creation
Introduction
In the realm of DevOps and infrastructure automation, maintaining consistency across environments is paramount. HashiCorp Packer addresses this need by enabling the creation of identical machine images for multiple platforms from a single source configuration. Whether you’re deploying to AWS, Azure, Docker, or on-premises environments, Packer ensures that your infrastructure is reproducible and reliable.
What is Packer?
Packer is an open-source tool developed by HashiCorp that automates the creation of machine images. It supports various platforms, including:
- Cloud Providers: AWS (AMI), Azure (VHD), Google Cloud (GCE)
- Virtualization Platforms: VMware, VirtualBox
- Containers: Docker
By using Packer, you can define your machine image configurations in code, allowing for version control, automated builds, and integration into CI/CD pipelines.
Installing Packer on macOS
# Homebrew is a free and open-source package management system for macOS.
# Install the official Packer formula from the terminal.
# First, install the HashiCorp tap, a repository of all our Homebrew packages.
$ brew tap hashicorp/tap
$ brew install hashicorp/tap/packer
# After installing Packer, verify the installation worked by opening a new command prompt or console, and checking that packer is available:
$ packer
# If you get an error that packer could not be found, then your PATH
# environment variable was not set up properly. Please go back and ensure
# that your PATH variable contains the directory which has Packer installed.
With Packer installed, it is time to build your first image.
Core Components of Packer
Understanding Packer’s architecture is key to leveraging its full potential. The primary components include:
1. Packer Template
A Packer template is a configuration file that defines the image you want to build and how to build it. Packer templates use the HashiCorp Configuration Language (HCL).
Create a file named docker-ubuntu.pkr.hcl, add the following HCL blocks to it, and save the file.
touch docker-ubuntu.pkr.hcl
packer {
  required_plugins {
    docker = {
      version = ">= 1.0.8"
      source  = "github.com/hashicorp/docker"
    }
  }
}

variable "docker_image" {
  type    = string
  default = "ubuntu:jammy"
}

source "docker" "ubuntu" {
  image  = var.docker_image
  commit = true
}

build {
  name = "learn-packer"
  sources = [
    "source.docker.ubuntu"
  ]

  provisioner "shell" {
    environment_vars = [
      "FOO=hello world",
    ]
    inline = [
      "echo Installing Nginx",
      "export DEBIAN_FRONTEND=noninteractive",
      "echo tzdata tzdata/Areas select Etc | debconf-set-selections",
      "echo tzdata tzdata/Zones/Etc select UTC | debconf-set-selections",
      "apt-get update",
      "apt-get install -y tzdata nginx",
      "echo Adding file to Docker Container",
      "echo \"FOO is $FOO\" > example.txt",
      "echo Running ${var.docker_image} Docker image."
    ]
  }

  post-processor "docker-tag" {
    repository = "myapp"
    tags       = ["v1.0", "latest"]
    only       = ["docker.ubuntu"]
  }
}
2. Packer Block
packer {
required_plugins {
docker = {
version = ">= 1.0.8"
source = "github.com/hashicorp/docker"
}
}
}
- Contains Packer settings, including specifying a required Packer version.
- The required_plugins block in the Packer block specifies all the plugins required by the template to build your image.
- Each plugin block contains a version and a source attribute.
- The source attribute is only necessary when requiring a plugin outside the HashiCorp domain. You can find the full list of HashiCorp and community maintained builder plugins in the Packer Builders documentation page.
- The version attribute is optional, but we recommend using it to constrain the plugin version so that Packer does not install a version of the plugin that does not work with your template (see the version constraint sketch below). If you do not specify a plugin version, Packer will automatically download the most recent version during initialization.
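For example, a tighter constraint keeps Packer from pulling a future major release of the plugin that could break the template. The upper bound below is an illustrative assumption, not a value from the original tutorial.
packer {
  required_plugins {
    docker = {
      # Accept any 1.x release from 1.0.8 onward, but refuse a future 2.x release.
      version = ">= 1.0.8, < 2.0.0"
      source  = "github.com/hashicorp/docker"
    }
  }
}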
3. Source Block / Builder Block
Builders are responsible for creating machines and generating images from them. Each builder corresponds to a specific platform. For example:
- amazon-ebs: Builds images for Amazon EC2
- docker: Builds Docker images
- googlecompute: Builds images for Google Compute Engine
Example: Docker Builder
This configuration tells Packer to use the specified Docker image as a base and commit any changes made during provisioning.
source "docker" "ubuntu" {
image = var.docker_image
commit = true
}
- The source block configures a specific builder plugin, which is then invoked by a build block.
- A source can be reused across multiple builds, and you can use multiple sources in a single build.
- A builder plugin is a component of Packer that is responsible for creating a machine and turning that machine into an image.
- A source block has two important labels: a builder type and a name. These two labels together will allow us to uniquely reference sources later on when we define build runs.
- The Docker builder starts a Docker container, runs provisioners within the container, then exports the container for reuse or commits the image (the export variant is sketched below).
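If you want the exported-artifact variant instead of a committed image, the Docker builder can write the container filesystem to a tar archive. This is a minimal sketch; the file name image.tar and the source name ubuntu-export are illustrative choices, not part of the tutorial template.
source "docker" "ubuntu-export" {
  image = var.docker_image
  # Write the container filesystem to a tar archive instead of committing
  # it as a Docker image; useful together with the docker-import post-processor.
  export_path = "image.tar"
}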
4. Provisioners
Provisioners install and configure software within the machine image.
Example: Shell Provisioner
This provisioner installs Nginx on the machine image.
provisioner "shell" {
environment_vars = [
"FOO=hello world",
]
inline = [
"echo Installing Nginx",
"export DEBIAN_FRONTEND=noninteractive",
"echo tzdata tzdata/Areas select Etc | debconf-set-selections",
"echo tzdata tzdata/Zones/Etc select UTC | debconf-set-selections",
"apt-get update",
"apt-get install -y tzdata nginx",
"echo Adding file to Docker Container",
"echo \"FOO is $FOO\" > example.txt",
"echo Running ${var.docker_image} Docker image."
]
}
- Provisioners allow you to completely automate modifications to your image. You can use shell scripts, file uploads, and integrations with modern configuration management tools such as Chef or Puppet.
- This block defines a shell provisioner which sets an environment variable named FOO in the shell execution environment and runs the commands in the inline attribute. This provisioner will create a file named example.txt that contains FOO is hello world.
- You can run as many provisioners as you'd like. Provisioners run in the order they are declared (see the sketch after this list).
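To illustrate ordering, the sketch below uploads a local file with the file provisioner and then moves it into place with a shell provisioner. The file names and destination path are assumptions made for this example, not part of the original template.
provisioner "file" {
  # Upload a local file into the container's temporary directory.
  source      = "files/index.html"
  destination = "/tmp/index.html"
}

provisioner "shell" {
  # Runs second because provisioners execute in declaration order.
  inline = [
    "mv /tmp/index.html /var/www/html/index.html"
  ]
}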
5. Variables
Variables allow you to parameterize your templates, making them more flexible and reusable.
Example: Defining a Variable
variable "docker_image" {
type = string
default = "ubuntu:jammy"
}
- Without variables, your Packer template is static, which means that if you want to change a value, you need to manually update the template.
- Input variables serve as parameters for a Packer build, allowing aspects of the build to be customized without altering the Packer template.
- In addition, Packer variables are useful when you want to reference a specific value throughout your template.
- Variable blocks can declare the variable with or without a default value.
- You can override this variable at build time:
$ packer build -var 'docker_image=ubuntu:22.04' .
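Besides the -var flag, Packer can also read the value from an environment variable prefixed with PKR_VAR_ or from a variable definitions file. The file name docker.pkrvars.hcl below is an illustrative choice.
# Environment variable: PKR_VAR_docker_image maps to var.docker_image.
$ export PKR_VAR_docker_image=ubuntu:focal
$ packer build .

# Variable definitions file, e.g. docker.pkrvars.hcl containing:
#   docker_image = "ubuntu:focal"
$ packer build -var-file=docker.pkrvars.hcl .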
6. Post-Processors
Post-processors perform actions on the images after they are built. They can compress images, upload them to cloud storage, or tag Docker images.
Example: Docker Tag Post-Processor
post-processor "docker-tag" {
repository = "myapp"
tags = ["v1.0", "latest"]
only = ["docker.ubuntu"]
}
This post-processor tags the Docker image with v1.0 and latest.
Sequential Post Processing
- You may add as many post-processors as you want using the post-processor syntax, but each one will start from the original artifact output by the builder, not the artifact created by a previously declared post-processor.
- Use the post-processors block to create post-processing pipelines where the output of one post-processor becomes the input to another post-processor.
Example: The following pipeline imports the image under a repository and tag, then pushes it to Docker Hub:
post-processors {
post-processor "docker-import" {
repository = "swampdragons/testpush"
tag = "0.7"
}
post-processor "docker-push" {}
}
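Because the source in this tutorial commits the container (commit = true) rather than exporting it, a pipeline built from docker-tag and docker-push is the closer fit for that case. This is a hedged sketch; the repository name myapp is carried over from the earlier tag example and stands in for your own registry path.
post-processors {
  post-processor "docker-tag" {
    # Tag the committed image so it can be pushed.
    repository = "myapp"
    tags       = ["v1.0"]
  }
  # Push the tagged image to the registry; run docker login beforehand.
  post-processor "docker-push" {}
}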
7. Build Block
The build block ties together the sources and provisioners to define the build process.
Example:
build {
name = "learn-packer"
sources = [
"source.docker.ubuntu"
]
provisioner "shell" {
environment_vars = [
"FOO=hello world",
]
inline = [
"echo Installing Nginx",
"export DEBIAN_FRONTEND=noninteractive",
"echo tzdata tzdata/Areas select Etc | debconf-set-selections",
"echo tzdata tzdata/Zones/Etc select UTC | debconf-set-selections",
"apt-get update",
"apt-get install -y tzdata nginx",
"echo Adding file to Docker Container",
"echo \"FOO is $FOO\" > example.txt",
"echo Running ${var.docker_image} Docker image."
]
}
post-processor "docker-tag" {
repository = "myapp"
tags = ["v1.0", "latest"]
only = ["docker.ubuntu"]
}
}
- The build block defines what Packer should do with the Docker container after it launches.
Format Packer Template
Formats your Packer template files (*.pkr.hcl) according to standard conventions.
$ packer fmt .
- Ensures consistent code formatting across the team.
- Makes the templates easier to read, debug, and maintain.
- Packer will print out the names of the files it modified, if any.
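In a CI pipeline you will usually want fmt to report formatting drift rather than rewrite files in place. The -check and -diff flags below exist in current Packer releases, but confirm them against the version you run.
# Exit non-zero if any file needs reformatting, and show the changes
# that would be made, without modifying the files.
$ packer fmt -check -diff .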
Validate Packer Template
Checks your Packer configuration for syntax errors and required variables.
$ packer validate .
- Validates the HCL syntax of your template.
- Verifies whether all required variables are passed correctly.
- Ensures that your builders, provisioners, and post-processors are defined properly.
- If Packer detects any invalid configuration, Packer will print out the file name, the error type and line number of the invalid configuration.
1 error occurred:
* docker-ubuntu.pkr.hcl:11,18-19: Missing newline after argument; An argument
definition must end with a newline.
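You can also pass variable values to validate, which is useful when a template declares variables without defaults; a small sketch:
# Validate the template while supplying a variable value.
$ packer validate -var 'docker_image=ubuntu:focal' .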
Initialize Packer Configuration
Initializes your Packer project directory.
$ packer init .
- Packer will download the plugins you've defined in your Packer template file (e.g., docker, amazon, googlecompute, etc.).
- Sets up the plugin cache directory (usually ~/.packer.d/plugins).
- You can run packer init as many times as you'd like. If you already have the plugins you need, Packer will exit without any output (see the upgrade sketch below).
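To pick up newer plugin releases that still satisfy the version constraint, re-run init with the upgrade flag; a minimal sketch:
# Upgrade installed plugins to the newest versions allowed by required_plugins.
$ packer init -upgrade .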
Build Packer Image
Executes the entire image build lifecycle as defined in the Packer template.
What it does:
- Launches the specified builder (e.g., Docker, AWS, etc.).
- Runs the defined provisioners (e.g., shell scripts, file upload, Ansible, etc.).
- Applies post-processors if specified (e.g., tagging, compressing, pushing images).
It will:
- Build the image with the packer build command.
- Since docker_image is parameterized, you can define your variable before building the image. There are multiple ways to assign variables. The order of ascending precedence is: variable defaults, environment variables, variable file(s), command-line flag.
$ packer build --var docker_image=ubuntu:focal .
Notice how the build step runs the ubuntu:focal Docker image, using the value for docker_image defined by the command-line flag. The command-line flag has the highest precedence and will override values defined by other methods.
Managing the Image
Packer only builds images. It does not attempt to manage them in any way. After they’re built, it is up to you to launch or destroy them as you see fit.
You can delete your Docker image by running the following command. Replace <IMAGE_ID> with the image ID returned in the Packer output.
$ docker rmi <IMAGE_ID>
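If you no longer have the Packer output at hand, you can look the ID up from Docker first. The repository name myapp comes from the docker-tag post-processor in the template above.
# List the tagged image to find its ID before removing it.
$ docker images myapp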
Parallel Builds
Parallel builds are a very useful and important feature of Packer. For example, Packer can build an Amazon AMI and a VMware virtual machine in parallel, provisioned with the same scripts, resulting in near-identical images. The AMI can be used for production, the VMware machine for development.
To use parallel builds, create multiple source blocks, then add all of the sources to the sources array in the build block. Your sources do not need to be the same type. This tells Packer to build multiple images when that build is run.
source "docker" "ubuntu" {
image = "ubuntu:jammy"
commit = true
}
source "docker" "ubuntu-focal" {
image = "ubuntu:focal"
commit = true
}
build {
name = "build-docker-image"
sources = [
"source.docker.ubuntu",
"source.docker.ubuntu-focal"
]
## ...
}
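Running the build then produces both images in one invocation. The -only flag restricts a run to a subset of sources; the glob pattern shown is an assumption about the matching syntax, so check packer build -h on your version.
# Build every source listed in the build block, in parallel.
$ packer build .

# Restrict the run to a single source (pattern syntax may vary by version).
$ packer build -only='*.docker.ubuntu' .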
Real-World Use Cases
Packer is widely used in various scenarios:
1. Golden Image Creation
Organizations create standardized base images with pre-installed software and configurations to ensure consistency across environments.
2. CI/CD Integration
Integrate Packer into CI/CD pipelines to automate the creation of machine images during the build process, ensuring that deployments use the latest configurations.
3. Multi-Cloud Deployments
Use Packer to create images for multiple cloud providers from a single template, facilitating multi-cloud strategies.
4. Immutable Infrastructure
Adopt an immutable infrastructure approach by deploying new images for changes instead of modifying existing servers.
Advantages of Using Packer
- Consistency: Ensures that environments are identical across development, testing, and production.
- Automation: Reduces manual steps in image creation, minimizing human errors.
- Speed: Parallel builds accelerate the image creation process.
- Flexibility: Supports multiple platforms and integrates with various provisioners.
- Version Control: Templates can be stored in version control systems, enabling tracking of changes.
Sample Code to Build Golden Docker Image
To automate the creation of a golden Ubuntu Docker image using HashiCorp Packer within a Jenkins pipeline, follow the steps below. This setup ensures consistent and repeatable builds, integrating Packer’s capabilities into your CI/CD workflow.
Prerequisites
Ensure the following are installed and configured:
- Jenkins: Installed and running.
- Docker: Installed and running on the Jenkins server.
- Packer: Installed on the Jenkins server.
- Git: Installed on the Jenkins server.
- Git Repository: Contains your Packer template (docker-ubuntu.pkr.hcl) and Jenkinsfile.
Jenkinsfile
pipeline {
agent any
environment {
PACKER_TEMPLATE = 'docker-ubuntu.pkr.hcl'
DOCKER_IMAGE_TAG = "ubuntu-golden:${env.BUILD_NUMBER}"
}
stages {
stage('Checkout') {
steps {
git url: 'https://your-git-repo-url.git', branch: 'main'
}
}
stage('Install Dependencies') {
steps {
sh '''
echo "Installing Packer and Docker if not present..."
if ! command -v packer &> /dev/null
then
echo "Packer not found, installing..."
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install -y packer
fi
if ! command -v docker &> /dev/null
then
echo "Docker not found, installing..."
sudo apt-get update
sudo apt-get install -y docker.io
sudo systemctl start docker
sudo systemctl enable docker
fi
'''
}
}
stage('Packer Init') {
steps {
sh 'packer init .'
}
}
stage('Packer Validate') {
steps {
sh 'packer validate ${PACKER_TEMPLATE}'
}
}
stage('Packer Build') {
steps {
sh 'packer build -var "docker_image=ubuntu:20.04" ${PACKER_TEMPLATE}'
}
}
stage('Tag Docker Image') {
steps {
sh '''
# Grab the image built by Packer (tagged myapp:latest by the docker-tag
# post-processor), not the unprovisioned ubuntu:20.04 base image.
IMAGE_ID=$(docker images -q myapp:latest)
docker tag $IMAGE_ID your-docker-repo/${DOCKER_IMAGE_TAG}
'''
}
}
stage('Push Docker Image') {
steps {
withCredentials([usernamePassword(credentialsId: 'dockerhub-credentials', usernameVariable: 'DOCKERHUB_USER', passwordVariable: 'DOCKERHUB_PASS')]) {
sh '''
echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USER" --password-stdin
docker push your-docker-repo/${DOCKER_IMAGE_TAG}
'''
}
}
}
}
post {
always {
sh 'docker system prune -f'
}
}
}
Explanation of Packer Commands
Several Packer commands are central to this workflow:
- packer fmt: Formats Packer templates for readability and consistency.
- packer init: Initializes the Packer configuration, downloading necessary plugins.
- packer validate: Checks the syntax and configuration of the Packer template.
- packer build: Executes the build process as defined in the Packer template.
These commands ensure that the Packer template is correctly formatted, validated, and built, resulting in a reliable Docker image.
Note
- Credentials: Replace 'dockerhub-credentials' with the ID of your Docker Hub credentials stored in Jenkins.
- Docker Repository: Replace 'your-docker-repo' with your actual Docker repository name.
- Git Repository: Ensure that your Git repository URL is correct and accessible by Jenkins.