Kubernetes Image Builder
This is the official documentation for the Kubernetes Image Builder.
Introduction
The Kubernetes Image Builder is a SIG Cluster Lifecycle sponsored project for building Kubernetes virtual machine images across multiple infrastructure providers. The resulting VM images are specifically intended to be used with Cluster API but should be suitable for other setups that rely on Kubeadm.
CAPI Images
The Image Builder can be used to build images intended for use with Kubernetes CAPI providers. Each provider has its own format of images that it can work with. For example, AWS instances use AMIs, and vSphere uses OVAs.
Prerequisites
Packer and Ansible are used for building these images. This tooling has been forked and extended from the Wardroom project.
- Packer version >= 1.6.0
- Goss plugin for Packer version >= 1.2.0
- Ansible version >= 2.10.0
If any needed binaries are not present, they can be installed to images/capi/.bin with the make deps command. This directory will need to be added to your $PATH.
Providers
- AWS
- Azure
- CloudStack
- DigitalOcean
- GCP
- IBM Cloud
- Nutanix
- OCI
- 3DSOutscale
- OpenStack
- OpenStack remote image building
- Raw
- VirtualBox
- vSphere
- Proxmox
Make targets
Within this repo, there is a Makefile located at images/capi/Makefile that can be used to create the default images.
Run make or make help to see the current list of targets. The targets are categorized into Dependencies, Builds, and Cleaning. The Dependency targets will check that your system has the proper tools installed to run the build for your specific provider. If the dependencies are not present, they will be installed.
Configuration
The images/capi/packer/config directory includes several JSON files that define the default configuration for the images:
File | Description |
---|---|
packer/config/ansible-args.json | A common set of variables that are sent to the Ansible playbook |
packer/config/cni.json | The version of Kubernetes CNI to install |
packer/config/containerd.json | The version of containerd to install and customizations specific to the containerd runtime |
packer/config/kubernetes.json | The version of Kubernetes to install. The default version is kept at n-2. See Customization section below for overriding this value |
Due to OS differences, Windows images have additional configuration in the packer/config/windows folder. See the Windows documentation for more details.
Customization
Several variables can be used to customize the image build.
Variable | Description | Default |
---|---|---|
firstboot_custom_roles_pre firstboot_custom_roles_post node_custom_roles_pre node_custom_roles_post | Each of these four variables allows for giving a space delimited string of custom Ansible roles to run at different times. The “pre” roles run as the very first thing in the playbook (useful for setting up environment specifics like networking changes), and the “post” roles as the very last (useful for undoing those changes, custom additions, etc). Note that the “post” role does run before the “sysprep” role in the “node” playbook, as the “sysprep” role seals the image. If the role is placed in the ansible/roles directory, it can be referenced by name. Otherwise, it must be a fully qualified path to the role. | "" |
disable_public_repos | If set to "true", this will disable all existing package repositories defined in the OS before doing any package installs. The extra_repos variable must be set for package installs to succeed. | "false" |
extra_debs | This can be set to a space delimited string containing the names of additional deb packages to install | "" |
extra_repos | A space delimited string containing the names of files to add to the image containing repository definitions. The files should be given as absolute paths. | "" |
extra_rpms | This can be set to a space delimited string containing the names of additional RPM packages to install | "" |
http_proxy | This can be set to a URL to use as an HTTP proxy during the Ansible stage of building | "" |
https_proxy | This can be set to a URL to use as an HTTPS proxy during the Ansible stage of building | "" |
kubernetes_deb_version | This can be set to the version of Kubernetes which will be installed in Debian-based images | "1.26.7-1.1" |
kubernetes_rpm_version | This can be set to the version of Kubernetes which will be installed in RPM-based images | "1.26.7" |
kubernetes_semver | This can be set to the semantic version of Kubernetes which will be installed in the image | "v1.26.7" |
kubernetes_series | This can be set to the series version of Kubernetes which will be installed in the image | "v1.26" |
no_proxy | This can be set to a comma-delimited list of domains that should be excluded from proxying during the Ansible stage of building | "" |
reenable_public_repos | If set to "false", the package repositories disabled by setting disable_public_repos will remain disabled at the end of the build. | "true" |
remove_extra_repos | If set to "true", the package repositories added to the OS through the use of extra_repos will be removed at the end of the build. | "false" |
pause_image | This can be used to override the default pause image used to hold the network namespace and IP for the pod. | "registry.k8s.io/pause:3.9" |
pip_conf_file | The path to a file to be copied into the image at /etc/pip.conf for use as a global config file. This file will be removed at the end of the build if remove_extra_repos is true . | "" |
containerd_additional_settings | This is a string, base64 encoded, that contains additional configuration for containerd. It must be version 2 and not contain the pause image configuration block. See image-builder/images/capi/ansible/roles/containerd/templates/etc/containerd/config.toml for the template. | null |
load_additional_components | If set to "true", the load_additional_components role will be executed. This needs to be set to "true" if any of additional_url_images, additional_registry_images or additional_executables are set to "true" | "false" |
additional_url_images | Set this to "true" to load additional container images using a tar url. additional_url_images_list var should be set to a comma separated string of tar urls of the container images. | "false" |
additional_registry_images | Set this to "true" to load additional container images using their registry url. additional_registry_images_list var should be set to a comma separated string of registry urls of the container images. | "false" |
additional_executables | Set this to "true" to load additional executables from a url. additional_executables_list var should be set to a comma separated string of urls. additional_executables_destination_path should be set to the destination path of the executables. | "false" |
ansible_user_vars | A space delimited string that the user can pass to use in the ansible roles | "" |
containerd_config_file | Custom containerd config file a user can pass to override the default. Use ansible_user_vars to pass this var | "" |
enable_containerd_audit | If set to "true", auditd will be configured with containerd specific audit controls. | "false" |
kubernetes_enable_automatic_resource_sizing | If set to "true", the kubelet will be configured to automatically size system-reserved for CPU and memory. | "false" |
The variables found in packer/config/*.json or packer/<provider>/*.json should not need to be modified directly. For customization it is better to create a JSON file with your changes and provide it via the PACKER_VAR_FILES environment variable. Variables set in this file will override any previous values. Multiple files can be passed via PACKER_VAR_FILES, with the last file taking precedence over any others.
Examples
Passing custom Kubernetes version
PACKER_FLAGS="--var 'kubernetes_rpm_version=1.28.3' --var 'kubernetes_semver=v1.28.3' --var 'kubernetes_series=v1.28' --var 'kubernetes_deb_version=1.28.3-1.1'" make ...
Passing a single extra var file
PACKER_VAR_FILES=var_file_1.json make ...
Passing multiple extra var files
PACKER_VAR_FILES="var_file_1.json var_file_2.json" make ...
Passing in extra packages to the image
If you wanted to install the RPMs nfs-utils and net-tools, create a file called extra_vars.json and populate it with the following:
{
"extra_rpms": "\"nfs-utils net-tools\""
}
Note that since the extra_rpms variable is a string, and the string must be quoted to preserve the space when placed on the command line, the escaped double-quotes are required.
Then, execute the build (using a Photon OVA as an example) with the following:
PACKER_VAR_FILES=extra_vars.json make build-node-ova-local-photon-3
Configuring Containerd at runtime
Containerd default configuration has the following imports value:
imports = ["/etc/containerd/conf.d/*.toml"]
This allows you to place files at runtime in /etc/containerd/conf.d/ that will then be merged with the rest of the containerd configuration.
For example, to enable containerd metrics, create a file /etc/containerd/conf.d/metrics.toml with the following:
[metrics]
address = "0.0.0.0:1338"
grpc_histogram = false
Disabling default repos and using an internal package mirror
A common use-case within enterprise environments is to have a package repository available on an internal network to install from rather than reaching out to the internet. To support this, you can inject custom repository definitions into the image, and optionally disable the use of the default ones.
For example, to build an image using only an internal mirror, create a file called internal_repos.json and populate it with the following:
{
"disable_public_repos": "true",
"extra_repos": "/home/<user>/mirror.repo",
"remove_extra_repos": "true"
}
This example assumes that you have an RPM repository definition available at /home/<user>/mirror.repo, and that it is correctly configured to point to your internal mirror. It will be added to the image within /etc/yum.repos.d, with all existing repositories found in /etc/yum.repos.d disabled by setting disable_public_repos to "true". Furthermore, the (optional) use of "remove_extra_repos" means that at the end of the build, the repository definition that was added will be removed. This is useful if the image you are building will be shared externally and you do not wish to include a file with internal network services and addresses.
For Ubuntu images, the process works the same, but you would need to add a .list file pointing to your DEB package mirror, as sketched below.
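As a rough sketch, such a DEB repository definition and matching variable file might look like the following; the mirror URL, suite, and file path are placeholders for your own internal mirror.
A /home/<user>/mirror.list file:
deb https://mirror.internal.example/ubuntu focal main restricted universe
And the corresponding variable file:
{
  "disable_public_repos": "true",
  "extra_repos": "/home/<user>/mirror.list",
  "remove_extra_repos": "true"
}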
Then, execute the build (using a Photon OVA as an example) with the following:
PACKER_VAR_FILES=internal_repos.json make build-node-ova-local-photon-3
Setting up an HTTP Proxy
The Packer tool itself honors the standard env vars of HTTP_PROXY, HTTPS_PROXY, and NO_PROXY. If these variables are set and exported, they will be honored if Packer needs to download an ISO during a build. However, in order to use these proxies with Ansible (for use during package installation or binary download), we need to pass them via a JSON file.
For example, to set the HTTP_PROXY env var for the Ansible stage of the build, create a proxy.json and populate it with the following:
{
"http_proxy": "http://proxy.corp.com"
}
Then, execute the build (using a Photon OVA as an example) with the following:
PACKER_VAR_FILES=proxy.json make build-node-ova-local-photon-3
Loading additional components using additional_components.json
{
"additional_executables": "true",
"additional_executables_destination_path": "/path/to/dest",
"additional_executables_list": "http://path/to/exec1,http://path/to/exec2",
"additional_s3": "true",
"additional_s3_endpoint": "https://path-to-s3-endpoint",
"additional_s3_access": "S3_ACCESS_KEY",
"additional_s3_secret": "S3_SECRET_KEY",
"additional_s3_bucket": "some-bucket",
"additional_s3_object": "path/to/object",
"additional_s3_destination_path": "/path/to/dest",
"additional_s3_ceph": "true",
"additional_registry_images": "true",
"additional_registry_images_list": "plndr/kube-vip:0.3.4,plndr/kube-vip:0.3.3",
"additional_url_images": "true",
"additional_url_images_list": "http://path/to/image1.tar,http://path/to/image2.tar",
"load_additional_components": "true"
}
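Assuming the file above is saved as additional_components.json, it can be passed to the build like any other variable file; the Photon OVA target is only an example:
PACKER_VAR_FILES=additional_components.json make build-node-ova-local-photon-3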
Using ansible_user_vars to pass custom variables
{
"ansible_user_vars": "var1=value1 var2={{ user `myvar2`}}",
"myvar2": "value2"
}
Enabling Ansible custom roles
Put the Ansible role files in the ansible/roles directory.
{
"firstboot_custom_roles_pre": "setupRole",
"node_custom_roles_post": "role1 role2"
}
Note, for backwards compatibility reasons, the variable custom_role_names is still accepted as an alternative to node_custom_roles_post, and they are functionally equivalent.
Quick Start
In this tutorial we will cover the basics of how to download and execute the Image Builder.
Installation
As a set of scripts and Makefiles that rely on Packer and Ansible, there is no image-builder binary or application to install. Rather, we need to download the tooling from the GitHub repo and make sure that Packer and Ansible are installed.
To get the latest image-builder source on your machine, choose one of the following methods:
Tarball download:
curl -L https://github.com/kubernetes-sigs/image-builder/tarball/main -o image-builder.tgz
mkdir image-builder
tar xzf image-builder.tgz --strip-components 1 -C image-builder
rm image-builder.tgz
cd image-builder/images/capi
Or, if you’d like to keep tracking the repo updates (requires git):
git clone git@github.com:kubernetes-sigs/image-builder.git
cd image-builder/images/capi
Dependencies
Once you are within the capi directory, you can execute make or make help to see all the possible make targets. Before we can build an image, we need to make sure that Packer and Ansible are installed on your system. You may already have them: Mac users may have installed them via brew, or you may have downloaded them directly.
If you want the image-builder to install these tools for you, they can be installed by executing make deps. This will install dependencies into image-builder/images/capi/.bin if they are not already on your system. make deps will first check if Ansible and Packer are available and, if they are, will use the existing installations.
Looking at the output from make deps, if Ansible or Packer were installed into the .bin directory, you’ll need to add that to your PATH environment variable before they can be used. Assuming you are still in images/capi, you can do that with the following:
export PATH=$PWD/.bin:$PATH
Builds
With the CAPI image builder installed and dependencies satisfied, you are now ready to build an image. In general, this is done via make targets, and each provider (e.g. AWS, GCE, etc.) will have different requirements for what information needs to be provided (such as cloud provider authentication credentials). Certain providers may have dependencies that are not satisfied by make deps; for example, the vSphere provider needs access to a hypervisor (VMware Fusion on macOS, VMware Workstation on Linux). See the specific documentation for your desired provider for more details.
Building Images for 3DS OUTSCALE
Prerequisites for 3DS OUTSCALE
- An Outscale account
- The Outscale CLI (osc-cli) installed and configured
- Set environment variables for OSC_ACCESS_TOKEN, OSC_SECRET_TOKEN and OSC_REGION
Building Images
The build prerequisites for using image-builder for building Outscale images are managed by running:
make deps-osc
From the images/capi directory, run make build-osc-<OS> where <OS> is the desired operating system. The available choices are listed via make help.
Configuration
In addition to the configuration found in images/capi/packer/config, the outscale directory includes several JSON files that define the default configuration for the different operating systems.
File | Description |
---|---|
ubuntu-2004.json | The settings for the Ubuntu 20.04 image |
You must have your Access Keys and your Account Id. Please set the following environment variables before building the image:
OSC_SECRET_KEY: Outscale Secret Key
OSC_REGION: Outscale Region
OSC_ACCESS_KEY: Outscale Access Key Id
OSC_ACCOUNT_ID: Outscale Account Id
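A minimal sketch of exporting these before a build (all values are placeholders):
export OSC_ACCESS_KEY=<access-key>
export OSC_SECRET_KEY=<secret-key>
export OSC_REGION=<region>
export OSC_ACCOUNT_ID=<account-id>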
Building Images for AWS
Prerequisites for Amazon Web Services
- An AWS account
- The AWS CLI installed and configured
Building Images
The build prerequisites for using image-builder for building AMIs are managed by running:
make deps-ami
From the images/capi directory, run make build-ami-<OS>, where <OS> is the desired operating system. The available choices are listed via make help.
To build all available OSes, use the -all target. If you want to build them in parallel, use make -j. For example, make -j build-ami-all.
The output of a successful make command includes a list of created AMIs. To format them as a table, copy the output and pipe it through the following:
echo 'us-fake-1: ami-123
us-fake-2: ami-234' | column -t | sed 's/^/| /g' | sed 's/: //g' | sed 's/ami/| ami/g' | sed 's/$/ |/g'
| us-fake-1 | ami-123 |
| us-fake-2 | ami-234 |
Note: If making the images public (the default), you must use one of the Public CentOS images as a base rather than a Marketplace image.
Configuration
In addition to the configuration found in images/capi/packer/config, the ami directory includes several JSON files that define the default configuration for the different operating systems.
File | Description |
---|---|
amazon-2.json | The settings for the Amazon 2 Linux image |
centos-7.json | The settings for the CentOS 7 image |
flatcar.json | The settings for the Flatcar image |
rhel-8.json | The settings for the RHEL 8 image |
rockylinux.json | The settings for the Rocky Linux image |
ubuntu-2004.json | The settings for the Ubuntu 20.04 image |
ubuntu-2204.json | The settings for the Ubuntu 22.04 image |
ubuntu-2404.json | The settings for the Ubuntu 24.04 image |
windows-2019.json | The settings for the Windows 2019 image |
Common AWS options
This table lists several common options that a user may want to set via PACKER_VAR_FILES to customize their build behavior. This is not an exhaustive list, and greater explanation can be found in the Packer documentation for the Amazon AMI builder.
Variable | Description | Default |
---|---|---|
ami_groups | A list of groups that have access to launch the resulting AMI. | "all" |
ami_regions | A list of regions to copy the AMI to. | "ap-south-1,eu-west-3,eu-west-2,eu-west-1,ap-northeast-2,ap-northeast-1,sa-east-1,ca-central-1,ap-southeast-1,ap-southeast-2,eu-central-1,us-east-1,us-east-2,us-west-1,us-west-2" |
ami_users | A list of AWS account IDs that have access to launch the resulting AMI. | "all" |
aws_region | The AWS region to build the AMI within. | "us-east-1" |
encrypted | Indicates whether or not to encrypt the volume. | "false" |
kms_key_id | ID, alias or ARN of the KMS key to use for boot volume encryption. | "" |
snapshot_groups | A list of groups that have access to create volumes from the snapshot. | "" |
snapshot_users | A list of AWS account IDs that have access to create volumes from the snapshot. | "" |
skip_create_ami | If true, Packer will not create the AMI. Useful for setting to true during a build test stage. | false |
In the below examples, the parameters can be set via a variable file passed with PACKER_VAR_FILES. See Customization for examples.
Examples
Building private AMIs
Set ami_groups="" and snapshot_groups="" parameters to ensure you end up with a private AMI. Both parameters default to "all".
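For example, a variable file passed via PACKER_VAR_FILES could look like the following (the file name private_ami.json is arbitrary):
{
  "ami_groups": "",
  "snapshot_groups": ""
}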
Encrypted AMIs
Set encrypted=true for encrypted AMIs to allow for use with EC2 instances backed by encrypted root volumes. You must also set ami_groups="" and snapshot_groups="" for this to work.
Sharing private AMIs with other AWS accounts
Set ami_users="012345789012,0123456789013" to make your AMI visible to a select number of other AWS accounts, and snapshot_users="012345789012,0123456789013" to allow the EBS snapshot backing the AMI to be copied.
If you are using encrypted root volumes in multiple accounts, you will want to build one unencrypted AMI in a root account, setting snapshot_users, and then use your own methods to copy the snapshot with encryption turned on into other accounts.
Limiting AMI Regions
By default images are copied to many of the available AWS regions. See ami_regions in AWS options for the default list. The list of all available regions can be obtained by running:
aws ec2 describe-regions --query "Regions[].{Name:RegionName}" --output text | paste -sd "," -
To limit the regions, provide the ami_regions variable as a comma-delimited list of AWS regions. For example, to build all images in us-east-1 and copy only to us-west-2, set ami_regions="us-west-2".
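A minimal variable file for this (the file name limit_regions.json is arbitrary) might be:
{
  "aws_region": "us-east-1",
  "ami_regions": "us-west-2"
}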
Required Permissions to Build the AWS AMIs
The Packer documentation for the Amazon AMI builder supplies a suggested set of minimum permissions.
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action" : [
"ec2:AttachVolume",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CopyImage",
"ec2:CreateImage",
"ec2:CreateKeypair",
"ec2:CreateSecurityGroup",
"ec2:CreateSnapshot",
"ec2:CreateTags",
"ec2:CreateVolume",
"ec2:DeleteKeyPair",
"ec2:DeleteSecurityGroup",
"ec2:DeleteSnapshot",
"ec2:DeleteVolume",
"ec2:DeregisterImage",
"ec2:DescribeImageAttribute",
"ec2:DescribeImages",
"ec2:DescribeInstances",
"ec2:DescribeRegions",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSnapshots",
"ec2:DescribeSubnets",
"ec2:DescribeTags",
"ec2:DescribeVolumes",
"ec2:DetachVolume",
"ec2:GetPasswordData",
"ec2:ModifyImageAttribute",
"ec2:ModifyInstanceAttribute",
"ec2:ModifySnapshotAttribute",
"ec2:RegisterImage",
"ec2:RunInstances",
"ec2:StopInstances",
"ec2:TerminateInstances"
],
"Resource" : "*"
}]
}
Testing Images
Connect remotely to an instance created from the image and run the Node Conformance tests using the following commands:
Initialize a CNI
As root:
(copied from containernetworking/cni)
mkdir -p /etc/cni/net.d
wget -q https://github.com/containernetworking/cni/releases/download/v0.7.0/cni-amd64-v0.7.0.tgz
tar -xzf cni-amd64-v0.7.0.tgz --directory /etc/cni/net.d
cat >/etc/cni/net.d/10-mynet.conf <<EOF
{
"cniVersion": "0.2.0",
"name": "mynet",
"type": "bridge",
"bridge": "cni0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"subnet": "10.22.0.0/16",
"routes": [
{ "dst": "0.0.0.0/0" }
]
}
}
EOF
cat >/etc/cni/net.d/99-loopback.conf <<EOF
{
"cniVersion": "0.2.0",
"name": "lo",
"type": "loopback"
}
EOF
Run the e2e node conformance tests
As a non-root user:
wget https://dl.k8s.io/$(< /etc/kubernetes_community_ami_version)/kubernetes-test.tar.gz
tar -zxvf kubernetes-test.tar.gz kubernetes/platforms/linux/amd64
cd kubernetes/platforms/linux/amd64
sudo ./ginkgo --nodes=8 --flakeAttempts=2 --focus="\[Conformance\]" --skip="\[Flaky\]|\[Serial\]|\[sig-network\]|Container Lifecycle Hook" ./e2e_node.test -- --k8s-bin-dir=/usr/bin --container-runtime=remote --container-runtime-endpoint unix:///var/run/containerd/containerd.sock --container-runtime-process-name /usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}"
Building Images for Azure
These images are designed for use with Cluster API Provider Azure (CAPZ). Learn more on using custom images with CAPZ.
Prerequisites for Azure
- An Azure account
- The Azure CLI installed and configured
- Set environment variables for AZURE_SUBSCRIPTION_ID, AZURE_CLIENT_ID, AZURE_CLIENT_SECRET
- Set optional environment variables RESOURCE_GROUP_NAME, BUILD_RESOURCE_GROUP_NAME, STORAGE_ACCOUNT_NAME, AZURE_LOCATION & GALLERY_NAME to override the default values
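A rough sketch of the required exports (values are placeholders for your own subscription and service principal):
export AZURE_SUBSCRIPTION_ID=<subscription-id>
export AZURE_CLIENT_ID=<client-id>
export AZURE_CLIENT_SECRET=<client-secret>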
Building Images
The build prerequisites for using image-builder for building Azure images are managed by running:
make deps-azure
Building Managed Images in Shared Image Galleries
From the images/capi directory, run make build-azure-sig-ubuntu-1804
Building VHDs
From the images/capi directory, run make build-azure-vhd-ubuntu-1804
If building the Windows images from a Mac there is a known issue with connectivity. Please see details on running macOS with ansible.
Hyper-V Generation 2 VHDs
Most of the images built from the images/capi directory for Azure will be Hyper-V Generation 1 images. There are also a few available configurations to build Generation 2 VMs. The naming pattern is identical to Generation 1 images, with -gen2 appended to the end of the image name. For example:
# Generation 1 image
make build-azure-sig-ubuntu-1804
# Generation 2 image
make build-azure-sig-ubuntu-1804-gen2
Generation 2 images may only be used with Shared Image Gallery, not VHD.
Confidential VM Images
Confidential VMs require specific generation 2 OS images. The naming pattern of those images includes the suffix -cvm. For example:
# Ubuntu 20.04 LTS for Confidential VMs
make build-azure-sig-ubuntu-2004-cvm
# Windows 2019 with containerd for Confidential VMs
make build-azure-sig-windows-2019-containerd-cvm
Configuration
Common Azure options
This table lists several common options that a user may want to set via PACKER_VAR_FILES to customize their build behavior. This is not an exhaustive list, and greater explanation can be found in the Packer documentation for the Azure ARM builder.
Variable | Description | Default |
---|---|---|
community_gallery_image_id | Use image from a Community gallery as a base image instead of default one from the marketplace. Depending on the target distro, fields like image_offer etc. might need to be explicitly emptied. | "" |
debug_tools | Set to true to install the az command-line tool for troubleshooting and debugging purposes. By default, az is not installed. | "" |
direct_shared_gallery_image_id | Use image from Directly shared gallery as a base image instead of default one from the marketplace. Depending on the target distro, fields like image_offer etc. might need to be explicitly emptied. | "" |
private_virtual_network_with_public_ip | This value allows you to set a virtual_network_name and obtain a public IP. If this value is not set and virtual_network_name is defined Packer is only allowed to be executed from a host on the same subnet / virtual network. | "" |
virtual_network_name | Use a pre-existing virtual network for the VM. This option enables private communication with the VM, no public IP address is used or provisioned (unless you set private_virtual_network_with_public_ip). | "" |
virtual_network_resource_group_name | If virtual_network_name is set, this value may also be set. If virtual_network_name is set, and this value is not set the builder attempts to determine the resource group containing the virtual network. If the resource group cannot be found, or it cannot be disambiguated, this value should be set. | "" |
virtual_network_subnet_name | If virtual_network_name is set, this value may also be set. If virtual_network_name is set, and this value is not set the builder attempts to determine the subnet to use with the virtual network. If the subnet cannot be found, or it cannot be disambiguated, this value should be set. | "" |
Developer
If you are adding features to image-builder then it is sometimes useful to work with the images directly. This section gives some tips.
Provision a VM directly from a VHD
After creating a VHD, create a managed image using the url output from make build-azure-vhd-<image> and use it to create the VM:
az image create -n testvmimage -g cluster-api-images --os-type <Windows/Linux> --source <storage url for vhd file>
az vm create -n testvm --image testvmimage -g cluster-api-images
Debugging Packer scripts
There are several ways to debug Packer scripts: https://developer.hashicorp.com/packer/docs/debugging
Building Images for CloudStack
Hypervisor
The image is built using the KVM hypervisor as a qcow2 image. It can then be converted into an ova for VMware and a vhd for XenServer.
Prerequisites for building images
Images can only be built on Linux machines, and the process has been tested on Ubuntu 18.04 LTS. Execute the following command to install qemu-kvm and other packages if you are running Ubuntu 18.04 LTS.
Installing packages to use qemu-img
$ sudo -i
# apt install qemu-kvm libvirt-bin qemu-utils
If you’re on Ubuntu 20.04 LTS, then execute the following command to install qemu-kvm packages.
$ sudo -i
# apt install qemu-kvm libvirt-daemon-system libvirt-clients virtinst cpu-checker libguestfs-tools libosinfo-bin
Adding your user to the kvm group
$ sudo usermod -a -G kvm <yourusername>
$ sudo chown root:kvm /dev/kvm
Then exit and log back in to make the change take place.
Building Images
The build prerequisites for using image-builder for building CloudStack images are managed by running:
$ cd image-builder/images/capi
$ make deps-qemu
KVM Hypervisor
From the images/capi directory, run make build-qemu-xxxx-yyyy. The image is built and located in images/capi/output/BUILD_NAME+kube-KUBERNETES_VERSION. Please replace xxxx with the OS distribution and yyyy with the OS version, depending on what you want to build the image for.
For building a ubuntu-2004 based CAPI image, run the following commands -
$ git clone https://github.com/kubernetes-sigs/image-builder.git
$ cd image-builder/images/capi/
$ cat > extra_vars.json <<EOF
{
"ansible_user_vars": "provider=cloudstack"
}
EOF
$ PACKER_VAR_FILES=extra_vars.json make clean build-qemu-ubuntu-2004
XenServer Hypervisor
Run the following script to ensure the required dependencies are met:
$ ./hack/ensure-vhdutil.sh
Follow the preceding steps to build the qcow2 CAPI template for KVM. It will display the location of the template to the terminal as shown:
$ make build-qemu-ubuntu-2004
.............................
Builds finished. The artifacts of successful builds are:
qemu: VM files in directory: ./output/ubuntu-2004-kube-v1.21.10
Here the build-name is ubuntu-2004-kube-v1.21.10. Once completed, run the following commands to convert the template to a XenServer compatible template:
$ ./hack/convert-cloudstack-image.sh ./output/<build-name>/<build-name> x
Creating XenServer Export for ubuntu-2004-kube-v1.21.10
NOTE: For better performance, we will do the overwritten convert!
Done! Convert to ubuntu-2004-kube-v1.21.10.vhd.
Back up source to ubuntu-2004-kube-v1.21.10.vhd.bak.
Converting to ubuntu-2004-kube-v1.21.10-xen.vhd.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Done!
Created .vhd file, now zipping
ubuntu-2004-kube-v1.21.10 exported for XenServer: ubuntu-2004-kube-v1.21.10-xen.vhd.bz2
VMware Hypervisor
Run the following script to ensure the required dependencies are met:
$ ./hack/ensure-ovftool.sh
Follow the preceding steps to build the qcow2 CAPI template for KVM. It will display the location of the template to the terminal as shown:
$ make build-qemu-ubuntu-2004
.............................
Builds finished. The artifacts of successful builds are:
qemu: VM files in directory: ./output/ubuntu-2004-kube-v1.21.10
Here the build-name is ubuntu-2004-kube-v1.21.10. Once completed, run the following commands to convert the template to a VMware compatible template:
$ ./hack/convert-cloudstack-image.sh ./output/<build-name>/<build-name> v
Creating VMware Export for ubuntu-2004-kube-v1.21.10
/usr/bin/ovftool: line 10: warning: setlocale: LC_CTYPE: cannot change locale (en_US.UTF-8): No such file or directory
Opening VMX source: ubuntu-2004-kube-v1.21.10-vmware.vmx
Opening OVA target: ubuntu-2004-kube-v1.21.10-vmware.ova
Writing OVA package: ubuntu-2004-kube-v1.21.10-vmware.ova
Transfer Completed
Completed successfully
Prebuilt Images
For convenience, prebuilt images can be found here
Building Images for DigitalOcean
Prerequisites for DigitalOcean
- A DigitalOcean account
- The DigitalOcean CLI (doctl) installed and configured
- Set an environment variable for your DIGITALOCEAN_ACCESS_TOKEN
Building Images
The build prerequisites for using image-builder for building DigitalOcean images are managed by running:
make deps-do
From the images/capi directory, run make build-do-<OS> where <OS> is the desired operating system. The available choices are listed via make help.
Configuration
In addition to the configuration found in images/capi/packer/config, the digitalocean directory includes several JSON files that define the default configuration for the different operating systems.
File | Description |
---|---|
centos-7.json | The settings for the CentOS 7 image |
ubuntu-2004.json | The settings for the Ubuntu 20.04 image |
ubuntu-2204.json | The settings for the Ubuntu 22.04 image |
ubuntu-2404.json | The settings for the Ubuntu 24.04 image |
Building CAPI Images for Google Cloud Platform (GCP)
Prerequisites
Create Service Account
From your Google Cloud console, follow these instructions to create a new service account with Editor permissions. Thereafter, generate a JSON key and store it somewhere safe.
Use Cloud Shell to install Ansible and Packer and proceed with building the CAPI-compliant VM image.
Install Ansible and Packer
Start by launching the Google Cloud Shell.
# Export the GCP project id you want to build images in
$ export GCP_PROJECT_ID=<project-id>
# Export the path to the service account credentials created in the step above
$ export GOOGLE_APPLICATION_CREDENTIALS=</path/to/serviceaccount-key.json>
# If you don't have the image-builder repository
$ git clone https://github.com/kubernetes-sigs/image-builder.git
$ cd image-builder/images/capi/
# Run the target make deps-gce to install Ansible and Packer
$ make deps-gce
Run the Make target to generate GCE images.
From the images/capi directory, run the make build-gce-ubuntu-<version> command depending on which Ubuntu version you want to build the image for.
For instance, to build an image for Ubuntu 24.04, run
$ make build-gce-ubuntu-2404
To build all gce ubuntu images, run
make build-gce-all
Configuration
The gce sub-directory inside images/capi/packer stores JSON configuration files for the supported operating systems.
File | Description |
---|---|
ubuntu-2004.json | Settings for Ubuntu 20.04 image |
ubuntu-2204.json | Settings for Ubuntu 22.04 image |
ubuntu-2404.json | Settings for Ubuntu 24.04 image |
rhel-8.json | Settings for RHEL 8 image |
Common GCP options
This table lists several common options that a user may want to set via
PACKER_VAR_FILES
to customize their build behavior. This is not an exhaustive
list, and greater explanation can be found in the
Packer documentation for the Google Cloud Platform builder.
Variable | Description | Default |
---|---|---|
zone | The GCP zone in which to launch the VM instance. | null |
project_id | The GCP project ID for the deployment. | ${GCP_PROJECT_ID} |
machine_type | The machine type to use for the VM instance (e.g., n1-standard-1, n2-standard-2, etc.). | "n1-standard-1" |
The parameters can be set via a variable file passed with PACKER_VAR_FILES. See Customization for examples.
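For instance, a variable file (named gcp_vars.json here purely for illustration) overriding a couple of these options might look like:
{
  "zone": "us-central1-a",
  "machine_type": "n2-standard-2"
}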
List Images
List all images by running the following command in the console
$ gcloud compute images list --project ${GCP_PROJECT_ID} --no-standard-images
NAME PROJECT FAMILY DEPRECATED STATUS
cluster-api-ubuntu-2404-v1-17-11-1603233313 myregistry-292303 capi-ubuntu-2404-k8s-v1-17 READY
cluster-api-ubuntu-2004-v1-17-11-1603233874 myregistry-292303 capi-ubuntu-2004-k8s-v1-17 READY
Delete Images
To delete images from the gcloud shell, run the following:
$ gcloud compute images delete [image 1] [image2]
where [image 1] and [image 2] refer to the names of the images to be deleted.
Building Images for Hetzner Hcloud
Prerequisites for Hetzner Hcloud
- A Hetzner account
- Set the environment variables HCLOUD_LOCATION and HCLOUD_TOKEN for your hcloud project
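A minimal sketch of setting these before a build (values are placeholders):
export HCLOUD_LOCATION=<location>
export HCLOUD_TOKEN=<token>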
Building Images
The build prerequisites for using image-builder for building hcloud images are managed by running:
make deps-hcloud
From the images/capi directory, run make build-hcloud-<OS> where <OS> is the desired operating system. The available choices are listed via make help. For example, use make build-hcloud-ubuntu-2404 to build an Ubuntu 24.04 snapshot in hcloud.
Configuration
In addition to the configuration found in images/capi/packer/config, the hcloud directory includes several JSON files that define the default configuration for the different operating systems.
File | Description |
---|---|
centos-7.json | The settings for the CentOS 7 image |
flatcar.json | The settings for the Flatcar image |
rockylinux-8.json | The settings for the RockyLinux 8 image |
rockylinux-9.json | The settings for the RockyLinux 9 image |
ubuntu-2004.json | The settings for the Ubuntu 20.04 image |
ubuntu-2204.json | The settings for the Ubuntu 22.04 image |
ubuntu-2404.json | The settings for the Ubuntu 24.04 image |
Building CAPI Images for IBMCLOUD (CAPIBM)
CAPIBM - PowerVS
Prerequisites for PowerVS Machine Image
- An IBM Cloud account
- PowerVS Service Instance
- Cloud Object Storage
Building Images
The build prerequisites for using image-builder for building PowerVS images are managed by running:
$ git clone https://github.com/kubernetes-sigs/image-builder.git
$ cd image-builder/images/capi/
$ make deps-powervs
From the images/capi directory, run make build-powervs-centos-8. The image is built and uploaded to your bucket capibm-powervs-{BUILD_NAME}-{KUBERNETES_VERSION}-{BUILD_TIMESTAMP}.
Note: Fill the required fields which are listed here in a JSON file and pass it to the PACKER_VAR_FILES environment variable while building the image.
For building a centos-streams8 based CAPI image, run the following commands -
$ ANSIBLE_SSH_ARGS="-o HostKeyAlgorithms=+ssh-rsa -o PubkeyAcceptedAlgorithms=+ssh-rsa" PACKER_VAR_FILES=variables.json make build-powervs-centos-8
Configuration
In addition to the configuration found in images/capi/packer/config, the powervs directory includes several JSON files that define the default configuration for the different operating systems.
File | Description |
---|---|
centos-8.json | The settings for the CentOS 8 image |
centos-9.json | The settings for the CentOS 9 image |
Common PowerVS options
This table lists several common options that a user may want to set via PACKER_VAR_FILES to customize their build behavior.
Variable | Description | Default |
---|---|---|
account_id | IBM Cloud account id. | "" |
apikey | IBM Cloud API key. | "" |
capture_cos_access_key | The Cloud Object Storage access key. | "" |
capture_cos_bucket | The Cloud Object Storage bucket to upload the image within. | "" |
capture_cos_region | The Cloud Object Storage region to upload the image within. | "" |
capture_cos_secret_key | The Cloud Object Storage secret key. | "" |
key_pair_name | The name of the SSH key pair provided to the server for authenticating users (looked up in the tenant’s list of keys). | "" |
region | The PowerVS service instance region to build the image within. | "" |
service_instance_id | The PowerVS service instance id to build the image within. | "" |
ssh_private_key_file | The absolute path to the SSH key file. | "" |
zone | The PowerVS service instance zone to build the image within. | "" |
dhcp_network | The PowerVS image with DHCP support. | false |
The parameters can be set via a variable file and passed via PACKER_VAR_FILES. See Customization for examples.
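As a sketch, a variables.json built from the options above might look like the following; all values are placeholders:
{
  "account_id": "<account-id>",
  "apikey": "<api-key>",
  "key_pair_name": "<ssh-key-name>",
  "ssh_private_key_file": "/home/<user>/.ssh/id_rsa",
  "region": "<region>",
  "zone": "<zone>",
  "service_instance_id": "<service-instance-id>",
  "capture_cos_access_key": "<cos-access-key>",
  "capture_cos_secret_key": "<cos-secret-key>",
  "capture_cos_bucket": "<bucket>",
  "capture_cos_region": "<cos-region>"
}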
Note:
- When setting dhcp_network: true, you need to build an OS image with certain network settings using the pvsadm tool and replace the fields with the custom image details.
- Clone the image-builder repo and run the make build commands from a system where the DHCP private IP can be reached and is SSH-able.
CAPIBM - VPC
Hypervisor
The image is built using KVM hypervisor.
Prerequisites for VPC Machine Image
Installing packages to use qemu-img
Execute the following command to install qemu-kvm and other packages if you are running Ubuntu 18.04 LTS.
$ sudo -i
# apt install qemu-kvm libvirt-bin qemu-utils
If you’re on Ubuntu 20.04 LTS, then execute the following command to install qemu-kvm packages.
$ sudo -i
# apt install qemu-kvm libvirt-daemon-system libvirt-clients virtinst cpu-checker libguestfs-tools libosinfo-bin
Adding your user to the kvm group
$ sudo usermod -a -G kvm <yourusername>
$ sudo chown root:kvm /dev/kvm
Then exit and log back in to make the change take place.
Building Images
The build prerequisites for using image-builder for building VPC images are managed by running:
cd image-builder/images/capi
make deps-qemu
From the images/capi directory, run make build-qemu-ubuntu-xxxx. The image is built and located in images/capi/output/{BUILD_NAME}-kube-{KUBERNETES_VERSION}. Please replace xxxx with 1804 or 2004 depending on the version you want to build the image for.
For building a ubuntu-2004 based CAPI image, run the following commands -
$ git clone https://github.com/kubernetes-sigs/image-builder.git
$ cd image-builder/images/capi/
$ make build-qemu-ubuntu-2004
Customizing Build
Users may want to customize their build behavior. The parameters can be set via a variable file and passed via PACKER_VAR_FILES. See Customization for examples.
Building CAPI Images for Nutanix Cloud Platform
Prerequisites for Nutanix builder
# If you don't have the image-builder repository
$ git clone https://github.com/kubernetes-sigs/image-builder.git
$ cd image-builder/images/capi/
# Run the target make deps-nutanix to install Ansible and Packer
$ make deps-nutanix
Configure the Nutanix builder
Modify the packer/nutanix/nutanix.json configuration file with credentials and information specific to the Nutanix Prism Central used to build the image; you can also use the corresponding env variables.
This file has the following format:
{
"nutanix_endpoint": "Prism Central IP / FQDN",
"nutanix_port": "9440",
"nutanix_insecure": "false",
"nutanix_username": "Prism Central Username",
"nutanix_password": "Prism Central Password",
"nutanix_cluster_name": "Name of PE Cluster",
"nutanix_subnet_name": "Name of Subnet"
}
Corresponding env variables
NUTANIX_ENDPOINT
NUTANIX_PORT
NUTANIX_INSECURE
NUTANIX_USERNAME
NUTANIX_PASSWORD
NUTANIX_CLUSTER_NAME
NUTANIX_SUBNET_NAME
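If you prefer the environment variables over editing nutanix.json, a rough sketch (placeholder values) is:
export NUTANIX_ENDPOINT=<prism-central-ip-or-fqdn>
export NUTANIX_PORT=9440
export NUTANIX_INSECURE=false
export NUTANIX_USERNAME=<username>
export NUTANIX_PASSWORD=<password>
export NUTANIX_CLUSTER_NAME=<pe-cluster-name>
export NUTANIX_SUBNET_NAME=<subnet-name>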
Additional options
Variable | Description | Default |
---|---|---|
force_deregister | Allow output image override if already exists. | false |
image_delete | Delete image once entire build process is completed. | false |
image_export | Export raw image in the current folder. | false |
image_name | Name of the output image. | BUILD_NAME-kube-KUBERNETES_SEMVER |
source_image_delete | Delete source image once build process is completed | false |
source_image_force | Always download and replace the source image even if it already exists | false |
vm_force_delete | Delete the VM even if the build is not successful. | false |
:warning: If you are using a recent OpenSSH_9 version, adding the -O value in scp_extra_vars may be necessary for servers that do not implement a recent SFTP protocol.
Customizing the Build Process
The builder uses a generic cloud image as its source, which is basically configured by a cloud-init script. It is also possible to start the build process from an ISO image as long as injecting a Kickstart file or similar is possible via OEMDRV media. For more details refer to the packer-plugin-nutanix documentation.
If you prefer to use a different configuration file, you can create it with the same format and export the PACKER_VAR_FILES environment variable containing the full path to it.
Run the Make target to generate Nutanix images.
From the images/capi directory, run the make build-nutanix-<os>-<version> command depending on which OS and version you want to build the image for.
For example, to build an image for Ubuntu 22.04, run
$ make build-nutanix-ubuntu-2204
To build all Nutanix ubuntu images, run
make build-nutanix-all
Output
By default images are stored inside your Nutanix Prism Central Image Library. If you want to use them in a different Prism Central or distribute them, you can set the option "image_export": "true" in your build config file.
In this case the images will be downloaded in raw format onto the machine where you launch the image-builder process.
Configuration
The nutanix sub-directory inside images/capi/packer stores JSON configuration files for each OS, including the necessary config.
File | Description |
---|---|
ubuntu-2004.json | Settings for Ubuntu 20.04 image |
ubuntu-2204.json | Settings for Ubuntu 22.04 image |
rockylinux-8.json | Settings for Rocky Linux 8 image (UEFI) |
rockylinux-9.json | Settings for Rocky Linux 9 image |
rhel-8.json | Settings for RedHat Enterprise Linux 8 image |
rhel-9.json | Settings for RedHat Enterprise Linux 9 image |
flatcar.json | Settings for Flatcar Linux image (beta) |
windows-2022.json | Settings for Windows Server 2022 image (beta) |
OS specific options
RHEL
You need to set your image_url value correctly in your rhel-(8|9).json file with a working Red Hat Enterprise Linux KVM Guest Image URL.
When building the RHEL image, the OS must register itself with the Red Hat Subscription Manager (RHSM). To do this, the current supported method is to supply a username and password via environment variables. The two environment variables are RHSM_USER and RHSM_PASS. Although building RHEL images has been tested via this method, if an error is encountered during the build, the VM is deleted without the machine being unregistered with RHSM. To prevent this, it is recommended to build with the following command:
PACKER_FLAGS=-on-error=ask RHSM_USER=user RHSM_PASS=pass make build-nutanix-rhel-9
The addition of PACKER_FLAGS=-on-error=ask means that if an error is encountered, the build will pause, allowing you to SSH into the machine and unregister manually.
Building Images for OpenStack
Hypervisor
The image is built using KVM hypervisor.
Prerequisites for QCOW2
Execute the following command to install qemu-kvm and other packages if you are running Ubuntu 18.04 LTS.
Installing packages to use qemu-img
$ sudo -i
# apt install qemu-kvm libvirt-bin qemu-utils
If you’re on Ubuntu 20.04 LTS, then execute the following command to install qemu-kvm packages.
$ sudo -i
# apt install qemu-kvm libvirt-daemon-system libvirt-clients virtinst cpu-checker libguestfs-tools libosinfo-bin
Adding your user to the kvm group
$ sudo usermod -a -G kvm <yourusername>
$ sudo chown root:kvm /dev/kvm
Then exit and log back in to make the change take place.
Building Images
The build prerequisites for using image-builder for building qemu images are managed by running:
cd image-builder/images/capi
make deps-qemu
Building QCOW2 Image
From the images/capi directory, run make build-qemu-ubuntu-xxxx. The image is built and located in images/capi/output/BUILD_NAME+kube-KUBERNETES_VERSION. Please replace xxxx with 2004 or 2204 depending on the version you want to build the image for.
To build a Ubuntu 22.04-based CAPI image, run the following commands -
$ git clone https://github.com/kubernetes-sigs/image-builder.git
$ cd image-builder/images/capi/
$ make build-qemu-ubuntu-2204
Building Images on OpenStack
Hypervisor
The image is built using OpenStack.
Prerequisites for OpenStack builds
First, check for prerequisites at Packer docs for the OpenStack builder.
Also ensure that you have a Ubuntu 20.04 or Ubuntu 22.04 cloud image available in your OpenStack instance before continuing as it will need to be referenced. This build process also supports Flatcar Linux, but only Stable has been tested.
Note
Other operating systems could be supported and additions are welcome.
Setup Openstack authentication
Ensure you have set up your method of authentication. See the examples here. You can also check out the packer builder for more information on authentication.
You should be able to run commands against OpenStack before running this builder, otherwise it will fail. You can test with a simple command such as openstack image list.
Building Images
The build prerequisites for using image-builder for building OpenStack images are managed by running:
cd image-builder/images/capi
make deps-openstack
Define variables for OpenStack build
Using the OpenStack Packer provider, an instance will be deployed and an image built from it. A certain set of variables must be defined in a JSON file and referenced as shown below in the build example.
Replace UPPERCASE variables with your values.
{
"source_image": "OPENSTACK_SOURCE_IMAGE_ID",
"networks": "OPENSTACK_NETWORK_ID",
"flavor": "OPENSTACK_INSTANCE_FLAVOR_NAME",
"floating_ip_network": "OPENSTACK_FLOATING_IP_NETWORK_NAME",
"image_name": "KUBE-UBUNTU",
"image_visibility": "public",
"image_disk_format": "raw",
"volume_type": "",
"ssh_username": "ubuntu"
}
Check out images/capi/packer/openstack/packer.json for more variables, such as allowing the use of floating IPs and config drives.
Building Image on OpenStack
From the images/capi directory, run PACKER_VAR_FILES=var_file.json make build-openstack-<DISTRO>.
An instance is built in OpenStack from the source image defined. Once completed, the instance is shut down and the image is created.
This image will default to private; however, this can be controlled via image_visibility.
For building a ubuntu 22.04-based CAPI image with Kubernetes 1.23.15, run the following commands:
Example
$ git clone https://github.com/kubernetes-sigs/image-builder.git
$ cd image-builder/images/capi/
$ make deps-openstack
$ PACKER_VAR_FILES=var_file.json make build-openstack-ubuntu-2204
The resulting image will be named ubuntu-2204-kube-v1.23.15 based on the following format: <OS>-kube-<KUBERNETES_SEMVER>. This can be modified by overriding the image_name variable if required.
Building Images for Oracle Cloud Infrastructure (OCI)
Prerequisites
- An OCI account
- The OCI plugin for Packer supports three options for authentication. You may use any of these options when building the Cluster API images.
Building Images
The build prerequisites for using image-builder for building OCI images are managed by running the following command from the images/capi directory:
make deps-oci
From the images/capi directory, run make build-oci-<OS> where <OS> is the desired operating system. The available choices are listed via make help.
Configuration
In addition to the configuration found in images/capi/packer/config, the oci directory includes several JSON files that define the default configuration for the different operating systems.
File | Description |
---|---|
oracle-linux-8.json | The settings for the Oracle Linux 8 image |
oracle-linux-9.json | The settings for the Oracle Linux 9 image |
ubuntu-1804.json | The settings for the Ubuntu 18.04 image |
ubuntu-2004.json | The settings for the Ubuntu 20.04 image |
ubuntu-2204.json | The settings for the Ubuntu 22.04 image |
windows-2019.json | The settings for the Windows Server 2019 image |
windows-2022.json | The settings for the Windows Server 2022 image |
Common options
This table lists several common options that a user must set via
PACKER_VAR_FILES
to customize their build behavior. This is not an exhaustive
list, and greater explanation can be found in the
Packer documentation for the OCI builder.
Variable | Description | Default | Mandatory |
---|---|---|---|
base_image_ocid | The OCID of an existing image to build upon. | | No |
compartment_ocid | The OCID of the compartment that the instance will run in. | | Yes |
subnet_ocid | The OCID of the subnet within which a new instance is launched and provisioned. | | Yes |
availability_domain | The name of the Availability Domain within which a new instance is launched and provisioned. The names of the Availability Domains have a prefix that is specific to your tenancy. | | Yes |
shape | The compute shape used for the instance that builds the image. | VM.Standard.E4.Flex | No |
Steps to create Packer VAR file
Create a file with the following contents and name it oci.json:
{
"compartment_ocid": "Fill compartment OCID here",
"subnet_ocid": "Fill Subnet OCID here",
"availability_domain": "Fill Availability Domain here"
}
Example make command with Packer VAR file
PACKER_VAR_FILES=oci.json make build-oci-oracle-linux-8
Build an Arm based image
Building an Arm based image requires some overrides to use the correct installation files. An example of an oci.json file for Arm is shown below. The parameters for containerd, crictl, and Kubernetes have to point to the corresponding URL for Arm. The containerd SHA can be changed appropriately; the containerd version is defined in images/capi/packer/config/containerd.json.
{
"compartment_ocid": "Fill compartment OCID here",
"subnet_ocid": "Fill Subnet OCID here",
"availability_domain": "Fill Availability Domain here",
"shape": "VM.Standard.A1.Flex",
"containerd_url": "https://github.com/containerd/containerd/releases/download/v{{user `containerd_version`}}/cri-containerd-cni-{{user `containerd_version`}}-linux-arm64.tar.gz",
"crictl_url": "https://github.com/kubernetes-sigs/cri-tools/releases/download/v{{user `crictl_version`}}/crictl-v{{user `crictl_version`}}-linux-arm64.tar.gz",
"kubernetes_rpm_repo": "https://packages.cloud.google.com/yum/repos/kubernetes-el7-aarch64",
"containerd_sha256": "9ac616b5f23c1d10353bd45b26cb736efa75dfef31a2113baff2435dbc7becb8"
}
Building a Windows image
NOTE: In order to use Windows with CAPI a Baremetal instance is required. This means a Baremetal instance is required for building the image as well. The OCIDs for the 2019 Datacenter edition of Windows can be found in their documentation:
NOTE: It is important to make sure the shape used at image build time is used when launching an instance.
Example: If BM.Standard2.52 is used to build, then only BM.Standard2.52 can be used for the newly created image.
Windows environment variables
Variable | Description | Default | Mandatory |
---|---|---|---|
OPC_USER_PASSWORD | The password to set for the OPC user when creating the image. This will be used for accessing instances using this image. | | Yes |
NOTE: Your new password must be at least 12 characters long and must comply with Microsoft’s password policy. If the password doesn’t comply WinRM will fail to connect to the instance since the password failed to be updated.
NOTE: The OPC_USER_PASSWORD will be set in the winrm_bootstrap.txt file temporarily while building the image. This is required in order for WinRM to access the instance building the image. Once the build process is complete, the password will be deleted along with the file so the password isn’t stored long term in a cleartext file.
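A sketch of supplying the password before the build (the value is a placeholder and must satisfy the policy above):
export OPC_USER_PASSWORD='<compliant-password>'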
Build a Windows based image
The following example JSON would use the Windows Server 2019 Datacenter Edition BM E4 image in the us-ashburn-1 region.
{
"build_name": "windows",
"base_image_ocid": "<image_OCID>",
"ocpus": "128",
"shape": "BM.Standard.E4.128",
"region": "us-ashburn-1",
"compartment_ocid": "Fill compartment OCID here",
"subnet_ocid": "Fill Subnet OCID here",
"availability_domain": "Fill Availability Domain here"
}
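Assuming the JSON above is saved as windows.json (an arbitrary name), the build can then be run with, for example:
PACKER_VAR_FILES=windows.json make build-oci-windows-2019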
Building Raw Images (Baremetal)
Hypervisor
The image is built using KVM hypervisor.
Prerequisites for QCOW2
Execute the following command to install qemu-kvm and other packages if you are running Ubuntu 18.04 LTS.
Installing packages to use qemu-img
$ sudo -i
# apt install qemu-kvm libvirt-bin qemu-utils
If you’re on Ubuntu 20.04 LTS, then execute the following command to install qemu-kvm packages.
$ sudo -i
# apt install qemu-kvm libvirt-daemon-system libvirt-clients virtinst cpu-checker libguestfs-tools libosinfo-bin
Adding your user to the kvm group
$ sudo usermod -a -G kvm <yourusername>
$ sudo chown root:kvm /dev/kvm
Then exit and log back in to make the change take place.
Raw Images
Raw Dependencies
The build prerequisites for using image-builder for building raw images are managed by running:
cd image-builder/images/capi
make deps-raw
Build the Raw Image
From the images/capi directory, run make build-raw-ubuntu-xxxx. The image is built and located in images/capi/output/BUILD_NAME+kube-KUBERNETES_VERSION. Please replace xxxx with 2004 or 2004-efi depending on the version you want to build the image for.
To build a Ubuntu 20.04-based CAPI image, run the following commands -
$ git clone https://github.com/kubernetes-sigs/image-builder.git
$ cd image-builder/images/capi/
$ make build-raw-ubuntu-2004
QCOW2 Images
Raw Dependencies
The build prerequisites for using image-builder for building raw images are managed by running:
cd image-builder/images/capi
make deps-qemu
Building QCOW2 Image
From the images/capi directory, run make build-qemu-ubuntu-xxxx. The image is built and located in images/capi/output/BUILD_NAME+kube-KUBERNETES_VERSION. Please replace xxxx with 1804, 2004, or 2204 depending on the version you want to build the image for.
For building a ubuntu-2204 based CAPI image, run the following commands -
$ git clone https://github.com/kubernetes-sigs/image-builder.git
$ cd image-builder/images/capi/
$ make build-qemu-ubuntu-2204
Building Images for vSphere
Hypervisor
The images may be built using one of the following hypervisors:
OS | Builder | Build target |
---|---|---|
Linux | VMware Workstation (vmware-iso) | build-node-ova-local- |
macOS | VMware Fusion (vmware-iso) | build-node-ova-local- |
vSphere | vSphere >= 6.5 | build-node-ova-vsphere- |
vSphere | vSphere >= 6.5 | build-node-ova-vsphere-base- |
vSphere Clone | vSphere >= 6.5 | build-node-ova-vsphere-clone- |
Linux | VMware Workstation (vmware-vmx) | build-node-ova-local-vmx- |
macOS | VMware Fusion (vmware-vmx) | build-node-ova-local-vmx- |
NOTE: If you want to build all available OSes, use the -all target. If you want to build them in parallel, use make -j. For example, make -j build-node-ova-local-all.
The vsphere builder supports building against a remote VMware vSphere using the standard API.
vmware-vmx builder
During the dev process it’s uncommon for the base OS image to change, but the image building process builds the base image from the ISO every time, adding a significant amount of time to the build.
To reduce image build times during development, you can use the build-node-ova-local-base-<OS> target to build the base image from the ISO. By setting the source_path variable in vmx.json to the *.vmx file from the output, it can then be re-used with the build-node-ova-local-vmx-<OS> build target to speed up the process.
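For example, a possible two-step flow (assuming an Ubuntu 22.04 build; substitute your target OS):
make build-node-ova-local-base-ubuntu-2204
# point source_path in vmx.json at the generated *.vmx file, then:
make build-node-ova-local-vmx-ubuntu-2204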
vsphere-clone builder
The vsphere-base builder allows you to build one-time base OVAs from ISO images using the kickstart process. It leaves the builder user intact in the base OVA to be used by the clone builder later. The vsphere-clone builder builds on top of a base OVA by cloning it and running Ansible on it.
This saves time by allowing repeated iteration on the base OVA without installing the OS from scratch again and again. It also uses linked cloning and the create_snapshot feature to clone faster.
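For example, a typical sequence might be (assuming an Ubuntu 22.04 target; substitute the desired OS):
make build-node-ova-vsphere-base-ubuntu-2204
make build-node-ova-vsphere-clone-ubuntu-2204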
Prerequisites for vSphere builder
Complete the vsphere.json configuration file with credentials and information specific to the remote vSphere hypervisor used to build the OVA file.
This file must have the following format (cluster can be replaced by host):
{
"vcenter_server":"FQDN of vcenter",
"username":"vcenter_username",
"password":"vcenter_password",
"datastore":"template_datastore",
"folder": "template_folder_on_vcenter",
"cluster": "esxi_cluster_used_for_template_creation",
"network": "network_attached_to_template",
"insecure_connection": "false",
"template": "base_template_used_by_clone_builder",
"create_snbapshot": "creates a snaphot on base OVA after building",
"linked_clone": "Uses link cloning in vsphere-clone builder: true, by default"
}
If you prefer to use a different configuration file, you can create it with the same format and export the PACKER_VAR_FILES environment variable containing the full path to it.
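For example, assuming a custom configuration file at /tmp/my-vsphere.json:
export PACKER_VAR_FILES=/tmp/my-vsphere.json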
Building Images
The build prerequisites for using image-builder for building OVAs are managed by running:
make deps-ova
From the images/capi directory, run make build-node-ova-<hypervisor>-<OS>, where <hypervisor> is your target hypervisor (local or vsphere) and <OS> is the desired operating system. The available choices are listed via make help.
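For example, to build an Ubuntu 22.04 OVA against a remote vSphere (one possible combination of hypervisor and OS):
make build-node-ova-vsphere-ubuntu-2204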
OVA Creation
There are two methods that can be used to create the final OVA. By default, an OVF file is created, the manifest is created using SHA256 sums of the OVF and VMDK, and then tar is used to create an OVA containing the OVF, VMDK, and the manifest.
Optionally, ovftool can be used to create the OVA. This has the advantage of validating the created OVF and has a greater chance of producing OVAs that are compliant with more versions of the VMware targets Fusion, Workstation, and vSphere. To use ovftool for OVA creation, set the env variable IB_OVFTOOL to any non-empty value. Optionally, args to ovftool can be passed by setting the env var IB_OVFTOOL_ARGS like the following:
IB_OVFTOOL=1 IB_OVFTOOL_ARGS="--allowExtraConfig" make build-node-ova-<hypervisor>-<OS>
Configuration
In addition to the configuration found in images/capi/packer/config, the ova directory includes several JSON files that define the configuration for the images:
File | Description |
---|---|
centos-7.json | The settings for the CentOS 7 image |
flatcar.json | The settings for the Flatcar image |
photon-3.json | The settings for the Photon 3 image |
photon-4.json | The settings for the Photon 4 image |
rhel-7.json | The settings for the RHEL 7 image |
ubuntu-1804.json | The settings for the Ubuntu 18.04 image |
ubuntu-2004.json | The settings for the Ubuntu 20.04 image |
ubuntu-2204.json | The settings for the Ubuntu 22.04 image |
ubuntu-2204-efi.json | The settings for the Ubuntu 22.04 EFI image |
ubuntu-2404.json | The settings for the Ubuntu 24.04 image |
ubuntu-2404-efi.json | The settings for the Ubuntu 24.04 EFI image |
vsphere.json | Additional settings needed when building on a remote vSphere |
Photon specific options
Enabling .local lookups via DNS
Photon uses systemd-resolved defaults, which means that .local names are resolved using Multicast DNS. If you are deploying to an environment where you require DNS resolution of .local names, add leak_local_mdns_to_dns=yes to ansible_user_vars.
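One way to pass this is via PACKER_FLAGS; a sketch assuming a vSphere Photon 4 build:
PACKER_FLAGS="--var 'ansible_user_vars=leak_local_mdns_to_dns=yes'" make build-node-ova-vsphere-photon-4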
RHEL
When building the RHEL image, the OS must register itself with the Red Hat Subscription Manager (RHSM). To do this, the current supported method is to supply a username and password via environment variables. The two environment variables are RHSM_USER and RHSM_PASS. Although building RHEL images has been tested via this method, if an error is encountered during the build, the VM is deleted without the machine being unregistered with RHSM. To prevent this, it is recommended to build with the following command:
PACKER_FLAGS=-on-error=ask RHSM_USER=user RHSM_PASS=pass make build-node-ova-<hypervisor>-rhel-7
The addition of PACKER_FLAGS=-on-error=ask
means that if an error is encountered, the build will pause, allowing you to SSH into the machine and unregister manually.
Output
The images are built and located in images/capi/output/BUILD_NAME+kube-KUBERNETES_VERSION
Uploading Images
The images are uploaded to the GCS bucket capv-images
. The path to the image depends on the version of Kubernetes:
Build type | Upload location |
---|---|
CI | gs://capv-images/ci/KUBERNETES_VERSION/BUILD_NAME-kube-KUBERNETES_VERSION.ova |
Release | gs://capv-images/release/KUBERNETES_VERSION/BUILD_NAME-kube-KUBERNETES_VERSION.ova |
Uploading the images requires the gcloud and gsutil programs, and either an active Google Cloud account or a service account with an associated key file. The latter may be specified via the environment variable KEY_FILE.
hack/image-upload.py --key-file KEY_FILE BUILD_DIR
First the images are checksummed (SHA256). If a matching checksum already exists remotely then the image is not re-uploaded. Otherwise the images are uploaded to the GCS bucket.
Listing Available Images
Once uploaded the available images may be listed using the gsutil
program, for example:
gsutil ls gs://capv-images/release
Downloading Images
Images may be downloaded via HTTP:
Build type | Download location |
---|---|
CI | http://storage.googleapis.com/capv-images/ci/KUBERNETES_VERSION/BUILD_NAME-kube-KUBERNETES_VERSION.ova |
Release | http://storage.googleapis.com/capv-images/release/KUBERNETES_VERSION/BUILD_NAME-kube-KUBERNETES_VERSION.ova |
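For example, a release OVA could be downloaded with curl (the Kubernetes version and build name shown are placeholders; substitute real values from the listing above):
curl -LO http://storage.googleapis.com/capv-images/release/v1.28.0/ubuntu-2204-kube-v1.28.0.ova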
Testing Images
Accessing the Images
Accessing Local VMs
After the images are built, the VMs from which they are built are prepped for local testing. Simply boot the VM locally with Fusion or Workstation and the machine will be initialized with cloud-init data from the cloudinit directory. The VMs may be accessed via SSH by using the command hack/image-ssh.sh BUILD_DIR capv.
Accessing Remote VMs
After deploying an image to vSphere, use hack/image-govc-cloudinit.sh VM to snapshot the image and update it with cloud-init data from the cloudinit directory. Start the VM; it may then be accessed with ssh -i cloudinit/id_rsa.capi capv@VM_IP.
This hack requires the govc utility from VMware.
Initialize a CNI
As root:
(copied from containernetworking/cni)
mkdir -p /etc/cni/net.d
curl -LO https://github.com/containernetworking/plugins/releases/download/v0.7.0/cni-plugins-amd64-v0.7.0.tgz
tar -xzf cni-plugins-amd64-v0.7.0.tgz --directory /etc/cni/net.d
cat >/etc/cni/net.d/10-mynet.conf <<EOF
{
"cniVersion": "0.2.0",
"name": "mynet",
"type": "bridge",
"bridge": "cni0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"subnet": "10.22.0.0/16",
"routes": [
{ "dst": "0.0.0.0/0" }
]
}
}
EOF
cat >/etc/cni/net.d/99-loopback.conf <<EOF
{
"cniVersion": "0.2.0",
"name": "lo",
"type": "loopback"
}
EOF
Run the e2e node conformance tests
As a non-root user:
curl -LO https://dl.k8s.io/$(</etc/kubernetes-version)/kubernetes-test-linux-amd64.tar.gz
tar -zxvf kubernetes-test-linux-amd64.tar.gz
cd kubernetes/test/bin
sudo ./ginkgo --nodes=8 --flakeAttempts=2 --focus="\[Conformance\]" --skip="\[Flaky\]|\[Serial\]|\[sig-network\]|Container Lifecycle Hook" ./e2e_node.test -- --k8s-bin-dir=/usr/bin --container-runtime=remote --container-runtime-endpoint unix:///var/run/containerd/containerd.sock --container-runtime-process-name /usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}"
The cloudinit Directory
The cloudinit directory contains files that:
- Are example data used for testing
- Are not included in any of the images
- Should not be used in production systems
For more information about how the files in the cloudinit
directory are used, please refer to the section on accessing the images.
Building Images for Proxmox VE
Prerequisites
- A Proxmox cluster
- Set environment variables for PROXMOX_URL, PROXMOX_USERNAME, PROXMOX_TOKEN, PROXMOX_NODE
- Set optional environment variables PROXMOX_ISO_POOL, PROXMOX_BRIDGE, PROXMOX_STORAGE_POOL to override the default values
Networking Requirements
The image build process expects a few things to be in place before the build will complete successfully.
- DHCP must be available to assign the packer VM an IP
- The packer proxmox integration currently does not support the ability to assign static IPs, thus DHCP is required.
- Access to internet hosts is optional, but the VM will not be able to apply any current updates and will need to be manually rebooted to get a clean cloud-init status.
- The build VM must be accessible via SSH from the host running make build-proxmox...
- The build VM must have DHCP, DNS, HTTP, HTTPS and NTP accessibility to successfully update the OS packages.
Building Images
The build prerequisites for using image-builder for building Proxmox VM templates are managed by running the following command from the images/capi directory.
make deps-proxmox
From the images/capi directory, run make build-proxmox-<OS> where <OS> is the desired operating system. The available choices are listed via make help.
Configuration
In addition to the configuration found in images/capi/packer/config, the proxmox directory includes several JSON files that define the default configuration for the different operating systems.
File | Description |
---|---|
ubuntu-2204.json | The settings for the Ubuntu 22.04 image |
ubuntu-2404.json | The settings for the Ubuntu 24.04 image |
The full list of available environment vars can be found in the variables section of images/capi/packer/proxmox/packer.json.
Each variable in this section can also be overridden via the PACKER_FLAGS environment var.
export PACKER_FLAGS="--var 'kubernetes_rpm_version=1.29.6' --var 'kubernetes_semver=v1.29.6' --var 'kubernetes_series=v1.29' --var 'kubernetes_deb_version=1.29.6-1.1'"
make build-proxmox-ubuntu-2204
If different packages are desired, then find the available deb packages here and here.
If using a Proxmox API token, the format of PROXMOX_USERNAME and PROXMOX_TOKEN must look like so:
PROXMOX_USERNAME | PROXMOX_TOKEN |
---|---|
<user>@<realm>!<token name> | <token secret> |
For example:
PROXMOX_USERNAME | PROXMOX_TOKEN |
---|---|
image-builder@pve!capi | 9db7ce4e-4c7f-46ed-8ab4-3c8e98e88c7e |
Then the user (not token) must be given the following permissions on the path / and propagated:
- Datastore.*
- SDN.*
- Sys.AccessNetwork
- Sys.Audit
- VM.*
We suggest creating a new role, since no built-in PVE role covers just these.
Example
Prior to building images you need to ensure you have set the required environment variables:
export PROXMOX_URL="https://pve.example.com:8006/api2/json"
export PROXMOX_USERNAME=<USERNAME>
export PROXMOX_TOKEN=<TOKEN_ID>
export PROXMOX_NODE="pve"
export PROXMOX_ISO_POOL="local"
export PROXMOX_BRIDGE="vmbr0"
export PROXMOX_STORAGE_POOL="local-lvm"
Build an Ubuntu 22.04 template:
make build-proxmox-ubuntu-2204
Windows
Configuration
The images/capi/packer/config/windows
directory includes several JSON files that define the default configuration for the Windows images:
File | Description |
---|---|
packer/config/windows/ansible-args.json | A common set of variables that are sent to the Ansible playbook |
packer/config/windows/cloudbase-init.json | The version of Cloudbase Init to install |
packer/config/windows/common.json | Settings for things like which runtime (Docker or Containerd), pause image and other configuration |
packer/config/windows/kubernetes.json | The version of Kubernetes to install and its install path |
packer/config/windows/containerd.json | The version of containerd to install |
Service Manager
Image-builder provides two ways to configure Windows services. The default is set up using nssm, which configures a Windows service for kubelet by running {{ kubernetes_install_path }}\StartKubelet.ps1, allowing easy editing of command arguments in the startup file. The alternative is to use the Windows-native sc.exe, which uses the kubelet argument --windows-service to install kubelet as a native Windows service with the command line arguments configured on the service. Nssm handles service restarts; if you are using sc.exe you may wish to configure the service restart options on kubelet. To avoid starting kubelet too early, image-builder sets the kubelet service to manual start, which you should consider changing once the node has joined a cluster.
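For example, once the node has joined a cluster, the start type could be switched to automatic with the native service control tool (a sketch; adapt to how you manage the node):
sc.exe config kubelet start= auto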
Important: sc.exe does not support kubeadm's KUBELET_KUBEADM_ARGS, which is used by Cluster API to pass extra user args.
Windows Updates
When building Windows images it is necessary to install OS and security updates. Image Builder provides two variables for choosing which updates get installed; they can be used together or separately (with individual KBs installed first).
To specify the update categories to check, provide a value for windows_updates_categories in packer/config/windows/common.json.
Example:
Install all available updates from all categories.
"windows_updates_categories": "CriticalUpdates SecurityUpdates UpdateRollups"
Published Cloud Provider images such as Azure or AWS are regularly updated so it may be preferable to specify individual patches to install. This can be achieved by specifying the KB numbers of required updates.
To choose individual updates, provide a value for windows_updates_kbs in packer/config/windows/common.json.
Example:
"windows_updates_kbs": "KB4580390 KB4471332"
.
OpenSSH Server
If a connection to the Microsoft Updates server is not possible, you may use the Win32 port of OpenSSH located on GitHub. To do this, set ssh_source_url to the location of the desired OpenSSH version from https://github.com/PowerShell/Win32-OpenSSH/releases/
Example:
"ssh_source_url": "https://github.com/PowerShell/Win32-OpenSSH/releases/download/V8.6.0.0p1-Beta/OpenSSH-Win64.zip"
Using the Ansible Scripts directly
Ansible doesn’t run directly on Windows (WSL works) but can be used to configure a remote Windows host. For faster development you can create a VM and run Ansible against the Windows VM directly without using Packer. This document gives the high-level steps to use Ansible from Linux machines.
Set up Windows machine
Follow the WinRM Setup in the Ansible documentation for configuring WinRM on the Windows machine. Note that ConfigureRemotingForAnsible.ps1 is for development only. Refer to the Ansible WinRM documentation for details on advanced configuration.
After WinRM is installed you can edit the /etc/ansible/hosts
file with the following:
[winhost]
<windows ip>
[winhost:vars]
ansible_user=username
ansible_password=<your password>
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
Then run: ansible-playbook -vvv node_windows.yml --extra-vars "@example.vars.yml"
macOS with ansible
The WinRM connection plugin for Ansible on macOS causes connection issues which can result in ERROR! A worker was found in a dead state. See https://docs.ansible.com/ansible/latest/user_guide/windows_winrm.html#what-is-winrm for more details.
To fix the issue on macOS, set the no_proxy environment variable. Example:
'no_proxy=* make build-azure-vhd-windows-2019'
Annual Channel
The Windows Server Annual channel licensing requires users to host their own image.
In Azure this can be uploaded to an Azure SIG, and the following environment variables should be set to use the source shared image gallery.
export SOURCE_SIG_SUB_ID=<azure sub>
export SOURCE_SIG_RESOURCE_GROUP=<resource group>
export SOURCE_SIG_NAME=<sig name>
export SOURCE_SIG_IMAGE_NAME=<image name>
export SOURCE_SIG_IMAGE_VERSION=<image version>
Including ECR Credential Provider
Starting with Kubernetes v1.27, the cloud credential providers are no longer included in-tree and need to be installed as external binaries and referenced by the kubelet.
To do this with image-builder, you enable the use of ecr-credential-provider by setting the ecr_credential_provider packer variable to true.
Once enabled, the ecr-credential-provider binary will be downloaded, a CredentialProviderConfig config will be created, and the kubelet flags will be updated to reference both of these.
In most setups, this should be all that is needed, but the following vars can be set to override various properties:
variable | default | description |
---|---|---|
ecr_credential_provider_version | “v1.31.0” | The release version of cloud-provider-aws to use |
ecr_credential_provider_os | “linux” | The operating system |
ecr_credential_provider_arch | “amd64” | The architecture |
ecr_credential_provider_base_url | “https://storage.googleapis.com/k8s-artifacts-prod/binaries/cloud-provider-aws” | The base URL of where to get the binary from |
ecr_credential_provider_install_dir | “/opt/bin” | The location to install the binary into |
ecr_credential_provider_binary_filename | “ecr-credential-provider” | The filename to use for the downloaded binary |
ecr_credential_provider_match_images | [“*.dkr.ecr.*.amazonaws.com”, “*.dkr.ecr.*.amazonaws.com.cn”] | An array of globs to use for matching images that should use the credential provider. (If using gov-cloud you may need to change this) |
ecr_credential_provider_aws_profile | “default” | The AWS profile to use with the credential provider |
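A minimal sketch of enabling the provider and overriding one of the defaults above via PACKER_FLAGS (the build target and values are illustrative):
PACKER_FLAGS="--var 'ecr_credential_provider=true' --var 'ecr_credential_provider_install_dir=/usr/local/bin'" make build-ami-ubuntu-2204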
Testing CAPI Images
Goss
Goss is a YAML based serverspec alternative tool for validating a server’s configuration. It is used in conjunction with packer-provisioner-goss to test if the images have all requisite components to work with cluster API.
Support Matrix
*For stock server-specs shipped with repo
OS | Builder |
---|---|
Amazon Linux | aws |
Azure Linux | azure |
CentOS | aws, ova |
Flatcar Container Linux | aws, azure, ova |
PhotonOS | ova |
Ubuntu | aws, azure, gcp, ova |
Windows | aws, azure, ova |
Prerequisites for Running Goss
Goss runs as a part of image building through a Packer provisioner.
Supported arguments are passed through file: packer/config/goss-args.json
{
"goss_arch": "amd64",
"goss_download_path": "",
"goss_entry_file": "goss/goss.yaml",
"goss_format": "json",
"goss_inspect_mode": "true",
"goss_remote_folder": "",
"goss_remote_path": "",
"goss_skip_install": "",
"goss_tests_dir": "packer/goss",
"goss_url": "",
"goss_format_options": "pretty",
"goss_vars_file": "packer/goss/goss-vars.yaml",
"goss_version": "0.3.16"
}
Supported values for some of the arguments can be found here.
Enabling the goss_inspect_mode option lets you build the image even if Goss tests fail.
Manually setup Goss
- Start a VM from a CAPI image
- Copy the complete Goss dir packer/goss to the remote machine
- Download and set up Goss (use the version from goss-args) on the remote machine. Instructions
  - A custom Goss version can be installed if testing custom server-specs supported by a higher version of Goss.
- All the variables used in Goss are declared in packer/goss/goss-vars.yaml
- Add more custom serverspecs to the corresponding Goss files, like goss-command.yaml or goss-kernel-params.yaml:
  some_cli --version:
    exit-status: 0
    stdout: [{{ .Vars.some_cli_version }}]
    stderr: []
    timeout: 0
- Add more custom variables to the corresponding Goss file goss-vars.yaml:
  some_cli_version: "1.4.5+k8s-1"
- Fill in the variable values in goss-vars.yaml or specify them in --vars-inline while executing Goss in the steps below
- Render the Goss template to fix any problems with parsing the variable and serverspec YAMLs:
sudo goss -g goss/goss.yaml --vars /tmp/goss/goss-vars.yaml --vars-inline '{"ARCH":"amd64","OS":"Ubuntu","PROVIDER":"aws","some_cli_version":"1.3.4"}' render
- Run the Goss tests
sudo goss -g goss/goss.yaml --vars /tmp/goss/goss-vars.yaml --vars-inline '{"ARCH":"amd64","OS":"Ubuntu","PROVIDER":"aws","some_cli_version":"1.3.4"}' validate --retry-timeout 0s --sleep 1s -f json -o pretty
Using a container image to build a custom image
This image building approach eliminates the need to manually install and maintain pre-requisite packages like Ansible, Packer, libraries etc. It requires only Docker installed on your machine. All dependencies are handled in Docker while building the container image. This stable container image can be used and reused as a basis for building your own custom images.
Image builder uses GCR to store promoted images in a central registry. Latest container images can be found here - Staging and GA
Building a Container Image
Run the docker-build target of the Makefile:
make docker-build
Using a Container Image
The latest image-builder container image release is available here:
docker pull registry.k8s.io/scl-image-builder/cluster-node-image-builder-amd64:v0.1.39
Examples
- AMI
  - If the AWS CLI is already installed on your machine, you can simply mount the ~/.aws folder that stores all the required credentials.
    docker run -it --rm -v /Users/<user>/.aws:/home/imagebuilder/.aws registry.k8s.io/scl-image-builder/cluster-node-image-builder-amd64:v0.1.39 build-ami-ubuntu-2004
  - Another alternative is to use an aws-creds.env file to load the credentials and pass it during docker run.
    AWS_ACCESS_KEY_ID=xxxxxxx
    AWS_SECRET_ACCESS_KEY=xxxxxxxx
    AWS_DEFAULT_REGION=xxxxxx
    docker run -it --rm --env-file aws-creds.env registry.k8s.io/scl-image-builder/cluster-node-image-builder-amd64:v0.1.39 build-ami-ubuntu-2004
- AZURE
  - You’ll need an az-creds.env file to load the environment variables AZURE_SUBSCRIPTION_ID, AZURE_TENANT_ID, AZURE_CLIENT_ID, and AZURE_CLIENT_SECRET
    AZURE_SUBSCRIPTION_ID=xxxxxxx
    AZURE_TENANT_ID=xxxxxxx
    AZURE_CLIENT_ID=xxxxxxxx
    AZURE_CLIENT_SECRET=xxxxxx
    docker run -it --rm --env-file az-creds.env registry.k8s.io/scl-image-builder/cluster-node-image-builder-amd64:v0.1.39 build-azure-sig-ubuntu-2004
- Proxmox
  - You’ll need a proxmox.env file to load environment variables such as:
    PROXMOX_BRIDGE=vmbr0
    PROXMOX_ISO_POOL=tower
    PROXMOX_NODE=pve-c
    PROXMOX_STORAGE_POOL=cephfs
    PROXMOX_TOKEN=xxxxxxxx
    PROXMOX_URL=https://1.2.3.4:8006/api2/json
    PROXMOX_USERNAME=capmox@pve!capi
  - Docker’s --net=host option is needed to ensure the HTTP server starts with the host IP and not the Docker container IP. This option is Linux specific and thus implies that it can be run only from a Linux machine.
  - The Proxmox provider requires a tmp folder to be mounted within the container to the downloaded_iso_path folder:
    docker run -it --rm --net=host --env-file proxmox.env \
      -v /tmp:/home/imagebuilder/images/capi/downloaded_iso_path \
      registry.k8s.io/scl-image-builder/cluster-node-image-builder-amd64:v0.1.38 build-proxmox-ubuntu-2204
- vSphere OVA
  - vsphere.json configuration file with user and hypervisor credentials. A template of this file can be found here
  - Docker’s --net=host option is needed to ensure the HTTP server starts with the host IP and not the Docker container IP. This option is Linux specific and thus implies that it can be run only from a Linux machine.
    docker run -it --rm --net=host --env PACKER_VAR_FILES=/home/imagebuilder/vsphere.json -v <complete path of vsphere.json>:/home/imagebuilder/vsphere.json registry.k8s.io/scl-image-builder/cluster-node-image-builder-amd64:v0.1.39 build-node-ova-vsphere-ubuntu-2004
In addition to this, further customizations can be done as discussed here.
Customizing containerd
Running sandboxed containers using gVisor
For additional security in a Kubernetes cluster it can be useful to run certain containers in a restricted runtime environment known as a sandbox. One option for this is to use gVisor which provides a layer of separation between a running container and the host kernel.
To use gVisor, the necessary executables and containerd configuration can be added to the image generated with image-builder by setting the containerd_gvisor_runtime flag to true. For example, in a packer configuration file:
{
  "containerd_gvisor_runtime": "true",
  "containerd_gvisor_version": "yyyymmdd"
}
This will tell image-builder to install runsc, the executable for gVisor, as well as the necessary configuration for containerd. Note that containerd_gvisor_version: yyyymmdd can be used to install a specific point release version. The version defaults to latest.
Once you have built your cluster using the new image, you can then create a RuntimeClass object as follows:
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  # The name the RuntimeClass will be referenced by.
  # RuntimeClass is a non-namespaced resource.
  name: gvisor
handler: gvisor
Now, to run a pod in the sandboxed environment you just need to specify the name of the RuntimeClass using runtimeClassName in the Pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: test-sandboxed-pod
spec:
  runtimeClassName: gvisor
  containers:
    - name: sandboxed-container
      image: nginx
Once the pod is up and running, you can verify by using kubectl exec to start a shell on the pod and run dmesg. If the container sandbox is running correctly you should see output similar to the following:
root@sandboxed-container:/# dmesg
[ 0.000000] Starting gVisor...
[ 0.511752] Digging up root...
[ 0.910192] Recruiting cron-ies...
[ 1.075793] Rewriting operating system in Javascript...
[ 1.351495] Mounting deweydecimalfs...
[ 1.648946] Searching for socket adapter...
[ 2.115789] Checking naughty and nice process list...
[ 2.351749] Granting licence to kill(2)...
[ 2.627640] Creating bureaucratic processes...
[ 2.954404] Constructing home...
[ 3.396065] Segmenting fault lines...
[ 3.812981] Setting up VFS...
[ 4.164302] Setting up FUSE...
[ 4.224418] Ready!
You are running a sandboxed container.
Image Builder Releases
The current release of Image Builder is v0.1.39 (October 17, 2024). The corresponding container image is registry.k8s.io/scl-image-builder/cluster-node-image-builder-amd64:v0.1.39.
Release Process
Releasing image-builder is a simple process: project maintainers should be able to follow the steps below in order to create a new release.
Create a tag
Releases in image-builder follow semantic versioning conventions. Currently the project tags only patch releases on the main branch.
- Check out the existing branch and make sure you have the latest changes:
git checkout main
git fetch upstream
- This assumes you have an “upstream” git remote pointing at github.com/kubernetes-sigs/image-builder
git rebase upstream/main
- If the HEAD commit isn’t meant for release, reset to the intended commit before proceeding.
- Ensure you can sign tags:
- Set up GPG, SSH, or S/MIME at GitHub if you haven’t already.
export GPG_TTY=$(tty)
- If signing tags with GPG, this makes your key available to the git tag command.
- Create a new tag:
export IB_VERSION=v0.1.x
- Replace x with the next patch version. For example: v0.1.40.
git tag -s -m "Image Builder ${IB_VERSION}" ${IB_VERSION}
git push upstream ${IB_VERSION}
Promote Image to Production
Pushing the tag in the previous step triggered a job to build the container image and publish it to the staging registry.
- Images are built by the post-image-builder-push-images job. This will push the image to a staging repository.
- Wait for the above post-image-builder-push-images job to complete and for the tagged image to exist in the staging directory.
- If you don’t have a GitHub token, create one via Personal access tokens. Make sure you give the token the repo scope.
- Make sure you have a clone of k8s.io, otherwise the next step will not work.
- Create a GitHub pull request to promote the image:
export GITHUB_TOKEN=<your GH token>
make -C images/capi promote-image
- Note: If your own fork isn’t used as the origin remote, you’ll need to set the USER_FORK variable, e.g. make -C images/capi promote-image USER_FORK=AverageMarcus
This will create a PR in k8s.io and assign the image-builder maintainers. Example PR: https://github.com/kubernetes/k8s.io/pull/5262.
When reviewing this PR, confirm that the addition matches the SHA in the staging repository.
Publish GitHub Release
While waiting for the above PR to merge, create a GitHub draft release for the tag you created in the first step.
- Visit the releases page and click the “Draft a new release” button.
  - If you don’t see that button, you don’t have all maintainer permissions.
- Choose the new tag from the drop-down list, and type it in as the release title too.
- Click the “Generate release notes” button to auto-populate the release description.
- At the top, before ## What's Changed, insert a reference to the container artifact, replacing x with the patch version:
  This release of the image-builder container is available at: `registry.k8s.io/scl-image-builder/cluster-node-image-builder-amd64:v0.1.x`
- Proofread the release notes and make any necessary edits.
- Click the “Save draft” button.
- When the pull request from the previous step has merged, check that the image-builder container is actually available. This may take up to an hour after the PR merges.
docker pull registry.k8s.io/scl-image-builder/cluster-node-image-builder-amd64:${IB_VERSION}
- When docker pull succeeds, return to the GitHub draft release, ensure “Set as the latest release” is true, and click the “Publish release” button.
Update Documentation
There are several files in image-builder itself that refer to the latest release (including this one).
Run make update-release-docs and then create a pull request with the generated changes.
Wait for this PR to merge before communicating the release to users, so image-builder documentation is consistent.
Publicize Release
In the #image-builder channel on the Kubernetes Slack, post a message announcing the new release. Include a link to the GitHub release and a thanks to the contributors:
Image-builder v0.1.40 is now available: https://github.com/kubernetes-sigs/image-builder/releases/tag/v0.1.40
Thanks to all contributors!
Glossary
A
AWS
Amazon Web Services
AMI
Amazon Machine Image
C
CAPA
Cluster API Provider AWS
CAPG
Cluster API Provider GCP
CAPI
The Cluster API is a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes.
CAPIBM
Cluster API Provider IBM Cloud
CAPV
Cluster API Provider vSphere
CAPZ
Cluster API Provider Azure
E
ESXi
ESXi (formerly ESX) is an enterprise-class, type-1 hypervisor developed by VMware. ESXi provides strong separation between VMs and itself, providing strong security boundaries between the guest and host operating systems. ESXi can be used as a standalone entity without vCenter, but this is extremely uncommon and feature-limited: without a higher-level manager (vCenter), ESXi cannot provide its most valuable features, such as High Availability, vMotion, workload balancing, and vSAN (a software-defined storage stack).
G
Goss
Goss is a YAML based serverspec alternative tool for validating a server’s configuration. It is used in conjunction with packer-provisioner-goss to test if the images have all requisite components to work with cluster API.
gVisor
gVisor is an application kernel that provides isolation between running applications and the host operating system. See also sandboxed container.
K
K8s
Kubernetes
Kubernetes
Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.
O
OVA
Open Virtual Appliance
A single package containing a pre-configured virtual machine, usually based on OVF.
OVF
Open Virtualization Format
An open standard for packaging and distributing virtual appliances or, more generally, software to be run in virtual machines.
P
PowerVS
Power Systems Virtual Server
IBM Power Systems Virtual Server is a Power Systems offering.
S
Sandboxed container
A container run in a specialized environment that is isolated from the host kernel.
V
vCenter
vCenter can be thought of as the management layer for ESXi hosts. Hosts can be arranged into datacenters, clusters, or resource pools; vCenter is the centralized monitoring and management control plane for ESXi hosts, allowing centralized management, integration points for other products in the VMware SDDC stack, and third-party solutions such as backup, DR, or networking overlay applications like NSX. vCenter also provides all of the higher-level features of vSphere such as vMotion, vSAN, HA, DRS, Distributed Switches, and more.
VM
A VM is an abstraction of an operating system from the physical machine, created by presenting a “virtual” representation of the physical hardware the OS expects to interact with; this includes, but is not limited to, CPU instruction sets, memory, BIOS, PCI buses, etc. A VM is an entirely self-contained entity and shares no components with the host OS. In the case of vSphere, the host OS is ESXi.
VPC
Virtual Private Cloud
A VPC is a public cloud offering that lets an enterprise establish its own private cloud-like computing environment on shared public cloud infrastructure.
vSphere
vSphere is the product name of the two core components of the VMware Software Defined Datacenter (SDDC) stack, they are vCenter and ESXi.