Kubernetes Image Builder
This is the official documentation for the Kubernetes Image Builder.
Introduction
The Kubernetes Image Builder is a SIG Cluster Lifecycle sponsored project whose purpose is to consolidate several existing projects for building node images to run conformant Kubernetes clusters. Over time, the Image Builder will present a consistent, unified tool to create images across cloud and infrastructure providers, along with all the needed hooks for customization to meet business needs.
Summary
Tutorials
CAPI Images
The Image Builder can be used to build images intended for use with Kubernetes CAPI providers. Each provider has its own format of images that it can work with. For example, AWS instances use AMIs, and vSphere uses OVAs.
Prerequisites
Packer and Ansible are used for building these images. This tooling has been forked and extended from the Wardroom project.
- Packer version >= 1.6.0
- Goss plugin for Packer version >= 1.2.0
- Ansible version >= 2.10.0
If any needed binaries are not present, they can be installed to `images/capi/.bin` with the `make deps` command. This directory will need to be added to your `$PATH`.
Providers
Make targets
Within this repo, there is a Makefile located at `images/capi/Makefile` that can be used to create the default images.
Run `make` or `make help` to see the current list of targets. The targets are categorized into Dependencies, Builds, and Cleaning. The Dependency targets will check that your system has the proper tools installed to run the build for your specific provider. If the dependencies are not present, they will be installed.
Configuration
The `images/capi/packer/config` directory includes several JSON files that define the default configuration for the images:
File | Description |
---|---|
packer/config/ansible-args.json | A common set of variables that are sent to the Ansible playbook |
packer/config/cni.json | The version of Kubernetes CNI to install |
packer/config/containerd.json | The version of containerd to install and customizations specific to the containerd runtime |
packer/config/kubernetes.json | The version of Kubernetes to install. The default version is kept at n-2. See Customization section below for overriding this value |
Due to OS differences, Windows images have additional configuration in the `packer/config/windows` folder. See the Windows documentation for more details.
Customization
Several variables can be used to customize the image build.
Variable | Description | Default |
---|---|---|
custom_role | If set to "true", this will cause image-builder to run a custom Ansible role right before the sysprep role to allow for further customization. | "false" |
custom_role_names | This must be set if custom_role is set to "true", and is the space-delimited string of the roles to run. If the role is placed in the ansible/roles directory, it can be referenced by name. Otherwise, it must be a fully qualified path to the role. | "" |
disable_public_repos | If set to "true", this will disable all existing package repositories defined in the OS before doing any package installs. The extra_repos variable must be set for package installs to succeed. | "false" |
extra_debs | This can be set to a space-delimited string containing the names of additional deb packages to install | "" |
extra_repos | A space-delimited string containing the names of files to add to the image containing repository definitions. The files should be given as absolute paths. | "" |
extra_rpms | This can be set to a space-delimited string containing the names of additional RPM packages to install | "" |
http_proxy | This can be set to a URL to use as an HTTP proxy during the Ansible stage of building | "" |
https_proxy | This can be set to a URL to use as an HTTPS proxy during the Ansible stage of building | "" |
no_proxy | This can be set to a comma-delimited list of domains that should be excluded from proxying during the Ansible stage of building | "" |
reenable_public_repos | If set to "false", the package repositories disabled by setting disable_public_repos will remain disabled at the end of the build. | "true" |
remove_extra_repos | If set to "true", the package repositories added to the OS through the use of extra_repos will be removed at the end of the build. | "false" |
containerd_pause_image | This can be used to override the default containerd pause image used to hold the network namespace and IP for the pod. | "k8s.gcr.io/pause:3.2" |
containerd_additional_settings | This is a base64-encoded string that contains additional configuration for containerd. It must use containerd config version 2 and must not contain the pause image configuration block. See image-builder/images/capi/ansible/roles/containerd/templates/etc/containerd/config.toml for the template. | null |
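For example, a var file enabling a custom role (the file name and role path below are hypothetical) could combine two of these variables:

```json
{
  "custom_role": "true",
  "custom_role_names": "/home/user/ansible/roles/my-custom-role"
}
```

It would then be passed to a build via the `PACKER_VAR_FILES` environment variable like any other var file.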
The variables found in `packer/config/*.json` or `packer/<provider>/*.json` should not need to be modified directly. For customization it is better to create a JSON file with your changes and provide it via the `PACKER_VAR_FILES` environment variable. Variables set in this file will override any previous values. Multiple files can be passed via `PACKER_VAR_FILES`, with the last file taking precedence over any others.
Examples
Passing a single extra var file
```shell
PACKER_VAR_FILES=var_file_1.json make ...
```
Passing multiple extra var files
```shell
PACKER_VAR_FILES="var_file_1.json var_file_2.json" make ...
```
Passing in extra packages to the image
If you wanted to install the RPMs nfs-utils
and net-tools
, create a file called extra_vars.json
and populate with the following:
```json
{
  "extra_rpms": "\"nfs-utils net-tools\""
}
```
Note that since the `extra_rpms` variable is a string, and the string must remain quoted to preserve the space when placed on the command line, the escaped double quotes are required.
Then, execute the build (using a Photon OVA as an example) with the following:
```shell
PACKER_VAR_FILES=extra_vars.json make build-node-ova-local-photon-3
```
Disabling default repos and using an internal package mirror
A common use-case within enterprise environments is to have a package repository available on an internal network to install from rather than reaching out to the internet. To support this, you can inject custom repository definitions into the image, and optionally disable the use of the default ones.
For example, to build an image using only an internal mirror, create a file called `internal_repos.json` and populate it with the following:
```json
{
  "disable_public_repos": "true",
  "extra_repos": "/home/<user>/mirror.repo",
  "remove_extra_repos": "true"
}
```
This example assumes that you have an RPM repository definition available at `/home/<user>/mirror.repo`, and that it is correctly configured to point to your internal mirror. It will be added to the image within `/etc/yum.repos.d`, with all existing repositories found in `/etc/yum.repos.d` disabled by setting `disable_public_repos` to `"true"`. Furthermore, the (optional) use of `remove_extra_repos` means that at the end of the build, the repository definition that was added will be removed. This is useful if the image you are building will be shared externally and you do not wish to include a file with internal network services and addresses.
For Ubuntu images, the process works the same, but you would instead add a `.list` file pointing to your DEB package mirror.
Then, execute the build (using a Photon OVA as an example) with the following:
```shell
PACKER_VAR_FILES=internal_repos.json make build-node-ova-local-photon-3
```
Setting up an HTTP Proxy
The Packer tool itself honors the standard environment variables `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY`. If these variables are set and exported, they will be honored if Packer needs to download an ISO during a build. However, in order to use these proxies with Ansible (for use during package installation or binary download), they need to be passed via a JSON file.
For example, to set the `http_proxy` variable for the Ansible stage of the build, create a `proxy.json` and populate it with the following:
```json
{
  "http_proxy": "http://proxy.corp.com"
}
```
Then, execute the build (using a Photon OVA as an example) with the following:
```shell
PACKER_VAR_FILES=proxy.json make build-node-ova-local-photon-3
```
Quick Start
In this tutorial we will cover the basics of how to download and execute the Image Builder.
Installation
As a set of scripts and Makefiles that rely on Packer and Ansible, there is no image-builder binary or application to install. Rather, we need to download the tooling from the GitHub repo and make sure that Packer and Ansible are installed.
To get the latest image-builder source on your machine, choose one of the following methods:
Tarball download:
```shell
curl -L https://github.com/kubernetes-sigs/image-builder/tarball/master -o image-builder.tgz
mkdir image-builder
tar xzf image-builder.tgz --strip-components 1 -C image-builder
rm image-builder.tgz
cd image-builder/images/capi
```
Or, if you’d like to keep tracking the repo updates (requires git):

```shell
git clone git@github.com:kubernetes-sigs/image-builder.git
cd image-builder/images/capi
```
Dependencies
Once you are within the `capi` directory, you can execute `make` or `make help` to see all the possible make targets. Before we can build an image, we need to make sure that Packer and Ansible are installed on your system. You may already have them; Mac users may have them installed via `brew`, or you may have downloaded them directly.
If you want the image-builder to install these tools for you, they can be installed by executing `make deps`. This will install dependencies into `image-builder/images/capi/.bin` if they are not already on your system. `make deps` will first check if Ansible and Packer are available and, if they are, will use the existing installations.
Looking at the output from `make deps`, if Ansible or Packer were installed into the `.bin` directory, you’ll need to add that directory to your `PATH` environment variable before they can be used. Assuming you are still in `images/capi`, you can do that with the following:

```shell
export PATH=$PWD/.bin:$PATH
```
Builds
With the CAPI image builder installed and dependencies satisfied, you are now ready to build an image. In general, this is done via `make` targets, and each provider (e.g. AWS, GCE, etc.) will have different requirements for what information needs to be provided (such as cloud provider authentication credentials). Certain providers may have dependencies that are not satisfied by `make deps`; for example, the vSphere provider needs access to a hypervisor (VMware Fusion on macOS, VMware Workstation on Linux). See the specific documentation for your desired provider for more details.
Building Images for AWS
Prerequisites for Amazon Web Services
- An AWS account
- The AWS CLI installed and configured
Building Images
The build prerequisites for using `image-builder` for building AMIs are managed by running:

```shell
make deps-ami
```

From the `images/capi` directory, run `make build-ami-<OS>`, where `<OS>` is the desired operating system. The available choices are listed via `make help`.
To build all available OS’s, use the `-all` target. If you want to build them in parallel, use `make -j`. For example, `make -j build-ami-all`.
In the output of a successful `make` command is a list of created AMIs. To format them, you can copy the output and pipe it through the following to get a table:

```shell
echo 'us-fake-1: ami-123
us-fake-2: ami-234' | column -t | sed 's/^/| /g' | sed 's/: //g' | sed 's/ami/| ami/g' | sed 's/$/ |/g'
```
| us-fake-1 | ami-123 |
| us-fake-2 | ami-234 |
Note: If making the images public (the default), you must use one of the Public CentOS images as a base rather than a Marketplace image.
Configuration
In addition to the configuration found in `images/capi/packer/config`, the `ami` directory includes several JSON files that define the default configuration for the different operating systems.
File | Description |
---|---|
amazon-2.json | The settings for the Amazon Linux 2 image |
centos-7.json | The settings for the CentOS 7 image |
ubuntu-1804.json | The settings for the Ubuntu 18.04 image |
Common AWS options
This table lists several common options that a user may want to set via `PACKER_VAR_FILES` to customize their build behavior. This is not an exhaustive list, and greater explanation can be found in the Packer documentation for the Amazon AMI builder.
Variable | Description | Default |
---|---|---|
ami_groups | A list of groups that have access to launch the resulting AMI. | "all" |
ami_regions | A list of regions to copy the AMI to. | "ap-south-1,eu-west-3,eu-west-2,eu-west-1,ap-northeast-2,ap-northeast-1,sa-east-1,ca-central-1,ap-southeast-1,ap-southeast-2,eu-central-1,us-east-1,us-east-2,us-west-1,us-west-2" |
ami_users | A list of AWS account IDs that have access to launch the resulting AMI. | "all" |
aws_region | The AWS region to build the AMI within. | "us-east-1" |
encrypted | Indicates whether or not to encrypt the volume. | "false" |
kms_key_id | ID, alias or ARN of the KMS key to use for boot volume encryption. | "" |
snapshot_groups | A list of groups that have access to create volumes from the snapshot. | "" |
snapshot_users | A list of AWS account IDs that have access to create volumes from the snapshot. | "" |
In the below examples, the parameters can be set via a variable file and the use of `PACKER_VAR_FILES`. See Customization for examples.
Examples
Building private AMIs
Set the `ami_groups=""` and `snapshot_groups=""` parameters to ensure you end up with a private AMI. Both parameters default to `"all"`.
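Concretely, a var file for a private build (the file name `private.json` is a hypothetical example) would contain:

```json
{
  "ami_groups": "",
  "snapshot_groups": ""
}
```

Pass it via `PACKER_VAR_FILES=private.json` with your usual `make build-ami-<OS>` invocation.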
Encrypted AMIs
Set `encrypted=true` for encrypted AMIs to allow for use with EC2 instances backed by encrypted root volumes. You must also set `ami_groups=""` and `snapshot_groups=""` for this to work.
Sharing private AMIs with other AWS accounts
Set `ami_users="012345789012,0123456789013"` to make your AMI visible to a select number of other AWS accounts, and `snapshot_users="012345789012,0123456789013"` to allow the EBS snapshot backing the AMI to be copied.
If you are using encrypted root volumes in multiple accounts, you will want to build one unencrypted AMI in a root account, setting `snapshot_users`, and then use your own methods to copy the snapshot with encryption turned on into other accounts.
Limiting AMI Regions
By default, images are copied to many of the available AWS regions. See `ami_regions` in AWS options for the default list. The list of all available regions can be obtained by running:

```shell
aws ec2 describe-regions --query "Regions[].{Name:RegionName}" --output text | paste -sd "," -
```
To limit the regions, provide the `ami_regions` variable as a comma-delimited list of AWS regions. For example, to build all images in us-east-1 and copy them only to us-west-2, set `ami_regions="us-west-2"`.
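As a var-file sketch (the file name `regions.json` is a hypothetical example):

```json
{
  "ami_regions": "us-west-2"
}
```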
Required Permissions to Build the AWS AMIs
The Packer documentation for the Amazon AMI builder supplies a suggested set of minimum permissions.
```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ec2:AttachVolume",
      "ec2:AuthorizeSecurityGroupIngress",
      "ec2:CopyImage",
      "ec2:CreateImage",
      "ec2:CreateKeypair",
      "ec2:CreateSecurityGroup",
      "ec2:CreateSnapshot",
      "ec2:CreateTags",
      "ec2:CreateVolume",
      "ec2:DeleteKeyPair",
      "ec2:DeleteSecurityGroup",
      "ec2:DeleteSnapshot",
      "ec2:DeleteVolume",
      "ec2:DeregisterImage",
      "ec2:DescribeImageAttribute",
      "ec2:DescribeImages",
      "ec2:DescribeInstances",
      "ec2:DescribeRegions",
      "ec2:DescribeSecurityGroups",
      "ec2:DescribeSnapshots",
      "ec2:DescribeSubnets",
      "ec2:DescribeTags",
      "ec2:DescribeVolumes",
      "ec2:DetachVolume",
      "ec2:GetPasswordData",
      "ec2:ModifyImageAttribute",
      "ec2:ModifyInstanceAttribute",
      "ec2:ModifySnapshotAttribute",
      "ec2:RegisterImage",
      "ec2:RunInstances",
      "ec2:StopInstances",
      "ec2:TerminateInstances"
    ],
    "Resource": "*"
  }]
}
```
Testing Images
Connect remotely to an instance created from the image and run the Node Conformance tests using the following commands:
Initialize a CNI
As root:
(copied from containernetworking/cni)
```shell
mkdir -p /etc/cni/net.d
wget -q https://github.com/containernetworking/cni/releases/download/v0.7.0/cni-amd64-v0.7.0.tgz
tar -xzf cni-amd64-v0.7.0.tgz --directory /etc/cni/net.d
cat >/etc/cni/net.d/10-mynet.conf <<EOF
{
  "cniVersion": "0.2.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}
EOF
cat >/etc/cni/net.d/99-loopback.conf <<EOF
{
  "cniVersion": "0.2.0",
  "name": "lo",
  "type": "loopback"
}
EOF
```
Run the e2e node conformance tests
As a non-root user:
```shell
wget https://dl.k8s.io/$(< /etc/kubernetes_community_ami_version)/kubernetes-test.tar.gz
tar -zxvf kubernetes-test.tar.gz kubernetes/platforms/linux/amd64
cd kubernetes/platforms/linux/amd64
sudo ./ginkgo --nodes=8 --flakeAttempts=2 --focus="\[Conformance\]" --skip="\[Flaky\]|\[Serial\]|\[sig-network\]|Container Lifecycle Hook" ./e2e_node.test -- --k8s-bin-dir=/usr/bin --container-runtime=remote --container-runtime-endpoint unix:///var/run/containerd/containerd.sock --container-runtime-process-name /usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}"
```
Building Images for Azure
These images are designed for use with Cluster API Provider Azure (CAPZ). Learn more about using custom images with CAPZ.
Prerequisites for Azure
- An Azure account
- The Azure CLI installed and configured
- Set environment variables for `AZURE_SUBSCRIPTION_ID`, `AZURE_CLIENT_ID`, and `AZURE_CLIENT_SECRET`
Building Images
The build prerequisites for using `image-builder` for building Azure images are managed by running:

```shell
make deps-azure
```
Building Managed Images in Shared Image Galleries
From the `images/capi` directory, run `make build-azure-sig-ubuntu-1804`.
Building VHDs
From the `images/capi` directory, run `make build-azure-vhd-ubuntu-1804`.
If building the Windows images from a Mac, there is a known issue with connectivity. Please see the details on running macOS with Ansible.
Developer
If you are adding features to image-builder, it is sometimes useful to work with the images directly. This section gives some tips.
Provision a VM directly from a VHD
After creating a VHD, create a managed image using the URL output from `make build-azure-vhd-<image>` and use it to create the VM:

```shell
az image create -n testvmimage -g cluster-api-images --os-type <Windows/Linux> --source <storage url for vhd file>
az vm create -n testvm --image testvmimage -g cluster-api-images
```
Debugging packer scripts
There are several ways to debug Packer scripts: https://www.packer.io/docs/other/debugging.html
Building Images for DigitalOcean
Prerequisites for DigitalOcean
- A DigitalOcean account
- The DigitalOcean CLI (doctl) installed and configured
- Set the `DIGITALOCEAN_ACCESS_TOKEN` environment variable
Building Images
The build prerequisites for using `image-builder` for building DigitalOcean images are managed by running:

```shell
make deps-do
```

From the `images/capi` directory, run `make build-do-<OS>`, where `<OS>` is the desired operating system. The available choices are listed via `make help`.
Configuration
In addition to the configuration found in `images/capi/packer/config`, the `digitalocean` directory includes several JSON files that define the default configuration for the different operating systems.
File | Description |
---|---|
centos-7.json | The settings for the CentOS 7 image |
ubuntu-1804.json | The settings for the Ubuntu 18.04 image |
ubuntu-2004.json | The settings for the Ubuntu 20.04 image |
Building CAPI Images for Google Cloud Platform (GCP)
Prerequisites
Create Service Account
From your Google Cloud console, follow these instructions to create a new service account with Editor permissions. Thereafter, generate a JSON key and store it somewhere safe.
Use Cloud Shell to install Ansible and Packer and proceed with building the CAPI-compliant VM image.
Install Ansible and Packer
Start by launching the google cloud shell.
```shell
# Export the GCP project id you want to build images in
$ export GCP_PROJECT_ID=<project-id>
# Export the path to the service account credentials created in the step above
$ export GOOGLE_APPLICATION_CREDENTIALS=</path/to/serviceaccount-key.json>
$ git clone https://sigs.k8s.io/image-builder.git image-builder
$ cd image-builder/images/capi/
# Run the target make deps-gce to install ansible and packer
$ make deps-gce
```
Build Cluster API Compliant VM Image
```shell
# clone the image-builder repo if you haven't already
$ git clone https://sigs.k8s.io/image-builder.git image-builder
$ cd image-builder/images/capi/
```
Run the Make target to generate GCE images.
From the `images/capi` directory, run the `make build-gce-ubuntu-<version>` command for the Ubuntu version you want to build the image for.
For instance, to build an image for Ubuntu 18.04, run:

```shell
$ make build-gce-ubuntu-1804
```

To build all GCE Ubuntu images, run:

```shell
$ make build-gce-all
```
Configuration
The `gce` sub-directory inside `images/capi/packer` stores the JSON configuration files for the Ubuntu OS.
File | Description |
---|---|
ubuntu-1804.json | Settings for the Ubuntu 18.04 image |
ubuntu-2004.json | Settings for the Ubuntu 20.04 image |
List Images
List all images by running the following command in the console:

```shell
$ gcloud compute images list --project ${GCP_PROJECT_ID} --no-standard-images
NAME                                         PROJECT            FAMILY                      DEPRECATED  STATUS
cluster-api-ubuntu-1804-v1-17-11-1603233313  myregistry-292303  capi-ubuntu-1804-k8s-v1-17              READY
cluster-api-ubuntu-2004-v1-17-11-1603233874  myregistry-292303  capi-ubuntu-2004-k8s-v1-17              READY
```
Delete Images
To delete images from the gcloud shell, run the following:

```shell
$ gcloud compute images delete [image 1] [image 2]
```

where `[image 1]` and `[image 2]` refer to the names of the images to be deleted.
Building Images for OpenStack
Hypervisor
The image is built using the KVM hypervisor.
Prerequisites for QCOW2
Execute the following command to install qemu-kvm and other packages if you are running Ubuntu 18.04 LTS.
Installing packages to use qemu-img

```shell
$ sudo -i
# apt install qemu-kvm libvirt-bin qemu-utils
```

If you’re on Ubuntu 20.04 LTS, then execute the following command to install qemu-kvm packages.

```shell
$ sudo -i
# apt install qemu-kvm libvirt-daemon-system libvirt-clients virtinst cpu-checker libguestfs-tools libosinfo-bin
```
Adding your user to the kvm group

```shell
$ sudo usermod -a -G kvm <yourusername>
$ sudo chown root:kvm /dev/kvm
```

Then log out and log back in for the change to take effect.
Building Images
The build prerequisites for using `image-builder` for building QEMU images are managed by running:

```shell
cd image-builder/images/capi
make deps-qemu
```
Building QCOW2 Image
From the `images/capi` directory, run `make build-qemu-ubuntu-xxxx`, replacing `xxxx` with `1804` or `2004` depending on the version you want to build the image for. The image is built and located in `images/capi/output/BUILD_NAME+kube-KUBERNETES_VERSION`.
For building a ubuntu-2004-based CAPI image, run the following commands:

```shell
$ git clone https://github.com/kubernetes-sigs/image-builder.git
$ cd image-builder/images/capi/
$ make build-qemu-ubuntu-2004
```
Building Images for vSphere
Hypervisor
The images may be built using one of the following hypervisors:
OS | Builder | Build target |
---|---|---|
Linux | VMware Workstation (vmware-iso) | build-node-ova-local- |
macOS | VMware Fusion (vmware-iso) | build-node-ova-local- |
ESXi | ESXi | build-node-ova-esx- |
vSphere | vSphere >= 6.5 | build-node-ova-vsphere- |
vSphere | vSphere >= 6.5 | build-node-ova-vsphere-base- |
vSphere Clone | vSphere >= 6.5 | build-node-ova-vsphere-clone- |
Linux | VMware Workstation (vmware-vmx) | build-node-ova-local-vmx- |
macOS | VMware Fusion (vmware-vmx) | build-node-ova-local-vmx- |
NOTE: If you want to build all available OS’s, use the `-all` target. If you want to build them in parallel, use `make -j`. For example, `make -j build-node-ova-local-all`.
The `esxi` builder supports building against a remote VMware ESX server with specific configuration (SSH access), but is untested with this project.
The `vsphere` builder supports building against a remote VMware vSphere using the standard API.
vmware-vmx builder
During the development process it’s uncommon for the base OS image to change, but the image building process builds the base image from the ISO every time, adding a significant amount of time to the build.
To reduce the image building times during development, one can use the `build-node-ova-local-base-<OS>` target to build the base image from the ISO. By setting the `source_path` variable in `vmx.json` to the `*.vmx` file from the output, it can then be re-used with the `build-node-ova-local-vmx-<OS>` build target to speed up the process.
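As a sketch, the `source_path` override in `vmx.json` might look like the following (the path is a hypothetical example of where the base build's output could land):

```json
{
  "source_path": "./output/ubuntu-1804-base/ubuntu-1804-base.vmx"
}
```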
vsphere-clone builder
The `vsphere-base` builder allows you to build one-time base OVAs from ISO images using the kickstart process. It leaves the `builder` user intact in the base OVA to be used by the clone builder later. The `vsphere-clone` builder builds on top of a base OVA by cloning it and then running Ansible on it.
This saves time by allowing repeated iteration on the base OVA without installing the OS from scratch again and again. It also uses linked cloning and the `create_snapshot` feature to clone faster.
Prerequisites for vSphere builder
Complete the `vsphere.json` configuration file with credentials and information specific to the remote vSphere hypervisor used to build the `ova` file.
This file must have the following format (`cluster` can be replaced by `host`):
```json
{
  "vcenter_server": "FQDN of vcenter",
  "username": "vcenter_username",
  "password": "vcenter_password",
  "datastore": "template_datastore",
  "folder": "template_folder_on_vcenter",
  "cluster": "esxi_cluster_used_for_template_creation",
  "network": "network_attached_to_template",
  "insecure_connection": "false",
  "template": "base_template_used_by_clone_builder",
  "create_snapshot": "creates a snapshot on the base OVA after building",
  "linked_clone": "uses linked cloning in the vsphere-clone builder: true, by default"
}
```
If you prefer to use a different configuration file, you can create it with the same format and export the `PACKER_VAR_FILES` environment variable containing the full path to it.
Building Images
The build prerequisites for using `image-builder` for building OVAs are managed by running:

```shell
make deps-ova
```

From the `images/capi` directory, run `make build-node-ova-<hypervisor>-<OS>`, where `<hypervisor>` is your target hypervisor (`local`, `vsphere`, or `esx`) and `<OS>` is the desired operating system. The available choices are listed via `make help`.
OVA Creation
When the final OVA is created, one of two methods can be used. By default, an OVF file is created, the manifest is created using SHA256 sums of the OVF and VMDK, and then `tar` is used to create an OVA containing the OVF, VMDK, and the manifest.
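A rough shell sketch of this default assembly, using placeholder artifact names (`node.ovf`, `node.vmdk`) rather than real build output; the actual make targets script this for you:

```shell
# Stand-ins for the OVF and disk produced by a real build; in practice these
# already exist in the output directory.
printf 'ovf-data' > node.ovf
printf 'vmdk-data' > node.vmdk

# Manifest: one SHA256 line per artifact.
for f in node.ovf node.vmdk; do
  printf 'SHA256(%s)= %s\n' "$f" "$(sha256sum "$f" | cut -d' ' -f1)"
done > node.mf

# The OVA is simply a tar of the OVF, disk, and manifest.
tar -cf node.ova node.ovf node.vmdk node.mf
```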
Optionally, `ovftool` can be used to create the OVA. This has the advantage of validating the created OVF, and it has a greater chance of producing OVAs that are compliant with more versions of VMware targets (Fusion, Workstation, and vSphere). To use `ovftool` for OVA creation, set the environment variable `IB_OVFTOOL` to any non-empty value, like the following:

```shell
IB_OVFTOOL=1 make build-node-ova-<hypervisor>-<OS>
```
Configuration
In addition to the configuration found in `images/capi/packer/config`, the `ova` directory includes several JSON files that define the configuration for the images:
File | Description |
---|---|
esx.json | Additional settings needed when building on a remote ESXi host |
centos-7.json | The settings for the CentOS 7 image |
photon-3.json | The settings for the Photon 3 image |
rhel-7.json | The settings for the RHEL 7 image |
ubuntu-1804.json | The settings for the Ubuntu 18.04 image |
ubuntu-2004.json | The settings for the Ubuntu 20.04 image |
vsphere.json | Additional settings needed when building on a remote vSphere |
RHEL
When building the RHEL image, the OS must register itself with the Red Hat Subscription Manager (RHSM). To do this, the currently supported method is to supply a username and password via the environment variables `RHSM_USER` and `RHSM_PASS`. Although building RHEL images has been tested via this method, if an error is encountered during the build, the VM is deleted without the machine being unregistered with RHSM. To prevent this, it is recommended to build with the following command:

```shell
PACKER_FLAGS=-on-error=ask RHSM_USER=user RHSM_PASS=pass make build-node-ova-<hypervisor>-rhel-7
```

The addition of `PACKER_FLAGS=-on-error=ask` means that if an error is encountered, the build will pause, allowing you to SSH into the machine and unregister manually.
Output
The images are built and located in `images/capi/output/BUILD_NAME+kube-KUBERNETES_VERSION`.
Uploading Images
The images are uploaded to the GCS bucket `capv-images`. The path to the image depends on the version of Kubernetes:
Build type | Upload location |
---|---|
CI | gs://capv-images/ci/KUBERNETES_VERSION/BUILD_NAME-kube-KUBERNETES_VERSION.ova |
Release | gs://capv-images/release/KUBERNETES_VERSION/BUILD_NAME-kube-KUBERNETES_VERSION.ova |
Uploading the images requires the `gcloud` and `gsutil` programs, and either an active Google Cloud account or a service account with an associated key file. The latter may be specified via the environment variable `KEY_FILE`.

```shell
hack/image-upload.py --key-file KEY_FILE BUILD_DIR
```
First, the images are checksummed (SHA256). If a matching checksum already exists remotely, the image is not re-uploaded. Otherwise, the images are uploaded to the GCS bucket.
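The skip-if-unchanged idea can be sketched as follows (purely illustrative, not the actual `hack/image-upload.py` logic; the file names are made up, and a local file stands in for the remote checksum):

```shell
# A freshly built image and a stand-in for the copy already in the bucket.
printf 'image-bytes' > build.ova
printf 'image-bytes' > remote-copy.ova

# Compare SHA256 checksums; only upload when they differ.
local_sum=$(sha256sum build.ova | cut -d' ' -f1)
remote_sum=$(sha256sum remote-copy.ova | cut -d' ' -f1)

if [ "$local_sum" = "$remote_sum" ]; then
  echo "checksum match: skipping upload"
else
  echo "uploading build.ova"
fi
```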
Listing Available Images
Once uploaded, the available images may be listed using the `gsutil` program, for example:

```shell
gsutil ls gs://capv-images/release
```
Downloading Images
Images may be downloaded via HTTP:
Build type | Download location |
---|---|
CI | http://storage.googleapis.com/capv-images/ci/KUBERNETES_VERSION/BUILD_NAME-kube-KUBERNETES_VERSION.ova |
Release | http://storage.googleapis.com/capv-images/release/KUBERNETES_VERSION/BUILD_NAME-kube-KUBERNETES_VERSION.ova |
Testing Images
Accessing the Images
Accessing Local VMs
After the images are built, the VMs from which they are built are prepped for local testing. Simply boot the VM locally with Fusion or Workstation, and the machine will be initialized with cloud-init data from the `cloudinit` directory. The VMs may be accessed via SSH by using the command `hack/image-ssh.sh BUILD_DIR capv`.
Accessing Remote VMs
After deploying an image to vSphere, use `hack/image-govc-cloudinit.sh VM` to snapshot the image and update it with cloud-init data from the `cloudinit` directory. Start the VM; it may now be accessed with `ssh -i cloudinit/id_rsa.capi capv@VM_IP`.
This hack requires the `govc` utility from VMware.
Initialize a CNI
As root:
(copied from containernetworking/cni)
```shell
mkdir -p /etc/cni/net.d
curl -LO https://github.com/containernetworking/plugins/releases/download/v0.7.0/cni-plugins-amd64-v0.7.0.tgz
tar -xzf cni-plugins-amd64-v0.7.0.tgz --directory /etc/cni/net.d
cat >/etc/cni/net.d/10-mynet.conf <<EOF
{
  "cniVersion": "0.2.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}
EOF
cat >/etc/cni/net.d/99-loopback.conf <<EOF
{
  "cniVersion": "0.2.0",
  "name": "lo",
  "type": "loopback"
}
EOF
```
Run the e2e node conformance tests
As a non-root user:
```shell
curl -LO https://dl.k8s.io/$(</etc/kubernetes-version)/kubernetes-test-linux-amd64.tar.gz
tar -zxvf kubernetes-test-linux-amd64.tar.gz
cd kubernetes/test/bin
sudo ./ginkgo --nodes=8 --flakeAttempts=2 --focus="\[Conformance\]" --skip="\[Flaky\]|\[Serial\]|\[sig-network\]|Container Lifecycle Hook" ./e2e_node.test -- --k8s-bin-dir=/usr/bin --container-runtime=remote --container-runtime-endpoint unix:///var/run/containerd/containerd.sock --container-runtime-process-name /usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}"
```
The cloudinit
Directory
The cloudinit
contains files that:
- Are example data used for testing
- Are not included in any of the images
- Should not be used in production systems
For more information about how the files in the cloudinit
directory are used, please refer to the section on accessing the images.
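For orientation, these files follow the standard cloud-init `#cloud-config` user-data format. A minimal, purely illustrative sketch of the kind of data involved (the `capv` user matches the SSH examples above; the key material is a placeholder):

```yaml
#cloud-config
users:
  - name: capv
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-rsa AAAA...  # placeholder: public half of cloudinit/id_rsa.capi
```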
Testing CAPI Images
GOSS
Goss is a YAML based serverspec alternative tool for validating a server’s configuration. It is used in conjunction with packer-provisioner-goss to test if the images have all requisite components to work with cluster API.
Support Matrix

OS | Builder |
---|---|
Amazon Linux | aws |
PhotonOS | ova |
Ubuntu | aws, ova, azure |
CentOS | aws, ova |

*For stock server-specs shipped with the repo
Prerequisites for Running GOSS
GOSS runs as a part of image building through a packer provisioner.
Supported arguments are passed through the file `packer/config/goss-args.json`:
{
"goss_arch": "amd64",
"goss_entry_file": "goss/goss.yaml",
"goss_format": "json",
"goss_inspect_mode": "true",
"goss_tests_dir": "packer/goss",
"goss_url": "",
"goss_format_options": "pretty",
"goss_vars_file": "packer/goss/goss-vars.yaml",
"goss_version": "0.3.13"
}
Supported values for some of the arguments can be found here.
Enabling `goss_inspect_mode` lets you build the image even if the GOSS tests fail.
Manually set up GOSS
- Start a VM from a CAPI image.
- Copy the complete goss directory `packer/goss` to the remote machine.
- Download and set up GOSS (use the version from goss-args) on the remote machine. Instructions
  - A custom GOSS version can be installed if testing custom server-specs supported by a higher version of GOSS.
- All the variables used in GOSS are declared in `packer/goss/goss-vars.yaml`.
- Add more custom server-specs to the corresponding GOSS files, like `goss-command.yaml` or `goss-kernel-params.yaml`:

  some_cli --version:
    exit-status: 0
    stdout: [{{ .Vars.some_cli_version }}]
    stderr: []
    timeout: 0

- Add more custom variables to the corresponding GOSS file `goss-vars.yaml`:

  some_cli_version: "1.4.5+k8s-1"

- Fill in the variable values in `goss-vars.yaml` or specify them in `--vars-inline` while executing GOSS in the steps below.
- Render the goss template to fix any problems with parsing the variable and server-spec YAMLs:

  sudo goss -g goss/goss.yaml --vars /tmp/goss/goss-vars.yaml --vars-inline '{"ARCH":"amd64","OS":"Ubuntu","PROVIDER":"aws","some_cli_version":"1.3.4"}' render

- Run the GOSS tests:

  sudo goss -g goss/goss.yaml --vars /tmp/goss/goss-vars.yaml --vars-inline '{"ARCH":"amd64","OS":"Ubuntu","PROVIDER":"aws","some_cli_version":"1.3.4"}' validate --retry-timeout 0s --sleep 1s -f json -o pretty
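The JSON string passed to `--vars-inline` is easy to mistype (a single missing quote will break the run). A quick way to validate it before handing it to GOSS, assuming `python3` is available on the machine:

```shell
# Validate the --vars-inline JSON before passing it to goss.
VARS_INLINE='{"ARCH":"amd64","OS":"Ubuntu","PROVIDER":"aws","some_cli_version":"1.3.4"}'
if echo "$VARS_INLINE" | python3 -m json.tool > /dev/null; then
  echo "vars-inline: valid JSON"
else
  echo "vars-inline: malformed JSON" >&2
fi
```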
Glossary
A
AWS
Amazon Web Services
AMI
Amazon Machine Image
C
CAPA
Cluster API Provider AWS
CAPG
Cluster API Provider GCP
CAPI
The Cluster API is a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes.
CAPV
Cluster API Provider vSphere
CAPZ
Cluster API Provider Azure
E
ESXi
ESXi (formerly ESX) is an enterprise-class, type-1 hypervisor developed by VMware. ESXi provides strong separation between VMs and itself, creating strong security boundaries between the guest and host operating systems. ESXi can be used as a standalone entity without vCenter, but this is extremely uncommon and feature-limited: without a higher-level manager such as vCenter, ESXi cannot provide its most valuable features, like High Availability, vMotion, workload balancing, and vSAN (a software-defined storage stack).
G
GOSS
Goss is a YAML based serverspec alternative tool for validating a server’s configuration. It is used in conjunction with packer-provisioner-goss to test if the images have all requisite components to work with cluster API.
K
K8s
Kubernetes
Kubernetes
Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.
O
OVA
Open Virtual Appliance
A single package containing a pre-configured virtual machine, usually based on OVF.
OVF
Open Virtualization Format
An open standard for packaging and distributing virtual appliances or, more generally, software to be run in virtual machines.
V
vCenter
vCenter can be thought of as the management layer for ESXi hosts. Hosts can be arranged into Datacenters, Clusters, or resource pools. vCenter is the centralized monitoring and management control plane for ESXi hosts, allowing centralized management and providing integration points for other products in the VMware SDDC stack and third-party solutions, like backup, DR, or networking overlay applications such as NSX. vCenter also provides all of the higher-level features of vSphere, such as vMotion, vSAN, HA, DRS, Distributed Switches, and more.
VM
A VM is an abstraction of an operating system from the physical machine, created by presenting a “virtual” representation of the physical hardware the OS expects to interact with; this includes, but is not limited to, CPU instruction sets, memory, BIOS, and PCI buses. A VM is an entirely self-contained entity and shares no components with the host OS. In the case of vSphere, the host OS is ESXi (see above).
vSphere
vSphere is the product name for the two core components of the VMware Software Defined Datacenter (SDDC) stack: vCenter and ESXi.