Building the Next Generation Container OS

Use immutable infrastructure to deploy and scale your containerized applications. Project Atomic provides the best platform for your Linux Docker Kubernetes (LDK) application stack.

Project Atomic introduces Atomic Registry — a free and open source enterprise container registry. Manage your containers without third party hubs.

Learn more!


Atomic Host

Based on proven technology either from Red Hat Enterprise Linux or the CentOS and Fedora projects, Atomic Host is a lightweight, immutable platform designed with the sole purpose of running containerized applications.

To balance the need for long-term stability against the desire for new features, we provide different releases of Atomic Host for you to choose from.

Get Started

Atomic App and Nulecule

With Atomic App, use existing containers as building blocks for your new application product or project. Using existing containers to provide core infrastructure components lets you spend more time building the stuff that matters and less time packaging and setting up the common plumbing.

Define your Atomic Apps with the Nulecule specification to compose and distribute complex applications.
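To get a feel for the workflow, a nuleculized image can be fetched and deployed with a single atomic command. Here is a small sketch; projectatomic/helloapache is the hello-world style example image from the Atomic App documentation, so substitute your own nuleculized image:

# Fetch an Atomic App image and deploy it according to its Nulecule definition.
$ sudo atomic run projectatomic/helloapache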

Learn more about Atomic App

Learn more about Nulecule

Atomic Registry

An enterprise Docker container registry solution that you can run on-premise or in the cloud.

Atomic Registry uses 100% open source technology to provide enterprise features such as role-based access control (RBAC), diverse authentication options, a rich web console, flexible storage integration and more.

Get started with Atomic Registry

Running Kubernetes and Friends in Containers on CentOS Atomic Host

The atomic hosts from CentOS and Fedora earn their “atomic” namesake by providing for atomic, image-based system updates via rpm-ostree, and atomic, image-based application updates via docker containers.

This “system” vs “application” division isn’t set in stone, however. There’s room for system components to move across from the somewhat rigid world of ostree commits to the freer-flowing container side.

In particular, the key atomic host components involved in orchestrating containers across multiple hosts, such as flannel, etcd and kubernetes, could run instead in containers, making life simpler for those looking to test out newer or different versions of these components, or to swap them out for alternatives.

The devel tree of CentOS Atomic Host, which features a trimmed-down system image that leaves out kubernetes and related system components, is a great place to experiment with alternative methods of running these components, and swapping between them.
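For instance, etcd can run as an ordinary container on the host instead of as part of the ostree. The rough sketch below uses the public quay.io/coreos/etcd image; any etcd image will do, and newer image versions may need explicit listen/advertise flags passed to etcd:

# Run etcd as a container rather than as a host package; adjust the image
# and etcd flags to match the version you want to test.
$ sudo docker pull quay.io/coreos/etcd
$ sudo docker run -d --name etcd --net=host quay.io/coreos/etcd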

Introduction to System Containers

As part of our effort to reduce the number of packages that are shipped with the Atomic Host image, we faced the problem of how to containerize services that are needed before Docker itself is running. The result: “system containers,” a way to run containers in production using read-only images.

System containers combine several technologies: OSTree for storage, Skopeo to pull images from a registry, runC to run the containers, and systemd to manage their life cycle.
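On hosts with a recent atomic CLI, installing and starting a system container looks roughly like this (a sketch only; the image name is a placeholder, and exact options may vary with your atomic version):

# Install an image as a system container: the image is stored in OSTree,
# run by runC, and started by systemd, so it works before Docker is up.
$ sudo atomic install --system --name=etcd <system-container-image>
$ sudo systemctl start etcd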

New CentOS Atomic Host with Package Layering Support

Last week, the CentOS Atomic SIG released an updated version of CentOS Atomic Host (tree version 7.20160818), featuring support for rpm-ostree package layering.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box; or as an installable ISO, qcow2, or Amazon Machine image. Check out the CentOS wiki for download links and installation instructions, or read on to learn more about what’s new in this release.
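Package layering lets you add individual RPMs on top of the immutable base tree. A quick sketch (the package name is only an example):

# Layer an extra package onto the base tree, then reboot into the new
# deployment that contains it.
$ sudo rpm-ostree install htop
$ sudo systemctl reboot
# After the reboot, the layered package shows up in the deployment list:
$ rpm-ostree status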

Project Atomic Docker Patches

Project Atomic’s version of the Docker-based container runtime has been carrying a series of patches on top of the upstream Docker project for a while now. Each patch we carry adds significant effort as we continue to track upstream, so we would prefer to never carry any patches. We always strive to get our patches upstream and to do so in the open.

This post, and the accompanying document, will attempt to describe the patches we are currently carrying:

  • An explanation of the types of patches.
  • Descriptions of the patches.
  • Links to GitHub discussions and pull requests for upstreaming the patches to Docker.

Some people have asserted that our docker repo is a fork of the upstream docker project.

What Does It Mean To Be a Fork?

I have been in open source for a long time, and my definition of a “fork” might be dated. I think of a “fork” as a hostile action taken by one group to get others to use and contribute to their version of an upstream project and ignore the “original” version. For example, LibreOffice forking off of OpenOffice or, going way back, Xorg forking off of XFree86.

Nowadays, GitHub has changed the meaning. When a software repository exists on GitHub or a similar platform, everyone who wants to contribute has to hit the “fork” button and start building their patches. As of this writing, Docker on GitHub has 9,860 forks, including ours. By this definition, however, all packages that distributions ship with patches are forks. Red Hat ships the Linux kernel with patches, and I have never heard that called a fork; yet if any upstream project shipped with patches counts as a fork, it would be one.

The Docker upstream even relies on Ubuntu carrying patches for AUFS that were never merged into the upstream kernel. Since Red Hat-based distributions don’t carry the AUFS patches, we contributed the support for Devicemapper, OverlayFS, and Btrfs backends, which are fully supported in the upstream kernel. This is what enterprise distributions should do: attempt to ship packages configured in a way that they can be supported for a long time.

At the end of the day, we continue to track the changes made to the upstream Docker Project and re-apply our patches to that project. We believe this is an important distinction that allows freedom in software to thrive while continually building stronger communities. It’s very different from a hostile fork that divides communities; we are still working very hard to maintain continuity around unified upstreams.

How Can I Find Out About Patches for a Particular Version of Docker?

All of the patches we ship are described in the README.md file on the appropriate branch of our docker repository. If you want to look at the patches for docker-1.12, you would look at the docker-1.12 branch.

You can then look on the docker patches list page for information about these patches.
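For example, to read the patch descriptions for docker-1.12 straight from git, something like the following works:

# Clone the Project Atomic docker repo and switch to the branch matching
# the docker version you care about, then read its README.md.
$ git clone https://github.com/projectatomic/docker.git
$ cd docker
$ git checkout docker-1.12
$ less README.md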

What Kind of Patches does Project Atomic Include?

Here is a quick overview of the kinds of patches we carry, and then guidance on finding information on specific patches.

Upstream Fixes

The Docker Project upstream tends to fix issues in the next version of Docker. This means that if a user finds an issue in docker-1.11 and we provide a fix upstream, the patch gets merged into the master branch and will probably not get backported to docker-1.11.

Since Docker is releasing at such a rapid rate, they tell users to just install docker-1.12 when it is available. This is fine for people who want to be on the bleeding edge, but in a lot of cases the newer version of Docker comes with new issues along with the fixes.

For example, docker-1.11 split the docker daemon into three parts: docker daemon, containerd, and runc. We did not feel this was stable enough to ship to enterprise customers right when it came out, yet it had multiple fixes for the docker-1.10 version. Many users want to only get new fixes to their existing software and not have to re-certify their apps every two months.

Another issue with supporting stable software on top of rapidly changing dependencies is that developers of the stable projects must spend time ensuring that their product remains stable every time one of those dependencies is updated. Because this is an expensive process, dependencies end up being updated only infrequently. This leads us to “cherry-pick” fixes from upstream Docker and ship them on older versions, so that we get the benefit of the bug fixes without the cost of updating the entire dependency. This is the same approach we take to add capabilities to the Linux kernel, a practice that has proven very valuable to our users.
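Mechanically, a backport is just an ordinary git cherry-pick onto the stable branch. A rough sketch, with placeholder branch and commit names:

# Apply a single upstream fix to a stable branch without pulling in
# everything else that landed on master.
$ git checkout docker-1.10
$ git cherry-pick <upstream-commit-sha>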

Proposed Patches for Upstream

We carry patches that we know our users require right now, but have not yet been merged into the upstream project. Every patch that we add to the Project Atomic repository also gets proposed to the upstream docker repository.

These sorts of patches remain on the Project Atomic repository briefly while they’re being considered upstream, or forever if the upstream community rejects them. If we don’t agree with upstream Docker and feel our users need these patches, we continue to carry them. In some cases we have worked out alternative solutions like building authorization plugins.

For example, users of RHEL images are not supposed to push those images to public web sites. We wanted a way to prevent users from accidentally pushing RHEL-based images to Docker Hub, so we originally created a patch to block the push. When authorization plugins were added, we created a plugin to protect users from pushing RHEL content to a public registry like Docker Hub, and we no longer had to carry the custom patch.
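With authorization plugins, the behavior becomes daemon configuration rather than a patch. A hedged sketch, assuming the plugin is installed and registered under the name rhel-push-plugin:

# Ask the Docker daemon to consult an authorization plugin before allowing
# API requests such as image pushes (the plugin name here is an assumption).
$ sudo dockerd --authorization-plugin=rhel-push-plugin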

Detailed List of Patches

Want to know more about specific patches? You can find the current table and list of patches on our new docker patches list page.

Vagrant Service Manager 1.3.0 Released

This version of vagrant-service-manager introduces support for displaying Kubernetes configuration information. This enables users to access the Kubernetes server that runs inside the ADB virtual machine from their host machine.

This version also includes binary installation support for Kubernetes. This support is extended to users of the Red Hat Container Development Kit. For information about client binary installation, see the previous release announcement “Client Binary Installation Now Included in the ADB”.

The full list of features in this version is:

  • Configuration information for Kubernetes provided as part of the env command
  • Client binary installation support for Kubernetes added to the ADB
  • Client binary installation support for OpenShift, Kubernetes and Docker in the Red Hat Container Development Kit
  • Auto-detection of a previously downloaded oc executable binary on Windows operating systems
  • Unit and acceptance tests for the Kubernetes service
  • Option to enable Kubernetes from a Vagrantfile with the following setting:
  config.servicemanager.services = 'kubernetes'

1. Install the kubernetes client binary

Run the following command to install the kubernetes binary, kubectl:

$ vagrant service-manager install-cli kubernetes
# Binary now available at /home/budhram/.vagrant.d/data/service-manager/bin/kubernetes/1.2.0/kubectl
# run binary as:
# kubectl <command>
export PATH=/home/budhram/.vagrant.d/data/service-manager/bin/kubernetes/1.2.0:$PATH

# run following command to configure your shell:
# eval "$(VAGRANT_NO_COLOR=1 vagrant service-manager install-cli kubernetes | tr -d '\r')"

Run the following command to configure your shell:

$ eval "$(VAGRANT_NO_COLOR=1 vagrant service-manager install-cli kubernetes | tr -d '\r')"

2. Enable access to the kubernetes server that runs inside of the ADB

Run the following command to display the environment variable for kubernetes:

$ vagrant service-manager env kubernetes
# Set the following environment variables to enable access to the
# kubernetes server running inside of the vagrant virtual machine:
export KUBECONFIG=/home/budhram/.vagrant.d/data/service-manager/kubeconfig

# run following command to configure your shell:
# eval "$(vagrant service-manager env kubernetes)"

Run the following command to configure your shell:

$ eval "$(vagrant service-manager env kubernetes)"

For a full list of changes in version 1.3.0, see the release log.

Creating OCI configurations with the ocitools generate library

OCI runc is a cool new tool for running containers on Linux machines. It follows the OCI container runtime specification. As of docker-1.11 it is the main mechanism that docker uses for launching containers.

The really cool thing is that you can use runc without even using docker. First you create a rootfs on your disk: a directory that includes all of your software and usually follows the basic layout of /. There are several tools that can create a rootfs, including dnf or the atomic command. Once you have a rootfs, you need to create a config.json file which runc will read. config.json holds all of the specifications for running a container: things like which namespaces to use, which capabilities to give the container, and what runs as pid 1 inside it. It is somewhat similar to the output of docker inspect.
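To make that concrete, here is a rough sketch of the bare workflow with a recent runc; the dnf line is just one way to build a rootfs, and runc spec emits a default config.json that you would normally go on to edit:

# Build a minimal rootfs, generate a default config.json, and run the container.
$ mkdir -p mycontainer/rootfs
$ sudo dnf install -y --installroot=$PWD/mycontainer/rootfs --releasever=24 bash coreutils
$ cd mycontainer
$ runc spec          # writes a default config.json in the current directory
$ sudo runc run demo # runs the container described by config.json and rootfs/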

Creating and editing the config.json is not for the faint of heart, so we developed a command line tool called ocitools generate that can do the hard work of creating the config.json file.

Creating OCI Configurations

This post will guide you through the steps of creating OCI configurations using the ocitools generate library for the Go programming language.

There are four steps to create an OCI configuration using the ocitools generate library:

  1. Import the ocitools generate library into your project;
  2. Create an OCI specification generator;
  3. Modify the specification by calling different methods of the specification generator;
  4. Save the specification.

Download and Get Involved with Fedora Atomic 24

This week, the Fedora Project released updated images for its Fedora 24-based Atomic Host. Fedora Atomic Host is a leading-edge operating system designed around Kubernetes and Docker containers.

Fedora Atomic Host images are updated roughly every two weeks, rather than on the main six-month Fedora cadence. Because development is moving quickly, only the latest major Fedora release is supported.

Note: Due to an issue with the image-building process, the current Fedora Atomic Host images include an older version of the system tree. Be sure to run atomic host upgrade to get the latest set of components. The next two-week media refresh will include an up-to-date tree.
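Upgrading is a one-liner followed by a reboot, and the previous deployment stays available if you need to roll back:

# Pull the newest tree and reboot into it.
$ sudo atomic host upgrade
$ sudo systemctl reboot
# If something goes wrong, the prior deployment is one command away:
$ sudo atomic host rollback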

Working with Containers' Images Made Easy Part 1: skopeo

This is the first part of a series of posts about container images. In this first part we’re going to focus on skopeo.

Back in March, I published a post about skopeo, a tiny new binary that helps people interact with Docker registries. So far its job has been limited to inspecting images on remote registries (skopeo is Greek for “to look at” or “observe”), as opposed to docker inspect, which only works on locally pulled images.
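For example, you can read an image’s manifest and metadata straight from Docker Hub without pulling it first (the image name here is arbitrary):

# Query a remote registry for an image's metadata, no docker pull needed.
$ skopeo inspect docker://docker.io/library/fedora:latest
# docker inspect, by contrast, only works once the image is local:
$ docker pull fedora:latest && docker inspect fedora:latest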

Client Binary Installation Now Included in the ADB

As part of the effort to continually improve the developer experience and make getting started easier, the ADB now supports client binary downloads. These downloads are facilitated by a new feature in ‘vagrant-service-manager’, the install-cli command.

The vagrant-service-manager plugin enables easier access to the features and services provided by the Atomic Developer Bundle (ADB). More information can be found in the README of the ‘vagrant-service-manager’ repo.

The install-cli command was released as part of ‘vagrant-service-manager’ version 1.2.0. This command installs the client binary for services provided by the ADB. Today it can download client binaries for docker and OpenShift. This feature lets developers be confident they have the right client for the ADB services they are using.
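Usage mirrors the install-cli kubernetes example shown above; the service name selects which client binary is fetched (a sketch, assuming the other services follow the same syntax):

# Install the docker and OpenShift client binaries provided by the ADB.
$ vagrant service-manager install-cli docker
$ vagrant service-manager install-cli openshift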
