Blog posts

Running a Containerized Cockpit UI from Cloud-init

Fedora 22’s Atomic Host dropped most of the packages for Cockpit, the web-based server UI, from its system tree in favor of a containerized deployment approach. Matt Micene blogged about running cockpit-in-a-container with systemd, but people have expressed interest in learning how to start this container automatically with cloud-init.

cloud-init and cockpit

Referencing the sample cockpitws.service file from Matt’s post, and using cloud-init’s write_files cloud-config functionality, I started out with this service file:
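The excerpt cuts off before the file itself; as a rough sketch of the shape that cloud-config takes (the unit body below is modeled on the cockpit/ws container’s documented atomic-run invocation, and is an assumption rather than Matt’s exact file):

#cloud-config
write_files:
  - path: /etc/systemd/system/cockpitws.service
    content: |
      [Unit]
      Description=Cockpit Web Service Container
      Requires=docker.service
      After=docker.service

      [Service]
      ExecStartPre=-/usr/bin/docker rm -f %p
      ExecStart=/usr/bin/docker run --rm --name %p --privileged --pid=host -v /:/host cockpit/ws /container/atomic-run --local-ssh
      ExecStop=-/usr/bin/docker rm -f %p

      [Install]
      WantedBy=multi-user.target

runcmd:
  - [ systemctl, daemon-reload ]
  - [ systemctl, enable, cockpitws.service ]
  - [ systemctl, start, cockpitws.service ]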

Deploy Kubernetes with a Single Command Using Atomicapp

Kubernetes, the open source orchestration system for Docker containers, is a fast-moving project that can be somewhat complicated to install and configure, especially if you’re just getting started with it.

Fortunately, the project maintains some really well-done getting started guides, the simplest of which steps you through running Kubernetes, in Docker containers, on a single host.

The up-and-running part of the walkthrough amounts to issuing just three docker run commands:

# docker run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
# docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
# docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2

Now, this isn’t as simple as rattling off a single command from memory, but we can’t have everything…

…or can we?

Through the magic of a couple of tools we’ve been working on here at Project Atomic, we can get up and running with the upstream Kubernetes project’s containerized install method using a single command like this:

# atomic run jasonbrooks/kubernetes-atomicapp

Why we don't let non-root users run Docker in CentOS, Fedora, or RHEL

I often get bug reports from users asking “why can’t I use docker as a non-root user, by default?”

Docker has the ability to change the group ownership of /run/docker.sock to the docker group, with permissions of 660. This would allow users added to the docker group to run Docker containers without having to execute sudo or su to become root. Sounds great…
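The excerpt stops right at the catch, but the crux can be sketched with a single hypothetical command: access to the Docker socket is effectively root access on the host, because any docker-group member can bind-mount and modify the host’s root filesystem without sudo:

$ docker run --rm -v /:/host busybox sh -c 'echo insecure >> /host/etc/motd'

(The `$` prompt is the point: a docker-group member runs this as an ordinary user, yet writes to a root-owned file on the host.)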

El-Deko - Why Containers Are Worth the Hype

Video above from Kubernetes 1.0 Launch event at OSCON

In the video above, I attempted to put Red Hat’s container efforts into a bit of context, especially with respect to our history of Linux platform development. Having now watched it back (they forced me to watch!), I thought it would be good to expound on what I discussed in the video.

Admit it, you’ve read one of the umpteen millions of articles breathlessly talking about the new Docker/Kubernetes/Flannel/CoreOS/whatever hotness and thought to yourself, “Wow, is this stuff overhyped.” There is some truth to that knee-jerk reaction, and the buzzworthiness of all things container-related should give one pause - “It’s turt^H^H^H^Hcontainers all the way down!”

Testing Nulecule on Debian

Unless you’ve recently returned from a sabbatical year in a remote monastery with no internet, you know that Containers have arrived, and it’s a whole new world.

I’ll save you five minutes of reading, and 90 minutes of watching Disney’s Aladdin, and assume you know about containers. If not, take a look at Docker, rkt and the Open Container Project. For bonus points, watch How Docker Didn’t Invent Containers from the First Docker Meetup in my adopted hometown of Brno, Czech Republic. When you’re done singing the fantastic Disney songs, come back. I’ll wait.

Follow us, the Nulecule has moved!

The past weeks have been packed with preparations for Red Hat Summit 2015 and getting Atomic App and the Nulecule Specification into good shape. Now that we have finished that, we put a new release process in place and found a new home for the normative Nulecule Specification documents.

Additionally, the first extension of the Nulecule Specification has been started!

What are Docker <none>:<none> images?

Over the last few days, I have spent some time playing around with Docker’s <none>:<none> images. I’m writing this post to explain how they work and how they affect Docker users. This article will try to address questions like:

  1. What are <none>:<none> images?
  2. What are dangling images?
  3. Why do I see a lot of <none>:<none> images when I do docker images -a?
  4. What is the difference between docker images and docker images -a?

Before I start answering these questions, let’s take a moment to remember that there are two kinds of <none>:<none> images, the good and the bad.
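As a quick preview of the difference, using Docker’s standard CLI flags:

# docker images
# docker images -a
# docker images -f dangling=true
# docker rmi $(docker images -f dangling=true -q)

The first lists tagged images plus any dangling <none>:<none> images; the second adds the intermediate <none>:<none> layers underneath every image; the last two show and remove only the dangling ones.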

Docker, CentOS 6, and You

Recently, I blogged about docker-on-loopback-storage woes and workarounds – a topic that came up during several conversations I had at last month’s Dockercon. Another frequently-discussed item from the conference involved Docker on CentOS 6, and whether and for how long users can count on running this combination.

Docker and CentOS 6 have never been a terrific fit, which shouldn’t be surprising considering that the kernel version CentOS 6 ships was first released over three years before Docker’s first public release (0.1.0). The OS and kernel version you use matter a great deal, because with Docker, that’s where all your contained processes run.

With a hypervisor such as KVM, it’s not uncommon or problematic for an elder OS to host, through the magic of virtualization, all manner of bleeding-edge software components. In fact, if you’re attached to CentOS 6, virtualization is a solid option for running containers in a more modern, if virtual, host.

Project Atomic at ContainerCon

Attending ContainerCon in Seattle this year? Co-located with CloudOpen and LinuxCon, ContainerCon is focused on bringing contributors working with containers, the Linux kernel, and other components together to continue improving the Linux container ecosystem.

As you might expect, there are quite a few talks on the schedule related to Project Atomic or components important to Atomic (like Kubernetes). Here’s a sample of talks you might want to plan on seeing:

Creating a Simple Bare Metal Atomic Host Cluster

Atomic Host is a great technology for containerized applications. I like it especially on bare metal machines. In this post I will describe how to set up a simple Do-It-Yourself cluster consisting of three netbooted machines running Docker over flannel. Flannel provides a NAT-less private network overlay; through that network, application containers can easily reach any other container within the cluster, regardless of which machine they run on.

We use three machines called a1, a2, and a3. Let’s assign them static IP addresses.

  • a1: 192.168.99.51
  • a2: 192.168.99.52
  • a3: 192.168.99.53

We install the Atomic Host OS on these machines via netboot from another host. Let’s call that host boothost; it holds all installation and configuration files. We set up an unattended installation and configuration using kickstart and cloud-init.
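One concrete piece of that configuration: flannel reads its overlay network settings from etcd under its default key, so boothost (or one of the nodes) has to publish them before flanneld starts on a1, a2, and a3. The subnet and backend below are example values, not necessarily the ones used in this cluster:

# etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16", "Backend": { "Type": "vxlan" } }'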