Blog posts

Running Syslog Within a Docker Container

Recently I received a bug report on Docker complaining about using rsyslogd within a container.

The user ran a RHEL7 container, installed rsyslog, started the daemon, sent a message with logger, and nothing happened.

# docker run -it --rm rhel /bin/bash
# yum -y install rsyslog
# /usr/sbin/rsyslogd
# logger "this is a test"

No message showed up in /var/log/messages within the container, or on the host machine for that matter.

The user then noticed that /dev/log, the socket logger writes to, did not exist within the container. The user thought this was a bug.

The problem was that in RHEL7 and Fedora we now use journald, which listens on /dev/log for incoming messages. In RHEL7 and Fedora, rsyslog actually reads messages from the journal via its API by default.

But not all Docker containers run systemd and journald (most don’t). In order to get rsyslogd to work the way the user wanted, he would have to modify the configuration file, /etc/rsyslog.conf:

  • In /etc/rsyslog.conf remove $ModLoad imjournal.
  • Set $OmitLocalLogging to off.
  • Make sure $ModLoad imuxsock is present.
  • Also comment out: $IMJournalStateFile imjournal.state.
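Taken together, the relevant lines in /etc/rsyslog.conf would end up looking something like this (a sketch; directive placement varies between rsyslog versions):

```
$ModLoad imuxsock          # listen on /dev/log inside the container
$OmitLocalLogging off
#$ModLoad imjournal        # no journald in the container, so don't read the journal
#$IMJournalStateFile imjournal.state
```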

After making these changes rsyslogd will start listening on /dev/log within the container and the logger messages will get accepted by rsyslogd and written to /var/log/messages within the container.

If you wanted the log messages to go to the host logger, you could volume-mount /dev/log into the container.

# docker run -v /dev/log:/dev/log -it --rm rhel /bin/bash
# logger "this is a test"

The message should show up in the host’s journal (check with journalctl), and if you are running rsyslog on the host, the message should also end up in /var/log/messages.

Keeping Up with Docker Security

I’ve been working on the Project Atomic team at Red Hat on security for Docker containers. In order to get the word out, I have been writing a series of blog posts on Docker security for OpenSource.com. I’ve written two so far, and hope to have the third done soon.

The first post covers the fact that a privileged process within a container is, from a security point of view, the same as a privileged process outside of a container. The idea I am trying to get across is to set up your application services the same way inside of containers as out, and don’t rely on container technology to protect you.

The second post covers everything that has been put into Docker to try to control the privileged and unprivileged processes within a container. We have things like read-only file systems, dropped capabilities, SELinux, control over device nodes, and so on. The cool part is that this adds a lot of nice new security around the containerized service, but (see “Are Docker containers really secure?”) you still want to use only trusted applications and drop privileges as quickly as possible.

The last post on OpenSource.com will cover the next group of features we want to add to Docker to make it more secure.

After publishing the first two articles SDTimes contacted me to do an interview on Docker Security, which they published today as “How Red Hat and the open-source community are fortifying Docker”.

Finally, the presentation I gave at DockerCon discussing Docker and SELinux is available on YouTube. Continue watching here for additional Docker security information!

CentOS Docker Images updated to 20140902

Some fresh Docker fun as we head into the weekend! The CentOS images in the Docker index have been bumped to 20140902.

Fixes

These updates bring the following fixes:

  1. Add CentOS-5 image, with SELinux patch (thanks to Dan Walsh and Miroslav Grepl!)

  2. CentOS-7 image includes a fakesystemd package instead of the distro provided systemd. This should resolve a number of the udev and/or pid-1 errors users were seeing. This package is only useful for docker, and will break other installs.

  3. Images now contain a new file, /etc/BUILDTIME, to reference when the image was created/published.

  4. Images include all package updates current as of 20140902.

More info

For detailed information or to see the code differences used in building the images, please see: https://github.com/CentOS/sig-cloud-instance-build.

Bringing new security features to Docker

In a great follow-up to my post about Jérôme Petazzoni’s presentation on Docker and security, Dan Walsh has a post up on OpenSource.com explaining just what’s being done about Docker security.

Says Dan, “Docker, Red Hat, and the open source community are working together to make Docker more secure. When I look at the security of containers, I am looking to protect the host from the processes within the container, and I’m also looking to protect containers from each other. With Docker we are using the layered security approach, which is ‘the practice of combining multiple mitigating security controls to protect resources and data.’

Basically, we want to put in as many security barriers as possible to prevent a break out. If a privileged process can break out of one containment mechanism, we want to block them with the next. With Docker, we want to take advantage of as many security mechanisms of Linux as possible.”

Read the full post over on OpenSource.com.

Is It Safe? A Look at Docker and Security from LinuxCon

Running applications in Docker is easy. Developers and users are finding this out in droves, which is why Docker is a runaway success. But is it safe? The answer seems to be a resounding “it depends,” but trending more closely to “yes” as work continues on Docker and we learn more about how to secure workloads.

Jérôme Petazzoni, “tinkerer extraordinaire” at Docker, gave an excellent presentation at LinuxCon in Chicago that addressed the safety of running applications in Linux containers. (The presentation from SlideShare is embedded below.)

The short answer, in absolute terms, is “no” if you depend solely on Docker to ensure security. As Dan Walsh says (and Petazzoni pointed out) “containers do not contain.”

Currently, if you have root in a container, you potentially can have root on the entire box. Petazzoni suggests that there are a few solutions to that problem:

  • Don’t give root
  • If the application “needs” root, give “looks-like-root”
  • If that’s not sufficient, “give root, but build another wall”

Threat Models and Docker

Petazzoni then ran through different use cases / threat models that you might run into with Docker and fixes for the threats they may pose. For instance, if you’re worried about normal apps escalating from non-root to root, “defang” SUID binaries by removing the SUID bit and/or mount filesystems with nosuid. Worried about applications “leaking” to another container? Use user namespaces to map UIDs to different UIDs outside the container (e.g. UID 1000 in the container is 14298 outside).
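As a quick illustration of the SUID “defanging” idea, here's a sketch using a scratch file as a stand-in for a real setuid binary like /usr/bin/passwd:

```shell
# Create a scratch file and give it the SUID bit, as a stand-in for
# a setuid binary
f=$(mktemp)
chmod 4755 "$f"
ls -l "$f" | cut -c1-10    # permissions read -rwsr-xr-x: the 's' is the SUID bit
# "Defang" it by dropping the SUID bit
chmod u-s "$f"
ls -l "$f" | cut -c1-10    # now -rwxr-xr-x
rm -f "$f"
```

Mounting the filesystem with nosuid accomplishes the same thing wholesale, since the kernel then ignores SUID bits on it entirely.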

Petazzoni continued with examples of potential fixes for scenarios where Docker might be attacked, up to situations where one might want to run kernel drivers or network stacks in Docker. His response? “Please stop trying to shoot yourself in the foot safely.” (In other words, anything that requires control over hardware isn’t going to be more secure in a container!)

You can, of course, get crazy and run Docker-within-Docker by using KVM within a container. But then again, maybe everything doesn’t need to be containerized.

One area that Petazzoni didn’t mention during the initial talk is image signing. Right now, a lot of people are sharing Docker images without signing to ensure that the code you’re running in a container actually is what was originally supplied or is actually from the source it purports to be from. This is a major concern, and Petazzoni says signing will be addressed eventually.

With some caveats, though, the security picture for Docker is pretty good – but not yet perfect. So it goes. At the rate Docker is improving, we’ll see many of the issues that Petazzoni discussed addressed by this time next year. And, in many cases, there are already workarounds.

The presentation (below) is well worth skimming through. Overall, Petazzoni delivered a great presentation – to a packed room, I might add. Interest in Docker at LinuxCon was quite high (not surprisingly). Last year, I recall Docker being discussed at LinuxCon but with little indication of how important it would be this year. Should be interesting to look back next year to see where we were in mid-2014 and how far it’s come.

If you’re interested in all things Docker, you probably want to follow Petazzoni on Twitter at @jpetazzo.

CentOS 7 Alpha Builds for Atomic

Yesterday, Karanbir Singh announced an alpha-quality build of CentOS 7 Atomic that’s suitable for developing rpm-ostree tools and helping the SIG get started.

As KB points out, the images contain unsigned content that’s produced outside the CentOS.org build system. You should be able to run Docker containers just fine, but it doesn’t yet include Cockpit or Kubernetes packages.

Also, there’s not an upstream ostree repo yet, but KB plans to set up a repo under cloud.centos.org soon. Even better, he plans to start running builds every two days as the content stabilizes, and eventually get the builds up on CentOS.org.

Please give it a whirl, though, and report any problems found to the CentOS-devel mailing list.

Build Your Own Atomic Image, Updated

When Project Atomic got off the ground in April, I wrote a blog post about how anyone could Build Your Own Atomic host, based on Fedora 20. Since that time, there have been some changes in the rpm-ostree tooling used to produce these images.

What’s more, there’s a new distro on the block, CentOS 7, that you may wish to build into an Atomic host. Part of what’s great about the Atomic model is the way it can apply to different distributions. Here’s our chance to play with that.

The tooling around creating Atomic images is still in flux, and will continue to change (for the better). For now, though, here’s an updated guide to building your own Atomic host(s), based on Fedora 20 or on CentOS 7.

First, build and configure the builder:

Install Fedora 20 (CentOS 7 can work, too, with some tweaking, but here I’m sticking with Fedora). You can build trees and images for Fedora or CentOS from the same builder.

Disable SELinux by changing SELINUX=enforcing to SELINUX=disabled in /etc/selinux/config, and then run systemctl reboot to complete the change. While we’re never happy about disabling SELinux, it’s necessary (for now) to disable it on your builder in order to enable it on the Atomic instances you build.

The rpm-ostree commands below need to be run as root or with sudo, but for some reason the image-building part of the process only works for me when running as root (not via sudo), so I log in as root and work in /root.

# yum install -y git
# git clone https://github.com/jasonbrooks/byo-atomic.git
# mv byo-atomic/walters-rpm-ostree-fedora-20-i386.repo /etc/yum.repos.d/
# yum install -y rpm-ostree rpm-ostree-toolbox nss-altfiles yum-plugin-protectbase httpd

Now, edit /etc/nsswitch.conf, changing the lines passwd: files and group: files to passwd: files altfiles and group: files altfiles.
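If you’d rather script that edit, a sed one-liner along these lines should work (shown here against a scratch copy so nothing on your system is touched; point it at /etc/nsswitch.conf on your builder):

```shell
# Build a scratch stand-in for /etc/nsswitch.conf
conf=$(mktemp)
printf 'passwd:     files\ngroup:      files\n' > "$conf"
# Append "altfiles" to the passwd and group lines, preserving spacing
sed -i -e 's/^\(passwd:[[:space:]]*files\)$/\1 altfiles/' \
       -e 's/^\(group:[[:space:]]*files\)$/\1 altfiles/' "$conf"
cat "$conf"
rm -f "$conf"
```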

Then, edit /etc/libvirt/qemu.conf to uncomment the line user = "root" and systemctl restart libvirtd.

Now, we’ll set up a repository from which our eventual Atomic hosts will fetch upgrades:

# mkdir -p /srv/rpm-ostree/repo && cd /srv/rpm-ostree/ && ostree --repo=repo init --mode=archive-z2
# cat > /etc/httpd/conf.d/rpm-ostree.conf <<EOF
DocumentRoot /srv/rpm-ostree
<Directory "/srv/rpm-ostree">
Options Indexes FollowSymLinks
AllowOverride None
Require all granted
</Directory>
EOF
# systemctl daemon-reload &&
systemctl enable httpd &&
systemctl start httpd &&
systemctl reload httpd &&
firewall-cmd --add-service=http &&
firewall-cmd --add-service=http --permanent

Next, build the Atomic host:

The *.json files in the c7 and f20 directories contain the definitions for these Atomic hosts. The *-atomic-base.json file contains the list of repositories to include. The git repo I’ve pointed to includes the *.repo files you need. If you wish to add others, put them in the c7 or f20 folder and reference them in centos-atomic-base.json or fedora-atomic-base.json.

The *-atomic-server-docker-host.json files pull in the base json files, and add additional packages. To add or remove packages, edit fedora-atomic-server-docker-host.json or centos-atomic-server-docker-host.json.
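As a rough sketch of that layering (field names from rpm-ostree’s treefile format; the package names here are illustrative and the actual files in the repo are more involved), the docker-host treefile pulls in the base file and adds packages:

```json
{
    "include": "fedora-atomic-base.json",
    "packages": ["docker-io", "vim-minimal"]
}
```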

For CentOS 7:

# cd /root/byo-atomic/c7
# rpm-ostree compose tree --repo=/srv/rpm-ostree/repo centos-atomic-server-docker-host.json
# rpm-ostree-toolbox create-vm-disk /srv/rpm-ostree/repo centos-atomic-host centos-atomic/7/x86_64/server/docker-host c7-atomic.qcow2

For Fedora 20:

# cd /root/byo-atomic/f20
# rpm-ostree compose tree --repo=/srv/rpm-ostree/repo fedora-atomic-server-docker-host.json
# rpm-ostree-toolbox create-vm-disk /srv/rpm-ostree/repo fedora-atomic-host fedora-atomic/20/x86_64/server/docker-host f20-atomic.qcow2

After you’ve created your image(s), future runs of the rpm-ostree compose tree command will add updated packages to your repo, which you can pull down to an Atomic instance. For more information on updating, see “Configuring your Atomic instance to receive updates,” below.

Converting images to .vdi (if desired)

These scripts produce qcow2 images, which are ready to use with OpenStack or with virt-manager/virsh. To produce *.vdi images, use qemu-img to convert:

qemu-img convert -f qcow2 c7-atomic.qcow2 -O vdi c7-atomic.vdi

How to log in?

Your Atomic images will be born with no root password, so you’ll need to supply a password or SSH key via cloud-init in order to log in. If you’re using a virtualization application without cloud-init support, such as virt-manager or VirtualBox, you can create a simple iso image to provide a key or password to your image when it boots.

To create this iso image, you must first create two text files.

Create a file named “meta-data” that includes an “instance-id” name and a “local-hostname.” For instance:

instance-id: Atomic0
local-hostname: atomic-00

The second file is named “user-data,” and includes password and key information. For instance:

#cloud-config
password: atomic
chpasswd: {expire: False}
ssh_pwauth: True
ssh_authorized_keys:
  - ssh-rsa AAA...SDvz user1@yourdomain.com
  - ssh-rsa AAB...QTuo user2@yourdomain.com

Once you have completed your files, they need to be packaged into an ISO image. For instance:

# genisoimage -output atomic0-cidata.iso -volid cidata -joliet -rock user-data meta-data

You can boot from this iso image, and the auth details it contains will be passed along to your Atomic instance.

For more information about creating these cloud-init iso images, see http://cloudinit.readthedocs.org/en/latest/topics/datasources.html#config-drive.

Configuring your Atomic instance to receive updates

As created using these instructions, your Atomic image won’t be configured to receive updates. To configure your image to receive updates from your build machine, edit (as root) the file /ostree/repo/config and add a section like this:

[remote "centos-atomic-host"]
url=http://$YOUR_BUILD_MACHINE/repo
branches=centos-atomic/7/x86_64/server;
gpg-verify=false

Or, for Fedora:

[remote "fedora-atomic-host"]
url=http://$YOUR_BUILD_MACHINE/repo
branches=fedora-atomic/20/x86_64/server;
gpg-verify=false

With your repo configured, you can check for updates with the command sudo rpm-ostree upgrade, followed by a reboot. Don’t like the changes? You can roll back with rpm-ostree rollback, followed by another reboot.

Till Next Time

If you run into trouble following this walkthrough, I’ll be happy to help you get up and running or get pointed in the right direction. Ping me at jbrooks in #atomic on freenode irc or @jasonbrooks on Twitter. Also, be sure to check out the Project Atomic Q&A site.

Cockpit Roadmap and Contributing

These days it’s easier than ever to contribute to Cockpit. Here’s how.

Make sure you have it installed and running. Then check out the cockpit sources and link the modules directory into your home directory.

$ git clone https://github.com/cockpit-project/cockpit.git
$ mkdir -p ~/.local/share
$ ln -snf $(pwd)/cockpit/modules ~/.local/share/cockpit

Now log into Cockpit with your own user login. Any changes you make to the JavaScript or HTML in the modules subdirectory of your cockpit checkout should be visible immediately after a refresh.

If you want to hack on other parts of Cockpit, such as the backend, there’s a handy guide here:

https://github.com/cockpit-project/cockpit/blob/master/HACKING.md

You can file issues you run into here:

https://github.com/cockpit-project/cockpit/issues/new

And finally you can see what we’re working on at our Trello board:

https://trello.com/b/mtBhMA1l/cockpit

Have fun!

Upstream Atomic: Vagrant Support for Kubernetes

One of the most interesting things about Project Atomic is how much work is going on, even as the project seems to be standing still. After the discussions Joe and I have had at OSCON this past week, I can safely say the work around containers is moving so fast that it almost seems that if you blink you will miss it.

Atomic is not the usual open source project, in that there’s not really code to download and install as a separate package. Rather, Project Atomic is a combination of many upstream projects that will be integrated within CentOS and Fedora. And, of course, Red Hat plans to build and distribute its own Red Hat Enterprise Linux Atomic Host.

Because Atomic’s small but growing community is using upstream projects like Apache Mesos, Google’s Kubernetes, and Docker, community members are submitting new code and features to those projects on almost a daily basis.

Case in point: yesterday Red Hat’s Derek Carr let us know that a new feature he was working on for Kubernetes had been merged into that project: the capability to manage Vagrant clusters with Kubernetes.

Kubernetes is just one of the orchestration tools that will be included in Atomic for container management, and the inclusion of Vagrant support is a key move to get more developer involvement. While developers have long coded Linux applications, many programmers prefer Apple’s hardware for their needs. Vagrant is a very useful tool enabling them to have the best of both worlds.

Specifically, Kubernetes users will be able to spin up a local Vagrant cluster of Fedora machines running a single master with N minions. Kubernetes will reuse existing Salt configuration scripts to provision master and minions. Carr has also added support to run on Red Hat-based operating systems, where systemd manages installed services.
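If I’m reading the Kubernetes cluster scripts right, spinning this up looks something like the following sketch (the NUM_MINIONS variable name is my assumption; the kube-up.sh line is commented out since it needs Vagrant and a Kubernetes checkout to actually run):

```shell
# Select the Vagrant provider for Kubernetes' cluster scripts
export KUBERNETES_PROVIDER=vagrant
# Ask for three minions alongside the single master (variable name assumed)
export NUM_MINIONS=3
# From the root of a Kubernetes checkout, this would bring the cluster up:
# ./cluster/kube-up.sh
echo "provider: $KUBERNETES_PROVIDER, minions: $NUM_MINIONS"
```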

Carr has tested this on Vagrant 1.6.2, and it is recommended that users who want to test this feature use this version or higher of Vagrant. Head on over to GitHub and test this new feature today.