<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Project Atomic</title>
    <description>Write an awesome description for your new site here. You can edit this line in _config.yml. It will appear in your document head meta (for Google search results) and in your feed.xml site description.
</description>
    <link>http://garrettlesage.com/jekyll-springboard-atomic/</link>
    <atom:link href="http://garrettlesage.com/jekyll-springboard-atomic/feed.xml" rel="self" type="application/rss+xml"/>
    <pubDate>Thu, 27 Oct 2016 13:19:30 +0000</pubDate>
    <lastBuildDate>Thu, 27 Oct 2016 13:19:30 +0000</lastBuildDate>
    <generator>Jekyll v3.2.1</generator>
    
      <item>
        <title>Running Kubernetes and Friends in Containers on CentOS Atomic Host</title>
        <description>&lt;p&gt;The &lt;a href=&quot;http://www.projectatomic.io/docs/introduction/&quot;&gt;atomic hosts&lt;/a&gt; from CentOS and Fedora earn their “atomic” namesake by providing for atomic, image-based system updates via rpm-ostree, and atomic, image-based application updates via docker containers.&lt;/p&gt;

&lt;p&gt;This “system” vs “application” division isn’t set in stone, however. There’s room for system components to move across from the &lt;a href=&quot;http://www.projectatomic.io/blog/2016/07/hacking-and-extending-atomic-host/&quot;&gt;somewhat&lt;/a&gt; rigid world of ostree commits to the freer-flowing container side.&lt;/p&gt;

&lt;p&gt;In particular, the key atomic host components involved in orchestrating containers across multiple hosts, such as flannel, etcd and kubernetes, could run instead in containers, making life simpler for those looking to test out newer or different versions of these components, or to swap them out for alternatives.&lt;/p&gt;

&lt;p&gt;The &lt;a href=&quot;https://wiki.centos.org/SpecialInterestGroup/Atomic/Devel&quot;&gt;devel tree&lt;/a&gt; of CentOS Atomic Host, which features a trimmed-down system image that leaves out kubernetes and related system components, is a great place to experiment with alternative methods of running these components, and swapping between them.&lt;/p&gt;

&lt;p&gt;READMORE&lt;/p&gt;

&lt;h2 id=&quot;system-containers&quot;&gt;System Containers&lt;/h2&gt;

&lt;p&gt;Running system components in docker containers can be tricky, because these containers aren’t automatically integrated with systemd, like other system services, and because some components, such as flannel, need to modify docker configs, resulting in a bit of a chicken-and-egg situation.&lt;/p&gt;

&lt;p&gt;One solution is &lt;a href=&quot;http://www.projectatomic.io/blog/2016/09/intro-to-system-containers/&quot;&gt;system containers for atomic&lt;/a&gt;, which can be run independently from the docker daemon. &lt;a href=&quot;https://twitter.com/gscrivano&quot;&gt;Giuseppe Scrivano&lt;/a&gt; has built example containers &lt;a href=&quot;https://hub.docker.com/r/gscrivano/flannel/&quot;&gt;for flannel&lt;/a&gt; and &lt;a href=&quot;https://hub.docker.com/r/gscrivano/etcd/&quot;&gt;for etcd&lt;/a&gt;, and in this post, I’ll be using system containers to run flannel and etcd on my atomic hosts.&lt;/p&gt;

&lt;p&gt;You need a very recent version of the &lt;code class=&quot;highlighter-rouge&quot;&gt;atomic&lt;/code&gt; command. I used a pair of CentOS Atomic Hosts running the &lt;a href=&quot;https://wiki.centos.org/SpecialInterestGroup/Atomic/Devel&quot;&gt;“continuous”&lt;/a&gt; stream.&lt;/p&gt;

&lt;p&gt;The master host needs etcd and flannel:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# atomic install --system gscrivano/etcd

# systemctl start etcd
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;With etcd running, we can use it to configure flannel:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# runc exec etcd etcdctl set /atomic.io/network/config '{&quot;Network&quot;:&quot;172.17.0.0/16&quot;}'

# atomic install --system gscrivano/flannel

# systemctl start flannel
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The worker node needs flannel as well:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# export MASTER_IP=YOUR-MASTER-IP

# atomic install --system --set FLANNELD_ETCD_ENDPOINTS=http://$MASTER_IP:2379 gscrivano/flannel

# systemctl start flannel
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;On both the master and the worker, we need to make docker use flannel:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# echo &quot;/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker&quot; | runc exec flannel bash
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Also on both hosts, we need this docker tweak (&lt;a href=&quot;https://github.com/kubernetes/kubernetes/issues/4869&quot;&gt;because of this&lt;/a&gt;):&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# cp /usr/lib/systemd/system/docker.service /etc/systemd/system/

# sed -i s/MountFlags=slave/MountFlags=/g /etc/systemd/system/docker.service

# systemctl daemon-reload

# systemctl restart docker
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;On both hosts, some context tweaks to make SELinux happy:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# mkdir -p /var/lib/kubelet/

# chcon -R -t svirt_sandbox_file_t /var/lib/kubelet/

# chcon -R -t svirt_sandbox_file_t /var/lib/docker/
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h2 id=&quot;kube-in-containers&quot;&gt;Kube in Containers&lt;/h2&gt;

&lt;p&gt;With etcd and flannel in place, we can proceed with running kubernetes. We can run each of the kubernetes master and worker components in containers, using the rpms available in the CentOS repositories. Here I’m using containers built from &lt;a href=&quot;https://github.com/jasonbrooks/CentOS-Dockerfiles/tree/pr-kubernetes/kubernetes&quot;&gt;these dockerfiles&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;on-the-master&quot;&gt;On the master&lt;/h3&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# export MASTER_IP=YOUR-MASTER-IP

# docker run -d --net=host jasonbrooks/kubernetes-apiserver:centos --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota --address=0.0.0.0 --insecure-bind-address=0.0.0.0

# docker run -d --net=host --privileged jasonbrooks/kubernetes-controller-manager:centos

# docker run -d --net=host jasonbrooks/kubernetes-scheduler:centos
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h3 id=&quot;on-the-worker&quot;&gt;On the worker&lt;/h3&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# export WORKER_IP=YOUR-WORKER-IP

# atomic run --opt3=&quot;--master=http://$MASTER_IP:8080&quot; jasonbrooks/kubernetes-proxy:centos

# atomic run --opt1=&quot;-v /etc/kubernetes/manifests:/etc/kubernetes/manifests:ro&quot; --opt3=&quot;--address=$WORKER_IP --config=/etc/kubernetes/manifests --hostname_override=$WORKER_IP --api_servers=http://$MASTER_IP:8080 --cluster-dns=10.254.0.10 --cluster-domain=cluster.local&quot; jasonbrooks/kubernetes-kubelet:centos
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h3 id=&quot;get-matching-kubectl&quot;&gt;Get matching kubectl&lt;/h3&gt;

&lt;p&gt;I like to test things out from my master node. We can grab the version of the kubernetes command line client, kubectl, that matches our rpms by extracting it from the CentOS rpm.&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# rpm2cpio http://mirror.centos.org/centos/7/extras/x86_64/Packages/kubernetes-client-1.2.0-0.13.gitec7364b.el7.x86_64.rpm | cpio -iv --to-stdout &quot;./usr/bin/kubectl&quot; &amp;gt; /usr/local/bin/kubectl &amp;amp;&amp;amp; chmod +x /usr/local/bin/kubectl
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;We can then use kubectl to check on the status of our node(s):&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# kubectl get nodes

NAME            STATUS    AGE
10.10.171.216   Ready     3h
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h3 id=&quot;dns-addon&quot;&gt;DNS Addon&lt;/h3&gt;

&lt;p&gt;The guestbookgo sample app that I like to use for testing requires the dns addon, so I always ensure that it’s installed. Here I’m following the directions from the docker-multinode page in the kube-deploy repository:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# curl -O https://raw.githubusercontent.com/kubernetes/kube-deploy/master/docker-multinode/skydns.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;I skipped over setting up certificates for this cluster, so in order for the dns addon to work, the kube2sky container in the dns pod needs the argument &lt;code class=&quot;highlighter-rouge&quot;&gt;--kube-master-url=http://$MASTER_IP:8080&lt;/code&gt;. Also, I’m changing the &lt;code class=&quot;highlighter-rouge&quot;&gt;clusterIP&lt;/code&gt; to &lt;code class=&quot;highlighter-rouge&quot;&gt;10.254.0.10&lt;/code&gt; to match the default from the rpms.&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# vi skydns.yaml

...

      - name: kube2sky
        image: gcr.io/google_containers/kube2sky-ARCH:1.15
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        args:
        # command = &quot;/kube2sky&quot;
        - --domain=cluster.local
        - --kube-master-url=http://$MASTER_IP:8080
...

spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.10

&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Now to start up the dns addon:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# kubectl create namespace kube-system

# export ARCH=amd64

# sed -e &quot;s/ARCH/${ARCH}/g;&quot; skydns.yaml | kubectl create -f -
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h2 id=&quot;test-it&quot;&gt;Test it&lt;/h2&gt;

&lt;p&gt;It takes a few minutes for all the containers to get up and running. Once they are, you can start running kubernetes apps. I typically test with the &lt;a href=&quot;https://github.com/projectatomic/nulecule-library/tree/master/guestbookgo-atomicapp&quot;&gt;guestbookgo atomicapp&lt;/a&gt;:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# atomic run projectatomic/guestbookgo-atomicapp
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Wait a few minutes, until &lt;code class=&quot;highlighter-rouge&quot;&gt;kubectl get pods&lt;/code&gt; tells you that your guestbook and redis pods are running, and then:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# kubectl describe service guestbook | grep NodePort
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Visiting the &lt;code class=&quot;highlighter-rouge&quot;&gt;NodePort&lt;/code&gt; returned above at either my master or worker IP (these kube scripts configure both to serve as workers) gives me this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;images/guestbook-ftw.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
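If you want to script that check rather than read the port off by hand, the number can be extracted from the kubectl output. The sample line below only mimics the format that &lt;code class=&quot;highlighter-rouge&quot;&gt;kubectl describe service&lt;/code&gt; prints (the field positions are an assumption; verify against your kubectl version), and on a live cluster you would pipe the real command instead:

```shell
# Hypothetical sketch: pull the NodePort number out for use with curl.
# The sample line stands in for real `kubectl describe service` output.
sample_line='NodePort:                 30259/TCP'
NODE_PORT=$(printf '%s\n' "$sample_line" | awk '{print $2}' | cut -d/ -f1)
echo "$NODE_PORT"
# on a live cluster, something like:
#   NODE_PORT=$(kubectl describe service guestbook | awk '/NodePort:/ {print $2}' | cut -d/ -f1)
#   curl http://$MASTER_IP:$NODE_PORT/
```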

&lt;h2 id=&quot;a-newer-kube&quot;&gt;A Newer Kube&lt;/h2&gt;

&lt;p&gt;The kubernetes rpms in the CentOS repositories are currently at version 1.2. To try out the newer 1.3 release of kubernetes, you can swap in newer rpms running in Fedora Rawhide-based containers, or you can use the latest kubernetes containers provided by the upstream project.&lt;/p&gt;

&lt;h3 id=&quot;cleaning-up&quot;&gt;Cleaning up&lt;/h3&gt;

&lt;p&gt;If you’ve already configured kubernetes using the above instructions, and want to start over with a different kubernetes version, you can blow your existing cluster away (while leaving the flannel and etcd pieces in place).&lt;/p&gt;

&lt;p&gt;First, remove all the docker containers running on your master and node(s). On each machine, run:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# docker rm -f $(docker ps -a -q)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Run &lt;code class=&quot;highlighter-rouge&quot;&gt;docker ps&lt;/code&gt; to see whether any containers are left over; if there are, run the command above again until they’re all gone. Since we’re running flannel and etcd using runc, killing all our docker containers won’t affect our system containers.&lt;/p&gt;

&lt;p&gt;Next, you can clear out the etcd keys associated with the cluster by running this on your master:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# runc exec -t etcd etcdctl rm --recursive /registry
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Then, on your worker node, clean up the &lt;code class=&quot;highlighter-rouge&quot;&gt;/var/lib/kubelet&lt;/code&gt; directory:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# rm -rf /var/lib/kubelet/*
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;h2 id=&quot;containers-from-rawhide&quot;&gt;Containers from Rawhide&lt;/h2&gt;

&lt;p&gt;I mentioned that the CentOS repositories contain a v1.2 kubernetes. However, Fedora’s &lt;a href=&quot;https://fedoraproject.org/wiki/Releases/Rawhide&quot;&gt;Rawhide&lt;/a&gt; includes v1.3-based kubernetes packages. I wanted to try out the most recent rpm-packaged kubernetes, so I rebuilt the containers I used above, swapping in &lt;code class=&quot;highlighter-rouge&quot;&gt;fedora:rawhide&lt;/code&gt; for &lt;code class=&quot;highlighter-rouge&quot;&gt;centos:centos7&lt;/code&gt; in the &lt;code class=&quot;highlighter-rouge&quot;&gt;FROM&lt;/code&gt; lines of the dockerfiles.&lt;/p&gt;

&lt;p&gt;You can run the same series of commands listed above, with the container tag changed from &lt;code class=&quot;highlighter-rouge&quot;&gt;:centos&lt;/code&gt; to &lt;code class=&quot;highlighter-rouge&quot;&gt;:rawhide&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For instance:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# docker run -d --net=host jasonbrooks/kubernetes-scheduler:centos --master=http://$MASTER_IP:8080
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;becomes:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# docker run -d --net=host jasonbrooks/kubernetes-scheduler:rawhide --master=http://$MASTER_IP:8080
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h2 id=&quot;containers-from-upstream&quot;&gt;Containers from Upstream&lt;/h2&gt;

&lt;p&gt;With flannel and etcd running in system containers, and with docker configured properly, we can start up kubernetes using &lt;a href=&quot;https://github.com/kubernetes/kubernetes/tree/master/cluster/images/hyperkube&quot;&gt;the containers&lt;/a&gt; built by the upstream kubernetes project. I’ve pulled the following docker run commands from the &lt;a href=&quot;https://github.com/kubernetes/kube-deploy/tree/master/docker-multinode&quot;&gt;docker-multinode&lt;/a&gt; scripts in the kubernetes project’s kube-deploy repository.&lt;/p&gt;

&lt;p&gt;On the master:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# docker run -d \
    --net=host \
    --pid=host \
    --privileged \
    --restart=&quot;unless-stopped&quot; \
    --name kube_kubelet_$(date | md5sum | cut -c-5) \
    -v /sys:/sys:rw \
    -v /var/run:/var/run:rw \
    -v /run:/run:rw \
    -v /var/lib/docker:/var/lib/docker:rw \
    -v /var/lib/kubelet:/var/lib/kubelet:shared \
    -v /var/log/containers:/var/log/containers:rw \
    gcr.io/google_containers/hyperkube-amd64:$(curl -sSL &quot;https://storage.googleapis.com/kubernetes-release/release/stable.txt&quot;) \
    /hyperkube kubelet \
      --allow-privileged \
      --api-servers=http://localhost:8080 \
      --config=/etc/kubernetes/manifests-multi \
      --cluster-dns=10.0.0.10 \
      --cluster-domain=cluster.local \
      --hostname-override=${MASTER_IP} \
      --v=2
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;On the worker:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# export WORKER_IP=YOUR-WORKER-IP

# docker run -d \
    --net=host \
    --pid=host \
    --privileged \
    --restart=&quot;unless-stopped&quot; \
    --name kube_kubelet_$(date | md5sum | cut -c-5) \
    -v /sys:/sys:rw \
    -v /var/run:/var/run:rw \
    -v /run:/run:rw \
    -v /var/lib/docker:/var/lib/docker:rw \
    -v /var/lib/kubelet:/var/lib/kubelet:shared \
    -v /var/log/containers:/var/log/containers:rw \
    gcr.io/google_containers/hyperkube-amd64:$(curl -sSL &quot;https://storage.googleapis.com/kubernetes-release/release/stable.txt&quot;) \
    /hyperkube kubelet \
      --allow-privileged \
      --api-servers=http://${MASTER_IP}:8080 \
      --cluster-dns=10.0.0.10 \
      --cluster-domain=cluster.local \
      --hostname-override=${WORKER_IP} \
      --v=2

# docker run -d \
    --net=host \
    --privileged \
    --name kube_proxy_$(date | md5sum | cut -c-5) \
    --restart=&quot;unless-stopped&quot; \
    gcr.io/google_containers/hyperkube-amd64:$(curl -sSL &quot;https://storage.googleapis.com/kubernetes-release/release/stable.txt&quot;) \
    /hyperkube proxy \
        --master=http://${MASTER_IP}:8080 \
        --v=2
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h3 id=&quot;get-current-kubectl&quot;&gt;Get current kubectl&lt;/h3&gt;

&lt;p&gt;I usually test things out from the master node, so I’ll download the newest stable kubectl binary to there:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# curl -sSL https://storage.googleapis.com/kubernetes-release/release/$(curl -sSL &quot;https://storage.googleapis.com/kubernetes-release/release/stable.txt&quot;)/bin/linux/amd64/kubectl &amp;gt; /usr/local/bin/kubectl

# chmod +x /usr/local/bin/kubectl

# kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;From here, you can scroll up to the subhead “Test it” to run the guestbookgo app. It’s not necessary to start up the dns addon manually with these upstream containers, because this configuration starts that addon automatically.&lt;/p&gt;
</description>
        <pubDate>Thu, 15 Sep 2016 12:00:00 +0000</pubDate>
        <link>http://garrettlesage.com/jekyll-springboard-atomic/blog/2016/09/running-kubernetes-in-containers-on-atomic-html/</link>
        <guid isPermaLink="true">http://garrettlesage.com/jekyll-springboard-atomic/blog/2016/09/running-kubernetes-in-containers-on-atomic-html/</guid>
        
        <category>atomic</category>
        
        <category>kubernetes</category>
        
        <category>centos</category>
        
        <category>docker</category>
        
        <category>system containers</category>
        
        
      </item>
    
      <item>
        <title>Introduction to System Containers</title>
        <description>&lt;p&gt;As part of our effort to reduce the number of packages that are
shipped with the Atomic Host image, we faced the problem of how to
containerize services that are needed before Docker itself is running.
The result: “system containers,” a way to run containers in
production using read-only images.&lt;/p&gt;

&lt;p&gt;System containers combine several technologies: OSTree for
storage, Skopeo to pull images from a registry, runC to run the
containers, and systemd to manage their life cycle.&lt;/p&gt;

&lt;p&gt;READMORE&lt;/p&gt;

&lt;p&gt;To use system containers you must have &lt;a href=&quot;https://github.com/projectatomic/atomic&quot;&gt;Atomic CLI&lt;/a&gt; version 1.12 or later
and the &lt;a href=&quot;https://github.com/ostreedev/ostree&quot;&gt;ostree utility&lt;/a&gt; installed.  Currently, this means you must be running the
&lt;a href=&quot;/blog/2016/07/new-centos-atomic-host-releases-available-for-download/&quot;&gt;CentOS Continuous Atomic&lt;/a&gt;,
but updates for Fedora Atomic should be coming soon.&lt;/p&gt;

&lt;h1 id=&quot;pull-an-image&quot;&gt;Pull an image&lt;/h1&gt;

&lt;p&gt;An image must be present in the OSTree system repository before we can
use it as a system container.  Using Skopeo, the atomic tool can pull an
image from several sources: a registry, the local Docker engine, or a
tarball, selected according to how the image name is prefixed:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# atomic pull --storage=ostree gscrivano/etcd
Image gscrivano/etcd is being pulled to ostree ...
Pulling layer e4410b03d7db030dba502fef7bfd1dae56a6c48faae63a80fd82450322def2c5
Pulling layer 2176ad01d5670713218844201dc4edb36d2692fcc79ad7008003227a5f80097b
Pulling layer 9086967f25375e976260ad004a6ac3cc75ba020669042cb431904d2914ac1735
Pulling layer c0ee5e1cf412f1fd511aa1c7427c6fd825dfe4969d9ed7462ff8f989aceded7a
Pulling layer 024037bdea19132da059961b3ec58e2aff329fb2fe8ffd8030a65a27d7e7db5f

# atomic pull --storage=ostree dockertar:/tmp/etcd.tar
# atomic pull --storage=ostree docker:etcd
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Each layer in the image is stored as a separate OSTree branch.  This
takes advantage of the layered model used by Docker images:
&lt;code class=&quot;highlighter-rouge&quot;&gt;atomic pull&lt;/code&gt; downloads only the layers that are not already
available.  All the images are stored in the OSTree system
repository.&lt;/p&gt;

&lt;p&gt;Using OSTree as storage has the advantage that if the same file is
present in multiple layers, it is stored only once, just as with container
image layers.  A container is installed through hardlinks, so its storage is
shared with the OSTree repository “hardlink farm”.&lt;/p&gt;
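The hardlink sharing described above can be illustrated with a toy example using plain &lt;code class=&quot;highlighter-rouge&quot;&gt;ln&lt;/code&gt; (this is an illustration only, not the actual atomic/OSTree code): two directory trees share one on-disk copy of a file.

```shell
# Toy illustration only: hardlink a "repository" file into a "checkout".
rm -rf /tmp/hl-demo
mkdir -p /tmp/hl-demo/repo /tmp/hl-demo/checkout
echo "shared content" > /tmp/hl-demo/repo/file
ln /tmp/hl-demo/repo/file /tmp/hl-demo/checkout/file   # hardlink, not a copy
stat -c %h /tmp/hl-demo/repo/file   # link count is 2: one inode, two names
```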

&lt;p&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;atomic images list&lt;/code&gt; shows the list of the available images:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# atomic images list
   REPOSITORY    TAG          IMAGE ID       CREATED            VIRTUAL SIZE   TYPE
gscrivano/etcd   latest       d7c1702506ff   2016-09-08 16:39                  system

&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;atomic images delete&lt;/code&gt; deletes one tag and &lt;code class=&quot;highlighter-rouge&quot;&gt;atomic images prune&lt;/code&gt;
removes the unused layers:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# atomic images delete -f gscrivano/etcd
# atomic images prune
Deleting ociimage/9086967f25375e976260ad004a6ac3cc75ba020669042cb431904d2914ac1735
Deleting ociimage/2176ad01d5670713218844201dc4edb36d2692fcc79ad7008003227a5f80097b
Deleting ociimage/e4410b03d7db030dba502fef7bfd1dae56a6c48faae63a80fd82450322def2c5
Deleting ociimage/c0ee5e1cf412f1fd511aa1c7427c6fd825dfe4969d9ed7462ff8f989aceded7a
Deleting ociimage/024037bdea19132da059961b3ec58e2aff329fb2fe8ffd8030a65a27d7e7db5f

&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h1 id=&quot;installation&quot;&gt;Installation&lt;/h1&gt;

&lt;p&gt;System images are installed with &lt;code class=&quot;highlighter-rouge&quot;&gt;atomic install --system&lt;/code&gt;, like this:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# atomic install --system gscrivano/etcd
Extracting to /var/lib/containers/atomic/etcd.0
systemctl daemon-reload
systemd-tmpfiles --create /etc/tmpfiles.d/etcd.conf
systemctl enable etcd

# atomic install --system gscrivano/flannel
Extracting to /var/lib/containers/atomic/flannel.0
systemctl daemon-reload
systemd-tmpfiles --create /etc/tmpfiles.d/flannel.conf
systemctl enable flannel

# systemctl start etcd
# runc exec etcd etcdctl set /atomic.io/network/config '{&quot;Network&quot;:&quot;10.40.0.0/16&quot;}'
# systemctl start flannel
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The template mechanism allows us to configure settings for images.
For example, we could use the following command to
configure Flannel to use another Etcd endpoint instead of the default
&lt;code class=&quot;highlighter-rouge&quot;&gt;http://127.0.0.1:2379&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# atomic install --system --set ETCD_ENDPOINTS=http://192.168.122.2:2379 gscrivano/flannel
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The &lt;code class=&quot;highlighter-rouge&quot;&gt;atomic containers&lt;/code&gt; verb is used to list containers:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# atomic containers list -a
   CONTAINER ID IMAGE                COMMAND              CREATED          STATUS    RUNTIME
   etcd         gscrivano/etcd       /usr/bin/etcd-env.sh 2016-09-08 14:19 running   runc
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h1 id=&quot;uninstallation&quot;&gt;Uninstallation&lt;/h1&gt;

&lt;p&gt;As the counterpart to &lt;code class=&quot;highlighter-rouge&quot;&gt;atomic install&lt;/code&gt;, &lt;code class=&quot;highlighter-rouge&quot;&gt;atomic uninstall&lt;/code&gt; removes
an installed system container.&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# atomic uninstall etcd
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h1 id=&quot;structure-of-a-system-image&quot;&gt;Structure of a System Image&lt;/h1&gt;

&lt;p&gt;System images are Docker images with a few extra files that are
exported as part of the image itself, under the directory ‘/exports’.
In other words, an existing &lt;code class=&quot;highlighter-rouge&quot;&gt;Dockerfile&lt;/code&gt; can be converted by adding the
configuration files needed to run it as a system container (which
translate to an additional &lt;code class=&quot;highlighter-rouge&quot;&gt;ADD [files] /exports&lt;/code&gt; directive in the
&lt;code class=&quot;highlighter-rouge&quot;&gt;Dockerfile&lt;/code&gt;).&lt;/p&gt;
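For instance, a converted Dockerfile might look like the sketch below. Only the /exports convention comes from the text; the package being installed and the run.sh entry point are hypothetical:

```dockerfile
FROM centos:centos7
RUN yum -y install etcd

# The /exports directory is what makes this image runnable as a system
# container; it holds config.json.template, manifest.json,
# service.template and tmpfiles.template as described above.
ADD exports /exports
ADD run.sh /usr/bin/run.sh
CMD ["/usr/bin/run.sh"]
```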

&lt;p&gt;These files are:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;config.json.template - template for the OCI configuration file that
will be used to launch the runC container.&lt;/li&gt;
  &lt;li&gt;manifest.json - used to define default values for configuration
variables.&lt;/li&gt;
  &lt;li&gt;service.template - template unit file for systemd.&lt;/li&gt;
  &lt;li&gt;tmpfiles.template - template configuration file for systemd-tmpfiles.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not all of them are necessary for every image.&lt;/p&gt;

&lt;p&gt;All the files with a &lt;code class=&quot;highlighter-rouge&quot;&gt;.template&lt;/code&gt; suffix are preprocessed and every
variable in the form &lt;code class=&quot;highlighter-rouge&quot;&gt;$VARIABLE&lt;/code&gt; or &lt;code class=&quot;highlighter-rouge&quot;&gt;${VARIABLE}&lt;/code&gt; is replaced with
its value.  This makes it possible to define variables that are set at
installation time (through the &lt;code class=&quot;highlighter-rouge&quot;&gt;--set&lt;/code&gt; option) as we saw with the
Flannel example.  It is possible to set a default value for these
settings using the &lt;code class=&quot;highlighter-rouge&quot;&gt;manifest.json&lt;/code&gt; file of the system container image.&lt;/p&gt;
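The substitution step can be sketched with plain sed (the atomic tool does this internally; the template contents and the values below are made up for illustration):

```shell
# Hypothetical template file using the $VARIABLE form described above.
printf 'ExecStart=$EXEC_START\nWorkingDirectory=$STATE_DIRECTORY/etcd\n' > /tmp/service.template

# Replace each variable with its value, roughly as preprocessing would.
sed -e 's|\$EXEC_START|/usr/bin/etcd-env.sh|g' \
    -e 's|\$STATE_DIRECTORY|/var/lib|g' \
    /tmp/service.template > /tmp/etcd.service

cat /tmp/etcd.service
```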

&lt;p&gt;If any of these files are missing, atomic will provide a default one.
For instance, if &lt;code class=&quot;highlighter-rouge&quot;&gt;config.json.template&lt;/code&gt; is not included in the image,
the default configuration will launch the &lt;code class=&quot;highlighter-rouge&quot;&gt;run.sh&lt;/code&gt; script without a
tty.&lt;/p&gt;

&lt;p&gt;There are some variables that are always defined by the atomic tool,
without the need for a user to specify them via &lt;code class=&quot;highlighter-rouge&quot;&gt;--set&lt;/code&gt;.  Of those,
only &lt;code class=&quot;highlighter-rouge&quot;&gt;RUN_DIRECTORY&lt;/code&gt; and &lt;code class=&quot;highlighter-rouge&quot;&gt;STATE_DIRECTORY&lt;/code&gt; can be overridden with
&lt;code class=&quot;highlighter-rouge&quot;&gt;--set&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;DESTDIR&lt;/code&gt; - path where the container is installed on the system&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;NAME&lt;/code&gt; - name of the container&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;EXEC_START&lt;/code&gt; - Start directive for the systemd unit file.&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;EXEC_STOP&lt;/code&gt; - Stop directive for the systemd unit file.&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;HOST_UID&lt;/code&gt; - uid of the user installing the container.&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;HOST_GID&lt;/code&gt; - gid of the user installing the container.&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;RUN_DIRECTORY&lt;/code&gt; - run directory.  &lt;code class=&quot;highlighter-rouge&quot;&gt;/run&lt;/code&gt; for system containers.&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;STATE_DIRECTORY&lt;/code&gt; - path to the storage directory. &lt;code class=&quot;highlighter-rouge&quot;&gt;/var/lib/&lt;/code&gt; for
system containers.&lt;/li&gt;
&lt;/ul&gt;
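Tying this back to the Flannel example earlier, a minimal manifest.json supplying a default for a settable variable might look like the sketch below (the defaultValues key reflects the system-container manifest format as I understand it; treat this as an illustration rather than a definitive schema):

```json
{
    "version": "1.0",
    "defaultValues": {
        "ETCD_ENDPOINTS": "http://127.0.0.1:2379"
    }
}
```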

&lt;p&gt;We’re excited about the ability of system containers to greatly improve administration
and infrastructure service delivery for Atomic clusters.  Please give them a try
and tell us what you think.&lt;/p&gt;
</description>
        <pubDate>Mon, 12 Sep 2016 13:00:00 +0000</pubDate>
        <link>http://garrettlesage.com/jekyll-springboard-atomic/blog/2016/09/intro-to-system-containers/</link>
        <guid isPermaLink="true">http://garrettlesage.com/jekyll-springboard-atomic/blog/2016/09/intro-to-system-containers/</guid>
        
        <category>runc</category>
        
        <category>oc</category>
        
        <category>system-containers</category>
        
        <category>atomic</category>
        
        <category>skopeo</category>
        
        <category>ostree</category>
        
        
      </item>
    
      <item>
        <title>New CentOS Atomic Host with Package Layering Support</title>
        <description>&lt;p&gt;Last week, the CentOS Atomic SIG &lt;a href=&quot;https://seven.centos.org/2016/08/announcing-a-new-release-of-centos-atomic-host-2/&quot;&gt;released&lt;/a&gt; an updated version of CentOS Atomic Host (tree version 7.20160818), featuring support for rpm-ostree package layering.&lt;/p&gt;

&lt;p&gt;CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box; or as an installable ISO, qcow2, or Amazon Machine image. Check out the &lt;a href=&quot;https://wiki.centos.org/SpecialInterestGroup/Atomic/Download&quot;&gt;CentOS wiki&lt;/a&gt; for download links and installation instructions, or read on to learn more about what’s new in this release.&lt;/p&gt;

&lt;p&gt;READMORE&lt;/p&gt;

&lt;p&gt;CentOS Atomic Host includes these core component versions:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;docker-1.10.3-46.el7.centos.10.x86_64&lt;/li&gt;
  &lt;li&gt;kubernetes-1.2.0-0.13.gitec7364b.el7.x86_64&lt;/li&gt;
  &lt;li&gt;kernel-3.10.0-327.28.2.el7.x86_64&lt;/li&gt;
  &lt;li&gt;atomic-1.10.5-7.el7.x86_64&lt;/li&gt;
  &lt;li&gt;flannel-0.5.3-9.el7.x86_64&lt;/li&gt;
  &lt;li&gt;ostree-2016.7-2.atomic.el7.x86_64&lt;/li&gt;
  &lt;li&gt;etcd-2.3.7-2.el7.x86_64&lt;/li&gt;
  &lt;li&gt;cloud-init-0.7.5-10.el7.centos.1.x86_64&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;package-layering&quot;&gt;Package Layering&lt;/h2&gt;

&lt;p&gt;Using the command rpm-ostree pkg-add, it’s now possible to layer new packages into an installed image that persist across reboots and upgrades, a topic that &lt;a href=&quot;https://github.com/jlebon&quot;&gt;Jonathan Lebon&lt;/a&gt; &lt;a href=&quot;http://www.projectatomic.io/blog/2016/07/hacking-and-extending-atomic-host/&quot;&gt;covered in some detail&lt;/a&gt; in a post last month.&lt;/p&gt;

&lt;p&gt;For instance, if I wanted to install ansible on an atomic host:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# rpm-ostree pkg-add epel-release
# reboot
# rpm-ostree pkg-add ansible
# reboot
# ansible --version
ansible 2.1.1.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;I first installed the &lt;code class=&quot;highlighter-rouge&quot;&gt;epel-release&lt;/code&gt; package because ansible lives in EPEL. The intermediate reboot was required to boot into the new EPEL-i-fied tree. I could have instead added the repo file for EPEL in my &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc/yum.repos.d/&lt;/code&gt; directory, and skipped the extra install and reboot operations. To learn about the work going on to make package layering more “live,” check out &lt;a href=&quot;https://bugzilla.gnome.org/show_bug.cgi?id=767977&quot;&gt;this issue&lt;/a&gt;.&lt;/p&gt;
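
&lt;p&gt;The repo-file alternative would look roughly like this (a sketch; check the file shipped by the &lt;code class=&quot;highlighter-rouge&quot;&gt;epel-release&lt;/code&gt; package for the canonical contents and GPG key settings):&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# /etc/yum.repos.d/epel.repo (sketch)
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-7&amp;amp;arch=$basearch
enabled=1
gpgcheck=0
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;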

&lt;p&gt;There are limitations to package layering. For instance, I’ve &lt;a href=&quot;http://www.projectatomic.io/blog/2015/01/running-ovirt-guest-agent-as-privileged-container/&quot;&gt;written in the past&lt;/a&gt; about running oVirt’s guest agent (which is not part of the standard atomic host image) in a docker container. Package layering won’t work for this scenario, because installing packages which contain files owned by users other than root is currently not supported:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# rpm-ostree pkg-add ovirt-guest-agent-common
notice: pkg-add is a preview command and subject to change.

Downloading metadata: [================================================] 100%
Resolving dependencies... done
Will download: 3 packages (209.2 kB)

  Downloading from epel: [=============================================] 100%

  Downloading from base: [=============================================] 100%

Importing: [===================                                        ]  33%
error: Unpacking ovirt-guest-agent-common-1.0.12-3.el7.noarch: Non-root ownership currently unsupported: path &quot;/var/log/ovirt-guest-agent&quot; marked as ovirtagent:ovirtagent)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h2 id=&quot;centos-atomic-host-alpha&quot;&gt;CentOS Atomic Host Alpha&lt;/h2&gt;

&lt;p&gt;While it’s not yet possible to pkg-add packages with files owned by users other than root on the current CentOS Atomic Host release, the host’s &lt;a href=&quot;https://wiki.centos.org/SpecialInterestGroup/Atomic/Devel&quot;&gt;Alpha stream&lt;/a&gt; includes a newer version of rpm-ostree that works just fine with these sorts of packages.&lt;/p&gt;

&lt;p&gt;Apart from its newer rpm-ostree version, the Alpha release of CentOS Atomic Host now features a &lt;a href=&quot;https://lists.projectatomic.io/projectatomic-archives/atomic-devel/2016-August/msg00104.html&quot;&gt;much slimmer package list&lt;/a&gt;, as the project begins to move toward containerization or package layering for system components such as kubernetes, flannel, and etcd.&lt;/p&gt;
</description>
        <pubDate>Tue, 30 Aug 2016 18:38:04 +0000</pubDate>
        <link>http://garrettlesage.com/jekyll-springboard-atomic/blog/2016/08/new-centos-atomic-host-with-package-layering-support-html/</link>
        <guid isPermaLink="true">http://garrettlesage.com/jekyll-springboard-atomic/blog/2016/08/new-centos-atomic-host-with-package-layering-support-html/</guid>
        
        <category>centos</category>
        
        <category>rpm-ostree</category>
        
        
      </item>
    
      <item>
        <title>Project Atomic Docker Patches</title>
<description>&lt;p&gt;Project Atomic’s version of the Docker-based container runtime has been carrying a series of patches on the upstream Docker project for a while now.  Each patch we carry adds significant effort as we continue to track upstream, so we would prefer to carry no patches at all.  We always strive to get our patches upstream, and to do so in the open.&lt;/p&gt;

&lt;p&gt;This post, and the accompanying document, will attempt to describe the patches we are currently carrying:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;An explanation of the types of patches we carry.&lt;/li&gt;
  &lt;li&gt;A description of each patch.&lt;/li&gt;
  &lt;li&gt;Links to GitHub discussions and pull requests for upstreaming the patches to Docker.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some people have asserted that &lt;a href=&quot;https://github.com/projectatomic/docker&quot;&gt;our docker repo&lt;/a&gt; is a fork of the upstream docker project.&lt;/p&gt;

&lt;h2 id=&quot;what-does-it-mean-to-be-a-fork&quot;&gt;What Does It Mean To Be a Fork?&lt;/h2&gt;

&lt;p&gt;I have been in open source for a long time, and my definition of a “fork” might be dated. I think of a “fork” as a hostile action taken by one group to get others to use and contribute to their version of an upstream project and ignore the “original” version. For example, LibreOffice forking off of OpenOffice or, going way back, Xorg forking off of XFree86.&lt;/p&gt;

&lt;p&gt;Nowadays, GitHub has changed the meaning. When a software repository exists on GitHub or a similar platform, everyone who wants to contribute has to hit the “fork” button and start building their patches. As of this writing, Docker on GitHub has 9,860 forks, including ours. By that definition, however, every package a distribution ships with patches is a fork. Red Hat ships the Linux kernel with patches, and I have never heard that referred to as a fork.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The Docker upstream even relies on Ubuntu carrying patches for AUFS that were never merged into the upstream kernel.&lt;/em&gt; Since Red Hat-based distributions don’t carry the AUFS patches, we contributed the support for Devicemapper, OverlayFS, and Btrfs backends, which are fully supported in the upstream kernel.  This is what enterprise distributions should do: attempt to ship packages configured in a way that they can be supported for a long time.&lt;/p&gt;

&lt;p&gt;At the end of the day, we continue to track the changes made to the upstream Docker Project and re-apply our patches to that project. We believe this is an important distinction to allow freedom in software to thrive while continually building stronger communities.  It’s very different than a hostile fork that divides communities—we are still working very hard to maintain continuity around unified upstreams.&lt;/p&gt;

&lt;h2 id=&quot;how-can-i-find-out-about-patches-for-a-particular-version-of-docker&quot;&gt;How Can I Find Out About Patches for a Particular Version of Docker?&lt;/h2&gt;

&lt;p&gt;All of the patches we ship are described in the README.md file on the appropriate branch of &lt;a href=&quot;https://github.com/projectatomic/docker&quot;&gt;our docker  repository&lt;/a&gt;. If you want to look at the patches for docker-1.12 you would look at &lt;a href=&quot;https://github.com/projectatomic/docker/tree/docker-1.12&quot;&gt;the docker-1.12 branch&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can then look on the &lt;a href=&quot;/docs/docker_patches&quot;&gt;docker patches list page&lt;/a&gt; for information about these patches.&lt;/p&gt;

&lt;h2 id=&quot;what-kind-of-patches-does-project-atomic-include&quot;&gt;What Kind of Patches does Project Atomic Include?&lt;/h2&gt;

&lt;p&gt;Here is a quick overview of the kinds of patches we carry, and then guidance on finding information on specific patches.&lt;/p&gt;

&lt;h3 id=&quot;upstream-fixes&quot;&gt;Upstream Fixes&lt;/h3&gt;

&lt;p&gt;The Docker Project upstream tends to fix issues in the &lt;strong&gt;next&lt;/strong&gt; version of Docker. This means that if a user finds an issue in docker-1.11 and we provide a fix for it upstream, the patch gets merged into the master branch and will probably not get backported to docker-1.11.&lt;/p&gt;

&lt;p&gt;Since Docker is releasing at such a rapid rate, they tell users to just install docker-1.12 when it is available. This is fine for people who want to be on the bleeding edge, but in a lot of cases the newer version of Docker comes with new issues along with the fixes.&lt;/p&gt;

&lt;p&gt;For example, docker-1.11 split the docker daemon into three parts: docker daemon, containerd, and runc.  We did not feel this was stable enough to ship to enterprise customers right when it came out, yet it had multiple fixes for the docker-1.10 version. Many users want to only get new fixes to their existing software and not have to re-certify their apps every two months.&lt;/p&gt;

&lt;p&gt;Another issue with supporting stable software with rapidly changing dependencies is that developers on the stable projects must spend time ensuring that their product remains stable every time one of their dependencies is updated. This is an expensive process, so dependencies end up being updated only infrequently. This is why we “cherry-pick” fixes from upstream Docker and ship them on older versions: we get the benefits of the bug fixes without the cost of updating the entire dependency. This is the same approach we take to add capabilities to the Linux kernel, a practice that has proven very valuable to our users.&lt;/p&gt;

&lt;h3 id=&quot;proposed-patches-for-upstream&quot;&gt;Proposed Patches for Upstream&lt;/h3&gt;

&lt;p&gt;We carry patches that we know our users require right now, but have not yet been merged into the upstream project.  Every patch that we add to the Project Atomic repository also gets proposed to the upstream docker repository.&lt;/p&gt;

&lt;p&gt;These sorts of patches remain on the Project Atomic repository briefly while they’re being considered upstream, or forever if the upstream community rejects them. If we don’t agree with upstream Docker and feel our users need these patches, we continue to carry them. In some cases we have worked out alternative solutions like building authorization plugins.&lt;/p&gt;

&lt;p&gt;For example, users of RHEL images are not supposed to push these images to public web sites. We wanted a way to prevent users from accidentally pushing RHEL-based images to Docker Hub, so we originally created a patch to block the push.  When authorization plugins were added, we created a plugin to protect users from pushing RHEL content to a public registry like Docker Hub, and no longer had to carry the custom patch.&lt;/p&gt;

&lt;h2 id=&quot;detailed-list-of-patches&quot;&gt;Detailed List of Patches&lt;/h2&gt;

&lt;p&gt;Want to know more about specific patches? You can find the current table and list of patches on our new &lt;a href=&quot;/docs/docker_patches&quot;&gt;docker patches list page&lt;/a&gt;.&lt;/p&gt;
</description>
        <pubDate>Thu, 18 Aug 2016 12:00:00 +0000</pubDate>
        <link>http://garrettlesage.com/jekyll-springboard-atomic/blog/2016/08/docker-patches-html/</link>
        <guid isPermaLink="true">http://garrettlesage.com/jekyll-springboard-atomic/blog/2016/08/docker-patches-html/</guid>
        
        <category>docker</category>
        
        <category>patches</category>
        
        <category>development</category>
        
        
      </item>
    
      <item>
        <title>Vagrant Service Manager 1.3.0 Released</title>
<description>&lt;p&gt;This version of &lt;a href=&quot;https://github.com/projectatomic/vagrant-service-manager&quot;&gt;vagrant-service-manager&lt;/a&gt; introduces support for displaying Kubernetes configuration information. This enables users to access the Kubernetes server that runs inside the ADB virtual machine from their host machine.&lt;/p&gt;

&lt;p&gt;This version also includes binary installation support for Kubernetes. This support is extended to users of the &lt;a href=&quot;http://developers.redhat.com/products/cdk/overview&quot;&gt;Red Hat Container Development Kit&lt;/a&gt;. For information about client binary installation, see the previous release announcement &lt;a href=&quot;../../../../blog/2016/07/vagrant-service-manager-install-cli&quot;&gt;“Client Binary Installation Now Included in the ADB”&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The full list of features in this version is:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Configuration information for Kubernetes provided as part of the &lt;code class=&quot;highlighter-rouge&quot;&gt;env&lt;/code&gt; command&lt;/li&gt;
  &lt;li&gt;Client binary installation support for Kubernetes added to the ADB&lt;/li&gt;
  &lt;li&gt;Client binary installation support for OpenShift, Kubernetes and Docker in the Red Hat Container Development Kit&lt;/li&gt;
  &lt;li&gt;Auto-detection of a previously downloaded &lt;code class=&quot;highlighter-rouge&quot;&gt;oc&lt;/code&gt; executable binary on Windows operating systems&lt;/li&gt;
  &lt;li&gt;Unit and acceptance tests for the Kubernetes service&lt;/li&gt;
  &lt;li&gt;Option to enable Kubernetes from a Vagrantfile with the following command:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  config.servicemanager.services = 'kubernetes'
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h2 id=&quot;install-the-kubernetes-client-binary&quot;&gt;1. Install the kubernetes client binary&lt;/h2&gt;

&lt;h3 id=&quot;run-the-following-command-to-install-the-kubernetes-binary-kubectl&quot;&gt;Run the following command to install the kubernetes binary, &lt;code class=&quot;highlighter-rouge&quot;&gt;kubectl&lt;/code&gt;&lt;/h3&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;$ vagrant service-manager install-cli kubernetes
# Binary now available at /home/budhram/.vagrant.d/data/service-manager/bin/kubernetes/1.2.0/kubectl
# run binary as:
# kubectl &amp;lt;command&amp;gt;
export PATH=/home/budhram/.vagrant.d/data/service-manager/bin/kubernetes/1.2.0:$PATH

# run following command to configure your shell:
# eval &quot;$(VAGRANT_NO_COLOR=1 vagrant service-manager install-cli kubernetes | tr -d '\r')&quot;

&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h3 id=&quot;run-the-following-command-to-configure-your-shell&quot;&gt;Run the following command to configure your shell&lt;/h3&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;$ eval &quot;$(VAGRANT_NO_COLOR=1 vagrant service-manager install-cli kubernetes | tr -d '\r')&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h2 id=&quot;enable-access-to-the-kubernetes-server-that-runs-inside-of-the-adb&quot;&gt;2. Enable access to the kubernetes server that runs inside of the ADB&lt;/h2&gt;

&lt;h3 id=&quot;run-the-following-command-to-display-environment-variable-for-kubernetes&quot;&gt;Run the following command to display the environment variables for kubernetes&lt;/h3&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;$ vagrant service-manager env kubernetes
# Set the following environment variables to enable access to the
# kubernetes server running inside of the vagrant virtual machine:
export KUBECONFIG=/home/budhram/.vagrant.d/data/service-manager/kubeconfig

# run following command to configure your shell:
# eval &quot;$(vagrant service-manager env kubernetes)&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h3 id=&quot;run-the-following-command-to-configure-your-shell-1&quot;&gt;Run the following command to configure your shell&lt;/h3&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;eval &quot;$(vagrant service-manager env kubernetes)&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;For a full list of changes in version 1.3.0, see &lt;a href=&quot;https://github.com/projectatomic/vagrant-service-manager/releases/tag/v1.3.0&quot;&gt;the release log&lt;/a&gt;.&lt;/p&gt;
</description>
        <pubDate>Tue, 16 Aug 2016 12:54:00 +0000</pubDate>
        <link>http://garrettlesage.com/jekyll-springboard-atomic/blog/2016/08/vagrant-service-manager-1-3-0-release-html/</link>
        <guid isPermaLink="true">http://garrettlesage.com/jekyll-springboard-atomic/blog/2016/08/vagrant-service-manager-1-3-0-release-html/</guid>
        
        <category>vagrant</category>
        
        <category>devtools</category>
        
        <category>releases</category>
        
        <category>kubernetes</category>
        
        
      </item>
    
      <item>
        <title>Creating OCI configurations with the ocitools generate library</title>
        <description>&lt;p&gt;&lt;a href=&quot;https://github.com/opencontainers/runc&quot;&gt;OCI runc&lt;/a&gt; is a cool new tool for running containers on Linux machines. It follows the OCI container runtime specification. As of docker-1.11 it is the main mechanism that docker uses for launching containers.&lt;/p&gt;

&lt;p&gt;The really cool thing is that you can use runc without even using docker. First you create a rootfs on your disk: a directory that includes all of your software and usually follows the basic layout of &lt;code class=&quot;highlighter-rouge&quot;&gt;/&lt;/code&gt;. There are several tools that can create a rootfs, including dnf or the &lt;code class=&quot;highlighter-rouge&quot;&gt;atomic&lt;/code&gt; command. Once you have a rootfs, you need to create a &lt;code class=&quot;highlighter-rouge&quot;&gt;config.json&lt;/code&gt; file which runc will read. &lt;code class=&quot;highlighter-rouge&quot;&gt;config.json&lt;/code&gt; holds all of the specifications for running a container: things like which namespaces to use, which capabilities your container gets, and which process runs as pid 1 of your container. It is somewhat similar to the output of &lt;code class=&quot;highlighter-rouge&quot;&gt;docker inspect&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Creating and editing the config.json is not for the faint of heart, so we developed a command line tool called &lt;code class=&quot;highlighter-rouge&quot;&gt;ocitools generate&lt;/code&gt; that can do the hard work of creating the config.json file.&lt;/p&gt;
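
&lt;p&gt;To give a sense of what runc reads, here is a heavily trimmed sketch of a &lt;code class=&quot;highlighter-rouge&quot;&gt;config.json&lt;/code&gt; (field names follow the OCI runtime spec; a real file, including one produced by ocitools, carries many more entries for namespaces, capabilities, and mounts):&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;{
  &quot;root&quot;: {
    &quot;path&quot;: &quot;rootfs&quot;,
    &quot;readonly&quot;: true
  },
  &quot;process&quot;: {
    &quot;terminal&quot;: true,
    &quot;user&quot;: { &quot;uid&quot;: 0, &quot;gid&quot;: 0 },
    &quot;args&quot;: [&quot;sh&quot;],
    &quot;cwd&quot;: &quot;/&quot;
  },
  &quot;hostname&quot;: &quot;mycontainer&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;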

&lt;h2 id=&quot;creating-oci-configurations&quot;&gt;Creating OCI Configurations&lt;/h2&gt;

&lt;p&gt;This post will guide you through the steps of creating &lt;a href=&quot;https://github.com/opencontainers/runtime-spec/&quot;&gt;OCI&lt;/a&gt; configurations
using the &lt;a href=&quot;https://github.com/opencontainers/ocitools/tree/master/generate&quot;&gt;ocitools generate library&lt;/a&gt;
for the Go programming language.&lt;/p&gt;

&lt;p&gt;There are four steps to create an OCI configuration using the ocitools generate library:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Import the ocitools generate library into your project;&lt;/li&gt;
  &lt;li&gt;Create an OCI specification generator;&lt;/li&gt;
  &lt;li&gt;Modify the specification by calling different methods of the specification generator;&lt;/li&gt;
  &lt;li&gt;Save the specification.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;READMORE&lt;/p&gt;

&lt;h2 id=&quot;overview&quot;&gt;Overview&lt;/h2&gt;

&lt;p&gt;The ocitools generate library defines a struct type, &lt;em&gt;Generator&lt;/em&gt;, that encloses a
pointer to an &lt;a href=&quot;https://github.com/opencontainers/runtime-spec/blob/master/specs-go/config.go&quot;&gt;OCI
Spec&lt;/a&gt;.
Methods defined on the &lt;em&gt;Generator&lt;/em&gt; type allow the user to
customize the specification pointed to by the Generator.  Once a Generator object
is created, different fields of the specification can be modified by calling
the corresponding methods of the Generator.  When you have finished modifying the
specification, &lt;em&gt;Generator.Save&lt;/em&gt; can be called to save the specification to a
local file.&lt;/p&gt;

&lt;h2 id=&quot;create-an-oci-configurations-using-the-ocitools-generate-library&quot;&gt;Create an OCI configuration using the ocitools generate library&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1: import the ocitools generate library in your go project:&lt;/strong&gt;&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;import &quot;github.com/opencontainers/ocitools/generate&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step 2: create a specification Generator:&lt;/strong&gt;&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;specgen := generate.New()
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;p&gt;&lt;em&gt;generate.New&lt;/em&gt; creates a spec Generator with the default spec.
You can also create a spec Generator with the spec from a local file using &lt;em&gt;generate.NewFromFile&lt;/em&gt;:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;specgen := generate.NewFromFile(&quot;/data/myspec.json&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step 3: modify the specification:&lt;/strong&gt;&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;specgen.SetRootPath(&quot;rootfs&quot;)
specgen.SetProcessGID(1000)
specgen.SetProcessTerminal(true)
specgen.SetLinuxResourcesCPUShares(512)
specgen.SetupPrivileged(false)

specgen.ClearAnnotations()
specgen.AddAnnotation(&quot;owner&quot;, &quot;hmeng&quot;)
specgen.RemoveAnnotation(&quot;createdat&quot;)

specgen.AddLinuxUIDMapping(0, 1000, 50)
specgen.AddPreStartHook(&quot;/tmp/install.sh&quot;, []string{&quot;--mode&quot;, &quot;silent&quot;})
specgen.AddTmpfsMount(&quot;/tmp&quot;, &quot;ro&quot;)
specgen.AddCgroupsMount(&quot;rw&quot;)
specgen.AddBindMount(&quot;/home/test/file1&quot;, &quot;/file2&quot;, &quot;rw&quot;)
specgen.DropProcessCapability(&quot;audit_read&quot;)

specgen.AddOrReplaceLinuxNamespace(&quot;pid&quot;, &quot;/proc/28341/ns/pid&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;For the fields of the OCI spec which have basic types, such as numbers, strings, and booleans, the methods modifying them are
named in the format of &lt;em&gt;SetFieldName&lt;/em&gt;.
A good example is &lt;em&gt;SetRootPath&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;For the fields whose types are slices and maps, the library provides the following three categories of methods:
&lt;em&gt;ClearFieldName&lt;/em&gt; clears the fields,
&lt;em&gt;AddFieldName&lt;/em&gt; adds a new data object into the field,
and &lt;em&gt;RemoveFieldName&lt;/em&gt; removes an existing data object from the field.
A good example is &lt;em&gt;ClearAnnotations&lt;/em&gt;, &lt;em&gt;AddAnnotation&lt;/em&gt;, and &lt;em&gt;RemoveAnnotation&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;There are exceptions. For example, &lt;em&gt;Spec.Process.Args&lt;/em&gt; is a slice of strings; however, the library provides only a single method, &lt;em&gt;SetProcessArgs&lt;/em&gt;,
to set the process args, because it makes more sense to set the process args for a bundle all at once than to add each argument incrementally.&lt;/p&gt;
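
&lt;p&gt;For instance, giving the container in the example above a shell as pid 1 might look like this (a minimal illustration; the whole argument vector is passed as one slice):&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;specgen.SetProcessArgs([]string{&quot;/bin/sh&quot;})
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;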

&lt;p&gt;&lt;strong&gt;Step 4: save the specification:&lt;/strong&gt;&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;specgen.SaveToFile(&quot;/data/runc/busybox/config.json&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;And you’re done!  You now have a generator for creating spec files for your runc containers.&lt;/p&gt;
</description>
        <pubDate>Tue, 09 Aug 2016 14:00:00 +0000</pubDate>
        <link>http://garrettlesage.com/jekyll-springboard-atomic/blog/2016/08/ocitools-libgen-html/</link>
        <guid isPermaLink="true">http://garrettlesage.com/jekyll-springboard-atomic/blog/2016/08/ocitools-libgen-html/</guid>
        
        <category>OCI</category>
        
        <category>ocitools</category>
        
        <category>runc</category>
        
        
      </item>
    
      <item>
        <title>Download and Get Involved with Fedora Atomic 24</title>
        <description>&lt;p&gt;This week, the Fedora Project released updated images for its Fedora 24-based Atomic Host. Fedora Atomic Host is a leading-edge operating system designed around Kubernetes and Docker containers.&lt;/p&gt;

&lt;p&gt;Fedora Atomic Host images are updated roughly every two weeks, rather than on the main six-month Fedora cadence. Because development is moving quickly, only the latest major Fedora release is supported.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Due to an issue with the image-building process, the current Fedora Atomic Host images include an older version of the system tree. Be sure to run &lt;code class=&quot;highlighter-rouge&quot;&gt;atomic host upgrade&lt;/code&gt; to get the latest set of components. The next two-week media refresh will include an up-to-date tree.&lt;/p&gt;

&lt;p&gt;READMORE&lt;/p&gt;

&lt;p&gt;Fedora Atomic Host includes these core component versions:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;kernel-4.6.4-301.fc24.x86_64&lt;/li&gt;
  &lt;li&gt;docker-1.10.3-24.git29066b4.fc24.x86_64&lt;/li&gt;
  &lt;li&gt;kubernetes-1.2.0-0.24.git4a3f9c5.fc24.x86_64&lt;/li&gt;
  &lt;li&gt;atomic-1.10.5-1.gitce09e40.fc24.x86_64&lt;/li&gt;
  &lt;li&gt;rpm-ostree-2016.4-2.fc24.x86_64&lt;/li&gt;
  &lt;li&gt;flannel-0.5.5-6.fc24.x86_64&lt;/li&gt;
  &lt;li&gt;etcd-2.2.5-5.fc24.x86_64&lt;/li&gt;
  &lt;li&gt;cloud-init-0.7.6-8.20150813bzr1137.fc24.noarch&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;upgrading&quot;&gt;Upgrading&lt;/h2&gt;

&lt;p&gt;Upgrading from an existing Atomic Host to Fedora Atomic 24 involves replacing the Fedora 23-based fedora-atomic remote with the current one, and then rebasing on the new tree. Due to &lt;a href=&quot;https://bugzilla.redhat.com/show_bug.cgi?id=1309075&quot;&gt;this issue&lt;/a&gt;, it may be necessary to put SELinux into permissive mode for the rebase operation:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;$ sudo setenforce 0
$ sudo ostree remote delete fedora-atomic
$ sudo ostree remote add fedora-atomic --set=gpg-verify=false https://dl.fedoraproject.org/pub/fedora/linux/atomic/24
$ sudo rpm-ostree rebase fedora-atomic:fedora-atomic/24/x86_64/docker-host
$ sudo reboot
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h2 id=&quot;atomic-images&quot;&gt;Atomic Images&lt;/h2&gt;

&lt;p&gt;Fedora Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant image, as an installable ISO image, as a raw or qcow2-formatted cloud image, or as an Amazon AMI.&lt;/p&gt;

&lt;p&gt;To bring up Fedora Atomic Host in a vagrant box, issue a command like:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;vagrant init fedora/24-atomic-host &amp;amp;&amp;amp; vagrant up
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;If you’ve previously used vagrant to run a Fedora Atomic 24 VM, first run &lt;code class=&quot;highlighter-rouge&quot;&gt;vagrant box update --box=fedora/24-atomic-host&lt;/code&gt; to ensure that you have the latest version.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Due to &lt;a href=&quot;https://pagure.io/pungi-fedora/issue/26&quot;&gt;this issue&lt;/a&gt;, you’ll need to add a line to your Vagrantfile like &lt;code class=&quot;highlighter-rouge&quot;&gt;config.vm.synced_folder &quot;./&quot;, &quot;/vagrant&quot;, disabled: 'true'&lt;/code&gt; to disable folder sync.&lt;/p&gt;

&lt;p&gt;Fedora Atomic Host is available as a &lt;a href=&quot;https://getfedora.org/en/cloud/download/atomic.html&quot;&gt;qcow2 or raw-formatted image&lt;/a&gt;, both of which require a cloud-init data source, be it from your cloud or virtualization provider, or from a &lt;a href=&quot;http://www.projectatomic.io/blog/2014/10/getting-started-with-cloud-init/&quot;&gt;local source&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The Fedora Project maintains Atomic Host images for Amazon EC2 in both GP2 (SSD-based) and standard formats. Check out the atomic host &lt;a href=&quot;https://getfedora.org/en/cloud/download/atomic.html&quot;&gt;download page&lt;/a&gt; for AMI IDs specific to your desired region.&lt;/p&gt;

&lt;p&gt;There’s also an anaconda-based &lt;a href=&quot;https://getfedora.org/en/cloud/download/atomic.html&quot;&gt;ISO installer&lt;/a&gt; for use with bare metal or as an alternative to configuring cloud-init for virtual machines.&lt;/p&gt;

&lt;h2 id=&quot;get-involved&quot;&gt;Get Involved&lt;/h2&gt;

&lt;p&gt;To get involved with Fedora Atomic Host, get in touch with the &lt;a href=&quot;https://fedoraproject.org/wiki/Cloud_SIG&quot;&gt;Fedora Cloud SIG&lt;/a&gt;. The SIG meets each week on Wednesdays at 17:00 UTC in the #fedora-meetings-1 channel, and hangs out in the #fedora-cloud channel and on the &lt;a href=&quot;http://lists.fedoraproject.org/pipermail/cloud/&quot;&gt;Fedora Cloud mailing list&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;One of the best ways to help out with Fedora Atomic is to participate in testing core atomic host components using Fedora’s &lt;a href=&quot;https://fedoraproject.org/wiki/Bodhi&quot;&gt;Bodhi&lt;/a&gt; update system. Following &lt;a href=&quot;https://bodhi.fedoraproject.org/updates/?packages=kubernetes%20docker%20rpm-ostree%20atomic%20flannel%20etcd%20cloud-init&amp;amp;status=testing&amp;amp;release=F24&quot;&gt;this link&lt;/a&gt; will provide a list of key atomic packages currently in need of testing for Fedora 24.&lt;/p&gt;

&lt;p&gt;The Fedora Project maintains a version of the Fedora Atomic system tree that includes packages from the updates-testing repo. Rebasing an atomic host to this tree is a handy way to run the latest packages in need of testing:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;$ sudo rpm-ostree rebase fedora-atomic:fedora-atomic/24/x86_64/testing/docker-host
$ sudo systemctl reboot
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;If you have questions about how best to test one of these packages, ask on the Fedora Cloud mailing list or in #fedora-cloud on IRC.&lt;/p&gt;
</description>
        <pubDate>Fri, 29 Jul 2016 07:00:00 +0000</pubDate>
        <link>http://garrettlesage.com/jekyll-springboard-atomic/blog/2016/07/download-and-get-involved-with-fedora-atomic-24-html/</link>
        <guid isPermaLink="true">http://garrettlesage.com/jekyll-springboard-atomic/blog/2016/07/download-and-get-involved-with-fedora-atomic-24-html/</guid>
        
        <category>fedora</category>
        
        <category>docker</category>
        
        <category>kubernetes</category>
        
        
      </item>
    
      <item>
        <title>Atomic App 0.6.2 released with new index CLI command</title>
        <description>&lt;p&gt;This release of Atomic App introduces the new &lt;code class=&quot;highlighter-rouge&quot;&gt;atomicapp index&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;We added this command in order to give a quick overview of all the available featured and tested Nuleculized applications on &lt;a href=&quot;https://github.com/projectatomic/nulecule-library&quot;&gt;github.com/projectatomic/nulecule-library&lt;/a&gt;. The ability to generate your own list is available as well, via the &lt;code class=&quot;highlighter-rouge&quot;&gt;atomicapp index generate&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;READMORE&lt;/p&gt;

&lt;p&gt;The main features of this release are:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Addition of the &lt;code class=&quot;highlighter-rouge&quot;&gt;atomicapp index&lt;/code&gt; command&lt;/li&gt;
  &lt;li&gt;Correct file permissions are now set when extracting Nuleculized containers&lt;/li&gt;
  &lt;li&gt;OpenShift connection issue bugfix&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;atomicapp-index&quot;&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;atomicapp index&lt;/code&gt;&lt;/h2&gt;

&lt;p&gt;This release adds the &lt;code class=&quot;highlighter-rouge&quot;&gt;atomicapp index&lt;/code&gt; command. By using the &lt;code class=&quot;highlighter-rouge&quot;&gt;atomicapp index list&lt;/code&gt; command, Atomic App will retrieve a container containing a valid &lt;code class=&quot;highlighter-rouge&quot;&gt;index.yml&lt;/code&gt; and output all available Nulecule containers. This index can also be updated by using &lt;code class=&quot;highlighter-rouge&quot;&gt;atomicapp index update&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;atomicapp index list&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Outputs the list of available containers located at &lt;code class=&quot;highlighter-rouge&quot;&gt;~/.atomicapp/index.yml&lt;/code&gt;.&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;▶ atomicapp index list
INFO   :: Atomic App: 0.6.2 - Mode: Index
ID                        VER      PROVIDERS  LOCATION                                             
postgresql-atomicapp      1.0.0    {D,O,K}    docker.io/projectatomic/postgresql-centos7-atomicapp 
flask_redis_nulecule      0.0.1    {D,K}      docker.io/projectatomic/flask-redis-centos7-atomicapp
redis-atomicapp           0.0.1    {D,O,K}    docker.io/projectatomic/redis-centos7-atomicapp      
gocounter                 0.0.1    {D,K}      docker.io/projectatomic/gocounter-scratch-atomicapp  
mariadb-atomicapp         1.0.0    {D,O,K}    docker.io/projectatomic/mariadb-centos7-atomicapp    
helloapache-app           0.0.1    {D,K,M}    docker.io/projectatomic/helloapache                  
mongodb-atomicapp         1.0.0    {D,O,K}    docker.io/projectatomic/mongodb-centos7-atomicapp    
etherpad-app              0.0.1    {D,O,K}    docker.io/projectatomic/etherpad-centos7-atomicapp   
apache-centos7-atomicapp  0.0.1    {D,K,M}    docker.io/projectatomic/apache-centos7-atomicapp     
wordpress-atomicapp       2.0.0    {D,O,K}    docker.io/projectatomic/wordpress-centos7-atomicapp  
skydns-atomicapp          0.0.1    {K}        docker.io/projectatomic/skydns-atomicapp             
guestbookgo-atomicapp     0.0.1    {O,K}      docker.io/projectatomic/guestbookgo-atomicapp        
mariadb-app               0.0.1    {D,K}      docker.io/projectatomic/mariadb-fedora-atomicapp     
gitlab-atomicapp          1.2.0    {D,K}      docker.io/projectatomic/gitlab-centos7-atomicapp 
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;atomicapp index update&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Updates the &lt;code class=&quot;highlighter-rouge&quot;&gt;index.yml&lt;/code&gt; file.&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;▶ atomicapp index update
INFO   :: Atomic App: 0.6.2 - Mode: Index
INFO   :: Updating the index list
INFO   :: Pulling latest index image...
INFO   :: Skipping pulling docker image: projectatomic/nulecule-library
INFO   :: Copying files from image projectatomic/nulecule-library:/index.yaml to /home/wikus/.atomicapp/index.yaml
INFO   :: Index updated
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;atomicapp index generate&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Generates a valid &lt;code class=&quot;highlighter-rouge&quot;&gt;index.yml&lt;/code&gt; file to use in listing all available containers.&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;▶ atomicapp index generate ./nulecule-library
INFO   :: Atomic App: 0.6.1 - Mode: Index
INFO   :: Generating index.yaml from ./nulecule-library
INFO   :: index.yaml generated
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Want to get started using Atomic App? Have a look at our extensive &lt;a href=&quot;https://github.com/projectatomic/atomicapp/blob/master/docs/start_guide.md&quot;&gt;start guide&lt;/a&gt;, or use Atomic App as part of the Atomic CLI on an &lt;a href=&quot;http://www.projectatomic.io/download/&quot;&gt;Atomic Host&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For a full list of changes between 0.6.1 and 0.6.2, please see &lt;a href=&quot;https://github.com/projectatomic/atomicapp/commits/0.6.2&quot;&gt;the commit log&lt;/a&gt;.&lt;/p&gt;
</description>
        <pubDate>Wed, 27 Jul 2016 16:55:00 +0000</pubDate>
        <link>http://garrettlesage.com/jekyll-springboard-atomic/blog/2016/07/atomic-app-0-6-2-release-html/</link>
        <guid isPermaLink="true">http://garrettlesage.com/jekyll-springboard-atomic/blog/2016/07/atomic-app-0-6-2-release-html/</guid>
        
        <category>atomicapp,</category>
        
        <category>Nulecule,</category>
        
        <category>releases</category>
        
        
      </item>
    
      <item>
        <title>Working with Containers' Images Made Easy Part 1: skopeo</title>
        <description>&lt;p&gt;This is the first part of a series of posts about containers’ images. In this first part we’re going to focus on &lt;code class=&quot;highlighter-rouge&quot;&gt;skopeo&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Back in March, I published a &lt;a href=&quot;http://www.projectatomic.io/blog/2016/03/skopeo-inspect-remote-images/&quot;&gt;post&lt;/a&gt; about &lt;a href=&quot;https://github.com/projectatomic/skopeo&quot;&gt;skopeo&lt;/a&gt;, a tiny new binary that helps people interact with Docker registries. Until recently its job was limited to &lt;em&gt;inspecting&lt;/em&gt; (&lt;em&gt;skopeo&lt;/em&gt; is Greek for &lt;em&gt;look for&lt;/em&gt;, &lt;em&gt;observe&lt;/em&gt;) images on remote registries, as opposed to &lt;code class=&quot;highlighter-rouge&quot;&gt;docker inspect&lt;/code&gt;, which works only on locally pulled images.&lt;/p&gt;

&lt;p&gt;READMORE&lt;/p&gt;

&lt;h2 id=&quot;the-tool&quot;&gt;The Tool&lt;/h2&gt;

&lt;p&gt;Since then, we’ve been adding more features to &lt;code class=&quot;highlighter-rouge&quot;&gt;skopeo&lt;/code&gt;, such as downloading image layers (via &lt;code class=&quot;highlighter-rouge&quot;&gt;skopeo layers&lt;/code&gt;), and eventually we came up with a nice abstraction for the problem of &lt;em&gt;downloading&lt;/em&gt;, &lt;em&gt;inspecting&lt;/em&gt;, and &lt;em&gt;uploading&lt;/em&gt; images (without even needing Docker installed on the system). We called this new abstraction &lt;code class=&quot;highlighter-rouge&quot;&gt;copy&lt;/code&gt;, and here is a straightforward example of how to use it:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# skopeo copy &amp;lt;source&amp;gt; &amp;lt;destination&amp;gt;&lt;/span&gt;

&lt;span class=&quot;gp&quot;&gt;$ &lt;/span&gt;id -u
1000

 &lt;span class=&quot;c&quot;&gt;# let's say we want to download the Fedora 24 docker image from the Docker Hub and store it on a local directory&lt;/span&gt;
&lt;span class=&quot;gp&quot;&gt;$ &lt;/span&gt;mkdir fedora-24
&lt;span class=&quot;gp&quot;&gt;$ &lt;/span&gt;skopeo copy docker://fedora:24 dir:fedora-24
&lt;span class=&quot;gp&quot;&gt;$ &lt;/span&gt;tree fedora-24
fedora-24
├── 7c91a140e7a1025c3bc3aace4c80c0d9933ac4ee24b8630a6b0b5d8b9ce6b9d4.tar
├── f9873d530588316311ac1d3d15e95487b947f5d8b560e72bdd6eb73a7831b2c4.tar
└── manifest.json

0 directories, 3 files
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;You can see from the output above that &lt;code class=&quot;highlighter-rouge&quot;&gt;skopeo copy&lt;/code&gt; successfully downloaded the Fedora 24 image—as in, it downloaded all its layers plus the image manifest.&lt;/p&gt;

&lt;p&gt;Notice also that the whole operation was done as an unprivileged user—while Docker requires you to be &lt;code class=&quot;highlighter-rouge&quot;&gt;root&lt;/code&gt; to even do a &lt;code class=&quot;highlighter-rouge&quot;&gt;docker pull&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;What can you do with that &lt;code class=&quot;highlighter-rouge&quot;&gt;fedora-24&lt;/code&gt; directory now? Here comes the fun part. A nice new addition to the &lt;a href=&quot;https://github.com/projectatomic/atomic&quot;&gt;atomic&lt;/a&gt; tool has been the so-called &lt;em&gt;system containers&lt;/em&gt;—containers meant to run before the Docker daemon comes up, powered by the community project &lt;a href=&quot;https://github.com/opencontainers/runc&quot;&gt;runc&lt;/a&gt;, which is part of the &lt;a href=&quot;https://www.opencontainers.org/&quot;&gt;Open Container Initiative&lt;/a&gt;. Basically, these containers are set up with the following steps:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Download the image layers and manifest with &lt;code class=&quot;highlighter-rouge&quot;&gt;skopeo&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Set up a new &lt;a href=&quot;https://wiki.gnome.org/action/show/Projects/OSTree?action=show&amp;amp;redirect=OSTree&quot;&gt;ostree&lt;/a&gt; repository, which will be the root filesystem of our system container&lt;/li&gt;
  &lt;li&gt;Import the downloaded layers in the order they appear in the image manifest&lt;/li&gt;
  &lt;li&gt;Create a systemd unit file to run &lt;code class=&quot;highlighter-rouge&quot;&gt;runc&lt;/code&gt; with said filesystem&lt;/li&gt;
  &lt;li&gt;Spawn the service&lt;/li&gt;
  &lt;li&gt;Enjoy&lt;/li&gt;
&lt;/ol&gt;
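
&lt;p&gt;As a rough sketch of the first steps, under some assumptions: the command names are real, but the arguments are simplified, &lt;code class=&quot;highlighter-rouge&quot;&gt;&amp;lt;layer&amp;gt;&lt;/code&gt; is a placeholder for each layer tarball, and the &lt;code class=&quot;highlighter-rouge&quot;&gt;atomic&lt;/code&gt; tool takes care of the real details for you:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# 1. download the image layers and manifest, unprivileged
$ mkdir hello
$ skopeo copy docker://gscrivano/spc-helloworld dir:hello

# 2. set up an ostree repository for the container root filesystem
$ ostree --repo=repo init

# 3. import each downloaded layer, in manifest order
$ ostree --repo=repo commit --branch=spc --tree=tar=hello/&amp;lt;layer&amp;gt;.tar

# 4./5. check out the filesystem and generate a runc spec for it,
#       then wrap 'runc run' in a systemd unit and start the service
$ ostree --repo=repo checkout spc rootfs
$ runc spec
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;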

&lt;p&gt;If the above sounds rather complex, the &lt;code class=&quot;highlighter-rouge&quot;&gt;atomic&lt;/code&gt; tool already provides a &lt;code class=&quot;highlighter-rouge&quot;&gt;--system&lt;/code&gt; flag which can be used to create a system container:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gp&quot;&gt;$ &lt;/span&gt;sudo atomic install --system --name system-container gscrivano/spc-helloworld
Missing layer b037963d9b4419ffe09694c450bd33a06d24945416109aeb2937c7a8595252d9
Missing layer a1b129b466881845cdf628321bf7ed597b3d0cad0b8dd01564f78a4417c750fe
Missing layer a8a1c0600345270e055477e8f282d1318f0cef0debaed032cd1ba1e20eb2a35e
Missing layer 236608c7b546e2f4e7223526c74fc71470ba06d46ec82aeb402e704bfdee02a2
Extracting to /var/lib/containers/atomic/spc.0
systemctl &lt;span class=&quot;nb&quot;&gt;enable &lt;/span&gt;spc
Created symlink from /etc/systemd/system/multi-user.target.wants/spc.service to /etc/systemd/system/spc.service.
systemctl start spc

&lt;span class=&quot;c&quot;&gt;# verify the service is running smoothly&lt;/span&gt;
&lt;span class=&quot;gp&quot;&gt;$ &lt;/span&gt;sudo systemctl status spc  
● spc.service - Hello World System Container
   Loaded: loaded &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;/etc/systemd/system/spc.service; enabled; vendor preset: disabled&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
   Active: active &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;running&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; since Fri 2016-07-22 09:27:47 CEST; 5s ago
 Main PID: 10405 &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;runc&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
    Tasks: 10 &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;limit: 512&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
   CGroup: /system.slice/spc.service
           ├─10405 /bin/runc start spc
           └─spc
             ├─10416 /bin/sh /usr/bin/run.sh
             └─10435 nc -k -l 8081 --sh-exec /usr/bin/greet.sh

Jul 22 09:27:47 localhost.localdomain systemd[1]: Started Hello World System Container.

&lt;span class=&quot;c&quot;&gt;# we know our system container is listening on port 8081 so let's test it out!&lt;/span&gt;
&lt;span class=&quot;gp&quot;&gt;$ &lt;/span&gt;nc localhost 8081                 
HTTP/1.1 200 OK
Connection: Close

Hi World

&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The new &lt;code class=&quot;highlighter-rouge&quot;&gt;skopeo copy&lt;/code&gt; command isn’t limited to downloading to local directories. It can do almost any sort of download/upload between:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;docker registries -&amp;gt; local directories&lt;/li&gt;
  &lt;li&gt;docker registries -&amp;gt; docker registries (&lt;code class=&quot;highlighter-rouge&quot;&gt;skopeo copy docker://myimage docker://anotherrepo/myimage&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;docker registries -&amp;gt; &lt;a href=&quot;http://www.projectatomic.io/registry/&quot;&gt;Atomic registry&lt;/a&gt; (&lt;code class=&quot;highlighter-rouge&quot;&gt;atomic:&lt;/code&gt; prefix)&lt;/li&gt;
  &lt;li&gt;docker registries -&amp;gt; &lt;a href=&quot;https://github.com/opencontainers/image-spec/blob/master/image-layout.md&quot;&gt;OCI image-layout&lt;/a&gt; directories (&lt;code class=&quot;highlighter-rouge&quot;&gt;oci:&lt;/code&gt; prefix)&lt;/li&gt;
  &lt;li&gt;and vice versa, in any combination!&lt;/li&gt;
&lt;/ul&gt;
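
&lt;p&gt;For example, moving the same image across transports looks like this (the destination registry name below is just a placeholder):&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# docker registry -&amp;gt; OCI image-layout directory
$ skopeo copy docker://fedora:24 oci:fedora-oci:24

# OCI image-layout directory -&amp;gt; another docker registry (placeholder name)
$ skopeo copy oci:fedora-oci:24 docker://registry.example.com/fedora:24
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;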

&lt;p&gt;The possibilities seem endless. Being able to pull as an unprivileged user also opens the door to working with unprivileged &lt;em&gt;sandboxes&lt;/em&gt; like &lt;a href=&quot;http://flatpak.org/&quot;&gt;Flatpak&lt;/a&gt; and &lt;a href=&quot;https://github.com/projectatomic/bubblewrap&quot;&gt;bubblewrap&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As part of this unprivileged capability, we’re working on &lt;a href=&quot;https://github.com/projectatomic/atomic/pull/483&quot;&gt;a new feature in the Atomic tool&lt;/a&gt; which will be able to pull images to the calling user’s home directory and later run containers from those images (also remapping file ownership to that of the calling user).&lt;/p&gt;

&lt;p&gt;Supporting the &lt;a href=&quot;https://www.opencontainers.org/&quot;&gt;Open Container Initiative&lt;/a&gt; by implementing the &lt;a href=&quot;https://github.com/opencontainers/image-spec&quot;&gt;image specification&lt;/a&gt; early has also had great advantages, as we were able to help improve the specification itself where things weren’t totally clear and well defined. We’re continuously adding wider support as the specification moves along, in areas like image signing and layer federation.&lt;/p&gt;

&lt;p&gt;The next post will explain how we extracted some core components from &lt;code class=&quot;highlighter-rouge&quot;&gt;skopeo&lt;/code&gt; and moved them into a set of reusable libraries.&lt;/p&gt;
</description>
        <pubDate>Mon, 25 Jul 2016 08:58:20 +0000</pubDate>
        <link>http://garrettlesage.com/jekyll-springboard-atomic/blog/2016/07/working-with-containers-image-made-easy-html/</link>
        <guid isPermaLink="true">http://garrettlesage.com/jekyll-springboard-atomic/blog/2016/07/working-with-containers-image-made-easy-html/</guid>
        
        <category>docker,</category>
        
        <category>containers,</category>
        
        <category>skopeo,</category>
        
        <category>OCI,</category>
        
        <category>kubernetes</category>
        
        
      </item>
    
      <item>
        <title>Client Binary Installation Now Included in the ADB</title>
        <description>&lt;p&gt;As part of the effort to continually improve the developer experience and make getting started easier, the ADB now supports client binary downloads. These downloads are facilitated by a new feature in ‘vagrant-service-manger’, the &lt;code class=&quot;highlighter-rouge&quot;&gt;install-cli&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;The &lt;a href=&quot;https://github.com/projectatomic/vagrant-service-manager&quot;&gt;vagrant-service-manager&lt;/a&gt; plugin enables easier access to the features and services provided by the &lt;a href=&quot;https://github.com/projectatomic/adb-atomic-developer-bundle&quot;&gt;Atomic Developer Bundle (ADB)&lt;/a&gt;. More information can be found in the README of the ‘vagrant-service-manager’ repo.&lt;/p&gt;

&lt;p&gt;The &lt;code class=&quot;highlighter-rouge&quot;&gt;install-cli&lt;/code&gt; command was released as part of ‘vagrant-service-manager’ version 1.2.0. It installs the client binary for services provided by the ADB; today it can download client binaries for Docker and OpenShift. This feature lets developers know they have the best client for the ADB services they are using.&lt;/p&gt;

&lt;p&gt;READMORE&lt;/p&gt;

&lt;p&gt;To use the ‘install-cli’ command, you must have version 1.2.0 or later of ‘vagrant-service-manager’ installed. You can verify the version you have installed with the following command:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;$ vagrant plugin list
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;You can install the plugin with the following command:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;$ vagrant plugin install vagrant-service-manager
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The usage of the ‘install-cli’ command is very straightforward:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;vagrant service-manager install-cli [service]

Where 'service' can be 'docker' or 'openshift'.
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Here is how you can use the command. In this example, we will use an ADB instance set up and running OpenShift Origin. To set up the ADB and start OpenShift, use the following commands:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;$ mkdir adb-openshift
$ cd adb-openshift
$ curl -o Vagrantfile https://raw.githubusercontent.com/projectatomic/adb-atomic-developer-bundle/master/components/centos/centos-openshift-setup/Vagrantfile
$ vagrant up
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;vagrant up&lt;/code&gt; will take a few minutes to finish (longer with a slow network connection), as it has to download the OpenShift Origin images from Docker Hub. Once everything is ready, you will see information about how to access OpenShift Origin.&lt;/p&gt;

&lt;p&gt;Now the OpenShift Origin server is ready, and you need a client to access it and perform your desired operations. You can manually download the client binary from the &lt;a href=&quot;https://github.com/openshift/origin/releases&quot;&gt;OpenShift repository&lt;/a&gt;, but we recommend using the provided ‘install-cli’ command.&lt;/p&gt;

&lt;p&gt;To get started, let’s review the help:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;$ vagrant service-manager install-cli --help
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Now, if you want to install the OpenShift client binary, ‘oc’, run the following command:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;$ vagrant service-manager install-cli openshift
# Binary now available at /home/budhram/.vagrant.d/data/service-manager/bin/openshift/1.1.1/oc
# run binary as:
# oc &amp;lt;command&amp;gt;
export PATH=/home/budhram/.vagrant.d/data/service-manager/bin/openshift/1.1.1:$PATH

# run following command to configure your shell:
# eval &quot;$(VAGRANT_NO_COLOR=1 vagrant service-manager install-cli openshift | tr -d '\r')&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The binary has now been downloaded and made available in the listed directory. Configure your shell with the command mentioned in the output; this makes sure the ‘oc’ binary is in your executable path.&lt;/p&gt;

&lt;p&gt;You can verify everything worked by running the &lt;code class=&quot;highlighter-rouge&quot;&gt;oc version&lt;/code&gt; command, as shown below:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;$ oc version
oc v1.1.1
kubernetes v1.1.0-origin-1107-g4c8e6f4
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Great!&lt;/p&gt;

&lt;p&gt;Now the OpenShift client binary has been set up and you can play around with it. You may wish to watch &lt;a href=&quot;https://www.youtube.com/watch?v=HiE7TgjLjAk&quot;&gt;OpenShift Origin quickstart with Atomic Developer Bundle&lt;/a&gt; as a next step.&lt;/p&gt;

&lt;iframe width=&quot;560&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/HiE7TgjLjAk&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;
</description>
        <pubDate>Fri, 22 Jul 2016 09:56:00 +0000</pubDate>
        <link>http://garrettlesage.com/jekyll-springboard-atomic/blog/2016/07/vagrant-service-manager-install-cli-html/</link>
        <guid isPermaLink="true">http://garrettlesage.com/jekyll-springboard-atomic/blog/2016/07/vagrant-service-manager-install-cli-html/</guid>
        
        <category>atomic-developer-bundle,</category>
        
        <category>vagrant-service-manager</category>
        
        
      </item>
    
  </channel>
</rss>
