Blog posts

Recent RDO blogs, September 12, 2016

Here’s what RDO enthusiasts have been blogging about in the last few weeks.

LinuxCon talk slides: “A Practical Look at QEMU’s Block Layer Primitives” by Kashyap Chamarthy

Last week I spent time at LinuxCon (and the co-located KVM Forum) in Toronto. I presented a talk on QEMU’s block layer primitives. Specifically, the QMP primitives block-commit, drive-mirror, drive-backup, and QEMU’s built-in NBD (Network Block Device) server.

… read more at http://tm3.org/9x

Complex data transformations with nested Heat intrinsic functions by Steve Hardy

Disclaimer: what follows is either pretty neat or pure evil, depending on your viewpoint ;) But it’s based on a real use-case and it works, so I’m posting this to document the approach, why it’s needed, and hopefully stimulate some discussion around optimizations leading to an improved/simplified implementation in the future.

… read more at http://tm3.org/9y

Red Hat OpenStack Platform 9 is here! So what’s new? by Marcos Garcia

This week we released the latest version of our OpenStack product, Red Hat OpenStack Platform 9. This release contains more than 500 downstream enhancements, bug fixes, documentation changes, and security updates. It’s based on the upstream OpenStack Mitaka release. We have worked hard to reduce the time to release new versions and have successfully done so with this release! Red Hat OpenStack Platform 9 contains new Mitaka features and functionality, as well as the additional hardening, stability, and certifications Red Hat is known for. Of course, there continues to be tight integration with other key portfolio products, as well as comprehensive documentation.

… read more at http://tm3.org/9z

Deploying Server on Ironic Node Baseline by Adam Young

My team is working on the ability to automatically enroll servers launched from Nova in FreeIPA. Debugging the process has proven challenging; when things fail, the node does not come up, and there is little error reporting. This article posts a baseline of what things look like prior to any changes, so we can better see what we are breaking.

… read more at http://tm3.org/9-

A retrospective of the OpenStack Telemetry project Newton cycle by Julien Danjou

A few weeks ago, I recorded an interview with Krishnan Raghuram about what was discussed for this development cycle for OpenStack Telemetry at the Austin summit.

… read more at http://tm3.org/a0

Deploying Fernet on the Overcloud by Adam Young

Here is a proof of concept of deploying an OpenStack TripleO overcloud using the Fernet token provider.

… read more at http://tm3.org/a1

OpenStack Infra: Understanding Zuul by Arie Bregman

Recently I had the time to explore Zuul. I decided to gather everything I learned here in this post. Perhaps you’ll find it useful for your understanding of Zuul.

… read more at http://tm3.org/a2

OpenStack Infra: How to deploy Zuul by Arie Bregman

This is the second post on Zuul, which focuses on deploying it and its services. To learn what Zuul is and how it works, I recommend reading the previous post.

… read more at http://tm3.org/a3

Scaling-up TripleO CI coverage with scenarios by Emilien Macchi

When the OpenStack project started, it was “just” a set of services with the goal of spawning a VM. I remember you could run everything on your laptop and test things really quickly. The project has now grown: thousands of features have been implemented, more backends / drivers are supported, and new projects have joined the party. That makes testing very challenging, because not everything can be tested in a CI environment.

… read more at http://tm3.org/a4

Introducing patches to RDO CloudSIG packages by Jakub Ruzicka

RDO infrastructure and tooling have been changing and improving with each OpenStack release, and we now have our own packaging workflow powered by RPM Factory at review.rdoproject.org, designed to keep up with the supersonic speed of upstream development.

… read more at http://tm3.org/a5

From decimal to timestamp with MySQL by Julien Danjou

When working with timestamps, one question that often arises is the precision of those timestamps. Most software is good enough with a precision up to the second, and that’s easy. But in some cases, like working on metering, a finer precision is required.

… read more at http://tm3.org/a6

Generating Token Request JSON from Environment Variables by Adam Young

When working with new APIs, we need to test them with curl prior to writing the Python client. I’ve often had to hand-create the JSON used for the token request, as I wrote about way back here. Here is a simple bash script to convert the V3 environment variables into the JSON for a token request.
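
The post links to the author’s actual script; as a hedged sketch (not his exact code), the conversion boils down to interpolating the usual OS_* variables into Keystone’s v3 password-auth request body:

```shell
# Minimal sketch, assuming OS_USERNAME, OS_PASSWORD and
# OS_PROJECT_NAME are set; not the author's exact script.
token_request_json() {
  cat <<EOF
{
  "auth": {
    "identity": {
      "methods": ["password"],
      "password": {
        "user": {
          "name": "${OS_USERNAME}",
          "domain": {"name": "${OS_USER_DOMAIN_NAME:-Default}"},
          "password": "${OS_PASSWORD}"
        }
      }
    },
    "scope": {
      "project": {
        "name": "${OS_PROJECT_NAME}",
        "domain": {"name": "${OS_PROJECT_DOMAIN_NAME:-Default}"}
      }
    }
  }
}
EOF
}
```

The output can then be piped straight into curl, e.g. `token_request_json | curl -s -d @- -H "Content-Type: application/json" $OS_AUTH_URL/auth/tokens`.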

… read more at http://tm3.org/a7

Actionable CI by Assaf Muller

I’ve observed a persistent theme across valuable and successful CI systems, and that is actionable results. A CI system for a project as complicated as OpenStack requires a staggering amount of energy to maintain and improve. Often the responsible parties are focused on keeping it green and are buried under a mountain of continuous failures, legitimate or otherwise. So much so that they don’t have time to focus on the following questions:

… read more at http://tm3.org/a8

Thoughts on Red Hat OpenStack Platform and certification of Tesora Database as a Service Platform by Ken Rugg, Chief Executive Officer, Tesora

When I think about open source software, Red Hat is the first name that comes to mind. At Tesora, we’ve been working to make our Database as a Service Platform available to Red Hat OpenStack Platform users, and now it is a Red Hat certified solution. Officially collaborating with Red Hat in the context of OpenStack, one of the fastest growing open source projects ever, is a tremendous opportunity.

… read more at http://tm3.org/a9

Introducing patches to RDO CloudSIG packages

RDO infrastructure and tooling have been changing and improving with each OpenStack release, and we now have our own packaging workflow powered by RPM Factory at review.rdoproject.org, designed to keep up with the supersonic speed of upstream development.

Let’s see what it takes to land a patch in RDO CloudSIG repos with the new workflow!

The Quest

This is a short story about backporting an upstream OpenStack Swift patch into RDO Mitaka openstack-swift package.

Please consult RDO Packaging docs for additional information.

First Things First

Make sure you have the latest rdopkg from the jruzicka/rdopkg copr. This is new code added alongside existing functionality and it isn’t well tested yet; bugs need to be ironed out. If you encounter an rdopkg bug, please report how it broke.

Inspect rdoinfo package metadata including various URLs using rdopkg info:

$ rdopkg info openstack-swift

name: openstack-swift
project: swift
conf: rpmfactory-core
upstream: git://git.openstack.org/openstack/swift
patches: http://review.rdoproject.org/r/p/openstack/swift.git
distgit: http://review.rdoproject.org/r/p/openstack/swift-distgit.git
master-distgit: http://review.rdoproject.org/r/p/openstack/swift-distgit.git
review-origin: ssh://review.rdoproject.org:29418/openstack/swift-distgit.git
review-patches: ssh://review.rdoproject.org:29418/openstack/swift.git
tags:
  liberty: null
  mitaka: null
  newton: null
  newton-uc: null
maintainers: 
- zaitcev@redhat.com

Yeah, that’s the Swift we want. Let’s use rdopkg clone to clone the distgit and also set up remotes according to the rdoinfo entry above:

$ rdopkg clone [-u githubnick] openstack-swift

Which results in the following remotes:

* origin:          http://review.rdoproject.org/r/p/openstack/swift-distgit.git
* patches:         http://review.rdoproject.org/r/p/openstack/swift.git
* review-origin:   ssh://githubnick@review.rdoproject.org:29418/openstack/swift-distgit.git
* review-patches:  ssh://githubnick@review.rdoproject.org:29418/openstack/swift.git
* upstream:        git://git.openstack.org/openstack/swift

Send patch for review

Patches are now stored as open gerrit review chains on top of upstream version tags, so the patches remote is now legacy and no longer needed.

Start with inspecting distgit:

$ git checkout mitaka-rdo
$ rdopkg pkgenv

Package:   openstack-swift
NVR:       2.7.0-1
Version:   2.7.0
Upstream:  2.9.0
Tag style: X.Y.Z

Patches style:          review
Dist-git branch:        mitaka-rdo
Local patches branch:   mitaka-patches
Remote patches branch:  patches/mitaka-patches 
Remote upstream branch: upstream/master
Patches chain:          unknown

OS dist:                RDO
RDO release/dist guess: mitaka/el7

rdopkg patchlog doesn’t support review workflow yet, sorry.

Next, use rdopkg get-patches to create local patches branch from associated gerrit patches chain and switch to it:

$ rdopkg get-patches

Cherry-pick the patch into the newly created mitaka-patches branch. The upstream source is available in the upstream remote.

$ git cherry-pick -x deadbeef

Finally, send the patch for review with rdopkg review-patch, which is just a convenience shortcut for git review -r review-origin $BRANCH:

$ rdopkg review-patch

This will print a URL to the patch review, such as https://review.rdoproject.org/r/#/c/1145/.

Get +2 +1V on the patch review

Patches are never merged; they are kept as open review chains in order to preserve the full patch history.

You need to get +2 from a reviewer and +1 Verified from the CI.

Update .spec and send it for review

Once the patch has been reviewed, update the .spec file in mitaka-rdo:

$ git checkout mitaka-rdo
$ rdopkg patch

You can also select a specific patches chain by review number with -g/--gerrit-patches-chain:

$ rdopkg patch -g 1337

Inspect the newly created commit, which should contain all necessary changes. If you need to adjust something, do so and use rdopkg amend, which runs git commit -a --amend with a nice commit message generated from the changelog.

Finally, submit distgit change for review with

$ rdopkg review-spec

The review URL is printed. This is a regular review, and once it’s merged, you’re done.
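
To recap, the whole backport flow above condenses to a handful of commands (using the illustrative commit hash from this post, and assuming rdopkg clone leaves the distgit in ./openstack-swift):

```shell
$ rdopkg clone openstack-swift
$ cd openstack-swift
$ git checkout mitaka-rdo
$ rdopkg get-patches           # create/switch to mitaka-patches
$ git cherry-pick -x deadbeef  # backport the upstream fix
$ rdopkg review-patch          # send the patch chain for review
  ... get +2 and +1 Verified on the patch review ...
$ git checkout mitaka-rdo
$ rdopkg patch                 # update the .spec from the patch chain
$ rdopkg review-spec           # send the distgit change for review
```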

Happy packaging!

Recent RDO blogs, August 29, 2016

It’s been a few weeks since I posted a blog update, and we’ve had some great posts in the meantime. Here’s what RDO enthusiasts have been blogging about for the last few weeks.

Native DHCP support in OVN by Numan Siddique

Recently native DHCP support has been added to OVN. In this post we will see how native DHCP is supported in OVN and how it is used by OpenStack Neutron OVN ML2 driver. The code which supports native DHCP can be found here.

… read more at http://tm3.org/8d

Manual validation of Cinder A/A patches by Gorka Eguileor

In the Cinder Midcycle I agreed to create some sort of document explaining the manual tests I’ve been doing to validate the work on Cinder’s Active-Active High Availability -as a starting point for other testers and for the automation of the tests- and writing a blog post was the most convenient way for me to do so, so here it is.

… read more at http://tm3.org/8e

Exploring YAQL Expressions by Lars Kellogg-Stedman

The Newton release of Heat adds support for a yaql intrinsic function, which allows you to evaluate yaql expressions in your Heat templates. Unfortunately, the existing yaql documentation is somewhat limited, and does not offer examples of many of yaql’s more advanced features.

… read more at http://tm3.org/8f

Tripleo HA Federation Proof-of-Concept by Adam Young

Keystone has supported identity federation for several releases. I have been working on a proof-of-concept integration of identity federation in a TripleO deployment. I was able to successfully login to Horizon via WebSSO, and want to share my notes.

… read more at http://tm3.org/8g

TripleO Deploy Artifacts (and puppet development workflow) by Steve Hardy

For a while now, TripleO has supported a “DeployArtifacts” interface, aimed at making it easier to deploy modified/additional files on your overcloud, without the overhead of frequently rebuilding images.

… read more at http://tm3.org/8h

TripleO deep dive session #6 (Overcloud - Physical network) by Carlos Camacho

This is the sixth video from a series of “Deep Dive” sessions related to TripleO deployments.

… read more at http://tm3.org/8i

Improving QEMU security part 7: TLS support for migration by Daniel Berrange

This blog is part 7 of a series I am writing about work I’ve completed over the past few releases to improve QEMU security related features.

… read more at http://tm3.org/8j

Running Unit Tests on Old Versions of Keystone by Adam Young

Just because Icehouse is EOL does not mean no one is running it. One part of my job is back-porting patches to older versions of Keystone that my company supports.

… read more at http://tm3.org/8k

BAND-AID for OOM issues with TripleO manual deployments by Carlos Camacho

First, in the Undercloud: when deploying stacks you might find that heat-engine (4 workers) takes a lot of RAM. In this case, for specific usage peaks, it can be useful to have a swap file. In order to have this swap file enabled and used by the OS, execute the following instructions in the Undercloud:

… read more at http://tm3.org/8l

Debugging submissions errors in TripleO CI by Carlos Camacho

Landing upstream submissions might be hard if you are not passing all the CI jobs that check that your code actually works. Let’s assume that CI is working properly, without any kind of infra issue or any error introduced by mistake from other submissions. In that case, we might end up having something like:

… read more at http://tm3.org/8m

Ceph, TripleO and the Newton release by Giulio Fidente

Time to roll up some notes on the status of Ceph in TripleO. The majority of these functionalities were available in the Mitaka release too, but the examples work with code from the Newton release, so they might not apply identically to Mitaka.

… read more at http://tm3.org/8n

Recent RDO blogs, August 8, 2016

Here’s what RDO enthusiasts have been blogging about this week:

Customizing a Tripleo Quickstart Deploy by Adam Young

Tripleo Heat Templates allow the deployer to customize the controller deployment by setting values in the controllerExtraConfig section of the stack configuration. However, Quickstart already makes use of this in the file /tmp/deploy_env.yaml, so if you want to continue to customize, you need to work with this file.

… read more at http://tm3.org/88

fedora-review tool for reviewing RDO packages by Chandan Kumar

This tool makes reviews of rpm packages for Fedora easier. It tries to automate most of the process. Through a bash API the checks can be extended in any programming language and for any programming language.

… read more at http://tm3.org/89

OpenStack operators, developers, users… It’s YOUR summit, vote! by David Simard

Once again, the OpenStack Summit is nigh and this time it’ll be in Barcelona. The OpenStack Summit event is an opportunity for Operators, Developers and Users alike to gather, discuss and learn about OpenStack. What we know is that there are going to be keynotes, design sessions for developers to hack on things, and operator sessions for discussing the challenges of operating OpenStack. We also know there’s going to be a bunch of presentations on a wide range of topics from the OpenStack community.

… read more at http://tm3.org/8a

TripleO Composable Services 101 by Steve Hardy

Over the Newton cycle, we’ve been working very hard on a major refactor of our heat templates and puppet manifests, such that a much more granular and flexible “Composable Services” pattern is followed throughout our implementation.

… read more at http://tm3.org/8b

TripleO deep dive session #5 (Undercloud - Under the hood) by Carlos Camacho

This is the fifth video from a series of “Deep Dive” sessions related to TripleO deployments.

… watch at http://tm3.org/8c

fedora-review tool for reviewing RDO packages

This tool makes reviews of rpm packages for Fedora easier. It tries to automate most of the process. Through a bash API the checks can be extended in any programming language and for any programming language.

We can also use it for reviewing RDO packages on CentOS 7 / Fedora 24.

Install fedora-review and DLRN

[1.] Install fedora-review and Mock

For CentOS 7

Enable the EPEL repos:

$ sudo yum -y install epel-release

Install the fedora-review el7 build directly from Fedora Koji:

$ sudo yum -y install https://kojipkgs.fedoraproject.org//packages/fedora-review/0.5.3/2.el7/noarch/fedora-review-0.5.3-2.el7.noarch.rpm
$ sudo yum -y install mock

On Fedora 24

$ sudo dnf -y install fedora-review mock

[2.] Add the user you intend to run as to the mock group:

$ sudo usermod -a -G mock $USER
$ newgrp mock
$ newgrp $USER

[3.] Install DLRN:

On Centos 7

$ sudo yum -y install mock rpm-build git createrepo python-virtualenv python-pip openssl-devel gcc libffi-devel

On Fedora 24

$ sudo dnf -y install mock rpm-build git createrepo python-virtualenv python-pip openssl-devel gcc libffi-devel

The steps below work on both distros.

$ virtualenv rdo
$ source rdo/bin/activate
$ git clone https://github.com/openstack-packages/DLRN.git
$ cd DLRN
$ pip install -r requirements.txt
$ python setup.py develop

[4.] Generate dlrn.cfg (RDO trunk mock config)

$ dlrn --config-file projects.ini --package-name python-keystoneclient
$ ls <path to cloned DLRN repo>/data/dlrn.cfg

[5.] Add dlrn.cfg to the mock configs.

The mock configs live in the /etc/mock directory.

$ sudo cp <path to cloned DLRN repo>/data/dlrn.cfg /etc/mock
$ ls /etc/mock/dlrn.cfg

Now everything is set, and we are ready to work through any RDO package review using fedora-review.

Run Fedora-review tool

$ fedora-review -b <RH bug number for RDO Package Review> -m <mock config to use>

Let’s review ‘python-osc-lib’ using dlrn.cfg.

$ fedora-review -b 1346412 -m dlrn

Happy Reviewing!

Recent RDO blogs, August 1, 2016

Just a few blog posts from the RDO community this week:

ControllerExtraConfig and Tripleo Quickstart by Adam Young

Once I have the undercloud deployed, I want to be able to quickly deploy and redeploy overclouds. However, my last attempt to affect change on the overcloud did not modify the Keystone config file the way I intended. Once again, Steve Hardy helped me to understand what I was doing wrong.

… read more at http://tm3.org/85

OPENSTACK 6TH BIRTHDAY, LEXINGTON, KY by Rich Bowen

Yesterday I spent the day at the University of Kentucky at the OpenStack 6th Birthday Meetup. The day was arranged by Cody Bumgardner and Kathryn Wong from the UK College of Engineering.

… read more at http://tm3.org/86

TripleO deep dive session #4 (Puppet modules) by Carlos Camacho

This is the fourth video from a series of “Deep Dive” sessions related to TripleO deployments.

… read more at http://tm3.org/87

Recent RDO blogs, July 25, 2016

Here’s what RDO enthusiasts have been writing about over the past week:

TripleO deep dive session #3 (Overcloud deployment debugging) by Carlos Camacho

This is the third video from a series of “Deep Dive” sessions related to TripleO deployments.

… read (and watch) more at http://tm3.org/81

How connection tracking in Open vSwitch helps OpenStack performance by Jiri Benc

By introducing a connection tracking feature in Open vSwitch, thanks to the latest Linux kernel, we greatly simplified the maze of virtual network interfaces on OpenStack compute nodes and improved its networking performance. This feature will appear soon in Red Hat OpenStack Platform.

… read more at http://tm3.org/82

Introduction to Red Hat OpenStack Platform Director by Marcos Garcia

Those familiar with OpenStack already know that deployment has historically been a bit challenging. That’s mainly because deployment includes a lot more than just getting the software installed – it’s about architecting your platform to use existing infrastructure as well as planning for future scalability and flexibility. OpenStack is designed to be a massively scalable platform, with distributed components on a shared message bus and database backend. For most deployments, this distributed architecture consists of Controller nodes for cluster management, resource orchestration, and networking services, Compute nodes where the virtual machines (the workloads) are executed, and Storage nodes where persistent storage is managed.

… read more at http://tm3.org/83

Cinder Active-Active HA – Newton mid-cycle by Gorka Eguileor

Last week the OpenStack Cinder mid-cycle sprint took place in Fort Collins, and on the first day we discussed the Active-Active HA effort that’s been going on for a while now, and the plans for the future. This is a summary of that session.

… read more at http://tm3.org/84

Recent RDO blogs, July 19, 2016

Here’s what RDO enthusiasts have been blogging about in the last week.

OpenStack 2016.1-1 release by Haïkel Guémar

The RDO Community is pleased to announce a new release of openstack-utils.

… read more at http://tm3.org/7x

Improving RDO packaging testing coverage by David Simard

DLRN builds packages and generates repositories in which these packages will be hosted. It is the tool that is developed and used by the RDO community to provide the repositories on trunk.rdoproject.org. It continuously builds packages for every commit for projects packaged in RDO.

… read more at http://tm3.org/7y

TripleO deep dive session #2 (TripleO Heat Templates) by Carlos Camacho

This is the second video from a series of “Deep Dive” sessions related to TripleO deployments.

… watch at http://tm3.org/7z

How to build new OpenStack packages by Chandan Kumar

Building new OpenStack packages for RDO is always tough. Let’s use DLRN to make our life simpler.

… read more at http://tm3.org/7-

OpenStack Swift mid-cycle hackathon summary by cschwede

Last week more than 30 people from all over the world met at the Rackspace office in San Antonio, TX for the Swift mid-cycle hackathon. All major companies contributing to Swift sent people, including Fujitsu, HPE, IBM, Intel, NTT, Rackspace, Red Hat, and Swiftstack. As always it was a packed week with a lot of deep technical discussions around current and future changes within Swift.

… read more at http://tm3.org/80

OpenStack Swift mid-cycle hackathon summary

Last week more than 30 people from all over the world met at the Rackspace office in San Antonio, TX for the Swift mid-cycle hackathon. All major companies contributing to Swift sent people, including Fujitsu, HPE, IBM, Intel, NTT, Rackspace, Red Hat, and Swiftstack. As always it was a packed week with a lot of deep technical discussions around current and future changes within Swift.

There are always way more topics to discuss than time allows, so we collected topics first and everyone voted afterwards. We came up with the following major discussions, which are currently the most interesting within our community:

  • Hummingbird replication
  • Crypto - what’s next
  • Partition power increase
  • High-latency media
  • Container sharding
  • Golang - how to get it accepted in master
  • Policy migration

There were a lot more topics, and I’d like to highlight a few of them.

H9D aka Hummingbird / Golang

This was a big topic, as expected. Rackspace has already shown that H9D improves the performance of the object servers and replication significantly compared to the current Python implementation. There were also some investigations into whether it would be possible to improve the speed using PyPy and other improvements; however, the major problem is that Python blocks processes on file I/O, no matter whether it is async I/O or not. Sam wrote a very nice summary about this earlier on [1].

NTT also benchmarked H9D and showed some impressive numbers as well. Briefly summarized, throughput increased 5-10x depending on parameters like object size and the like. It seems disks are no longer the bottleneck - now the proxy CPU is. That said, inode cache memory seems to be even more important, because with H9D one can issue many more disk requests.

Of course there were also discussions about another proposal to accept Golang within OpenStack, and those discussions will continue [2]. My personal view is that the H9D implementation has some major advantages, and hopefully a refactored subset of it will be accepted and merged to master.

Crypto retro & what’s next

Swift 2.9.0 was released this past week and includes the merged crypto branch [3]. Kudos to everyone involved, especially Janie and Alistair! This middleware makes it possible for operators to fully encrypt object data on disk.

We did a retro on the work done so far; it was the third time that we used a feature branch and a final soft-freeze to land a major change within Swift. There are pros and cons to this, but overall it worked pretty well again. It also made sense that reviewers stepped in late in the process, because this brought fresh perspectives to the whole work. Soft freezes also push more reviewers to contribute and finally get the change merged.

Swiftstack benchmarked the crypto branch; as expected the throughput decreases somewhat with crypto enabled (especially with small objects), while proxy CPU usage increases. There were some discussions about improving the performance, and it seems the impact from checksumming is significant here.

The next steps to improve the crypto middleware are to work on some external key master implementations (for example using Barbican) as well as key rotation.

Partition power increase

Finally there is a patch ready for review that will allow an operator to increase the partition power without downtime for end users [4].

I gave an overview of the implementation and also showed a demo of how this works. Based on discussions during the last week I spotted some minor issues that have since been fixed, and I hope to get this merged before Barcelona. We also dreamed a bit about a future Swift with automatic partition power increase, where an operator needs to think about this much less than today.
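
To see why a partition power increase is tractable at all, it helps to recall how Swift maps objects to partitions: the top part_power bits of the object’s hash select the partition. The following is an illustrative sketch, not Swift’s actual code; the key property is that raising the power by one splits each partition p cleanly into partitions 2p and 2p+1, so data only ever moves to a predictable pair of children:

```shell
# Illustrative only - not Swift's implementation. With partition
# power P, a 32-bit object hash is shifted right by (32 - P) bits
# to pick the partition.
part_for() {  # usage: part_for <hash32> <part_power>
  echo $(( $1 >> (32 - $2) ))
}
```

For example, any hash that lands in partition 890 at power 10 must land in partition 1780 or 1781 at power 11, which is what lets the relinking happen in the background.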

Various middlewares

There are some proposed middlewares that are important to their authors, and we discussed quite a few of them. These include:

  • High-latency media (aka archiving)
  • symlinks
  • notifications
  • versioning

The idea behind supporting high-latency media is to use cold storage (like tape, or other public cloud object storage with a possible multi-hour latency) for less frequently accessed data, and especially to offer a low-cost long-term archival solution based on Swift [5]. This is somewhat challenging for the upstream community, because most contributors don’t have access to large enterprise tape libraries for testing. In the end this middleware needs to be supported by the community, so a stand-alone repository outside of Swift itself might make the most sense (similar to the swift3 middleware [6]).

A new proposal to implement true history-based versioning was put forward earlier on, and some open questions have been talked through. This should hopefully land soon, adding an improved approach to versioning compared to today’s stack-based versioning [7].

Sending out notifications based on writes to Swift has been discussed earlier on, and thankfully Zaqar now supports temporary signed URLs, solving some of the issues we faced earlier. I’ll update my patch shortly [8]. There is also the option to use oslo.messaging. All in all, the whole idea will be to use a best-effort approach - it’s simply not possible to guarantee a notification has been delivered successfully without blocking requests.

Container sharding

As of today it’s a good idea to avoid billions of objects in a single container in Swift, because writes to that container can then get slow. Matt started working on container sharding some time ago [9], and has iterated once again because he faced new problems with the previous ideas. My impression is that the new idea is getting much closer to something that will eventually be merged, thanks to Matt’s persistence on this topic.

Summary

There were a lot more (smaller) topics discussed, but this should give you an overview of the current work going on in the Swift community and the interesting new features that we’ll hopefully see soon in Swift itself. Thanks to everyone who contributed and participated, and special thanks to Richard for organizing the hackathon - it was a great week and I’m looking forward to the next months!

How to build new OpenStack packages

Building new OpenStack packages for RDO is always tough. Let’s use DLRN to make our life simpler.

DLRN is the RDO continuous delivery platform that pulls upstream git repositories, rebuilds them as RPMs using template spec files, and ships them in repositories consumable by CI (e.g. upstream Puppet/TripleO/Packstack CI).

We can use DLRN to build a new RDO Python package before sending it for package review.

Install DLRN

[1.] Install required dependencies for DLRN on Fedora/CentOS system:

$ sudo yum install git createrepo python-virtualenv mock gcc \
              redhat-rpm-config rpmdevtools libffi-devel \
              openssl-devel

[2.] Create a virtualenv and activate it

$ virtualenv dlrn-venv
$ source dlrn-venv/bin/activate

[3.] Clone the DLRN git repository from GitHub

$ git clone https://github.com/openstack-packages/DLRN.git

[4.] Install the required python dependencies for DLRN

$ cd DLRN
$ pip install -r requirements.txt

[5.] Install DLRN

$ python setup.py develop

Now your system is ready to use DLRN.

Let us package “congress” OpenStack project for RDO

[1.] Create a project “congress-distgit” and initialize it using git init

$ mkdir congress-distgit
$ cd congress-distgit
$ git init

[2.] Create a branch “rpm-master”

$ git checkout -b rpm-master

[3.] Create an openstack-congress.spec file using the RDO spec template and commit it into the rpm-master branch.

$ git add openstack-congress.spec
$ git commit -m "<your commit message>"

Add a package entry in rdoinfo

[1.] Copy the rdoinfo directory somewhere local and make changes there.

$ rdopkg info && cp -r ~/.rdopkg/rdoinfo $SOMEWHERE_LOCAL
$ cd $SOMEWHERE_LOCAL/rdoinfo

[2.] Edit the rdo.yml file and add a package entry at the end

$ vim rdo.yml
- project: congress # project name
  name: openstack-congress # RDO package name
  upstream: git://github.com/openstack/%(project)s # Congress project source code git repository
  master-distgit: <path to project spec file git repo>.git # path to congress-distgit git directory
  maintainers:
  - < maintainer email > # your email address

[3.] Save rdo.yml and run

$ ./verify.py

This will check rdo.yml for sanity.

Run DLRN to build openstack-congress package

[1.] Go to DLRN project directory.

[2.] Run the following command to build the package

$ dlrn --config-file projects.ini \
        --info-repo $SOMEWHERE_LOCAL/rdoinfo \
        --package-name openstack-congress \
        --head-only

Here --info-repo points at the local rdoinfo repo, --package-name selects the package to build, and --head-only builds only the latest commit.

It will clone the source code of the “openstack-congress” project and its spec under the “openstack-congress_distro” folder.

[3.] Once done, you can rebuild the package by passing the --dev flag.

$ dlrn --config-file projects.ini \
        --info-repo $SOMEWHERE_LOCAL/rdoinfo \
        --package-name openstack-congress \
        --head-only \
        --dev

The --dev flag builds the package locally.

[4.] Once the build is complete, you can find the RPMs and SRPMs in this folder:

$ # path to the packaged RPMs and SRPMs
$ ls <path to DLRN>/data/repos/current/

Now grab the RPMs and feel free to test them.