Blog posts

Blog posts last week

We’ve had more follow-up blog posts from OpenStack Summit, along with several more from the RDO community.

Querying haproxy data using socat from CLI by Carlos Camacho

Currently, most users have no way to check the haproxy status in a TripleO virtual deployment (via web browser) unless they have previously created tunnels for that purpose.
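
For reference, haproxy exposes its statistics over a local UNIX socket that socat can query directly. A minimal sketch, assuming the stats socket lives at /var/lib/haproxy/stats (the actual path depends on your haproxy configuration):

$ echo "show info" | socat stdio /var/lib/haproxy/stats
$ echo "show stat" | socat stdio /var/lib/haproxy/stats

The first command prints process-level information; the second dumps per-frontend/backend statistics as CSV.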

Read more at http://tm3.org/c3

Keystone Domains are Projects by Adam Young

Yesterday, someone asked me about inherited role assignments in Keystone projects. Here is what we worked out.
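
For context, the OpenStack client can create inherited role assignments at the domain level; a hedged sketch (the user, domain, and role names are illustrative):

$ openstack role add --user demo --domain default --inherited _member_

With --inherited, the assignment does not take effect on the domain itself but is inherited by the projects under it.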

Read more at http://tm3.org/c4

OpenStack Summit: An evening with Ceph and RDO by Rich Bowen

Last Tuesday in Barcelona, we gathered with the Ceph community for an evening of food, drinks, and technical sessions.

Read more at http://tm3.org/c5

OpenStack Summit Barcelona, 3 of N by rbowen

Continuing the saga of OpenStack Summit Barcelona …

Read more at http://tm3.org/c6

Red Hat Virtualization: Bridging the Gap with the Cloud and Hyperconverged Infrastructure by Ted Brunell

Red Hat Virtualization offers a flexible platform for performance-intensive and secure workloads. Red Hat Virtualization 4.0 introduced new features that enable customers to further extend the use cases of traditional virtualization in hybrid cloud environments. The platform now easily incorporates third-party network providers into the existing environment, along with other technologies found in next-generation cloud platforms such as Red Hat OpenStack Platform and Red Hat Enterprise Linux Atomic Host. Additionally, new infrastructure models are now supported, including selected support for hyperconverged infrastructure: the native integration of compute and storage across a cluster of hosts in a Red Hat Virtualization environment.

Read more at http://tm3.org/c7

Running Tempest on RDO OpenStack Newton by chandankumar

Tempest is a set of integration tests to run against an OpenStack cluster.

Read more at http://tm3.org/bk

OpenStack Summit: An evening with Ceph and RDO

Last Tuesday in Barcelona, we gathered with the Ceph community for an evening of food, drinks, and technical sessions.

There were 215 attendees at last count, and we had 12 presentations from members of both communities.

A huge thank you to all of the speakers, and to all of the people who turned out for this great evening.

More photos HERE.

Some of the presentations from the event are available HERE.

Blog posts last week

With OpenStack Summit last week, we have a lot of summit-focused blog posts today, and expect more to come in the next few days.

Attending OpenStack Summit Ocata by Julien Danjou

For the last time in 2016, I flew out to the OpenStack Summit in Barcelona, where I had the chance to meet (again) a lot of my fellow OpenStack contributors there.

Read more at http://tm3.org/bu

OpenStack Summit, Barcelona, 2 of n by rbowen

Tuesday, the first day of the main event, was, as always, very busy. I spent most of the day working the Red Hat booth. We started setting up at 10, and the mob came in around 10:45.

Read more at http://tm3.org/bx

OpenStack Summit, Barcelona, 1 of n by rbowen

I have the best intentions of blogging every day of an event. But every day is always so full, from morning until the time I drop into bed exhausted.

Read more at http://tm3.org/by

TripleO composable/custom roles by Steve Hardy

This is a follow-up to my previous post outlining the new composable services interfaces, which covered the basics of the composable services model that is new for Newton.

Read more at http://tm3.org/bo

Integrating Red Hat OpenStack 9 Cinder Service With Multiple External Red Hat Ceph Storage Clusters by Keith Schincke

This post describes how to manually integrate Red Hat OpenStack 9 (RHOSP9) Cinder service with multiple pre-existing external Red Hat Ceph Storage 2 (RHCS2) clusters. The final configuration goals are to have Cinder configuration with multiple storage backends and support …
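
For flavor, a multi-backend Cinder setup pairs an enabled_backends list with one configuration section per Ceph cluster. A hedged sketch of the relevant cinder.conf fragments (section names, pool names, and paths are illustrative):

    [DEFAULT]
    enabled_backends = ceph1,ceph2

    [ceph1]
    volume_backend_name = ceph1
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph1.conf

    [ceph2]
    volume_backend_name = ceph2
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph2.conf

Each backend then gets its own volume type keyed on volume_backend_name, so users can target a specific cluster.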

Read more at http://tm3.org/bz

On communities: Sometimes it’s better to over-communicate by Flavio Percoco

Communities, regardless of their size, rely mainly on the communication between their members to operate. The existing processes, the current discussions, and the future growth depend heavily on how well communication throughout the community has been established. The channels used for these conversations also play a critical role in the health of the communication (and the community).

Read more at http://tm3.org/c0

Full Stack Automation with Ansible and OpenStack by Marcos Garcia - Principal Technical Marketing Manager

Ansible offers great flexibility. Because of this, the community has figured out many useful ways to leverage Ansible modules and playbook structures to automate frequent operations on multiple layers, including using it with OpenStack.

Read more at http://tm3.org/bs

Next week in Barcelona

Join us next week in Barcelona for OpenStack Summit. We’ll be gathering from around the world to celebrate the Newton release, and plan for the Ocata cycle.

RDO will have a table in the Red Hat booth, where we’ll be answering your questions about RDO. And we’ll have ducks, as usual.

On Tuesday evening, join us for an evening with RDO and Ceph, with technical presentations about both projects, as well as drinks and light snacks.

And, throughout the week, RDO enthusiasts are giving a wide variety of talks about all things OpenStack.

If you’re using RDO, please stop by and tell us about it. We’d love to meet you, and find out what we, as a project, can do better for you and your organization.

See you in Barcelona!

RDO blog posts this week

Here’s what RDO enthusiasts have been blogging about in the last few days.

RDO Newton Released by Rich Bowen

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Newton for RPM-based distributions, CentOS Linux 7 and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Newton is the 14th release from the OpenStack project, which is the work of more than 2700 contributors from around the world (source).

Read more at http://tm3.org/bm

How to run Rally on Packstack environment by mkopec

Rally is a benchmarking tool that automates and unifies multi-node OpenStack deployment, cloud verification, benchmarking, and profiling. For the OpenStack deployment I used the packstack tool.

Read more at http://tm3.org/bn

TripleO Composable Services 101 by Steve Hardy

Over the Newton cycle, we’ve been working very hard on a major refactor of our heat templates and puppet manifests, such that a much more granular and flexible “Composable Services” pattern is followed throughout our implementation. It’s been a lot of work, but it’s been a frequently requested feature for some time, so I’m excited to be in a position to say it’s complete for Newton (kudos to everyone involved in making that happen!) :) This post aims to provide an introduction to this work, an overview of how it works under the hood, some simple usage examples, and a roadmap for some related follow-on work.

Read more at http://tm3.org/8b

TripleO composable/custom roles by Steve Hardy

This is a follow-up to my previous post outlining the new composable services interfaces, which covered the basics of the composable services model that is new for Newton. The final piece of the composability model we’ve been developing this cycle is the ability to deploy user-defined custom roles, in addition to (or even instead of) the built-in TripleO roles (where a role is a group of servers, e.g. “Controller”, which runs some combination of services). What follows is an overview of this new functionality, the primary interfaces, some usage examples, and a summary of future planned work.
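
As a taste of the interface, custom roles in Newton are declared in a roles_data.yaml file passed to the deploy command; a hedged sketch of one minimal extra role (the role name and service list are illustrative):

    - name: Networker
      CountDefault: 1
      ServicesDefault:
        - OS::TripleO::Services::NeutronL3Agent
        - OS::TripleO::Services::NeutronDhcpAgent
        - OS::TripleO::Services::NeutronMetadataAgent

This would then be deployed with something like openstack overcloud deploy -r roles_data.yaml; see Steve’s post for the authoritative details.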

Read more at http://tm3.org/bo

Ceph/RDO meetup in Barcelona at OpenStack Summit by Rich Bowen

If you’ll be in Barcelona later this month for OpenStack Summit, join us for an evening with RDO and Ceph.

Read more at http://tm3.org/bp

Translating Between RDO/RHOS and upstream releases Redux by Adam Young

I posted this once before, but we’ve moved on a bit since then. So, an update.

Read more at http://tm3.org/bq

RDO Newton Released

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Newton for RPM-based distributions, CentOS Linux 7 and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Newton is the 14th release from the OpenStack project, which is the work of more than 2700 contributors from around the world (source).

The RDO community project curates, packages, builds, tests, and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premises, public, or hybrid clouds. At latest count, RDO contains 1157 packages.

All work on RDO, and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.

Getting Started

There are three ways to get started with RDO.

To spin up a proof-of-concept cloud quickly, on limited hardware, try the All-In-One Quickstart. You can run RDO on a single node to get a feel for how it works.
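
In outline, the all-in-one path boils down to a few commands on a CentOS 7 box (the same flow appears in more detail in the Tempest post later in this digest):

$ sudo yum -y install https://rdoproject.org/repos/openstack-newton/rdo-release-newton.rpm
$ sudo yum -y install openstack-packstack
$ sudo packstack --allinone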

For a production deployment of RDO, use the TripleO Quickstart and you’ll be running a production cloud in short order.

Finally, if you want to try out OpenStack but don’t have the time or hardware to run it yourself, visit TryStack, where you can use a free public OpenStack instance, running RDO packages, to experiment with the OpenStack management interface and API, launch instances, configure networks, and generally familiarize yourself with OpenStack.

Getting Help

The RDO Project participates in a Q&A service at ask.openstack.org. For more developer-oriented content, we recommend joining the rdo-list mailing list. Remember to post a brief introduction about yourself and your RDO story. You can also find extensive documentation on the RDO docs site.

The #rdo channel on Freenode IRC is also an excellent place to find help and give help.

We also welcome comments and requests on the CentOS mailing lists and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net); however, the RDO venues have a more focused audience.

Getting Involved

To get involved in the OpenStack RPM packaging effort, see the RDO community pages and the CentOS Cloud SIG page. See also the RDO packaging documentation.

Join us in #rdo on the Freenode IRC network, and follow us at @RDOCommunity on Twitter. If you prefer Facebook, we’re there too, and also Google+.

And, if you’re going to be in Barcelona for the OpenStack Summit two weeks from now, join us on Tuesday evening at the Barcelona Princess, 5pm - 8pm, for an evening with the RDO and Ceph communities. If you can’t make it in person, we’ll be streaming it on YouTube.

How to run Rally on Packstack environment

Rally is a benchmarking tool that automates and unifies multi-node OpenStack deployment, cloud verification, benchmarking, and profiling. For the OpenStack deployment I used the packstack tool.

Install Rally

1. Install rally:

$ sudo yum install openstack-rally

2. After the installation is complete, set up the Rally database:

$ sudo rally-manage db recreate

Register an OpenStack deployment

You have to provide Rally with the OpenStack deployment it is going to benchmark. To do that, we’re going to use the keystone credentials file generated by the packstack installation.

1. Source the credentials file:

$ source keystone_admin

2. Create a rally deployment and name it “existing”:

$ rally deployment create --fromenv --name=existing
+--------------------------------------+----------------------------+----------+------------------+--------+
| uuid                                 | created_at                 | name     | status           | active |
+--------------------------------------+----------------------------+----------+------------------+--------+
| 6973e349-739e-41af-947a-34230b7383f8 | 2016-10-05 08:24:27.939523 | existing | deploy->finished |        |
+--------------------------------------+----------------------------+----------+------------------+--------+

3. You can verify that your current deployment is healthy and ready to be benchmarked with the deployment check command:

$ rally deployment check
+-------------+--------------+-----------+
| services    | type         | status    |
+-------------+--------------+-----------+
| ceilometer  | metering     | Available |
| cinder      | volume       | Available |
| glance      | image        | Available |
| gnocchi     | metric       | Available |
| keystone    | identity     | Available |
| neutron     | network      | Available |
| nova        | compute      | Available |
| swift       | object-store | Available |
+-------------+--------------+-----------+

Run Rally

The sequence of benchmarks to be launched by Rally should be specified in a benchmark task configuration file (in either JSON or YAML format). Let’s create one of the sample benchmark tasks, for example a task that boots and deletes a server.

1. Create a new file and name it boot-and-delete.json.

2. Copy this into the boot-and-delete.json file:

{% set flavor_name = flavor_name or "m1.tiny" %}
{% set image_name = image_name or "cirros" %}
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {
                    "name": "{{flavor_name}}"
                },
                "image": {
                    "name": "{{image_name}}"
                },
                "force_delete": false
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 3,
                    "users_per_tenant": 2
                }
            }
        },
        {
            "args": {
                "flavor": {
                    "name": "{{flavor_name}}"
                },
                "image": {
                    "name": "{{image_name}}"
                },
                "auto_assign_nic": true
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 3,
                    "users_per_tenant": 2
                },
                "network": {
                    "start_cidr": "10.2.0.0/24",
                    "networks_per_tenant": 2
                }
            }
        }
    ]
}

3. Run the task:

$ rally task start boot-and-delete.json

After a successful run you’ll see information such as the task ID, response times, and duration. Note that the Rally input task above uses “cirros” as the image name and “m1.tiny” as the flavor name. If this benchmark task fails, the reason might be a non-existent image or flavor specified in the task. To check which images and flavors are available in the deployment you are currently benchmarking, you can use the rally show command:

$ rally show images
$ rally show flavors

More about Rally task templates can be found in the Rally documentation.
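
Because the task file above is a Jinja2 template, its defaults can be overridden at run time without editing the file; a minimal sketch (the flavor and image names are illustrative):

$ rally task start boot-and-delete.json --task-args '{"flavor_name": "m1.small", "image_name": "cirros"}'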

Ceph/RDO meetup in Barcelona at OpenStack Summit

If you’ll be in Barcelona later this month for OpenStack Summit, join us for an evening with RDO and Ceph.

Tuesday evening, October 25th, from 5 to 8pm (17:00 - 20:00) we’ll be at the Barcelona Princess, right across the road from the Summit venue. We’ll have drinks, light snacks, and presentations from both Ceph and RDO.

If you can’t make it in person, we’ll also be streaming the event on YouTube.

Topics we expect to be covered include (not necessarily in this order):

  • RDO release status (aarch64, repos, workflow)
  • RDO repos overview (CBS vs Trunk, and what goes where)
  • RDO and Ceph (maybe TripleO and Ceph?)
  • Quick look at new rpmfactory workflow with rdopkg
  • CI in RDO - what are we testing?
  • CERN – How to replace several petabytes of Ceph hardware without downtime
  • Ceph at SUSE
  • Ceph on ARM
  • 3D Xpoint & 3D NAND with OpenStack and Ceph
  • Bioinformatics – OpenStack and Ceph used in large-scale cancer research projects

If you expect to be at the event, please consider signing up on Eventbrite so we have an idea of how many people to expect. Thanks!

RDO blog posts this week

Here’s what RDO enthusiasts are blogging about lately.

Gnocchi 3.0 release by Julien Danjou

After a few weeks of hard work with the team, here is the new major version of Gnocchi, stamped 3.0.0. It was very challenging, as we wanted to implement a few big changes in it.

Read more at http://tm3.org/bf

# of DB connections in OpenStack services by geguileo

The other day someone asked me whether the SQLAlchemy connections to the DB were per worker or shared among all workers, and what number of connections should be expected from an OpenStack service. Maybe you have also wondered about this at some point. Wonder no more; here’s a quick write-up summarizing […]
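
For context, each worker process keeps its own SQLAlchemy connection pool (forked workers cannot share one), so total connections scale with the worker count. Pool sizing is tunable per service through the oslo.db options in the [database] section; a hedged sketch with illustrative values:

    [database]
    # connections kept open in each worker's pool
    max_pool_size = 5
    # extra connections allowed beyond the pool under load
    max_overflow = 10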

Read more at http://tm3.org/bg

Hyperthreading in the cloud by Tim Bell

The cloud at CERN is used for a variety of different purposes from running personal VMs for development/test, bulk throughput computing to analyse the data from the Large Hadron Collider to long running services for the experiments and the organisation.

Read more at http://tm3.org/bh

OVS 2.6 and The First Release of OVN by russellbryant

In January of 2015, the Open vSwitch team announced that they planned to start a new project within OVS called OVN (Open Virtual Network). The timing could not have been better for me as I was looking around for a new project. I dove in with a goal of figuring out whether OVN could be a promising next generation of Open vSwitch integration for OpenStack and have been contributing to it ever since.
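
For a taste of what OVN looks like from the command line, here is a hedged sketch of creating a logical switch and port with ovn-nbctl (the names and MAC address are illustrative):

# ovn-nbctl ls-add sw0
# ovn-nbctl lsp-add sw0 sw0-port1
# ovn-nbctl lsp-set-addresses sw0-port1 "00:00:00:00:00:01"
# ovn-nbctl show

Neutron integration (networking-ovn) drives these same northbound constructs programmatically.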

Read more at http://tm3.org/bi

Deployment tips for puppet-tripleo changes by Carlos Camacho

This post will describe different ways of debugging puppet-tripleo changes.

Read more at http://tm3.org/bj

Running Tempest on RDO OpenStack Newton by chandankumar

Tempest is a set of integration tests to run against an OpenStack cluster.

Read more at http://tm3.org/bk

Running Tempest on RDO OpenStack Newton

Tempest is a set of integration tests to run against an OpenStack cluster.

What does RDO provide for Tempest?

RDO provides three packages for running tempest against any OpenStack installation.

  • python-tempest: can be used as a Python library and consumed as a dependency by out-of-tree tempest plugins, e.g. the horizon and designate tempest plugins.
  • openstack-tempest: provides the python-tempest library plus the executables required to run tempest.
  • openstack-tempest-all: installs openstack-tempest as well as all the tempest plugins on the system.

Deploy packstack using the latest RDO Newton packages

Roll out a CentOS 7 VM, then follow these steps:

  1. Install the rdo-release-newton rpm:

     # yum -y install https://rdoproject.org/repos/openstack-newton/rdo-release-newton.rpm
    
  2. Update your CentOS VM and reboot:

     # yum -y update
    
  3. Install openstack-packstack:

     # yum install -y openstack-packstack
    
  4. Run packstack to deploy the OpenStack Newton release:

     # packstack --allinone
    

    Once the packstack installation is done, we are good to go.

Install tempest and required tempest plugins

  1. Install tempest:

    # yum install openstack-tempest
    
  2. Install tempest plugins based on the OpenStack services installed and configured in the deployment.

    Packstack installs horizon, nova, neutron, keystone, cinder, swift, glance, ceilometer, aodh, and gnocchi by default. To find out which OpenStack components are installed, just do an rpm query:

     # rpm -qa | grep openstack-*
    

    Or you can use the openstack-status command for the same purpose. Then grab the tempest plugins for these services and install them:

     # yum install python-glance-tests python-keystone-tests python-horizon-tests-tempest \
       python-neutron-tests python-cinder-tests python-nova-tests python-swift-tests \
       python-ceilometer-tests python-gnocchi-tests python-aodh-tests
    

    Or you can automatically install the required tempest plugins for the services configured in the environment:

     # python /usr/share/openstack-tempest-*/tools/install_test_packages.py
    
  3. To list the installed tempest plugins:

     # tempest list-plugins
    

    Once done, you are ready to run tempest.

    If you face any entry-point issues while running tempest, you can debug them with the entry_point_inspector tool. Install it from the EPEL repo:

         # yum install epel-release
    
         # yum install python-epi
    

    Run the epi command to show the entry points of the tempest plugins:

     # epi group show tempest.test_plugins
    

Configuring and Running tempest

  1. Source the admin credentials and switch to a normal user:

     # source /root/keystonerc_admin
    
     # su <user>
    
  2. Create a directory from which to run tempest:

     $ mkdir /home/$USER/tempest; cd /home/$USER/tempest
    
  3. Configure the tempest directory:

     $ /usr/share/openstack-tempest-*/tools/configure-tempest-directory
    
  4. Auto-generate the tempest configuration for your deployed OpenStack environment:

    $ python tools/config_tempest.py --debug identity.uri $OS_AUTH_URL \
      identity.admin_password  $OS_PASSWORD --create
    

    It will automatically create all the required configuration in etc/tempest.conf.

  5. To list all the tests:

    $ testr list-tests
    

    OR

    $ ostestr -l
    
  6. To run tempest tests:

     $ ostestr
    
  7. To run the API and scenario tests with ostestr and print the slowest tests after the run:

     $ ostestr --regex '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario))'
    
  8. To run specific tests:

     $ python -m testtools.run tempest.api.volume.v2.test_volumes_list.VolumesV2ListTestJSON
    

    OR

     $ ostestr --pdb tempest.api.volume.v2.test_volumes_list.VolumesV2ListTestJSON
    

    ostestr --pdb calls python -m testtools.run under the hood.

Thanks to Luigi, Steve, Daniel, Javier, Alfredo, and Alan for the review.

Happy Hacking!