Blog posts

RDO blog roundup, week of September 28

Here’s what RDO enthusiasts have been writing about over the past week.

If you’re writing about RDO, or about OpenStack on CentOS, Fedora or RHEL, and you’re not on my list, please let me know!

OpenContrail on the controller side by Sylvain Afchain

In my previous post I explained how packets are forwarded from point to point within OpenContrail. We saw the tools available to check which routes are involved in the forwarding. Last time we focused on the agent side, but now we are going to look at another key component: the controller.

… read more at http://tm3.org/2m

Highly available virtual machines in RHEL OpenStack Platform 7 by Steve Gordon

OpenStack provides scale and redundancy at the infrastructure layer to provide high availability for applications built for operation in a horizontally scaling cloud computing environment. It has been designed for applications that are “designed for failure” and voluntarily excluded features that would enable traditional enterprise applications, for fear of limiting its scalability and corrupting its initial goals. These traditional enterprise applications demand continuous operation, and fast, automatic recovery in the event of an infrastructure-level failure. While an increasing number of enterprises look to OpenStack to provide the infrastructure platform for their forward-looking applications, they are also looking to simplify operations by consolidating their legacy application workloads on it as well.

… read more at http://tm3.org/2n

Keystone Unit Tests by Adam Young

Running the Keystone Unit tests takes a long time. To start with a blank slate, you want to make sure you have the latest from master and a clean git repository.
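
Starting from a blank slate in a git checkout looks roughly like this; a sketch of the usual commands, not necessarily the exact steps from Adam's post:

    cd keystone
    git checkout master
    git pull --ff-only origin master
    git clean -xfd     # remove untracked files, including stale .tox virtualenvs
    tox -e py27        # run the unit test suite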

… read more at http://tm3.org/2o

Hints and tips from the CERN OpenStack cloud team by Tim Bell

Having reported that EPT has a negative influence on the High Energy Physics standard benchmark HepSpec06, we have started the deployment of those settings across the CERN OpenStack cloud,
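
For readers who want to experiment with the same settings, EPT is a parameter of the kvm_intel module; a minimal sketch of checking and disabling it on a hypervisor (my commands, not necessarily CERN's exact procedure):

    cat /sys/module/kvm_intel/parameters/ept     # Y means EPT is enabled
    echo "options kvm_intel ept=0" | sudo tee /etc/modprobe.d/kvm_intel.conf
    # reload the module (with no guests running) for the setting to take effect
    sudo modprobe -r kvm_intel && sudo modprobe kvm_intel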

… read more at http://tm3.org/2p

Ossipee by Adam Young

OpenStack is a big distributed system. FreeIPA is designed for security in distributed systems. In order to develop and test each of them, separately or together, I need a distributed system. Virtualization has been a key technology for making this kind of work possible. OpenStack is great at managing virtualization. Added to that are the benefits found when we “fly our own airplanes.” Thus, I am using OpenStack to develop OpenStack.

… read more at http://tm3.org/2q

RDO blog roundup, week of September 21

Here’s what RDO enthusiasts have been writing about over the past week.

If you’re writing about RDO, or about OpenStack on CentOS, Fedora or RHEL, and you’re not on my list, please let me know!

Red Hat Confirms Speaking Sessions at OpenStack Summit Tokyo by Jeff Jameson

As this Fall’s OpenStack Summit in Tokyo approaches, the Foundation has posted the session agenda, outlining the final schedule of events. I am happy to report that Red Hat has 15 sessions that will be included in the week’s agenda, along with a few more as waiting alternates. With the limited space and shortened event this time around, I am pleased to see that Red Hat continues to remain in sync with the current topics, projects, and technologies the OpenStack community and customers are most interested in.

… read more at http://tm3.org/2g

Driving in the Fast Lane: Huge Page support in OpenStack Compute by Steve Gordon

In a previous “Driving in the Fast Lane” blog post we focused on optimization of instance CPU resources. This time around let’s take a dive into the handling of system memory, and more specifically configurable page sizes. We will reuse the environment from the previous post, but add huge page support to our performance flavor.
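
The post has the full walkthrough; the flavor side of it comes down to a single extra spec. A sketch, with an illustrative flavor name:

    # back guest memory for this flavor with huge pages
    nova flavor-key m1.small.performance set hw:mem_page_size=large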

… read more at http://tm3.org/2h

Reviewing Puppet OpenStack patches by Emilien Macchi

Reviewing code takes about 20% of my work time. It’s a lot of time, but not too much when you look at OpenStack velocity. To be efficient, you need to understand how the review process works and have the right tools in hand.
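
The main tool in question is git-review, which fetches a Gerrit change into a local branch for inspection; a quick sketch (the change number below is a placeholder):

    pip install git-review
    git clone https://git.openstack.org/openstack/puppet-nova
    cd puppet-nova
    git review -d 123456    # pull change 123456 into a local branch for review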

… read more at http://tm3.org/2i

Horizon Performance Optimizations by Silver Sky

Some notes on OpenStack Horizon performance optimizations on a CentOS 7.1 install: 4 vCPU (2.3 GHz Intel Xeon E5 v3), 2 GB – 4 GB RAM, SSD-backed 40 GB RAW image.

… read more at http://tm3.org/2j

Python 3 Status in OpenStack Liberty by Victor Stinner

The Python 3 support in OpenStack Liberty made huge, visible progress. Blocking libraries have been ported. Six OpenStack applications are now compatible with Python 3: Aodh, Ceilometer, Gnocchi, Ironic, Rally and Sahara. Thanks to voting python34 check jobs, Python 3 support can only increase; Python 2-only code cannot be reintroduced by mistake in tested code.
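
Projects with a voting python34 job generally expose the same environment locally through tox, so checking a project's Python 3 status from its source tree is roughly a one-liner:

    tox -e py34    # run the unit tests under Python 3.4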

… read more at http://tm3.org/2k

Analyzing the performance of Red Hat Enterprise Linux OpenStack Platform using Rally by Roger Lopez and Joe Talerico

In our recent blog post, we’ve discussed the steps involved in determining the performance and scalability of a Red Hat Enterprise Linux OpenStack Platform environment. To recap, we’ve recommended the following:

… read more at http://tm3.org/2l

RDO blog roundup, week of September 14th

Here’s what RDO enthusiasts have been writing about over the past week.

If you’re writing about RDO, or about OpenStack on CentOS, Fedora or RHEL, and you’re not on my list, please let me know!

Let me tell you something about being a PTL, by Flavio Percoco

It’s that time of the cycle, in OpenStack, when projects need to elect who’s going to be the PTL for the next 6 months. People look at the, hopefully many, candidacies and vote based on the proposals that sound best to them. I believe, for the PTL elections, the voting process has worked decently, which is why this post is not meant for voters but for the, hopefully many, PTL candidates.

… read more at http://tm3.org/2c

Liberty cycle retrospective in Puppet OpenStack by Emilien Macchi

Things are moving very fast in OpenStack; it might be useful to take a short break and write down a bit of a retrospective; it will help to see what happened in the Puppet OpenStack project during the last months.

… read more at http://tm3.org/2d

Big data in the open, private cloud by Tim Gasper

Organizations that take advantage of comprehensive insights from their data can gain a competitive edge. However, the ever-increasing amount of data coming in can make it hard to see trends. Adding to this challenge, many companies have data locked in silos, making it difficult—if not impossible—to gain critical insights. Big data technologies like Hadoop can help unify and organize data, but getting fast, meaningful insight still isn’t easy.

… read more at http://tm3.org/2e

Multi master Database Cluster on OpenStack with Load Balancing by Silver Sky Soft

Multi-master database replication in a cluster of databases allows applications to write to any database node, with the data becoming available at the other nodes in short order. The main advantages are high availability, high read performance, and scalability.
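
The excerpt doesn't name a specific stack, but with a Galera-style multi-master cluster, for instance, a quick sanity check is to ask any node how many peers it sees:

    # should report the expected node count on every member
    mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"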

… read more at http://tm3.org/2f

Kyle Mestery and the future of Neutron

At LinuxCon two weeks ago I had the privilege of chatting with Kyle about the future of Neutron. Kyle was a delight to interview, because he’s obviously so passionate about his project.

If the audio player below doesn’t work for you, you can listen to the interview HERE, or see the transcript below.

R: This is Rich Bowen. I’m the OpenStack community liaison at Red Hat. This is a continuation of my series … I’m talking with various Project Technical Leads (PTLs) at OpenStack about what’s coming in the future. We’ve noticed … ever since I’ve been involved with OpenStack I’ve noticed that networking is always the number one place where people have difficulties. I’m really excited to be talking with Kyle Mestery, who is the PTL of Neutron. Thanks so much for taking time to do this.

K: Absolutely, Rich. Thanks a lot. I’m definitely excited to talk about this, because networking is obviously important. You’ve got to have a network for your compute nodes.

R: We’re doing this in person at LinuxCon. My other two interviews were online. So that’s what all this background noise is. When I did the other two interviews, I focused mainly on what’s new in Kilo and what’s coming in Liberty. But I’m particularly interested in looking a little bit further out with Neutron, because this seems to be a really difficult problem, but just in the last few years we’ve seen amazing progress here. Tell me what you see coming in Liberty, and in M and N and O and P and …

K: I guess what I’d like to start talking about is maybe a little bit less about the technology, and more about the project itself.

Neutron, if you look at all the metrics, whether it’s code reviews, bugs, blueprints, everything … it’s in the top four for everything usually. So it’s a really large project. And we’re also the project that has the most plugins and drivers. So we’ve kind of had this problem in the history of … we implemented a platform, for networking, an API and a platform layer, but we also have a reference implementation, and then we have this huge grouping of implementations of the API, whether they’re from vendors or other Open Source projects, like OpenDaylight or OpenContrail or something like that.

And there’s been this perception problem of, what is Neutron? Is it a platform? Or is it Neutron, the OVS + ML2 implementation? And so I think, over the years, that’s been a concern. People who had issues with Neutron, maybe had issues with this OVS + ML2 implementation.

What we’ve done over the last year - Juno and Kilo - the team’s done a lot of work on that implementation - the built-in agent implementation. We’ve really made that a lot better, and Kilo should be a pretty solid release for people. And we’re still doing improvements in Liberty for this.

In parallel, we’ve helped to enable the platform so things like OpenDaylight, OpenContrail, MidoNet, all of these new projects, OVN, they can also enable and implement the APIs as well, because there’s a lot of groups working there.

It’s a challenging thing when you’re trying to build this platform, and you have your own implementation of it, and you’re trying to enable these other groups, too. But I think we’ve learned a lot over the last year, and the team has done a good job.

R: When I first started hearing about OpenDaylight, it was this magical thing where you could draw lines between blobs and networking would just happen. It feels like maybe that’s a long way away, but is there going to be a time when I don’t have to be a networking scholar to use OpenStack?

K: Yes, that ultimately is the goal. We’re definitely working to get there. So that you as an operator can provide this scalable tenant networking to your tenants, and you as a tenant can just consume it. And I think we’re getting there. I’m hopeful that it’s going to be soon.

So, speaking of the future, one of the things that we’re looking at beyond Liberty that we’ve actually spent a lot of Liberty talking about has been this concept of an L3 network as well. So right now the API specifies networks of L2 broadcast domains, and we have subnets and ports and routers and things. It turns out that a lot of operators, especially operators at scale, like GoDaddy and Rackspace and Yahoo, they’re very interested in breaking up what a network means, perhaps introducing the concept of an L3 network. They might want to do L3 just per-rack, for example.

This has been a great cycle because we’ve spent a lot of time with these operators understanding their use case, and we’ve really refined this. In fact I just literally got out of a meeting. I dialed in remotely to the ops midcycle for this. I think we have this nailed down so that we can get the spec done now and get it approved, and then in Mitaka we can implement this. This will really help these large deployers, and these large operators, and I think it’s going to be a good change.

R: Before we started talking on mic, you were talking about the next cycle, and how Neutron will be tagged in that. Tell us more about that.

K: There were a couple of big developments. The community as a whole has really been working to bring Neutron in, because … The quick back-story is Nova networking has been around for a long time. Neutron’s been around for 4 years now. We’re getting close to the point where hopefully we can finally deprecate Nova network. We’re not quite there. But the community as a whole has done a couple of great things.

Number one, there was recently, as part of the tag process with new governance changes, someone proposed a starter-kit:compute tag, that was summarized as what services in OpenStack you need to bring up a small compute cluster. Initially that had Nova network. Now the interesting thing was, we as the Neutron team, we really pushed for Neutron on there, and it emerged with Nova network, but Monty Taylor actually proposed a patch to change it to Neutron, with the logic being, if you put someone in and start this cloud with Nova network, you can’t really grow it and expand to Neutron.

So, it was great. The community came together and we all merged it, and so now Neutron is recommended.

Now, in parallel, I was just at the OpenStack board meeting, and I dialed into the Defcore midcycle. The current cycle that they’re starting for Defcore, they’re going to focus on Neutron as the default networking there as well.

R: Great!

K: Yeah. So it’s been really great to see the community come together and rally around Neutron.

R: The project’s only five years old. Looking forward another five years, what do you envision for Neutron?

K: I really think that we’re kind of evolving it to be this platform - to be a platform layer at this point. So the hope is that one of these great Open Source networking projects like OpenDaylight, for example, could maybe come in and become the default networking, or something like that - could become this rock-solid awesome thing.

R: That seemed to be their goal when they started out.

K: Yeah, and they’re doing a great job. I’ve been involved with OpenDaylight, and there’s a great team from Red Hat, amongst other companies, HP, Cisco, Brocade, a really good team up there that’s working really hard, and doing a lot of great stuff, and making progress. It’s been fun to see that.

So that’s the goal, to make it a platform. One of the things we were looking at doing was spinning out the reference implementation into its own Gerrit repository, so it could be on equal footing with some of these other things that are out there. And I think we’ll do that in Mitaka. We didn’t quite do that in Liberty, but maybe then.

R: When I was doing a little research before we talked, I was looking at the Neutron Git repository. It seems that it’s got more than any other project - just dozens of subprojects. If somebody wanted to get involved with Neutron, as a developer, is there an easy place - is there an easy in?

K: My recommendation would be, reach out in #openstack-neutron on Freenode. There’s a lot of us core reviewers and other community members who are there and willing to help out. We have a lot of bugs in Launchpad, and we try to tag a bunch as low-hanging fruit, which hopefully are easy for initial developers, which also helps. And we’ve done a lot to enhance our DevRef - http://docs.openstack.org/developer/neutron/devref/ - documentation to hopefully make it easier for new developers to pull the tree and then go and look at what’s there and say all this is documented better. We’ve really focused on that so hopefully it’s easier for new people to get involved and contribute.

The documentation team in OpenStack has been great. This cycle we had some members of that team along with some Neutron people really enhance the install guide. One of the things we did was we came up with an install mode comparable to what Nova network had. You could always do it in Neutron; it turns out it just wasn’t documented well. So we documented that. And the other thing is we had some of the documentation people … it turns out we have a lot of documentation in DevRef, around installation and configuration, that probably should live in the install or networking guide, and so that’s moving there too. We’re doing a lot of work there.

I’m really excited about it, and it’s really hard to believe Liberty is in two weeks. We’re two weeks out, and I’m at LinuxCon. I was just in the cross-project meeting. At this point, we’re really trying to focus on finishing Liberty. Then we’ll take a small break, and ramp up for Mitaka again.

R: Thanks so much for your time.

K: Absolutely. Thanks, Rich.

RDO blog roundup, week of August 31st

It’s been a very quiet week in RDO blogs. If you’re writing about RDO, or about OpenStack on CentOS, Fedora or RHEL, and you’re not on my list, please let me know!

Scaling NFV to 213 Million Packets per Second with Red Hat Enterprise Linux, OpenStack, and DPDK by Andrew Theurer

There is a lot of talk about NFV and OpenStack, but frankly not much hard data showing us how well OpenStack can perform with technologies like DPDK. We at Red Hat want to know, and I suspect many of you do as well. So, we decided to see what RDO Kilo is capable of, by testing multiple Virtual Network Functions (VNFs), deployed and managed completely by OpenStack.

… read more at http://tm3.org/25

ZooKeeper part 2: building highly available applications, Ceilometer central agent unleashed by Yassine Lamgarchal

The Ceilometer project is in charge of collecting various measurements from the whole OpenStack infrastructure, including the bare metal and virtual level. For instance, we can see the number of virtual machines running or the number of storage volumes.

… read more at http://tm3.org/26

RDO blog roundup, week of August 17th

Here’s what RDO enthusiasts have been writing about over the past week.

If you’re writing about RDO, or about OpenStack on CentOS, Fedora or RHEL, and you’re not on my list, please let me know!

Flavio Percoco, PTL of the Zaqar project by Rich Bowen

Zaqar (formerly called Marconi) is the messaging service in OpenStack. I recently had an opportunity to interview Flavio Percoco, who is the PTL (Project Technical Lead) of that project, about what’s new in Kilo, and what’s coming in Liberty.

… read more at http://tm3.org/1x

Tokenless Keystone by Adam Young

Keystone Tokens are bearer tokens, and bearer tokens are vulnerable to replay attacks. What if we wanted to get rid of them?

… read more at http://tm3.org/1y

Upgrades are dying, don’t die with them, by Maxime Payant-Chartier

We live in a world that has changed the way it consumes applications. The last few years have seen a rapid rise in the adoption of Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS). Much of this can be attributed to the broad success of Amazon Web Services (AWS), which is said to have grown revenue from $3.1B to $5B last year (Forbes). More and more people, enterprise customers included, are consuming applications and resources that require little to no maintenance. And any maintenance that does happen, now goes unnoticed by users. This leaves traditional software vendors contending to find a way to adapt their distribution models to make their software easier to consume. Lengthy, painful upgrades are no longer acceptable to users, forcing vendors to create a solution to this problem.

… read more at http://tm3.org/1z

The OpenStack Big Tent, by Rich Bowen

OpenStack is big and complicated. It’s composed of many moving parts, and it can be somewhat intimidating to figure out what all the bits do, what’s required, what’s optional, and how to put all the bits together.

… read more at http://tm3.org/1-

Provider external networks (in an appropriate amount of detail) by Lars Kellogg-Stedman

In Quantum in Too Much Detail, I discussed the architecture of a Neutron deployment in detail. Since that article was published, Neutron gained the ability to handle multiple external networks with a single L3 agent. While I wrote about that back in 2014, I covered the configuration side of it in much more detail than the underlying network architecture. This post addresses the architecture side.
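
As his 2014 configuration post covered, the crux is that the L3 agent must not be tied to one external bridge or network; roughly, assuming the usual RDO file locations:

    # /etc/neutron/l3_agent.ini: clear both so one agent can serve any external network
    sudo crudini --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge ""
    sudo crudini --set /etc/neutron/l3_agent.ini DEFAULT gateway_external_network_id ""
    sudo systemctl restart neutron-l3-agent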

… read more at http://tm3.org/20

Logging configuration in OpenContrail by Numan Siddique

We know that all software components and services generate log files. These log files are vital in troubleshooting and debugging problems. If the log files are not managed properly, it can be extremely difficult to get a good look into them.

… read more at http://tm3.org/21

Neutron in-tree integration tests by Assaf Muller

It’s time for OpenStack projects to take ownership of their quality. Introducing in-tree, whitebox multinode simulated integration testing. A lot of work went in over the last few months by a lot of people to make it happen.

… read more at http://tm3.org/22

Dims talks about the Oslo project, by Rich Bowen

This is the second in what I hope is a long-running series of interviews with the various OpenStack PTLs (Project Technical Leads), in an effort to better understand what the various projects do, what’s new in the Kilo release, and what we can expect in Liberty, and beyond.

… read (and listen) more at http://tm3.org/23

Performance and Scaling your Red Hat Enterprise Linux OpenStack Platform Cloud by Joe Talerico

As OpenStack continues to grow into a mainstream Infrastructure-as-a-service (IaaS) platform, the industry seeks to learn more about its performance and scalability for use in production environments. As recently captured in this blog, common questions that typically arise are: “Is my hardware vendor working with my software vendor?”, “How much hardware would I actually need?”, and “What are the best practices for scaling out my OpenStack environment?”

… read more at http://tm3.org/24

Dims talks about the Oslo project

This is the second in what I hope is a long-running series of interviews with the various OpenStack PTLs (Project Technical Leads), in an effort to better understand what the various projects do, what’s new in the Kilo release, and what we can expect in Liberty, and beyond.

If the audio player below doesn’t work for you, you can listen to the recording HERE, or see the transcript below.

Rich: Hi, this is Rich Bowen. I am the OpenStack Community Liaison at Red Hat, and continuing my series on Project Technical Leads (PTLs) at OpenStack, I’m talking with Davanum Srinivas, who I’ve known for a few years outside of the OpenStack context, and he is the PTL for the Oslo project.

Oslo is the OpenStack Commons Library.

Thanks for speaking with me Davanum.

Dims: Thanks, Rich. You can call me Dims. You know me by Dims.

R: Yeah, I know. (laughs)

R: Give us a little bit of background. How long has the Oslo project been around?

D: We were doing things differently - we have a really old history, though. Some of the initial effort was started back in release B.

R: Oh, that long ago.

D: Yeah. So, what we were doing … why did Oslo come about? Oslo came about because way back when Nova started, we started splitting code from Nova into separate projects. But these projects were sharing code, so we were trying to figure out the best way to synchronize code between these sibling or child projects. So we ended up with a single repository of source code, called Oslo Incubator, where you would have the master copy, and everybody would sync from there, but what was happening was, everybody had their own sync schedule. Some people were contributing patches back, and it was becoming hard to maintain those patches. We decided we had to change the way the team worked. And we started releasing libraries, for specific purposes. What you saw in Kilo was a big bang explosion of a huge number of libraries from the Oslo team. Most of it was code in Oslo Incubator. We just had to cut the modules into a proper shape, sequence, with an API, with a correct set of dependencies, and that’s what we ended up releasing, one by one. Other projects started using things like oslo.config, oslo.messaging, oslo.db, oslo.log, and all these different libraries.

So that’s where we are today.

R: What is it that you’ll be doing in coming releases? Is it just the effort of identifying duplication, or are you actively developing new libraries?

D: Yes, we are. In Liberty, we have 5 new libraries coming up. Three of them start with ‘oslo.’ - like oslo.cache, oslo.reports, oslo.service. The other two do not have ‘oslo’ in their names. One is called Automaton, the other is called Futurist.

Automaton is a library for building state machines and things like that. Futurist is picking up some of the work that is done in upstream futures and things like that and making it available to all of the projects in the OpenStack ecosystem.

So these two projects can be used outside of Oslo, and outside of OpenStack by other people. That’s why they don’t have the Oslo name in them.

R: Do you see a lot of projects outside of OpenStack using these?

D: We hope so. For example, there is a project called Debt Collector, which we think fits well with the question of how you deprecate code and what primitives we can provide that make it easy to mark code as deprecated. A lot of people who work on Oslo also work in the overall Python ecosystem, so the hope is that if we design the libraries in such a way that they’re reusable, other people will pick some of our stuff up. But that’s a stretch goal. The real goal is to make sure that these libraries work well with the OpenStack projects.

And the other thing about these libraries is that they don’t drag in the Oslo baggage. For example, if you take oslo.db or oslo.messaging, they pull in a lot of other Oslo libraries, and these little libraries are designed so that they don’t drag in other Oslo libraries. So that’s the other good thing about these.

The way the Oslo project has been for the last few cycles has been that we are doing a lot of experiments in Oslo, which have been rolled out to other projects in the OpenStack ecosystem. Oslo is slightly different from other projects in the sense that people don’t work on it full time. So we have people work part time on it, but they focus mainly on other bigger projects. They come here when they need a feature or a fix, or things like that, and then they stay. We have a few cores who monitor reviews and bugs across all of the Oslo projects, but we also have people who specifically focus on individual little libraries, and they get core rights there.

People in the OpenStack ecosystem are experimenting with different structures, like for Neutron, they put everything into subrepos, and they experiment that way. And I think that what we are doing might be more useful to Nova, for example, and other projects, where they would like to keep a set of cores together, and also have subsystem maintainers and things like that.

Oslo is a good place to do this experimentation because the code base is not that huge, and the community is not that big as well. And the rate of churn, in terms of bugs and reviews, is not that high, as well. We are also experimenting with release versioning and things like that, and some of the things that you’ve seen recently driven by Doug, across the OpenStack ecosystem, we tested it here first, in terms of the versioning numbers, not having the Big Bang release, how do we do it, and things like that.

We lead the way.

The other big thing is, for example, Python 3.4 support. All the Oslo libraries have to be Python 3.4 compliant first before they can be used and adopted by the other projects. So we end up being at the forefront, trying to use libraries like websockify, or other libraries from the OpenStack ecosystem, which are not Python 3.4 compliant, and we work with them to get them compliant, and then use them in Oslo libraries, and then we roll it in. So we play an important role, I think, in the OpenStack ecosystem.

R: As PTL, is this a full time thing for you, or not? What are your responsibilities as PTL?

D: One of the most time-consuming pieces of work is getting the releases out on a weekly basis. We try to make it predictable. At least, this cycle, we have started to make it predictable. Earlier, we heard complaints that people didn’t know when we were releasing, so they were not ready, and things like that. So we have a good process this time around, where, over the weekend, we run a bunch of tests, outside of the CI, as well as inside our CI system, to make sure that the master of all Oslo libraries works well with Nova, Neutron, Glance, and things like that, and come Monday morning, we decide which projects need releases, based on what has changed in them over the last week or so. We follow the release management guidelines, working with Doug and Thierry, to generate the releases during the day on Monday.

After Tuesday you don’t have to worry about Oslo releases breaking the CI or your code. That has helped a lot of projects, especially Nova, for example. If they know there is a break late on Monday evening, they know who to ping, and we can start triaging the issue, and by Tuesday they are back on their feet.

That’s the worst-case scenario. Best-case scenario, no problem happens, and we are good to go. But there’s always one test case, or one scenario here or there. We always try to test beforehand, but like somebody said, it’s a living thing - the ecosystem, the CI system, is like an emergent behavior, it’s a living thing. It’s hard.

Flavio Percoco talks about the Zaqar project

Zaqar (formerly called Marconi) is the messaging service in OpenStack. I recently had an opportunity to interview Flavio Percoco, who is the PTL (Project Technical Lead) of that project, about what’s new in Kilo, and what’s coming in Liberty.

If the audio player below doesn’t work for you, the recording is HERE, and the transcript follows below.



R: This is Rich Bowen. I am the RDO community liaison at Red Hat, and I’m speaking with Flavio Percoco, who is the PTL of the Zaqar project. We spoke two years ago about the project, and at that time it had a different name. I was hoping you could tell us what has been happening in the Kilo cycle, and what we can expect to see in Liberty.

F: Thanks, Rich, for having me here. Yes, we spoke two years ago, back in Hong Kong, while the project was called Marconi. Many things have happened in these last few years. We developed new APIs, we’ve added new features to the project.

At that time, we had version 1 of the API, and we were still figuring out what the project was supposed to be like, and what features we wanted to support, and after that we released a version 1.1 of the API, which was pretty much the same thing, but with a few changes, and a few things that would make consuming Zaqar easier for the final user.

Some other things changed. The community provided a lot of feedback to the project team. We’ve attempted to graduate two times, and then the Big Tent discussion happened, and we just fell into the category of projects that would be a good part of the community - of the Big Tent discussion. So we are now officially part of OpenStack. We’re part of this Big Tent group.

We changed the API a little bit. The impression that the old API gave was that it was a queueing service, whereas what we really wanted to do was a messaging service. There is a fundamental difference between the two. Our focus is to provide a messaging API for OpenStack that would not just allow users to send messages from one point to another, but it would also allow users to have notifications right away from that API. So we’ll take advantage of the common storage that we’ll use for both features, for different services living within the same service. That’s a big thing, and something we probably didn’t talk about back then.

The other thing is that in Kilo we dedicated a lot of time to work on these versions of the API and making sure that all of the feedback that we got from the community was taken care of and that we were improving the API based on that feedback, and those long discussions that we had on the mailing list.

In Liberty, we’ve dedicated time to integrating with other projects, as in, having other projects consume the API. So we’re very excited to say that in Liberty a few patches have landed in Heat that rely on Zaqar for having notifications, or to send messages, and communicate with other parts of the Heat service. This is very exciting for us, because we have some stories of production environments, but we didn’t have stories of other projects consuming Zaqar, and this definitely puts us in a better position to improve the service, and get more feedback from the community.

In terms of features for the Liberty cycle, we’ve dedicated time to improving the websocket transport which we started in Kilo, but didn’t have enough time to complete there. This websocket transport will allow persistent connections to be made against the Zaqar service, so you’ll just connect to the service once, and you’ll keep that connection alive. This is ideal for several scenarios, and one of those is connecting to Zaqar from a browser and having JavaScript communicate directly with Zaqar, which is something we really want to have.

Another interesting feature that we implemented in Liberty is called pre-signed URLs. If folks are familiar with Swift temp URLs (http://docs.openstack.org/kilo/config-reference/content/object-storage-tempurl.html), this is something very similar to that: it generates a URL that can expire. You can share that URL with people or services that don’t have a username in Zaqar, so that they can connect to the service and still send messages. This URL is limited to a single tenant and a single queue, and it has privileges and policies attached to it so that we can protect all the data that is going through the service.

I believe those are the two features that excite me the most from the Liberty cycle. But what excites me the most about this cycle is that we have other services using Zaqar, and that will allow us to improve our service a lot.

R: Looking forward to the future, is there anything that you would like to see in the M cycle? What is the next big thing for Zaqar?

F: In the M cycle, I still see us working on having more projects consuming Zaqar. There’s several use cases that we’ve talked about that are not being taken care of in the community. For instance, talking to guest agents. We have several services that need to have an agent running in the instances. We can talk about Trove, we can talk about Sahara, and Murano. We are looking to address that use case, which is what we built pre-signed URLs for. I’m not sure we’re going to make it in Liberty, because we’re already on the last milestone of the cycle, but we’ll still try to make it in Liberty. If we can’t make it in Liberty, that’s definitely one of the topics we’ll need to dedicate time to in the M cycle.

But as a higher level view, I would really like to see a better story for Zaqar in terms of operations support and deployment - make it very simple for people to go there and say they want Zaqar, this is all I need, I have my Puppet manifest, or Ansible playbooks, or whatever people are using now - we want to address that area that we haven’t paid much attention to. There is already some effort in the Puppet community to create manifests for Zaqar, which is amazing. We want to complete that work, we want to tell operations, hey, you don’t have to struggle to make that happen, you don’t have to struggle to run Zaqar, this is all you need.

And the second thing that I would like to see Zaqar doing in the future is to have a better opinion of what storage it wants to rely on. So far we have support for two storages that are unicode based and there’s a proposal to support a third storage, but in reality what we would really like to do is have a more opinionated Zaqar instance of storage, so that we can build a better API, make it consistent, and make sure it is dependable, and provide specific features that are supported, so that it doesn’t matter what storage you are using or how you deploy Zaqar, you’ll always get the same API, which is something that right now is not true. If you deploy Redis, for instance, you will not have support for FIFO queues, which are optional right now in the service. You won’t be able to have them because that’s something that’s related to the storage itself. You don’t get the same guarantees that you’d get with other storage. We want to have a single story that we can tell to users, regardless of what storage they are using. This doesn’t mean that ops cannot use their own storage. If you deploy Zaqar and you really want to use a different storage, that’s fine, we’re not going to remove pluggability from the service. But in terms of support, I would like Zaqar to be more opinionated.

RDO blog roundup, week of August 10

Here’s what RDO enthusiasts have been writing about over the past week.

If you’re writing about RDO, or about OpenStack on CentOS, Fedora or RHEL, and you’re not on my list, please let me know!

Ceilometer, Gnocchi & Aodh: Liberty progress by Julien Danjou

It’s been a while since I talked about Ceilometer and its companions, so I thought I’d go ahead and write a bit about what’s going on this side of OpenStack. I’m not going to cover new features and fancy stuff today, but rather a shallow overview of the new project processes we initiated.

… read more at http://tm3.org/1s

Tuning hypervisors for High Throughput Computing, by Tim Bell

Over the past set of blogs, we’ve looked at a number of different options for tuning High Energy Physics workloads in a KVM environment such as the CERN OpenStack cloud.

… read more at http://tm3.org/1t

Template for a KeystoneV3.rc by Adam Young

If you are moving from Keystone v2 to v3 calls, you need more variables in your environment. Here is a template for an updated keystone.rc for v3, in jinja format:
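
Adam's post has the actual template; a v3 rc file generally takes roughly this shape (the jinja variable names here are mine, for illustration, not necessarily his):

    export OS_AUTH_URL=http://{{ keystone_host }}:5000/v3
    export OS_IDENTITY_API_VERSION=3
    export OS_USERNAME={{ username }}
    export OS_PASSWORD={{ password }}
    export OS_PROJECT_NAME={{ project_name }}
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_DOMAIN_NAME=Default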

… read more at http://tm3.org/1u

Findings from Reviewing the oVirt Overall Dashboard Concept at Red Hat Summit, by Liz Blanchard

At Red Hat Summit this past year, Serena and I presented a few concepts that we hope to include as features in oVirt.NEXT. One of those concepts was a design for an overall dashboard. This dashboard would aggregate all of the data from within an environment (all data centers) and present the user with visualizations of this information. The following is the initial design concept for the overall dashboard that we showed:

… read more at http://tm3.org/1v

How to choose the best-fit hardware for your OpenStack deployment by Jonathan Gershater

One of the benefits of OpenStack is the ability to deploy the software on standard x86 hardware, and thus not be locked in to custom architectures and high prices from specialized vendors.

… read more at http://tm3.org/1w

RDO blog roundup, week of August 3rd

Here’s what RDO enthusiasts have been writing about over the past week.

If you’re writing about RDO, or about OpenStack on CentOS, Fedora or RHEL, and you’re not on my list, please let me know!

A Cinder Road to Active/Active HA, by Gorka Eguileor

We all want to see OpenStack’s Block Storage Service operating in High Availability with Active/Active node configurations, and we are all keen to contribute to make it happen, but what does it take to get there?

… read more at http://tm3.org/1i

From 0 to OpenStack with devtest: the process in details, by Yanis Guenane

Devtest is the upstream way to deploy OpenStack with TripleO. In simple words, it takes you from a fresh bare-metal server to an overcloud (that is, an OpenStack cloud) up and running with a single script.
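
For a sense of scale, the entry point at the time was the devtest.sh script from tripleo-incubator, roughly along these lines; read Yanis's post before pointing it at a machine you care about:

    git clone https://git.openstack.org/openstack/tripleo-incubator
    # the flag name is apt: devtest reconfigures the host it runs on
    ./tripleo-incubator/scripts/devtest.sh --trash-my-machine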

… read more at http://tm3.org/1j

How VMs access metadata via qrouter-namespace in Openstack Kilo, by Boris Derzhavets

It is actually an update, for Neutron on Kilo, of the original blog entry http://techbackground.blogspot.ie/2013/06/metadata-via-quantum-router.html, which considered the Quantum implementation on Grizzly.
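
As a taste of what the post digs into: on the network node, the metadata proxy lives inside the router's network namespace, which you can inspect directly (the router UUID below is a placeholder):

    sudo ip netns list                                    # find the qrouter-<uuid> namespace
    sudo ip netns exec qrouter-<uuid> netstat -lnpt       # neutron-ns-metadata-proxy on port 9697
    sudo ip netns exec qrouter-<uuid> iptables -t nat -S  # REDIRECT 169.254.169.254:80 to 9697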

… read more at http://tm3.org/1k

CPU Pinning and NUMA Topology on RDO Kilo on Fedora Server 22, by Boris Derzhavets

The posting below follows up http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/ on RDO Kilo installed on Fedora Server 22. See detailed installation instructions at http://lxer.com/module/newswire/view/216855/index.html
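
For readers who just want the shape of the setup, it boils down to reserving host cores plus a flavor extra spec; the core range and flavor name below are illustrative:

    # compute node: reserve host cores 2-7 for guest vCPUs (nova.conf)
    sudo crudini --set /etc/nova/nova.conf DEFAULT vcpu_pin_set 2-7
    # pin each vCPU of this flavor to a dedicated host core
    nova flavor-key m1.small.performance set hw:cpu_policy=dedicated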

… read more at http://tm3.org/1l

OpenStack CPU topology for High Throughput Computing, by Tim Bell

We are starting to look at the latest features of OpenStack Juno and Kilo as part of the CERN OpenStack cloud to optimise a number of different compute intensive applications.

We’ll break down the tips and techniques into a series of small blogs. A corresponding set of changes to the upstream documentation will also be made to ensure the options are documented fully.

… read more at http://tm3.org/1m

CPU Pinning and NUMA Topology on RDO Kilo && Hypervisor Upgrade up to qemu-kvm-ev-2.1.2-23.el7.1 on CentOS 7.1 by Boris Derzhavets

The posting below follows up http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/ on RDO Kilo upgraded via qemu-kvm-ev-2.1.2-23.el7.1 on CentOS 7.1.

… read more at http://tm3.org/1n

CPU Model Selection for High Throughput Computing by Tim Bell

As part of the work to tune the configuration of the CERN cloud, we have been exploring various options for tuning compute intensive workloads.

One option in the Nova configuration allows the model of CPU visible in the guest to be configured between different alternatives.
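
The option in question is cpu_mode (plus cpu_model for named models) in the libvirt section of nova.conf; for instance, with illustrative values:

    # expose the host CPU directly to guests...
    sudo crudini --set /etc/nova/nova.conf libvirt cpu_mode host-passthrough
    # ...or present a specific named model instead
    sudo crudini --set /etc/nova/nova.conf libvirt cpu_mode custom
    sudo crudini --set /etc/nova/nova.conf libvirt cpu_model SandyBridge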

… read more at http://tm3.org/1o

EPT and KSM for High Throughput Computing, by Tim Bell

As part of the analysis of CERN’s compute intensive workload in a virtualised infrastructure, we have been examining various settings of KVM to tune the performance.

… read more at http://tm3.org/1p

Simpler Road to Cinder Active-Active by Gorka Eguileor

Last week I presented a possible solution to support Active-Active configurations in Cinder, and as much as it pains me to admit it, it was too complex, so this week I’ll present a simpler solution.

… read more at http://tm3.org/1q

NUMA and CPU Pinning in High Throughput Computing, by Tim Bell

CERN’s OpenStack cloud runs the Juno release on mainly CentOS 7 hypervisors. Along with previous tuning options described in this blog which can be used on Juno, a number of further improvements have been delivered in Kilo.

… read more at http://tm3.org/1r