openstack-discuss
January 2019
- 234 participants
- 279 discussions
23 Apr '19
Hey everybody!

tl;dr - Let me know if you'd like to go diving in the Rocky Mountains.

Of all of the places we've had a Summit where one might reasonably expect there to be a possibility of scuba diving, the Denver Summit is probably REALLY low on the list. But it turns out that there is a dive shop that does dives at the Denver Aquarium [0] (with all the fish and sharks and whatnot). The Denver Aquarium is also very close to the Convention Center.

I'm going to see about putting together a private group dive event for those of us who think that sounds like a fun idea. Due to cross-contamination concerns for the health and safety of the fish, the dive shop provides all equipment - so you don't have to bring gear. They will allow bringing your own mask - and I'm assuming that if a well-scrubbed mask is ok then a well-scrubbed dive computer would be ok - but I'll check with them when I call to arrange things.

If you are Open Water certified and would like to join, please send me a response (replying directly to me rather than the list is fine) so I can judge the general interest level when talking to the lovely humans.

If you are **NOT** Open Water certified but are thinking to yourself "wow, that sounds like fun" - the dive shop does certification dives at the aquarium on the weekends. That's a bit out of scope for me to arrange, but if you want to show up in Denver early, do an Open Water class, and then join us, that's awesome. You can also get certified at home before coming ... but you'll need an Open Water certification to join us.

BUT WAIT - THERE'S MORE!

There is another interesting diving opportunity in the general Mountain West area - although not immediately **close** to Denver - that Sandy and I are going to before the Summit: Homestead Crater. We're arranging to meet a scuba instructor there to do our second Altitude cert dive, and also to do a 2-dive "Hot Springs Diver" specialty. We'd be happy to have folks join us.

Homestead Crater [1] is outside of Salt Lake City. It's a hot spring - the water is 96F/35.5C - and it's also an altitude dive. If people are interested, we can meet at the crater on Thursday morning pre-Summit, dive the crater, then drive to Denver on Friday/Saturday.

** If only a small number of people are interested in any of this, we'll likely just do the Denver dives the weekend before during one of the normally scheduled dive times instead of arranging a group outing, and shift Homestead, if anyone is interested, a few days earlier. **

I will not have enough space in my car to get you from the crater to Denver, so if you want to fly in to SLC ahead of time and meet there, awesome.

Unlike Denver, the crater will require you to have equipment, including scuba tanks. For anyone interested in coming, there are dive shops in Salt Lake where you can rent gear. There is a dive shop at the crater to do air fills.

Anyway - if anyone is interested in some or all of these things, let me know this week so I can get a headcount, and then we can talk more off-list about logistics and figure out how to deal with things for people coming from far off (carpools, equipment, etc).

Monty

PS. If any of you aren't already using Subsurface for dive logging, I'll be happy to help you get set up.

[0] https://divedowntown.com/
[1] https://homesteadresort.com/utah-resort-things-to-do/homestead-crater/
4 5
11 Apr '19
I wanted to send this separately from the latest gate status update [1] since it's primarily about latent cinder bugs causing failures in the gate which no one is really investigating.

Running down our tracked gate bugs [2], there are several related to cinder-backup testing:

* http://status.openstack.org/elastic-recheck/#1483434
* http://status.openstack.org/elastic-recheck/#1745168
* http://status.openstack.org/elastic-recheck/#1739482
* http://status.openstack.org/elastic-recheck/#1635643

All of those bugs were reported a long time ago. I've done some investigation into them (at least at the time of reporting) and some are simply due to cinder-api using synchronous RPC calls to cinder-volume (or cinder-backup), and that doesn't scale. This bug isn't a backup issue, but it's definitely related to using an RPC call rather than a cast:

http://status.openstack.org/elastic-recheck/#1763712

Regarding the backup tests specifically, I don't see a reason why they need to run in the integrated gate jobs, e.g. tempest-full(-py3). They don't involve other services, so in my opinion we should move the backup tests to a separate job which only runs on cinder changes, so that these latent bugs stop failing jobs for unrelated changes and resetting the entire gate.

I would need someone from the cinder team who is more familiar with their job setup to identify a candidate job for these tests, if this is something everyone can agree on doing.

[1] http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000867…
[2] http://status.openstack.org/elastic-recheck/

--
Thanks,
Matt
6 12
03 Apr '19
Hi team,

There are some xstatic packages which we never started to use in Horizon or its plugins, and we didn't do any releases of them. During the last meeting [1] we agreed to mark them as retired. I'll start the retirement procedure [2] today. If you're going to use them, please let me know.

The list of the projects to be retired:
- xstatic-angular-ui-router
- xstatic-bootstrap-datepicker
- xstatic-hogan
- xstatic-jquery-migrate
- xstatic-jquery.quicksearch
- xstatic-jquery.tablesorter
- xstatic-rickshaw
- xstatic-spin
- xstatic-vis

[1] http://eavesdrop.openstack.org/meetings/horizon/2019/horizon.2019-01-09-15.…

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/
3 4
03 Apr '19
Hi stable team,

The Kolla project is ready to tag the stable/ocata branch EOL. We would like to tag the following repositories as EOL:
- openstack/kolla
- openstack/kolla-ansible

Let me know if you need anything else.

Regards
4 5
18 Mar '19
Hi All,

Shortly before the holidays, CI jobs moved from xenial to bionic. For Ironic this meant a bunch of failures [1], all of which have now been dealt with, with the exception of the UEFI job. It turns out that during this job our (virtual) baremetal nodes use tftp to download an iPXE image. In order to track these tftp connections we have been relying on the fact that nf_conntrack_helper has been enabled by default. In newer kernel versions [2] this is no longer the case, and I'm now trying to figure out the best way to deal with the new behaviour. I've put together some possible solutions, along with some details on why they are not ideal, and would appreciate some opinions.

1. Why not enable the conntrack helper with
   echo 1 > /proc/sys/net/netfilter/nf_conntrack_helper
   The router namespace is still created with nf_conntrack_helper==0, as it follows the default the nf_conntrack module was loaded with.

2. Enable it in modprobe.d
   # cat /etc/modprobe.d/conntrack.conf
   options nf_conntrack nf_conntrack_helper=1
   This works, but requires the nf_conntrack module to be unloaded if it has already been loaded. For devstack, and I guess in the majority of cases (including CI nodes), this means a reboot stage or a potentially error-prone sequence of stopping the firewall and unloading the nf_conntrack modules. This also globally turns on the helper on the host, reintroducing the security concerns it comes with.

3. Enable the conntrack helper in the router network namespace when it is created [3] (a rough sketch of the idea follows below).
   This works for Ironic CI, but there may be better solutions that can be worked within neutron that I'm not aware of. Of the 3 options above this would be the most transparent to other operators, as the original behaviour would be maintained.

Thoughts on any of the above? Or better solutions?

1 - https://storyboard.openstack.org/#!/story/2004604
2 - https://kernel.googlesource.com/pub/scm/linux/kernel/git/horms/ipvs-next/+/…
3 - https://review.openstack.org/#/c/628493/1
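For illustration only, here is a minimal sketch of what option 3 amounts to when done by hand on a single node. The namespace name is a made-up example, and the actual change proposed in [3] does this from within neutron rather than from an external script:

```python
import subprocess

# Hypothetical router namespace name; on a real node, look it up with
# "ip netns list" (neutron names them qrouter-<router UUID>).
ROUTER_NS = "qrouter-aaaabbbb-cccc-dddd-eeee-ffff00001111"

# Option 3: turn the conntrack helper on only inside the router
# namespace, leaving the host-wide default (0) untouched so the
# security concerns of a global enable do not apply.
subprocess.check_call([
    "sudo", "ip", "netns", "exec", ROUTER_NS,
    "sysctl", "-w", "net.netfilter.nf_conntrack_helper=1",
])
```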
6 10
[Interop-wg] [dev] [cinder] [qa] Strict Validation for Volume API using JSON Schema
by Ghanshyam Mann 08 Mar '19
08 Mar '19
Hello everyone,

Tempest is planning to add strict API validation using JSON schema for the Volume tests [1]. We already do the same for all the compute tests.

With *strict* JSON schema validation, every API response will be validated against a predefined schema with additionalProperties=False. additionalProperties=False will not allow any additional attribute in the API response beyond what the upstream APIs have in their current version. For example: if any vendor has modified the API response and is returning additional attributes, then the Tempest tests are going to fail.

This will help:
- To improve OpenStack interoperability. Strict validation of API responses is always helpful to maintain interoperability.
- To improve the volume API testing and catch backward-incompatible changes. Sometimes we accidentally change the API in a backward-incompatible way, and strict validation with JSON schema helps to block those.

We want to hear from the cinder and interop teams about any impact of this change on them.

[1] https://blueprints.launchpad.net/tempest/+spec/volume-response-schema-valid…

-gmann
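As a hedged illustration of what additionalProperties=False buys (not the actual Tempest schema code - the real Tempest schemas are separate and far more complete - just a trimmed-down sketch using the jsonschema library):

```python
from jsonschema import validate, ValidationError

# Hypothetical, heavily trimmed schema for a volume-show response.
volume_show_schema = {
    "type": "object",
    "properties": {
        "volume": {
            "type": "object",
            "properties": {
                "id": {"type": "string"},
                "status": {"type": "string"},
                "size": {"type": "integer"},
            },
            "required": ["id", "status", "size"],
            # The key point: reject any attribute the schema does not list.
            "additionalProperties": False,
        }
    },
    "required": ["volume"],
    "additionalProperties": False,
}

# A response carrying a vendor-specific extra attribute fails validation.
response = {"volume": {"id": "abc", "status": "available", "size": 1,
                       "vendor:extra": "not-in-schema"}}
try:
    validate(response, volume_show_schema)
except ValidationError as exc:
    print("strict validation rejected the response:", exc.message)
```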
4 6
25 Feb '19
This morning at the OpenStack Foundation board meeting we announced that we will be focusing more of our open infrastructure messaging efforts on bare metal, and the promotion of Ironic in particular. To support these efforts, I'd like to form a Bare Metal SIG to bring together community members to collaborate on this work. The purpose of the Bare Metal SIG will be to promote the development and use of Ironic and other OpenStack bare-metal software. This will include marketing efforts like case studies of Ironic clusters in industry and academia, supporting integration of Ironic with projects like Airship and the Kubernetes Cluster API, coordinating presentations for industry events, developing documentation and tutorials, gathering feedback from the community on usage and feature gaps, and other broader community-facing efforts to encourage the adoption of Ironic as a bare-metal management tool.

If you would like to participate, please indicate your interest in the linked planning etherpad. Ideally we would like to have broad engagement from across the community, from developers and practitioners alike. We'd like to highlight all of the efforts and usage across our community and communicate how powerful Ironic is for hardware management.

https://etherpad.openstack.org/p/bare-metal-sig

Thanks in advance to everyone. I've been using Ironic to manage my home cluster for quite a while now, and I'm really excited to be working on the SIG and supporting the efforts of the Ironic team and the users who are running Ironic in production.

Chris Hoge
Strategic Program Manager
OpenStack Foundation
4 5
19 Feb '19
Hi everyone,

The "Help most needed" list [1] was created by the Technical Committee to clearly describe areas of the OpenStack open source project which were in the most need of urgent help. This was done partly to facilitate communications with corporate sponsors and engineering managers, and to be able to point them to an official statement of need from "the project".

[1] https://governance.openstack.org/tc/reference/help-most-needed.html

This list encounters two issues. First, it's hard to limit entries: a lot of project teams, SIGs and other forms of working groups could use extra help. But more importantly, this list has had a very limited impact -- new contributors did not exactly magically show up in the areas we designated as in most need of help.

When we raised that topic (again) at a Board+TC meeting, a suggestion was made that we should turn the list more into a "job description" style that would make it more palatable to the corporate world. I fear that would not really solve the underlying issue (which is that, at our stage of the hype curve, no organization really has spare contributors to throw at random hard problems).

So I wonder if we should not reframe the list and make it less "this team needs help" and more "I offer peer-mentoring in this team". A list of contributor internship offers, rather than a call for corporate help in the dark. I feel like that would be more of a win-win offer, and more likely to appeal to students, or OpenStack users trying to contribute back.

Proper 1:1 mentoring takes a lot of time, and I'm not underestimating that. Only people that are ready to dedicate mentoring time should show up on this new "list"... which is why it should really list identified individuals rather than anonymous teams. It should also probably be one-off offers -- once taken, the offer should probably go off the list.

Thoughts on that? Do you think reframing help-needed as mentoring-offered could help? Do you have alternate suggestions?

--
Thierry Carrez (ttx)
13 35
19 Feb '19
Dear all,

There have been several patch sets getting sparse reviews. Since some of the authors of these patch sets find it difficult to join the IRC meeting due to time and language constraints, I would like to pass along some of their voice and get more detailed feedback from core reviewers and other devs via the ML.

I fully understand core reviewers are quite busy and believe they are doing their best. Period! However, I sometimes feel that the turnaround time for some patch sets is really long. I would like to hear opinions from others and suggestions on how to improve this. It could be something each patch set owner needs to do more of, and/or it could be something we as the openstack-helm project can improve. For instance, it could be influenced by time differences, lack of IRC presence, or anything else. I really would like to find out whether there is anything we can improve together.

I would like to get any kind of advice on the following.

- Sometimes it is really difficult to get core reviewers' comments or reviews. I routinely put the list of patch sets on the IRC meeting agenda; however, there is still a long turnaround time between comments. As a result, it usually takes a long time to process a patch set, and this sometimes causes rebases as well.
- Having said that, I would like to have any advice on what we need to do more of. For instance, do we need to ask core reviewers directly on IRC about each patch set? Do we need to add core reviewers' names when we push a patch set? etc.
- Some patch sets are being reviewed and merged quickly, and some are not. I would like to know what makes the difference so that I can tell my developers how to do a better job writing and communicating patch sets.

Here are some example patch sets currently under review.

1. https://review.openstack.org/#/c/603971/ >> this ps has been discussed for its contents and scope. Could you please add if there is anything else we need to do other than wrapping some of the commit message?
2. https://review.openstack.org/#/c/633456/ >> this is a simple fix. How can we make core reviewers notice this patch set so that they can quickly review it?
3. https://review.openstack.org/#/c/625803/ >> we have been getting feedback and questions on this patch set, which has been good, but the round-trip time for recent comments is a week or more. Because of that delay (?), the owner of this patch set needed to rebase it often. Would this kind of case improve if the author engaged more on the IRC channel or via the mailing list to get feedback, rather than relying on gerrit reviews?

Frankly speaking, I don't know if this is a real issue or just the way it is. I just want to pass along some of the voice from our developers, and really would like to hear what others think and find a better way to communicate.

Thank you.

--
*Jaesuk Ahn*, Ph.D.
Software Labs, SK Telecom
2 3
16 Feb '19
TL;DR: Maybe to help with cross-project work we should formalize temporary teams with a clear objective and disband criteria, under the model of Kubernetes "working groups".

Long version:

Work in OpenStack is organized around project teams, who each own a set of git repositories. One well-known drawback of this organization is that it makes cross-project work harder, as someone has to coordinate activities that ultimately affect multiple project teams.

We tried various ways to facilitate cross-project work in the past. It started with a top-level repository of cross-project specs, a formal effort which failed due to a disconnect between the spec approvers (TC), the people signed up to push the work, and the teams that would need to approve the independent work items.

This was replaced by more informal "champions", doing project management and other heavy lifting to get things done cross-project. This proved successful, but champions are often facing an up-hill battle and often suffer from lack of visibility / blessing / validation.

SIGs are another construct that helps in holding discussions and coordinating work around OpenStack problem spaces, beyond specific project teams. Those are great as a permanent structure, but sometimes struggle to translate into specific development work, and are a bit heavy-weight just to coordinate a given set of work items.

Community goals fill the gap between champions and SIGs by blessing a given set of cross-community goals for a given release. However, given their nature (being blessed by the TC at every cycle), they are a better fit for small, cycle-long objectives that affect most of the OpenStack project teams, and great for pushing consistency across all projects.

It feels like we are missing a way to formally describe a short-term, cross-project objective that only affects a number of teams, is not tied to a specific cycle, and to organize work around a temporary team specifically formed to reach that objective. A team that would get support from the various affected project teams, increasing the chances of success.

Kubernetes encountered the same problem, with work organized around owners and permanent SIGs. They created the concept of a "working group" [1] with a clear, limited objective and clear disband criteria. I feel like adopting something like it in OpenStack could help with work that affects multiple projects. We would not name it "working group" since that's already overloaded in OpenStack, but maybe "pop-up team" to stress its temporary nature. We've been sort-of informally using those in the past, but maybe formalizing and listing them could help with getting extra visibility and prioritization.

Thoughts? Alternate solutions?

[1] https://github.com/kubernetes/community/blob/master/committee-steering/gove…

--
Thierry Carrez (ttx)
11 23