[02:43] menn0: https://github.com/juju/juju/pull/6315
[02:46] thumper: looking
[02:52] hmm... best go pick up the dog
[03:19] menn0: the QA steps were all with the next branch, which adds the command aspects
[03:19] there is no QA for just the server bits
[03:19] because it is a new call
[03:20] ok cool, that's fine
[03:20] thumper: sorry that I forgot
[03:20] :)
[03:20] I'll add the bits shortly
[03:20] let me add docstrings, tweak names and move on to submitting the next bit
[03:25] thumper: seems like a bit of an edge case but if you create and then delete a model it still shows up in list-models, but cannot be deleted. But it also unselects it as the focused model: http://pastebin.ubuntu.com/23232282/
[03:29] thumper, menn0: Have you seen something like that before? ^^ Going to file a bug as I can't find an existing bug for it
[03:30] veebers: it hangs around until the undertaker kills it
[03:30] and cleans up
[03:30] it shouldn't take too long
[03:30] veebers: this could be related to an existing ticket
[03:30] * menn0 finds
[03:30] when we first did it, we had it keep the model around for a day so logs and things could be removed
[03:30] but folks didn't like that
[03:30] so it was shortened, but not sure what to
[03:31] menn0: PR updated
[03:31] menn0, thumper: hmm ok the models (I tried a couple of times) are still there and the status says 'available', that should be 'destroying' or something no?
[03:32] veebers: when you say delete,
[03:32] veebers: yeah, that sounds like the bug I'm looking for
[03:32] what are you doing?
[03:34] thumper: as per the command in the pastebin: juju --show-log add-model -c charm-test model89; juju --show-log destroy-model model89 -y
[03:35] yes, the model hangs around for a while
[03:35] is it still there?
[03:35] hang on
[03:35] those commands errored out
[03:35] with not found
[03:36] thumper: yeah, after the original delete attempt (no error there) any follow-up attempts error
[03:36] oh, first line does it all
[03:36] thumper: just re-checked list-models and they are still there (with status 'available')
[03:36] veebers, thumper: nope, I can't find that ticket
[03:37] yeah, that's definitely odd
[03:37] menn0: you thinking of this one? https://bugs.launchpad.net/juju/+bug/1613960
[03:37] Bug #1613960: list-models can show a model that was supposed to have been deleted
[03:37] menn0: huh right I have come across this before (as I filed that bug :-\)
[03:37] ha
[03:38] menn0: wanna +1 that PR?
[03:38] veebers: I was thinking of a different one, where an error like that appears after lots of add/destroy model commands
[03:38] thumper: yep
[03:39] thumper: done
[03:39] ta
[03:39] menn0: ah, that might be one that was alluded to when I tried this test run here (create a bunch of models, then delete a bunch of models)
[03:40] menn0: hmm, or not, I think this might be the one I'm thinking of: https://bugs.launchpad.net/juju/+bug/1625774
[03:40] Bug #1625774: memory leak after repeated model creation/destruction
[03:42] veebers: no not that one
[03:43] menn0: if we keep going through _all_ the bugs I'm sure we'll finally uncover the one we're looking for ;-)
[03:43] veebers: haha ... I'm sure there is one but I can't find it
[03:43] veebers: I saw it when helping babbageclunk with something
[03:45] * thumper waits for branch to land before proposing next
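A minimal reproduction sketch of the lingering-model behaviour discussed above, built from the commands quoted at 03:34. The controller name charm-test and model name model89 come from the log; the polling loop and its timings are illustrative assumptions, not part of the original report.

```sh
# Reproduce the "destroyed model still listed" observation.
juju --show-log add-model -c charm-test model89
juju --show-log destroy-model model89 -y

# The model is expected to linger until the undertaker cleans it up;
# poll list-models and watch its status move past "available".
for i in $(seq 1 30); do
    juju list-models -c charm-test | grep model89 || break
    sleep 10
done
```

If the model still reports "available" well after the undertaker should have run, that is more likely the behaviour tracked in bug #1613960 than ordinary cleanup lag.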
=== petevg_ is now known as petevg
=== dooferlad_ is now known as dooferlad
=== mpontillo_ is now known as mpontillo
=== Tribaal_ is now known as Tribaal
=== frankban_ is now known as frankban
[07:53] jam: morning
[07:54] jam: do you have the HO link?
[07:54] morning
[07:54] yes
[07:54] ok
[07:54] I'll be there in about 5 min
[07:54] +1
[08:12] macgreagoir: hey, I think on Friday I may have just been impatient - I did eventually see one container deploy working
[08:12] macgreagoir: I think I just may not have been allowing enough time for image download
[08:12] macgreagoir: so I'm retrying your branch
[08:13] voidspace: Enjoy!
[08:13] :-)
[08:13] macgreagoir: are you at the London sprint now?
[08:13] I was wondering if you needed to dpkg-reconfig maas pkgs to get dhcp on your new subnet too.
[08:13] voidspace: I am.
[08:13] macgreagoir: have fun :-)
[08:14] Cheers!
=== gnuoy` is now known as gnuoy
[08:26] http://www.ryman.co.uk/search/go?w=adapter
[08:26] wrong link
[08:26] how about https://docs.google.com/spreadsheets/d/1AGF6ED7kOtigvWTOBS8lkC0t2st63IRhbdpPWeofauU/edit#gid=1152189692
[08:50] redir: https://bugs.launchpad.net/juju/+bug/1611766
[08:50] Bug #1611766: upgradeSuite.TearDownTest sockets in a dirty state
[09:34] Bug #1626576 changed: credential v. credentials is confusing
[09:34] Bug #1626878 changed: ERROR juju.worker.dependency engine.go
[09:34] Bug #1627554 changed: juju binary broken on sierra
[09:44] jam: dimitern: macgreagoir: replacing JujuConnSuite in state with ConnSuite: https://github.com/juju/juju/pull/6317
[09:44] anastasiamac: looking
[09:44] ja \o/
[09:44] ta even :D
[09:46] anastasiamac: +1
[09:46] jam: amazing \o/
[10:34] macgreagoir: so I'm afraid I still see it - with a machine with a single nic on the pxe subnet a lxd container starts fine
[10:34] macgreagoir: with two nics, the "first" on a separate subnet, the container starts but gets no address
[10:34] macgreagoir: your branch
[10:35] macgreagoir: I'm just trying to confirm it's not an oddity of the way I've set up the two nics
[10:49] voidspace: You're seeing the addressing issue on my branch too?
[10:49] macgreagoir: yup
[10:49] macgreagoir: can't connect to the lxd at all (nor exec commands in it) to see the rendered /e/n/i
[10:49] macgreagoir: unless you know a trick to get it
[10:50] voidspace: Can you see inside /var/lib/containers//rootfs ?
[10:51] /var/lib/lxd/containers... that is
[10:53] macgreagoir: will try shortly - just adding a lxd container with your branch with the second NIC unconfigured
[10:53] macgreagoir: to check that works
[10:54] macgreagoir: for the second NIC (ethA not on pxe subnet) I have the gateway address *on* that subnet - which probably means that subnet is not routable to the other one (or the wider internet)
[10:54] macgreagoir: I wonder if that might be the issue and if the gateway address for 172.16.1.0/24 should be 172.16.0.1 (on the pxe subnet)
[10:55] i've just resurrected https://github.com/juju/testing/pull/108 after leaving it languishing for a month or so. could someone review it please? (i got a positive review from fwereade, but it needed tests which i've just done).
[10:55] it has a companion branch at https://github.com/juju/utils/pull/242 (much smaller)
[10:59] rogpeppe: we'll look shortly :D thank you for the tests!
[11:10] anastasiamac: ta!
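A sketch of the inspection trick macgreagoir suggests at 10:50: reading the container's rendered /etc/network/interfaces through its rootfs on the host when you cannot get a shell inside it. The container name juju-machine-0-lxd-0 is a hypothetical placeholder.

```sh
# Read the rendered network config straight from the host filesystem
# (useful when the container came up with no address and is unreachable).
sudo cat /var/lib/lxd/containers/juju-machine-0-lxd-0/rootfs/etc/network/interfaces

# If the container is reachable at all, the same file can be read with:
lxc exec juju-machine-0-lxd-0 -- cat /etc/network/interfaces
```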
[11:11] macgreagoir: hmmm... with an unconfigured NIC as the "first" NIC it *looks* like I'm still seeing no address for the container
[11:11] macgreagoir: restoring the order and trying *again*
[11:26] morning
[11:26] dooferlad: welcome back
[11:26] hi!
[11:27] macgreagoir: yup, if I reorder the NICs then it starts fine.
[11:27] dooferlad: how's the little one?
[11:27] macgreagoir: will try again with the order reversed and see if I can get to /e/n/i
[11:27] rick_h_: doing well. Old enough to smile now, which is lovely.
[11:27] dooferlad: hey, hi!
[11:27] dooferlad: you back, or just a visit?
[11:27] voidspace: hello
[11:27] I am back
[11:27] dooferlad: so good when they can smile :-)
[11:28] dooferlad: congratulations and welcome back!
[11:28] voidspace: thanks!
[11:28] voidspace: I just wish big sister would go back to sleeping well!
[11:28] dooferlad: oh no!
[11:29] dooferlad: I feel your pain, Benjamin is in (another) phase of not going to sleep until about 1am
[11:29] very tiring, literally and figuratively
[11:29] the joy of children :-)
[11:30] voidspace: yea, Naomi is often up before 6. I was just about feeling human before that started!
[11:32] dooferlad: ah man, not much fun
=== rogpeppe is now known as rogpeppe1
=== rogpeppe1 is now known as rogpeppe
[11:49] ashipika: for some reason i seem to have been disconnected from canonical IRC
[11:49] rogpeppe: yea, lots of stuff down
[11:49] LP, etc
[11:50] rick_h_: marvellous :)
[11:50] rogpeppe, rick_h_: interesting :)
[11:51] "space| dooferlad: ah man, not much fun
[11:51] 11:49 rogpeppe| is now known as rogpeppe1
[11:51] 11:49 rogpeppe1| is now known as rogpeppe
[11:51] 11:49 rogpeppe| ashipika: for some reason i seem to have been disconnected from canonical IRC
[11:51] bah
[11:52] rogpeppe: ashipika looks like a firewall issue atm, being worked on
[11:52] rick_h_: thank you for the info!
[11:56] ashipika: re: waiting for AfterFunc - we've got the Alarms method to tell when things have started waiting. That is unfortunately necessary, but anything more seems like it would be more than the test code should be relying on. For example, the code could change to start a goroutine itself rather than calling AfterFunc and the test code wouldn't be able to tell when that finished.
[11:58] rogpeppe: and i suppose you'd have to change the signature of the parameter function to AfterFunc
[11:59] ashipika: no, i don't think so
[11:59] rogpeppe: ok.. it was just a thought.. feel free to land the PR
[12:16] ashipika: any chance you could approve this too please? https://github.com/juju/utils/pull/242
[12:18] rogpeppe: done
[12:28] ashipika: ta
=== freyes__ is now known as freyes
[12:37] ashipika: ha, marvellous, there's a cyclic dependency between juju/utils and juju/testing
[12:37] rogpeppe: my condolences :)
[13:19] natefinch: ping, how goes the rackspace work?
[13:25] rick_h_: mostly figured out what was going on Friday. still have one question for curtis when he gets on
[13:26] natefinch: k, I've got a call with rax in a bit under an hour and wanted to know where we stand with things
[13:27] rick_h_: hoping to get a fix up today
[13:27] voidspace: did the binary from my branch work? I fixed my bug finally, am writing tests now
[13:27] rick_h_: but I would tell rackspace a couple days to be safe :)
[13:27] natefinch: k, all good. it's on a different topic but it might come up
[13:27] natefinch: so need to make sure we're still rc2 targeted
[13:31] rick_h_: yep
[13:48] voidspace: ping, did we get anywhere with the MAAS issues friday?
[13:49] redir: http://bazaar.launchpad.net/~juju-qa/juju-ci-tools/repository/
[14:03] mgz: ping
[14:03] voidspace: standup time
[14:03] katco: omw
[14:05] rick_h_: I suggested something that might be the cause of the problem
[14:05] rick_h_: hang on, in standup - I'll come back to you after that
[14:06] voidspace: rgr ty
[14:08] frobware: yo
[14:13] frobware: can I help?
[14:14] mgz: please - I was trying to run assess_recovery.py but I don't think I have enough runes; bombs with permission denied
[14:14] frobware: run with --verbose --debug and pastebin?
[14:15] mgz: heh pastebin seems to be down
[14:16] frobware: eheh, try a different pastebin
[14:19] hm, come back lp, I need to finish getting my stuff reviewed
[14:20] alexisb: wanna catchup?
[14:20] babbageclunk, sure
[14:20] dooferlad: I could do with bugging you at some point today about some cross maas version network things
[14:21] mgz: sure. When works for you?
[14:22] in half an hour?
[14:22] mgz: sounds good
=== hatch_ is now known as hatch
=== hatch is now known as Guest72453
[14:28] Hi. I have an OpenStack-on-lxd setup. juju version is 2.0-beta15. I am trying to install the multipath-tools package on the nova-deployed LXD container and the cinder-deployed lxd container using our "cinder-storage driver" charm. But that package is not installing on the LXD containers; it gives these errors: http://paste.openstack.org/show/582953/
[14:29] I ran the apt-get install --yes multipath-tools command on the LXD container console. There it also gave the same issue as I pasted above.
[14:30] The #lxd and #lxccontainers channels are not active.
[14:30] If anyone has any idea on this, please let me know.
[14:32] voidspace: this should fix your issue with some luck https://github.com/juju/juju/pull/6321
=== BradCrittenden is now known as bac
[14:38] rock: you'll probably have better response on #juju but it sounds like a packaging problem since it's a dpkg error
=== hatch_ is now known as hatch
[14:40] I need a non-trivial review here https://github.com/juju/juju/pull/6321
[14:41] natefinch: OK. Thank you.
[14:42] rick_h_: irc or hangout
[14:43] rick_h_: but the summary, custom binaries from here *may* solve the issue: https://github.com/juju/juju/pull/6321
[14:46] rick_h_: I've sent an email
[14:48] voidspace: ty
[14:52] perrito666: thanks!
[14:52] perrito666: on your branch, is the status polling in a goroutine the same pattern used by the other providers?
[14:52] perrito666: and have you manually tested with maas 1.9 and 2...
[14:54] perrito666: the code changes themselves look pretty straightforward, I like the maas2Controller interface
[14:56] LP still broken - we can't merge anything due to check-blockers.py getting 503
[14:56] voidspace: answering in order :)
[14:56] 1) the status polling goroutine is not a pattern, we are not doing it for other providers (and we should)
[14:57] voidspace: we were only updating the "instance status", which is wrong
[14:57] I have manually tested with maas 1.9
[14:57] sorry, maas 2
=== elmo_ is now known as elmo
[15:01] perrito666: please keep in mind we have 2 separate code paths for maas 1.9 and 2.0
[15:02] perrito666: both should be tested if the change applies to both versions
[15:02] dimitern`: it looks good to me
[15:03] dimitern`: I have (not very nicely separated btw :p ) but yes I kept it in mind while coding the fix, I guess I can start a 1.9 maas to try this
[15:03] dimitern`: I'll see if I can check with maas 1.9, need to fail a deployment...
[15:03] perrito666: ah, well - happy for you to do it
[15:03] perrito666: and oi! the separation is *great*
[15:03] voidspace: if you HAVE a 1.9 I would be very thankful if you did it for me
[15:03] voidspace, perrito666: thanks guys! :)
[15:03] voidspace: I have to install the whole thing
[15:04] if not I'll do it
[15:04] perrito666: I have one setup
[15:04] perrito666: how did you test - what did you do to get the deployment to fail? Mark it as broken after deployment starts?
[15:04] voidspace: I shall pay in beer :p
[15:04] voidspace: I wrote the QA steps :p, basically bootstrap and once it is up, break the power profile for the nodes and deploy something
[15:05] perrito666: I'll let you know how it goes.
[15:06] tx a lot
[15:07] macgreagoir: http://paste.ubuntu.com/23233677/
[15:16] mgz: is the bot stuck? http://juju-ci.vapour.ws:8080/job/github-merge-juju/9327/console
[15:17] likely, lp is down
[15:17] ok, my bad
[15:18] mgz: is there a timeout for the blocker check?
[15:18] my PR got picked up by the bot, which, due to LP being down, is now stuck at check-blockers.py
[15:18] not independently, but we can't land when lp isn't up anyway
[15:19] and that's because check-blockers is not called with a timeout
[15:31] gah, forgot launchpad is down
=== daniel3 is now known as Odd_Bloke
[15:49] frobware: did you manage to reproduce the telefonica issue?
=== kadams54 is now known as kadams54-lunch
[15:52] http://reports.vapour.ws/releases/issue/5762fb3b749a5667e3627666
[15:52] frobware: babbageclunk ^
[15:59] sinzui: I tried doing the easy fix for rackspace, just hacking the endpoint url, but I'm getting a 401 response from rackspace:
[15:59] 11:54:41 DEBUG juju.provider.openstack provider.go:625 authentication failed: authentication failed
[15:59] caused by: requesting token: Unauthorised URL https://dfw.images.api.rackspacecloud.com/v2/auth/tokens
[15:59] caused by: request (https://dfw.images.api.rackspacecloud.com/v2/auth/tokens) returned unexpected status: 401
[15:59] sinzui: do I need to access that identity.api.rackspacecloud.com url first, to authenticate? and if so, where do I get the api key?
[16:06] natefinch: I think so. I am sprinting this week. josvaz in @cloudware has most of the details. I found the API key in the rackspace web ui. There isn't anything in the juju config to show that. I do have a rackspacrc file. It exports the standard OpenStack vars. I see "_RACKSPACE_API_KEY" defined, but unused. I didn't notice it until now :(
[16:07] hmm ok
[16:07] sinzui: thanks for the info
[16:07] sinzui: I'll talk to josvaz
[16:12] ouch, I actually need lp to pick a new bug
[16:12] it's back
[16:14] just in time
[16:17] perrito666, do you need bug suggestions?
[16:17] alexisb: sure
[16:18] admit it, you have a script checking on me talking about bugs
=== jillr_ is now known as jillr
[16:18] :)
[16:19] that is one of the duties of my job
=== kadams54-lunch is now known as kadams54
[16:37] perrito666: hmmm... so after a long update / new image import / bootstrap cycle
[16:38] perrito666: I'm now seeing on maas 1.9: after a deploy, then manually marking the machine as broken in maas (my nodes all have manual power types so that seemed easier)
[16:38] perrito666: the machine stays as pending
[16:38] perrito666: status doesn't change
[16:39] perrito666: I'll try again :-/
[16:39] voidspace: interesting, tx, if you have that issue again I'll investigate with my setup here
=== frankban is now known as frankban|afk
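A sketch of the QA steps perrito666 outlines at 15:04 for https://github.com/juju/juju/pull/6321, matching the checks voidspace describes just above. The cloud name my-maas and the ubuntu charm are placeholders; breaking the node's power configuration happens on the MAAS side and is only noted in a comment.

```sh
# Bootstrap against a MAAS cloud ("my-maas" is a placeholder; some 2.0
# pre-release versions also expect a controller name as a second argument).
juju bootstrap my-maas

# Deploy something so MAAS acquires and starts a node.
juju deploy ubuntu

# Now break the power profile for that node in MAAS (or mark the node
# broken) so the deployment fails on the MAAS side.

# With the fix, the machine should eventually show the failure (e.g.
# "down") instead of sitting at "pending" indefinitely.
watch juju status
```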
[16:50] sinzui: is there anyone else I can talk to? Josvaz is past EOD, AFAICT.
[16:51] natefinch: rcj?
[16:52] sinzui: thanks
[16:57] babbageclunk: https://bugs.launchpad.net/juju/+bug/1606310
[16:57] Bug #1606310: storeManagerSuite.TestMultiwatcherStop not stopped
[17:00] perrito666: so with a broken power type I do see the status change to down
[17:00] perrito666: however, if I manually break the machine I don't see a status change, I don't think
[17:00] perrito666: I'm going to try that with maas 2 - but probably tomorrow now as I'm nearly EOD
[17:00] perrito666: I left a question and a comment - the question is likely to be just me being dumb
[17:01] voidspace: tx a lot for the tests
[17:01] np
[17:35] hi. i got openstack up and running with openstack-base-xenial-mitaka
[17:36] i can log into the dashboard, but when i go to containers, i get an error: Unable to get the Swift container listing.
[17:37] how do i configure this? how do i log into the servers juju configured?
[17:37] ssh into them i mean
[18:12] question - how do i add a new endpoint for running the juju go test with the openstack provider? i’m missing the piece
[18:25] katco: natefinch have any hint for hml ? ^
[18:25] hml: what is the juju go test? our suite of tests written in go?
[18:26] katco: looking at the contribute.md - you run go test github.com/juju/juju to test changes?
[18:27] yeah
[18:27] those won't hit a real openstack
[18:27] hml: ah ok. you should be able to just run that command; i don't know what you mean by add an endpoint. can you explain?
[18:28] katco: the code change starts to use the neutron api. however the test environment doesn’t know about an endpoint to find neutron -
[18:29] you can't make a test that hits a real openstack.... they have to be (more or less) self contained.
[18:29] katco: for the related goose pkg changes - in the tests i had to add code to spoof neutron
[18:29] hml: ah, as natefinch says we don't do that in juju. the test should be a unit test and only utilize things in memory
[18:30] the way to connect to openstack using juju normally is to use add-cloud which will prompt for the endpoint
[18:30] natefinch: hrm… so how do the juju openstack provider tests for nova run then? they look for a novaClient.
[18:31] hml: there's a lot of spoofing in the tests, precisely to keep it from hitting real infrastructure. I'm afraid I don't know the details of the openstack tests
[18:32] hml: i don't know the specifics of the openstack provider tests, but you would mock a novaClient and pass that in. any new tests should not hit anything outside of memory
[18:32] natefinch: okay - i believe that if i could find where the nova spoofing is done, i could figure it out for neutron, but i haven’t been able to find it yet
[18:34] katco: i’m looking for where the novaclient is mocked, so i can do the same for a neutronclient. but so far i’m missing how it’s done.
[18:37] hml: probably something with this: gopkg.in/goose.v1/testservices/novaservice
[18:38] hml: natefinch: i think it's this: https://github.com/juju/juju/blob/master/provider/openstack/local_test.go#L1917
[18:38] hml: natefinch: called from here: https://github.com/juju/juju/blob/master/provider/openstack/local_test.go#L170
[18:39] natefinch: katco: cool - i’ll take a look. thanks
[18:39] hml: hth, gl
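On the go test question above: the openstack provider tests never need a real endpoint; they run against in-memory doubles such as goose's testservices. A sketch of running just that package, using gocheck's filter flag; the TestStartInstance pattern is only an illustrative placeholder.

```sh
# Run only the openstack provider package (no real cloud needed; the
# suite starts in-memory test doubles for the OpenStack services).
go test github.com/juju/juju/provider/openstack

# Narrow the run to matching suites/tests with gocheck's filter flag.
go test github.com/juju/juju/provider/openstack -check.f 'TestStartInstance'
```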
[19:33] morning
[19:45] morning thumper
[19:45] voidspace, you still around?
[19:46] alexisb: kind of
[19:46] alexisb: :-)
[19:46] how can this not be working now if it was working on friday
[19:46] voidspace, trying to leave dimiter and andy alone
[19:46] voidspace, do you know if this bug is still an issue for 2.0:
[19:46] https://bugs.launchpad.net/juju/+bug/1560331
[19:46] Bug #1560331: juju-br0 fails to be up when no gateway is set on interface
[19:47] o is dooferlad back??
[19:47] alexisb: he is!
[19:47] welcome back dooferlad!
[19:47] dooferlad, you could probably answer the q above as well, if you are still around
[19:47] alexisb: it's nearly 9pm UK time so unlikely he's around, nor the sprint people
[19:47] alexisb: I don't know specifically about that bug, but that area of the code has changed dramatically in recent months
[19:48] alexisb: we have new bugs related to "bridging all the things" for example
[19:48] alexisb: I can talk to dooferlad tomorrow morning and email you
[19:48] alexisb: that should be corrected in rc1
[19:48] lol yes that is a fun topic for today
[19:48] rick_h_, ack will mark it so
[19:48] cool, sorry I couldn't be more helpful
[19:49] alexisb: ah, there's a comment on the bug saying "if we bridge all the things it will go away"
[19:49] alexisb: and that is done
[19:50] it's fixed so that is good :) one less bug
[19:50] alexisb: morning
[19:52] thumper: o/
[19:52] hey voidspace
[19:52] sprinting?
[19:52] thumper: nope, going downstairs to watch a movie with the wife and hope that the lad falls asleep before 1am tonight :-)
[19:53] thumper: have a good day
[19:53] see you on the other side maybe
[19:53] ha
[19:53] night
[19:58] rick_h_: ping
[19:58] katco: pong
[19:59] rick_h_: https://github.com/juju/juju/pull/6323 https://github.com/juju/juju/pull/6324
[19:59] rick_h_: double PR into the develop branch failed. what do i do? $$merge$$ the other pr?
[19:59] rick_h_: i wasn't there for the discussion on this
[19:59] katco: only deal with merging from the one that targets master
[19:59] katco: but we should look at what failed in that check
[20:00] rick_h_: that merge into develop brought in like 30 commits from before mine
[20:01] katco: yea, that's all good; because it's not constantly kept up with develop, that'll happen
[20:01] rick_h_: just not sure what failure is related to. looks possibly related to my pr, but i will trust the results of $$merge$$ into master
[20:02] katco: k
[20:03] katco: http://juju-ci.vapour.ws/job/github-check-merge-juju/42/artifact/artifacts/trusty-out.log though, with a failure in "testAddLocalCharm", seems like it might be a real thing
[20:03] rick_h_: yes it's possible
[20:03] katco: on the windows one there's an intermittent test failure that's hit before there. Might check it matches up, but not sure.
[20:03] katco: but might be worth double checking that test while the merge runs, to get ahead of the game if there is something
=== natefinch is now known as natefinch-afk
[21:50] alexisb: having fun?
[21:52] always having fun
[21:54] you have changed the standup 5 times :p
[21:59] thumper: phew, https://github.com/juju/juju/pull/6325
[21:59] perrito666, yeah I was learning something new
[22:01] alexisb: is ian on vacation or something?
[22:01] yeah
[22:01] katco yes
[22:01] and he did not update the calendar
[22:01] alexisb: ah ok :) ty
[22:01] which I will be pestering him about when he returns next week
[22:01] ;)
[22:02] lol no biggie
[22:12] menn0: review done
[23:11] thumper: a little later on today I would like to bother you again about this bug: https://bugs.launchpad.net/juju/+bug/1626784 - I have some more details regarding it
[23:11] Bug #1626784: upgrade-juju --version increments supplied patch version
[23:12] ok
[23:18] axw, ping
[23:46] thumper: you said you had me pinged? (sounds a bit like "had me made")
[23:46] :)
[23:46] menn0: ping me when you need me
[23:47] I think I said pinned
[23:47] I was really wanting the time queue as part of the provisioner
[23:47] but as nice as it would be
[23:47] it isn't as high a priority as many of the current fires
[23:48] perrito666: could you please try deploying something into a container with Juju 2.0 using MAAS 2.0?
[23:48] thumper: oh right, gotcha
[23:48] menn0: sure, bootstrapping, gimme a moment
[23:48] perrito666: thank you
[23:49] menn0: any particular formula or just placement?
[23:49] perrito666: we don't have much detail at the moment
[23:49] perrito666: seems the user was trying to deploy openstack using maas and all the container watchers were panicking
[23:50] perrito666: so let's just establish whether or not container deployments work at all for you
[23:51] menn0: k, I'll go get a beer for the US debate while this bootstraps
[23:51] perrito666: sounds good :)
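A minimal sketch of the container-deployment check menn0 asks for at 23:48-23:50, assuming a controller already bootstrapped against MAAS 2.0. The ubuntu charm and the machine number are placeholders.

```sh
# Allocate a MAAS node, then place a unit in an LXD container on it.
juju add-machine                 # becomes machine 0, a MAAS node
juju deploy ubuntu --to lxd:0    # new LXD container on machine 0

# Watch for the container to get an address and the unit to start; the
# report being chased described the container watchers panicking instead.
juju status --format yaml
```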