[02:43] <thumper> menn0: https://github.com/juju/juju/pull/6315
[02:46] <menn0> thumper: looking
[02:52] <thumper> hmm... best go pick up the dog
[03:19] <thumper> menn0: the QA steps were all with the next branch, which adds the command aspects
[03:19] <thumper> there is no QA for just the server bits
[03:19] <thumper> because it is a new call
[03:20] <menn0> ok cool, that's fine
[03:20] <menn0> thumper: sorry that I forgot
[03:20] <thumper> :)
[03:20] <thumper> I'll add the bits shortly
[03:20] <thumper> let me add docstrings, tweak names and move on to submitting the next bit
[03:25] <veebers> thumper: seems like a bit of an edge case but if you create and then delete a model it still shows up in list-models, but cannot be deleted. But it also unselects it as the focused model: http://pastebin.ubuntu.com/23232282/
[03:29] <veebers> thumper, menn0: You seen something like that before? ^^ Going to file a bug as I can't find an existing bug for it
[03:30] <thumper> veebers: it hangs around until the undertaker kills it
[03:30] <thumper> and cleans up
[03:30] <thumper> it shouldn't take too long
[03:30] <menn0> veebers: this could be related to an existing ticket
[03:30]  * menn0 finds
[03:30] <thumper> when we first did it, we had it keep the model around for a day so logs and things could be removed
[03:30] <thumper> but folks didn't like that
[03:30] <thumper> so it was shortened, but not sure what to
[03:31] <thumper> menn0: PR updated
[03:31] <veebers> menn0, thumper: hmm ok the models (I tried a couple of times) are still there and the status says 'available', that should be 'destroying' or something no?
[03:32] <thumper> veebers: when you say delete,
[03:32] <menn0> veebers: yeah, that sounds like the bug I'm looking for
[03:32] <thumper> what are you doing?
[03:34] <veebers> thumper: as per the command in the pastebin: juju --show-log add-model -c charm-test model89; juju --show-log destroy-model model89 -y
[03:35] <thumper> yes, the model hangs around for a while
[03:35] <thumper> is it still there?
[03:35] <thumper> hang on
[03:35] <thumper> those commands errored out
[03:35] <thumper> with not found
[03:36] <veebers> thumper: yeah, after the original delete attempt (no error there) any follow up attempts error
[03:36] <thumper> oh, first line does it all
[03:36] <veebers> thumper: just re-checked list-models and they are still there (with status available)
[03:36] <menn0> veebers, thumper: nope, I can't find that ticket
[03:37] <thumper> yeah, that's definitely odd
[03:37] <veebers> menn0: you thinking of this one? https://bugs.launchpad.net/juju/+bug/1613960
[03:37] <mup> Bug #1613960: list-models can show a model that was supposed to have been deleted <juju:Triaged> <https://launchpad.net/bugs/1613960>
[03:37] <veebers> menn0: huh right I have come across this before (as I filed that bug :-\)
[03:37] <thumper> ha
[03:38] <thumper> menn0: wanna +1 that PR?
[03:38] <menn0> veebers: I was thinking of a different one, where an error like that appears after lots of add/destroy model commands
[03:38] <menn0> thumper: yep
[03:39] <menn0> thumper: done
[03:39] <thumper> ta
[03:39] <veebers> menn0: ah, that might be one that was alluded to when I tried this test run here (create a bunch of models, then delete a bunch of models)
[03:40] <veebers> menn0: hmm, or not I think this might be the one I'm thinking of: https://bugs.launchpad.net/juju/+bug/1625774
[03:40] <mup> Bug #1625774: memory leak after repeated model creation/destruction <eda> <oil> <oil-2.0> <juju:Triaged by alexis-bruemmer> <https://launchpad.net/bugs/1625774>
[03:42] <menn0> veebers: no not that one
[03:43] <veebers> menn0: if we keep going through _all_ the bugs I'm sure we'll finally uncover the one we're looking for ;-)
[03:43] <menn0> veebers: haha ... I'm sure there is one but I can't find it
[03:43] <menn0> veebers: I saw it when helping babbageclunk with something
[03:45]  * thumper waits for branch to land before proposing next
[07:53] <dimitern> jam: morning
[07:54] <dimitern> jam: do you have the HO link?
[07:54] <jam> morning
[07:54] <jam> yes
[07:54] <dimitern> ok
[07:54] <jam> I'll be there in about 5 min
[07:54] <dimitern> +1
[08:12] <voidspace> macgreagoir: hey, I think on Friday I may have just been impatient - I did eventually see one container deploy working
[08:12] <voidspace> macgreagoir: I think I just may not have been allowing enough time for image download
[08:12] <voidspace> macgreagoir: so I'm retrying your branch
[08:13] <macgreagoir> voidspace: Enjoy!
[08:13] <voidspace> :-)
[08:13] <voidspace> macgreagoir: are you at the London sprint now?
[08:13] <macgreagoir> I was wondering if you needed to dpkg-reconfigure maas pkgs to get dhcp on your new subnet too.
[08:13] <macgreagoir> voidspace: I am.
[08:13] <voidspace> macgreagoir: have fun :-)
[08:14] <macgreagoir> Cheers!
[08:26] <redir> http://www.ryman.co.uk/search/go?w=adapter
[08:26] <redir> wrong link
[08:26] <redir> how about https://docs.google.com/spreadsheets/d/1AGF6ED7kOtigvWTOBS8lkC0t2st63IRhbdpPWeofauU/edit#gid=1152189692
[08:50] <frobware> redir: https://bugs.launchpad.net/juju/+bug/1611766
[08:50] <mup> Bug #1611766: upgradeSuite.TearDownTest sockets in a dirty state <ci> <intermittent-failure> <regression> <unit-tests> <juju:Triaged> <https://launchpad.net/bugs/1611766>
[09:34] <mup> Bug #1626576 changed: credential v. credentials is confusing <usability> <juju:Triaged> <https://launchpad.net/bugs/1626576>
[09:34] <mup> Bug #1626878 changed: ERROR juju.worker.dependency engine.go <juju:Triaged> <https://launchpad.net/bugs/1626878>
[09:34] <mup> Bug #1627554 changed: juju binary broken on sierra <juju:Triaged> <https://launchpad.net/bugs/1627554>
[09:44] <anastasiamac> jam: dimitern: macgreagoir: replacing JujuConnSuite in state with ConnSuite: https://github.com/juju/juju/pull/6317
[09:44] <jam> anastasiamac: looking
[09:44] <anastasiamac> ja \o/
[09:44] <anastasiamac> ta even :D
[09:46] <jam> anastasiamac: +1
[09:46] <anastasiamac> jam: amazing \o/
[10:34] <voidspace> macgreagoir: so I'm afraid I still see - with a machine with a single nic on the pxe subnet a lxd container starts fine
[10:34] <voidspace> macgreagoir: with two nics, the "first" on a separate subnet, the container starts but gets no address
[10:34] <voidspace> macgreagoir: your branch
[10:35] <voidspace> macgreagoir: I'm just trying to confirm it's not an oddity of the way I've set up the two nics
[10:49] <macgreagoir> voidspace: You're seeing the addressing issue on my branch too?
[10:49] <voidspace> macgreagoir: yup
[10:49] <voidspace> macgreagoir: can't connect to the lxd at all (nor exec commands in it) to see the rendered /e/n/i
[10:49] <voidspace> macgreagoir: unless you know a trick to get it
[10:50] <macgreagoir> voidspace: Can you see inside /var/lib/containers/<container>/rootfs ?
[10:51] <macgreagoir> /var/lib/lxd/containers... that is
[10:53] <voidspace> macgreagoir: will try shortly - just adding a lxd container with your branch with the second NIC unconfigured
[10:53] <voidspace> macgreagoir: to check that works
[10:54] <voidspace> macgreagoir: for the second NIC (ethA not on pxe subnet) I have gateway address *on* that subnet - which probably means that subnet is not routable to the other one (or the wider internet)
[10:54] <voidspace> macgreagoir: I wonder if that might be the issue and if the gateway address for 172.16.1.0/24 should be 172.16.0.1 (on the pxe subnet)
[10:55] <rogpeppe> i've just resurrected https://github.com/juju/testing/pull/108 after leaving it languishing for a month or so. could someone review it please? (i got a positive review from fwereade, but it needed tests which i've just done).
[10:55] <rogpeppe> it has a companion branch at https://github.com/juju/utils/pull/242 (much smaller)
[10:59] <anastasiamac> rogpeppe: we'll look shortly :D thank you for the tests!
[11:10] <rogpeppe> anastasiamac: ta!
[11:11] <voidspace> macgreagoir: hmmm... with an unconfigured NIC as the "first" NIC it *looks* like I'm still seeing no address for the container
[11:11] <voidspace> macgreagoir: restoring the order and trying *again*
[11:26] <rick_h_> morning
[11:26] <rick_h_> dooferlad: welcome back
[11:26] <dooferlad> hi!
[11:27] <voidspace> macgreagoir: yup, if I reorder the NICs then it starts fine.
[11:27] <rick_h_> dooferlad: how's the little one?
[11:27] <voidspace> macgreagoir: will try again with the order reversed and see if I can get to /e/n/i
[11:27] <dooferlad> rick_h_: doing well. Old enough to smile now, which is lovely.
[11:27] <voidspace> dooferlad: hey, hi!
[11:27] <voidspace> dooferlad: you back, or just a visit?
[11:27] <dooferlad> voidspace: hello
[11:27] <dooferlad> I am back
[11:27] <voidspace> dooferlad: so good when they can smile :-)
[11:28] <voidspace> dooferlad: congratulations and welcome back!
[11:28] <dooferlad> voidspace: thanks!
[11:28] <dooferlad> voidspace: I just wish big sister would go back to sleeping well!
[11:28] <voidspace> dooferlad: oh no!
[11:29] <voidspace> dooferlad: I feel your pain, Benjamin is in (another) phase of not going to sleep until about 1am
[11:29] <voidspace> very tiring, literally and figuratively
[11:29] <voidspace> the joy of children :-)
[11:30] <dooferlad> voidspace: yea, Naomi is often up before 6. I was just about feeling human before that started!
[11:32] <voidspace> dooferlad: ah man, not much fun
[11:49] <rogpeppe> ashipika: for some reason i seem to have been disconnected from canonical IRC
[11:49] <rick_h_> rogpeppe: yea, lots of stuff down
[11:49] <rick_h_> LP, etc
[11:50] <rogpeppe> rick_h_: marvellous :)
[11:50] <ashipika> rogpeppe, rick_h_: interesting :)
[11:51] <rick_h_> "space| dooferlad: ah man, not much fun
[11:51] <rick_h_> 11:49   rogpeppe| is now known as rogpeppe1
[11:51] <rick_h_> 11:49  rogpeppe1| is now known as rogpeppe
[11:51] <rick_h_> 11:49   rogpeppe| ashipika: for some reason i seem to have been disconnected from canonical IRC
[11:51] <rick_h_> bah
[11:52] <rick_h_> rogpeppe: ashipika looks like firewall issue atm, being worked on
[11:52] <ashipika> rick_h_: thank you for the info!
[11:56] <rogpeppe> ashipika: re: waiting for AfterFunc - we've got the Alarms method to tell when things have started waiting. That is unfortunately necessary, but anything more seems like it would be more than the test code should be relying on. For example, the code could change to start a goroutine itself rather than calling AfterFunc and the test code wouldn't be able to tell when that finished.
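[Editor's note: rogpeppe is describing the standard fake-clock synchronization trick: the test waits on an Alarms signal to know the code under test has actually started waiting, then advances the clock. A minimal Python sketch of the pattern follows; the names `FakeClock`, `after_func`, and `wait_alarm` are illustrative only, not the real juju/testing API.]

```python
import threading

class FakeClock:
    """Sketch of a deterministic test clock: each after_func call
    signals an alarm, so the test can wait until someone is actually
    waiting on the clock before advancing it, avoiding races."""

    def __init__(self):
        self.now = 0.0
        self.timers = []                       # (deadline, callback) pairs
        self.alarms = threading.Semaphore(0)   # one release per new waiter

    def after_func(self, delay, callback):
        # Code under test schedules a callback for fake-future time
        # and the alarm tells the test a waiter now exists.
        self.timers.append((self.now + delay, callback))
        self.alarms.release()

    def wait_alarm(self, timeout=1.0):
        # Test side: block until the code under test starts waiting.
        return self.alarms.acquire(timeout=timeout)

    def advance(self, delay):
        # Move fake time forward and fire every expired timer.
        self.now += delay
        due = [cb for (t, cb) in self.timers if t <= self.now]
        self.timers = [(t, cb) for (t, cb) in self.timers if t > self.now]
        for cb in due:
            cb()

fired = []
clock = FakeClock()
# Code under test: schedule work one "minute" ahead.
clock.after_func(60, lambda: fired.append("fired"))
# Test: wait for the alarm, then advance - no sleeping and hoping.
assert clock.wait_alarm()
clock.advance(60)
print(fired)  # ['fired']
```

As rogpeppe notes, anything beyond "a waiter exists" would couple the test to implementation details: if the code switched from AfterFunc to its own goroutine, a finer-grained signal would silently stop meaning what the test thinks it means.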
[11:58] <ashipika> rogpeppe: and i suppose you'd have to change the signature of the parameter function to AfterFunc
[11:59] <rogpeppe> ashipika: no, i don't think so
[11:59] <ashipika> rogpeppe: ok.. it was just a thought.. feel free to land the PR
[12:16] <rogpeppe> ashipika: any chance you could approve this too please? https://github.com/juju/utils/pull/242
[12:18] <ashipika> rogpeppe: done
[12:28] <rogpeppe> ashipika: ta
[12:37] <rogpeppe> ashipika: ha, marvellous, there's a cyclic dependency between juju/utils and juju/testing
[12:37] <ashipika> rogpeppe: my condolences :)
[13:19] <rick_h_> natefinch: ping, how goes the rackspace work?
[13:25] <natefinch> rick_h_: mostly figured out what was going on Friday.  still have one question for curtis when he gets on
[13:26] <rick_h_> natefinch: k, I've got a call with rax in a bit under an hour and wanted to know where we stand with things
[13:27] <natefinch> rick_h_: hoping to get a fix up today
[13:27] <perrito666> voidspace: did the binary from my branch work? I finally fixed my bug, am writing tests now
[13:27] <natefinch> rick_h_: but I would tell rackspace a couple days to be safe :)
[13:27] <rick_h_> natefinch: k, all good. it's on a different topic but it might come up
[13:27] <rick_h_> natefinch: so need to make sure we're still rc2 targeted
[13:31] <natefinch> rick_h_: yep
[13:48] <rick_h_> voidspace: ping, did we get anywhere with the MAAS issues friday?
[13:49] <babbageclunk> redir: http://bazaar.launchpad.net/~juju-qa/juju-ci-tools/repository/
[14:03] <frobware> mgz: ping
[14:03] <katco> voidspace: standup time
[14:03] <voidspace> katco: omw
[14:05] <voidspace> rick_h_: I suggested something that might be the cause of the problem
[14:05] <voidspace> rick_h_: hang on, in standup - I'll come back to you after that
[14:06] <rick_h_> voidspace: rgr ty
[14:08] <mgz> frobware: yo
[14:13] <mgz> frobware: can I help?
[14:14] <frobware> mgz: please - I was trying to run assess_recovery.py but I don't think I have enough runes; bombs with permission denied
[14:14] <mgz> frobware: run with --verbose --debug and pastebin?
[14:15] <frobware> mgz: heh pastebin seems to be down
[14:16] <mgz> frobware: eheh, try a different pastebin
[14:19] <mgz> hm, come back lp, I need to finish getting my stuff reviewed
[14:20] <babbageclunk> alexisb: wanna catchup?
[14:20] <alexisb> babbageclunk, sure
[14:20] <mgz> dooferlad: I could do with bugging at some point today about some cross maas version network things
[14:21] <dooferlad> mgz: sure. When works for you?
[14:22] <mgz> half an hour?
[14:22] <mgz> +in
[14:22] <dooferlad> mgz: sounds good
[14:28] <rock> Hi. I have an OpenStack-on-lxd setup; juju version is 2.0-beta15. I am trying to install the multipath-tools package on the nova-deployed LXD container and the cinder-deployed LXD container using our "cinder-storage driver" charm, but the package fails to install on the LXD containers: http://paste.openstack.org/show/582953/
[14:29] <rock> I ran #apt-get install --yes multipath-tools   directly on a live LXD container console. It gave the same error as pasted above.
[14:30] <rock> The #lxd and #lxccontainers channels are not active.
[14:30] <rock> If anyone has any idea about this, please let me know.
[14:32] <perrito666> voidspace: this should fix your issue with some luck https://github.com/juju/juju/pull/6321
[14:38] <natefinch> rock: you'll probably have better response on #juju but it sounds like a packaging problem since it's a dpkg error
[14:40] <perrito666> I need a non trivial review here https://github.com/juju/juju/pull/6321
[14:41] <rock> natefinch: OK. Thank you.
[14:42] <voidspace> rick_h_: irc or hangout
[14:43] <voidspace> rick_h_: but the summary, custom binaries from here *may* solve the issue: https://github.com/juju/juju/pull/6321
[14:46] <voidspace> rick_h_: I've sent an email
[14:48] <rick_h_> voidspace: ty
[14:52] <voidspace> perrito666: thanks!
[14:52] <voidspace> perrito666: on your branch, is the status polling in a goroutine the same pattern used by the other providers?
[14:52] <voidspace> perrito666: and have you manually tested with maas 1.9 and 2...
[14:54] <voidspace> perrito666: the code changes themselves look pretty straightforward, I like the maas2Controller interface
[14:56] <dimitern`> LP still broken - we can't merge anything due to check-blockers.py getting 503
[14:56] <perrito666> voidspace:  answering in order :)
[14:56] <perrito666> 1) the status polling goroutine is not a pattern, we are not doing it for other providers (and we should)
[14:57] <perrito666> voidspace: we were only updating the "instance status" which is wrong
[14:57] <perrito666> I have manually tested with maas 1.9
[14:57] <perrito666> sorry maas 2
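[Editor's note: the fix under discussion polls the substrate in a background goroutine and records status transitions rather than only the final "instance status". A rough Python sketch of that polling-loop shape follows; the provider, state names, and terminal set are invented for illustration, not taken from the juju MAAS provider.]

```python
import threading
import time

# Hypothetical terminal states a MAAS-like substrate might report.
TERMINAL = {"Deployed", "Failed deployment", "Broken"}

class FakeProvider:
    """Scripted sequence of substrate statuses, standing in for what a
    real provider API would return on each poll."""
    def __init__(self, states):
        self.states = list(states)

    def instance_status(self):
        # Serve the next scripted status, repeating the last one.
        return self.states.pop(0) if len(self.states) > 1 else self.states[0]

def poll_status(provider, record, interval=0.01):
    """Background poller: keep asking the substrate for its status,
    record each transition, and stop on a terminal state."""
    while True:
        status = provider.instance_status()
        if not record or record[-1] != status:
            record.append(status)
        if status in TERMINAL:
            return
        time.sleep(interval)

seen = []
provider = FakeProvider(["Deploying", "Deploying", "Failed deployment"])
t = threading.Thread(target=poll_status, args=(provider, seen))
t.start()
t.join(timeout=2)
print(seen)  # ['Deploying', 'Failed deployment']
```

The point of recording transitions is exactly the bug being QA'd: with a broken power profile the machine should be seen moving to a failed state instead of sitting at "pending" forever.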
[15:01] <dimitern`> perrito666: please, keep in mind we have 2 separate code paths for maas 1.9 and 2.0
[15:02] <dimitern`> perrito666: both should be tested if the change applies to both versions
[15:02] <voidspace> dimitern`: it looks good to me
[15:03] <perrito666> dimitern`: I have (not very nicely separated btw :p ) but yes I kept it in mind while coding the fix, I guess I can start a 1.9 maas to try this
[15:03] <voidspace> dimitern`: I'll see if I can check with maas 1.9, need to fail a deployment...
[15:03] <voidspace> perrito666: ah, well - happy for you to do it
[15:03] <voidspace> perrito666: and oi! the separation is *great*
[15:03] <perrito666> voidspace: if you have a 1.9 I would be very thankful if you did it for me
[15:03] <dimitern`> voidspace, perrito666 thanks guys! :)
[15:03] <perrito666> voidspace: I have to install the whole thing
[15:04] <perrito666> if not ill do it
[15:04] <voidspace> perrito666: I have one setup
[15:04] <voidspace> perrito666: how did you test - what did you do to get deployment to fail. Mark as broken after deployment starts?
[15:04] <perrito666> voidspace: I shall pay in beer :p
[15:04] <perrito666> voidspace: I wrote the QA steps :p, basically bootstrap and once it is up, break the power profile for the nodes and deploy something
[15:05] <voidspace> perrito666: I'll let you know how it goes.
[15:06] <perrito666> tx a lot
[15:07] <dimitern`> macgreagoir: http://paste.ubuntu.com/23233677/
[15:16] <anastasiamac> mgz: is the bot stuck? http://juju-ci.vapour.ws:8080/job/github-merge-juju/9327/console
[15:17] <mgz> likely, lp is down
[15:17] <dimitern`> ok, my bad
[15:18] <anastasiamac> mgz: is there a timeout for the blocker check?
[15:18] <dimitern`> my PR got picked up by the bot, which due to LP being down, is now stuck at check-blockers.py
[15:18] <mgz> not independently, but we can't land when lp isn't up anyway
[15:19] <dimitern`> and that's because check-blockers is not called with a timeout
[15:31] <natefinch> gah forgot launchpad is down
[15:49] <voidspace> frobware: did you manage to reproduce the telefonica issue?
[15:52] <redir> http://reports.vapour.ws/releases/issue/5762fb3b749a5667e3627666
[15:52] <redir> frobware: babbageclunk ^
[15:59] <natefinch> sinzui: I tried doing the easy fix for rackspace, just hack the endpoint url. but I'm getting a 401 response from rackspace:
[15:59] <natefinch> 11:54:41 DEBUG juju.provider.openstack provider.go:625 authentication failed: authentication failed
[15:59] <natefinch> caused by: requesting token: Unauthorised URL https://dfw.images.api.rackspacecloud.com/v2/auth/tokens
[15:59] <natefinch> caused by: request (https://dfw.images.api.rackspacecloud.com/v2/auth/tokens) returned unexpected status: 401
[15:59] <natefinch> sinzui: do I need to access that identity.api.rackspacecloud.com url first, to authenticate?  and if so, where do I get the api key?
[16:06] <sinzui> natefinch: I think so. I am sprinting this week. josvaz in @cloudware has most of the details. I found the API key in the rackspace web ui. There isn't anything in the juju config to show that. I do have a rackspacrc file. It exports the standard OpenStack vars. I see "_RACKSPACE_API_KEY" defined, but unused. I didn't notice it until now :(
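[Editor's note: the 401 above is consistent with skipping the identity step. A hedged sketch of how a Rackspace-style token request is typically formed follows - the endpoint URL and the `RAX-KSKEY:apiKeyCredentials` payload shape are assumptions from the Identity v2.0 API extension, not verified against current Rackspace docs; no request is actually sent here.]

```python
import json

def rackspace_token_request(username, api_key):
    """Build (but do not send) an identity token request: the API key
    from the Rackspace web UI goes to the identity endpoint, which
    returns a token plus a service catalog of regional endpoints such
    as dfw.images.api.rackspacecloud.com."""
    url = "https://identity.api.rackspacecloud.com/v2.0/tokens"
    body = {
        "auth": {
            # Rackspace-specific extension of Keystone v2.0 auth.
            "RAX-KSKEY:apiKeyCredentials": {
                "username": username,
                "apiKey": api_key,
            }
        }
    }
    return url, json.dumps(body)

# Placeholder credentials, for illustration only.
url, payload = rackspace_token_request("someuser", "0123456789abcdef")
print(url)
print(sorted(json.loads(payload)["auth"].keys()))
```

The token from the response then goes in an `X-Auth-Token` header on subsequent calls to the regional service endpoints, rather than authenticating against each service URL directly.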
[16:07] <natefinch> hmm ok
[16:07] <natefinch> sinzui: thanks for the info
[16:07] <natefinch> sinzui: I'll talk to josvaz
[16:12] <perrito666> ouch, I actually need lp to pick a new bug
[16:12] <natefinch> it's back
[16:14] <perrito666> just in time
[16:17] <alexisb> perrito666, do you need bug suggestions?
[16:17] <perrito666> alexisb: sure
[16:18] <perrito666> admit it, you have a script checking on me talking about bugs
[16:18] <alexisb> :)
[16:19] <alexisb> that is one of the duties of my job
[16:37] <voidspace> perrito666: hmmm... so after a long update / new image import / bootstrap cycle
[16:38] <voidspace> perrito666: I'm now seeing on maas 1.9: after a deploy, then manually marking the machine as broken in maas (my nodes all have manual power types so that seemed easier)
[16:38] <voidspace> perrito666: the machine stays as pending
[16:38] <voidspace> perrito666: status doesn't change
[16:39] <voidspace> perrito666: I'll try again :-/
[16:39] <perrito666> voidspace: interesting, tx, if you have that issue again ill investigate with my setup here
[16:50] <natefinch> sinzui: is there anyone else I can talk to? Josvaz is past EOD, AFAICT.
[16:51] <sinzui> natefinch: rcj?
[16:52] <natefinch> sinzui: thanks
[16:57] <redir> babbageclunk: https://bugs.launchpad.net/juju/+bug/1606310
[16:57] <mup> Bug #1606310: storeManagerSuite.TestMultiwatcherStop not stopped <ci> <intermittent-failure> <regression> <unit-tests> <juju:Triaged> <https://launchpad.net/bugs/1606310>
[17:00] <voidspace> perrito666: so with a broken power type I do see the status change to down
[17:00] <voidspace> perrito666: however, if I manually break the machine I don't see a status change I don't think
[17:00] <voidspace> perrito666: I'm going to try that with maas 2 - but probably tomorrow now as I'm nearly EOD
[17:00] <voidspace> perrito666: I left a question and a comment - the question is likely to be just me being dumb
[17:01] <perrito666> voidspace: tx a lot for the tests
[17:01] <voidspace> np
[17:35] <CorvetteZR1> hi.  i got openstack up and running with openstack-base-xenial-mitaka
[17:36] <CorvetteZR1> i can log into the dashboard, but when i go to containers, i get an error:  Unable to get the Swift container listing.
[17:37] <CorvetteZR1> how do i configure this?  how do i log into the servers juju configured?
[17:37] <CorvetteZR1> ssh into them i mean
[18:12] <hml> question - how do i add a new endpoint for running the juju go test with the openstack provider?  i’m missing the piece
[18:25] <rick_h_> katco: natefinch have any hint for hml ? ^
[18:25] <katco> hml: what is the juju go test? our suite of tests written in go?
[18:26] <hml> katco: looking at the contribute.md - you run go test github.com/juju/juju to test changes?
[18:27] <natefinch> yeah
[18:27] <natefinch> those won't hit a real openstack
[18:27] <katco> hml: ah ok. you should be able to just run that command; i don't know what you mean by add an endpoint. can you explain?
[18:28] <hml> katco: the code change starts to use the neutron api.  however the test environment doesn’t know about an endpoint to find neutron -
[18:29] <natefinch> you can't make a test that hits a real openstack.... they have to be (more or less) self contained.
[18:29] <hml> katco: for the related goose pkg changes - in the tests i had to add code to spoof neutron
[18:29] <katco> hml: ah, as natefinch says we don't do that in juju. the test should be a unit test and only utilize things in memory
[18:30] <natefinch> the way to connect to openstack using juju normally is to use add-cloud which will prompt for the endpoint
[18:30] <hml> natefinch: hrm… so how do the juju openstack provider tests for nova run then?  they look for a novaClient.
[18:31] <natefinch> hml: there's a lot of spoofing in the tests, precisely to keep it from hitting real infrastructure.  I'm afraid I don't know the details of the openstack tests
[18:32] <katco> hml: i don't know the specifics of the openstack provider tests, but you would mock a novaClient and pass that in. any new tests should not hit anything outside of memory
[18:32] <hml> natefinch: okay - i believe that if i could find where the nova spoofing is done, i could figure it out for neutron, but i haven’t been able to find it yet
[18:34] <hml> katco: i’m looking for where the novaclient is mocked, so i can do the same for a neutronclient.  but so far i’m missing how it’s done.
[18:37] <natefinch> hml probably something with this: gopkg.in/goose.v1/testservices/novaservice
[18:38] <katco> hml: natefinch: i think it's this: https://github.com/juju/juju/blob/master/provider/openstack/local_test.go#L1917
[18:38] <katco> hml: natefinch: called from here: https://github.com/juju/juju/blob/master/provider/openstack/local_test.go#L170
[18:39] <hml> natefinch: katco: cool - i’ll take a look.  thanks
[18:39] <katco> hml: hth, gl
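[Editor's note: the approach hml is after - goose's testservices registers an in-process fake nova service and points the client at it - can be sketched generically. Below, a throwaway local HTTP server plays the part of a fake neutron endpoint; the `/v2.0/networks` path matches the real neutron API, but the payload and names are made up for illustration.]

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakeNeutron(BaseHTTPRequestHandler):
    """Minimal service double: answers the one endpoint the code
    under test needs, so no real cloud is ever contacted."""
    def do_GET(self):
        if self.path == "/v2.0/networks":
            body = json.dumps({"networks": [{"id": "net-1", "name": "test"}]})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode())
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep test output quiet

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), FakeNeutron)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The code under test is pointed at this endpoint instead of a real
# cloud - the same move the nova spoofing in local_test.go makes.
endpoint = "http://127.0.0.1:%d" % server.server_port
with urllib.request.urlopen(endpoint + "/v2.0/networks") as resp:
    nets = json.load(resp)["networks"]
server.shutdown()
print([n["name"] for n in nets])  # ['test']
```

In the juju tree the wiring lives where katco points: the test suite constructs the fake services and hands their endpoints to the provider config, so adding neutron support means registering one more fake service alongside nova.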
[19:33] <thumper> morning
[19:45] <alexisb> morning thumper
[19:45] <alexisb> voidspace, you still around?
[19:46] <voidspace> alexisb: kind of
[19:46] <voidspace> alexisb: :-)
[19:46] <perrito666> how can this not be working now if it was working on friday
[19:46] <alexisb> voidspace, trying to leave dimiter and andy alone
[19:46] <alexisb> voidspace, do you know if this bug is still an issue for 2.0:
[19:46] <alexisb> https://bugs.launchpad.net/juju/+bug/1560331
[19:46] <mup> Bug #1560331: juju-br0 fails to be up when no gateway is set on interface <juju:Triaged> <juju-core:Won't Fix> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1560331>
[19:47] <alexisb> o is dooferlad back??
[19:47] <voidspace> alexisb: he is!
[19:47] <alexisb> welcome back dooferlad!
[19:47] <alexisb> dooferlad, you could probably answer the q above as well, if you are still around
[19:47] <voidspace> alexisb: it's nearly 9pm UK time so unlikely he's around, nor the sprint people
[19:47] <voidspace> alexisb: I don't know specifically about that bug, but that area of the code has changed dramatically in recent months
[19:48] <voidspace> alexisb: we have new bugs related to "bridging all the things" for example
[19:48] <voidspace> alexisb: I can talk to dooferlad tomorrow morning and email you
[19:48] <rick_h_> alexisb: that should be corrected in rc1
[19:48] <alexisb> lol yes that is a fun topic for today
[19:48] <alexisb> rick_h_, ack will mark it so
[19:48] <voidspace> cool, sorry I couldn't be more helpful
[19:49] <voidspace> alexisb: ah, there's a comment on the bug saying "if we bridge all the things it will go away"
[19:49] <voidspace> alexisb: and that is done
[19:50] <alexisb> its fixed so that is good :)  one less bug
[19:50] <thumper> alexisb: morning
[19:52] <voidspace> thumper: o/
[19:52] <thumper> hey voidspace
[19:52] <thumper> sprinting?
[19:52] <voidspace> thumper: nope, going downstairs to watch a movie with the wife and hope that the lad falls asleep before 1am tonight :-)
[19:53] <voidspace> thumper: have a good day
[19:53] <voidspace> see you on the other side maybe
[19:53] <thumper> ha
[19:53] <thumper> night
[19:58] <katco> rick_h_: ping
[19:58] <rick_h_> katco: pong
[19:59] <katco> rick_h_: https://github.com/juju/juju/pull/6323 https://github.com/juju/juju/pull/6324
[19:59] <katco> rick_h_: double PR into develop branch failed. what do i do? $$merge$$ other pr?
[19:59] <katco> rick_h_: i wasn't there for the discussion on this
[19:59] <rick_h_> katco: only deal with merging from the one that deals with master
[19:59] <rick_h_> katco: but we should look at what failed in taht check
[20:00] <katco> rick_h_: that merge into develop brought in like 30 commits from before mine
[20:01] <rick_h_> katco: yea, that's all good, because it's not constantly kept up with develop that'll happen
[20:01] <katco> rick_h_: just not sure what failure is related to. looks possibly related to my pr, but i will trust the results of $$merge$$ into master
[20:02] <rick_h_> katco: k
[20:03] <rick_h_> katco: http://juju-ci.vapour.ws/job/github-check-merge-juju/42/artifact/artifacts/trusty-out.log though with a failure in "testAddLocalCharm" seems like it might be a real thing
[20:03] <katco> rick_h_: yes it's possible
[20:03] <rick_h_> katco: the windows one there's an intermittent test failure that's hit before there. Might check it matches up, but not sure.
[20:03] <rick_h_> katco: but might be worth double checking that test while the merge runs to get ahead of the game if there is something
[21:50] <perrito666> alexisb: having fun?
[21:52] <alexisb> always having fun
[21:54] <perrito666> you have changed the standup 5 times :p
[21:59] <menn0> thumper: phew, https://github.com/juju/juju/pull/6325
[21:59] <alexisb> perrito666, yeah I was learning something new
[22:01] <katco> alexisb: is ian on vacation or something?
[22:01] <thumper> yeah
[22:01] <alexisb> katco yes
[22:01] <alexisb> and he did not update the calendar
[22:01] <katco> alexisb: ah ok :) ty
[22:01] <alexisb> which I will be pestering him about when he returns next week
[22:01] <alexisb> ;)
[22:02] <katco> lol no biggie
[22:12] <thumper> menn0: review done
[23:11] <veebers> thumper: a little later on today I would like to bother you again about a bug: https://bugs.launchpad.net/juju/+bug/1626784 I have some more details regarding it
[23:11] <mup> Bug #1626784: upgrade-juju --version increments supplied patch version <juju:Incomplete> <https://launchpad.net/bugs/1626784>
[23:12] <thumper> ok
[23:18] <alexisb> axw, ping
[23:46] <axw> thumper: you said you had me pinged? (sounds a bit like "had me made")
[23:46] <thumper> :)
[23:46] <perrito666> menn0: ping me when you need me
[23:47] <thumper> I think I said pinned
[23:47] <thumper> I was really wanting the time queue as part of the provisioner
[23:47] <thumper> but as nice as it would be
[23:47] <thumper> it isn't as high a priority as many of the current fires
[23:48] <menn0> perrito666: could you please try deploying something into a container with Juju 2.0 using MAAS 2.0?
[23:48] <axw> thumper: oh right, gotcha
[23:48] <perrito666> menn0: sure, bootstrapping, gimme a moment
[23:48] <menn0> perrito666: thank you
[23:49] <perrito666> menn0: any particular formula or just placement?
[23:49] <menn0> perrito666: we don't have much detail at the moment
[23:49] <menn0> perrito666: seems the user was trying to deploy openstack using maas and all the container watchers were panicking
[23:50] <menn0> perrito666: so let's just establish whether or not container deployments work at all for you
[23:51] <perrito666> menn0: k, ill go get a beer for the US debate while this bootstraps
[23:51] <menn0> perrito666: sounds good :)