/srv/irclogs.ubuntu.com/2016/09/26/#juju-dev.txt

[02:43] <thumper> menn0: https://github.com/juju/juju/pull/6315
[02:46] <menn0> thumper: looking
[02:52] <thumper> hmm... best go pick up the dog
[03:19] <thumper> menn0: the QA steps were all with the next branch, which adds the command aspects
[03:19] <thumper> there is no QA for just the server bits
[03:19] <thumper> because it is a new call
[03:20] <menn0> ok cool, that's fine
[03:20] <menn0> thumper: sorry that I forgot
[03:20] <thumper> :)
[03:20] <thumper> I'll add the bits shortly
[03:20] <thumper> let me add docstrings, tweak names and move on to submitting the next bit
[03:25] <veebers> thumper: seems like a bit of an edge case but if you create and then delete a model it still shows up in list-models, but cannot be deleted. But it also unselects it as the focused model: http://pastebin.ubuntu.com/23232282/
[03:29] <veebers> thumper, menn0: You seen something like that before? ^^ Going to file a bug as I can't find an existing bug for it
[03:30] <thumper> veebers: it hangs around until the undertaker kills it
[03:30] <thumper> and cleans up
[03:30] <thumper> it shouldn't take too long
[03:30] <menn0> veebers: this could be related to an existing ticket
[03:30] * menn0 finds
[03:30] <thumper> when we first did it, we had it keep the model around for a day so logs and things could be removed
[03:30] <thumper> but folks didn't like that
[03:30] <thumper> so it was shortened, but not sure what to
[03:31] <thumper> menn0: PR updated
[03:31] <veebers> menn0, thumper: hmm ok the models (I tried a couple of times) are still there and the status says 'available', that should be 'destroying' or something no?
[03:32] <thumper> veebers: when you say delete,
[03:32] <menn0> veebers: yeah, that sounds like the bug I'm looking for
[03:32] <thumper> what are you doing?
[03:34] <veebers> thumper: as per the command in the pastebin: juju --show-log add-model -c charm-test model89; juju --show-log destroy-model model89 -y
[03:35] <thumper> yes, the model hangs around for a while
[03:35] <thumper> is it still there?
[03:35] <thumper> hang on
[03:35] <thumper> those commands errored out
[03:35] <thumper> with not found
[03:36] <veebers> thumper: yeah, after the original delete attempt (no error there) any follow-up attempts error
[03:36] <thumper> oh, first line does it all
[03:36] <veebers> thumper: just re-checked list-models and they are still there (with status available)
[03:36] <menn0> veebers, thumper: nope, I can't find that ticket
[03:37] <thumper> yeah, that's definitely odd
[03:37] <veebers> menn0: you thinking of this one? https://bugs.launchpad.net/juju/+bug/1613960
[03:37] <mup> Bug #1613960: list-models can show a model that was supposed to have been deleted <juju:Triaged> <https://launchpad.net/bugs/1613960>
[03:37] <veebers> menn0: huh right I have come across this before (as I filed that bug :-\)
[03:37] <thumper> ha
[03:38] <thumper> menn0: wanna +1 that PR?
[03:38] <menn0> veebers: I was thinking of a different one, where an error like that appears after lots of add/destroy model commands
[03:38] <menn0> thumper: yep
[03:39] <menn0> thumper: done
[03:39] <thumper> ta
[03:39] <veebers> menn0: ah, that might be one that was alluded to when I tried this test run here (create a bunch of models, then delete a bunch of models)
[03:40] <veebers> menn0: hmm, or not, I think this might be the one I'm thinking of: https://bugs.launchpad.net/juju/+bug/1625774
[03:40] <mup> Bug #1625774: memory leak after repeated model creation/destruction <eda> <oil> <oil-2.0> <juju:Triaged by alexis-bruemmer> <https://launchpad.net/bugs/1625774>
[03:42] <menn0> veebers: no not that one
[03:43] <veebers> menn0: if we keep going through _all_ the bugs I'm sure we'll finally uncover the one we're looking for ;-)
[03:43] <menn0> veebers: haha ... I'm sure there is one but I can't find it
[03:43] <menn0> veebers: I saw it when helping babbageclunk with something
[03:45] * thumper waits for branch to land before proposing next
=== petevg_ is now known as petevg
=== dooferlad_ is now known as dooferlad
=== mpontillo_ is now known as mpontillo
=== Tribaal_ is now known as Tribaal
=== frankban_ is now known as frankban
[07:53] <dimitern> jam: morning
[07:54] <dimitern> jam: do you have the HO link?
[07:54] <jam> morning
[07:54] <jam> yes
[07:54] <dimitern> ok
[07:54] <jam> I'll be there in about 5 min
[07:54] <dimitern> +1
[08:12] <voidspace> macgreagoir: hey, I think on Friday I may have just been impatient - I did eventually see one container deploy working
[08:12] <voidspace> macgreagoir: I think I just may not have been allowing enough time for image download
[08:12] <voidspace> macgreagoir: so I'm retrying your branch
[08:13] <macgreagoir> voidspace: Enjoy!
[08:13] <voidspace> :-)
[08:13] <voidspace> macgreagoir: are you at the London sprint now?
[08:13] <macgreagoir> I was wondering if you needed to dpkg-reconfig maas pkgs to get dhcp on your new subnet too.
[08:13] <macgreagoir> voidspace: I am.
[08:13] <voidspace> macgreagoir: have fun :-)
[08:14] <macgreagoir> Cheers!
=== gnuoy` is now known as gnuoy
[08:26] <redir> http://www.ryman.co.uk/search/go?w=adapter
[08:26] <redir> wrong link
[08:26] <redir> how about https://docs.google.com/spreadsheets/d/1AGF6ED7kOtigvWTOBS8lkC0t2st63IRhbdpPWeofauU/edit#gid=1152189692
[08:50] <frobware> redir: https://bugs.launchpad.net/juju/+bug/1611766
[08:50] <mup> Bug #1611766: upgradeSuite.TearDownTest sockets in a dirty state <ci> <intermittent-failure> <regression> <unit-tests> <juju:Triaged> <https://launchpad.net/bugs/1611766>
[09:34] <mup> Bug #1626576 changed: credential v. credentials is confusing <usability> <juju:Triaged> <https://launchpad.net/bugs/1626576>
[09:34] <mup> Bug #1626878 changed: ERROR juju.worker.dependency engine.go <juju:Triaged> <https://launchpad.net/bugs/1626878>
[09:34] <mup> Bug #1627554 changed: juju binary broken on sierra <juju:Triaged> <https://launchpad.net/bugs/1627554>
[09:44] <anastasiamac> jam: dimitern: macgreagoir: replacing JujuConnSuite in state with ConnSuite: https://github.com/juju/juju/pull/6317
[09:44] <jam> anastasiamac: looking
[09:44] <anastasiamac> ja \o/
[09:44] <anastasiamac> ta even :D
[09:46] <jam> anastasiamac: +1
[09:46] <anastasiamac> jam: amazing \o/
[10:34] <voidspace> macgreagoir: so I'm afraid I still see it - with a machine with a single nic on the pxe subnet a lxd container starts fine
[10:34] <voidspace> macgreagoir: with two nics, the "first" on a separate subnet, the container starts but gets no address
[10:34] <voidspace> macgreagoir: your branch
[10:35] <voidspace> macgreagoir: I'm just trying to confirm it's not an oddity of the way I've set up the two nics
[10:49] <macgreagoir> voidspace: You're seeing the addressing issue on my branch too?
[10:49] <voidspace> macgreagoir: yup
[10:49] <voidspace> macgreagoir: can't connect to the lxd at all (nor exec commands in it) to see the rendered /e/n/i
[10:49] <voidspace> macgreagoir: unless you know a trick to get it
[10:50] <macgreagoir> voidspace: Can you see inside /var/lib/containers/<container>/rootfs ?
[10:51] <macgreagoir> /var/lib/lxd/containers... that is
[10:53] <voidspace> macgreagoir: will try shortly - just adding a lxd container with your branch with the second NIC unconfigured
[10:53] <voidspace> macgreagoir: to check that works
[10:54] <voidspace> macgreagoir: for the second NIC (ethA not on pxe subnet) I have the gateway address *on* that subnet - which probably means that subnet is not routable to the other one (or the wider internet)
[10:54] <voidspace> macgreagoir: I wonder if that might be the issue and if the gateway address for 172.16.1.0/24 should be 172.16.0.1 (on the pxe subnet)
[10:55] <rogpeppe> i've just resurrected https://github.com/juju/testing/pull/108 after leaving it languishing for a month or so. could someone review it please? (i got a positive review from fwereade, but it needed tests which i've just done).
[10:55] <rogpeppe> it has a companion branch at https://github.com/juju/utils/pull/242 (much smaller)
[10:59] <anastasiamac> rogpeppe: we'll look shortly :D thank you for the tests!
[11:10] <rogpeppe> anastasiamac: ta!
[11:11] <voidspace> macgreagoir: hmmm... with an unconfigured NIC as the "first" NIC it *looks* like I'm still seeing no address for the container
[11:11] <voidspace> macgreagoir: restoring the order and trying *again*
[11:26] <rick_h_> morning
[11:26] <rick_h_> dooferlad: welcome back
[11:26] <dooferlad> hi!
[11:27] <voidspace> macgreagoir: yup, if I reorder the NICs then it starts fine.
[11:27] <rick_h_> dooferlad: how's the little one?
[11:27] <voidspace> macgreagoir: will try again with the order reversed and see if I can get to /e/n/i
[11:27] <dooferlad> rick_h_: doing well. Old enough to smile now, which is lovely.
[11:27] <voidspace> dooferlad: hey, hi!
[11:27] <voidspace> dooferlad: you back, or just a visit?
[11:27] <dooferlad> voidspace: hello
[11:27] <dooferlad> I am back
[11:27] <voidspace> dooferlad: so good when they can smile :-)
[11:28] <voidspace> dooferlad: congratulations and welcome back!
[11:28] <dooferlad> voidspace: thanks!
[11:28] <dooferlad> voidspace: I just wish big sister would go back to sleeping well!
[11:28] <voidspace> dooferlad: oh no!
[11:29] <voidspace> dooferlad: I feel your pain, Benjamin is in (another) phase of not going to sleep until about 1am
[11:29] <voidspace> very tiring, literally and figuratively
[11:29] <voidspace> the joy of children :-)
[11:30] <dooferlad> voidspace: yea, Naomi is often up before 6. I was just about feeling human before that started!
[11:32] <voidspace> dooferlad: ah man, not much fun
=== rogpeppe is now known as rogpeppe1
=== rogpeppe1 is now known as rogpeppe
[11:49] <rogpeppe> ashipika: for some reason i seem to have been disconnected from canonical IRC
[11:49] <rick_h_> rogpeppe: yea, lots of stuff down
[11:49] <rick_h_> LP, etc
[11:50] <rogpeppe> rick_h_: marvellous :)
[11:50] <ashipika> rogpeppe, rick_h_: interesting :)
[11:51] <rick_h_> "space| dooferlad: ah man, not much fun
[11:51] <rick_h_> 11:49   rogpeppe| is now known as rogpeppe1
[11:51] <rick_h_> 11:49  rogpeppe1| is now known as rogpeppe
[11:51] <rick_h_> 11:49   rogpeppe| ashipika: for some reason i seem to have been disconnected from canonical IRC
[11:51] <rick_h_> bah
[11:52] <rick_h_> rogpeppe: ashipika looks like firewall issue atm, being worked on
[11:52] <ashipika> rick_h_: thank you for the info!
[11:56] <rogpeppe> ashipika: re: waiting for AfterFunc - we've got the Alarms method to tell when things have started waiting. That is unfortunately necessary, but anything more seems like it would be more than the test code should be relying on. For example, the code could change to start a goroutine itself rather than calling AfterFunc and the test code wouldn't be able to tell when that finished.
[11:58] <ashipika> rogpeppe: and i suppose you'd have to change the signature of the parameter function to AfterFunc
[11:59] <rogpeppe> ashipika: no, i don't think so
[11:59] <ashipika> rogpeppe: ok.. it was just a thought.. feel free to land the PR
[12:16] <rogpeppe> ashipika: any chance you could approve this too please? https://github.com/juju/utils/pull/242
[12:18] <ashipika> rogpeppe: done
[12:28] <rogpeppe> ashipika: ta
=== freyes__ is now known as freyes
[12:37] <rogpeppe> ashipika: ha, marvellous, there's a cyclic dependency between juju/utils and juju/testing
[12:37] <ashipika> rogpeppe: my condolences :)
[13:19] <rick_h_> natefinch: ping, how goes the rackspace work?
[13:25] <natefinch> rick_h_: mostly figured out what was going on Friday.  still have one question for curtis when he gets on
[13:26] <rick_h_> natefinch: k, I've got a call with rax in a bit under an hour and wanted to know where we stand with things
[13:27] <natefinch> rick_h_: hoping to get a fix up today
[13:27] <perrito666> voidspace: did the binary from my branch work? I fixed my bug finally, am writing tests now
[13:27] <natefinch> rick_h_: but I would tell rackspace a couple days to be safe :)
[13:27] <rick_h_> natefinch: k, all good. it's on a different topic but it might come up
[13:27] <rick_h_> natefinch: so need to make sure we're still rc2 targeted
[13:31] <natefinch> rick_h_: yep
[13:48] <rick_h_> voidspace: ping, did we get anywhere with the MAAS issues friday?
[13:49] <babbageclunk> redir: http://bazaar.launchpad.net/~juju-qa/juju-ci-tools/repository/
[14:03] <frobware> mgz: ping
[14:03] <katco> voidspace: standup time
[14:03] <voidspace> katco: omw
[14:05] <voidspace> rick_h_: I suggested something that might be the cause of the problem
[14:05] <voidspace> rick_h_: hang on, in standup - I'll come back to you after that
[14:06] <rick_h_> voidspace: rgr ty
[14:08] <mgz> frobware: yo
[14:13] <mgz> frobware: can I help?
[14:14] <frobware> mgz: please - I was trying to run assess_recovery.py but I don't think I have enough runes; bombs with permission denied
[14:14] <mgz> frobware: run with --verbose --debug and pastebin?
[14:15] <frobware> mgz: heh pastebin seems to be down
[14:16] <mgz> frobware: eheh, try a different pastebin
[14:19] <mgz> hm, come back lp, I need to finish getting my stuff reviewed
[14:20] <babbageclunk> alexisb: wanna catchup?
[14:20] <alexisb> babbageclunk, sure
[14:20] <mgz> dooferlad: I could do with bugging you at some point today about some cross maas version network things
[14:21] <dooferlad> mgz: sure. When works for you?
[14:22] <mgz> in half an hour?
[14:22] <dooferlad> mgz: sounds good
=== hatch_ is now known as hatch
=== hatch is now known as Guest72453
[14:28] <rock> Hi. I have an OpenStack-on-lxd setup. The juju version is 2.0-beta15. I am trying to install the multipath-tools package on the nova-deployed LXD container and the cinder-deployed LXD container using our "cinder-storage driver" charm. But that package is not installing on the LXD containers, giving the issues at http://paste.openstack.org/show/582953/
[14:29] <rock> I ran `apt-get install --yes multipath-tools` directly on the LXD container console. There it also gave the same issue as I pasted above.
[14:30] <rock> The #lxd and #lxccontainers channels are not active.
[14:30] <rock> If anyone has any idea on this, please let me know.
[14:32] <perrito666> voidspace: this should fix your issue with some luck https://github.com/juju/juju/pull/6321
=== BradCrittenden is now known as bac
[14:38] <natefinch> rock: you'll probably have better response on #juju but it sounds like a packaging problem since it's a dpkg error
=== hatch_ is now known as hatch
[14:40] <perrito666> I need a non-trivial review here https://github.com/juju/juju/pull/6321
[14:41] <rock> natefinch: OK. Thank you.
[14:42] <voidspace> rick_h_: irc or hangout
[14:43] <voidspace> rick_h_: but the summary, custom binaries from here *may* solve the issue: https://github.com/juju/juju/pull/6321
[14:46] <voidspace> rick_h_: I've sent an email
[14:48] <rick_h_> voidspace: ty
[14:52] <voidspace> perrito666: thanks!
[14:52] <voidspace> perrito666: on your branch, is the status polling in a goroutine the same pattern used by the other providers?
[14:52] <voidspace> perrito666: and have you manually tested with maas 1.9 and 2...
[14:54] <voidspace> perrito666: the code changes themselves look pretty straightforward, I like the maas2Controller interface
[14:56] <dimitern`> LP still broken - we can't merge anything due to check-blockers.py getting 503
[14:56] <perrito666> voidspace: answering in order :)
[14:56] <perrito666> 1) the status polling goroutine is not a pattern, we are not doing it for other providers (and we should)
[14:57] <perrito666> voidspace: we were only updating the "instance status", which is wrong
[14:57] <perrito666> I have manually tested with maas 1.9
[14:57] <perrito666> sorry, maas 2
=== elmo_ is now known as elmo
[15:01] <dimitern`> perrito666: please keep in mind we have 2 separate code paths for maas 1.9 and 2.0
[15:02] <dimitern`> perrito666: both should be tested if the change applies to both versions
[15:02] <voidspace> dimitern`: it looks good to me
[15:03] <perrito666> dimitern`: I have (not very nicely separated btw :p ) but yes I kept it in mind while coding the fix, I guess I can start a 1.9 maas to try this
[15:03] <voidspace> dimitern`: I'll see if I can check with maas 1.9, need to fail a deployment...
[15:03] <voidspace> perrito666: ah, well - happy for you to do it
[15:03] <voidspace> perrito666: and oi! the separation is *great*
[15:03] <perrito666> voidspace: if you have a 1.9 I would be very thankful if you did it for me
[15:03] <dimitern`> voidspace, perrito666 thanks guys! :)
[15:03] <perrito666> voidspace: I have to install the whole thing
[15:04] <perrito666> if not I'll do it
[15:04] <voidspace> perrito666: I have one setup
[15:04] <voidspace> perrito666: how did you test - what did you do to get deployment to fail? Mark as broken after deployment starts?
[15:04] <perrito666> voidspace: I shall pay in beer :p
[15:04] <perrito666> voidspace: I wrote the QA steps :p, basically bootstrap and once it is up, break the power profile for the nodes and deploy something
[15:05] <voidspace> perrito666: I'll let you know how it goes.
[15:06] <perrito666> tx a lot
[15:07] <dimitern`> macgreagoir: http://paste.ubuntu.com/23233677/
[15:16] <anastasiamac> mgz: is the bot stuck? http://juju-ci.vapour.ws:8080/job/github-merge-juju/9327/console
[15:17] <mgz> likely, lp is down
[15:17] <dimitern`> ok, my bad
[15:18] <anastasiamac> mgz: is there a timeout for the blocker check?
[15:18] <dimitern`> my PR got picked up by the bot, which due to LP being down, is now stuck at check-blockers.py
[15:18] <mgz> not independently, but we can't land when lp isn't up anyway
[15:19] <dimitern`> and that's because check-blockers is not called with a timeout
[15:31] <natefinch> gah, forgot launchpad is down
=== daniel3 is now known as Odd_Bloke
[15:49] <voidspace> frobware: did you manage to reproduce the telefonica issue?
=== kadams54 is now known as kadams54-lunch
[15:52] <redir> http://reports.vapour.ws/releases/issue/5762fb3b749a5667e3627666
[15:52] <redir> frobware: babbageclunk ^
[15:59] <natefinch> sinzui: I tried doing the easy fix for rackspace, just hack the endpoint url. but I'm getting a 401 response from rackspace:
[15:59] <natefinch> 11:54:41 DEBUG juju.provider.openstack provider.go:625 authentication failed: authentication failed
[15:59] <natefinch> caused by: requesting token: Unauthorised URL https://dfw.images.api.rackspacecloud.com/v2/auth/tokens
[15:59] <natefinch> caused by: request (https://dfw.images.api.rackspacecloud.com/v2/auth/tokens) returned unexpected status: 401
[15:59] <natefinch> sinzui: do I need to access that identity.api.rackspacecloud.com url first, to authenticate?  and if so, where do I get the api key?
[16:06] <sinzui> natefinch: I think so. I am sprinting this week. josvaz in @cloudware has most of the details. I found the API key in the rackspace web ui. There isn't anything in the juju config to show that. I do have a rackspacerc file. It exports the standard OpenStack vars. I see "_RACKSPACE_API_KEY" defined, but unused. I didn't notice it until now :(
[16:07] <natefinch> hmm ok
[16:07] <natefinch> sinzui: thanks for the info
[16:07] <natefinch> sinzui: I'll talk to josvaz
[16:12] <perrito666> ouch, I actually need lp to pick a new bug
[16:12] <natefinch> it's back
[16:14] <perrito666> just in time
[16:17] <alexisb> perrito666, do you need bug suggestions?
[16:17] <perrito666> alexisb: sure
[16:18] <perrito666> admit it, you have a script checking on me talking about bugs
=== jillr_ is now known as jillr
[16:18] <alexisb> :)
[16:19] <alexisb> that is one of the duties of my job
=== kadams54-lunch is now known as kadams54
[16:37] <voidspace> perrito666: hmmm... so after a long update / new image import / bootstrap cycle
[16:38] <voidspace> perrito666: I'm now seeing on maas 1.9: after a deploy, then manually marking the machine as broken in maas (my nodes all have manual power types so that seemed easier)
[16:38] <voidspace> perrito666: the machine stays as pending
[16:38] <voidspace> perrito666: status doesn't change
[16:39] <voidspace> perrito666: I'll try again :-/
[16:39] <perrito666> voidspace: interesting, tx, if you have that issue again I'll investigate with my setup here
=== frankban is now known as frankban|afk
[16:50] <natefinch> sinzui: is there anyone else I can talk to? josvaz is past EOD, AFAICT.
[16:51] <sinzui> natefinch: rcj?
[16:52] <natefinch> sinzui: thanks
[16:57] <redir> babbageclunk: https://bugs.launchpad.net/juju/+bug/1606310
[16:57] <mup> Bug #1606310: storeManagerSuite.TestMultiwatcherStop not stopped <ci> <intermittent-failure> <regression> <unit-tests> <juju:Triaged> <https://launchpad.net/bugs/1606310>
[17:00] <voidspace> perrito666: so with a broken power type I do see the status change to down
[17:00] <voidspace> perrito666: however, if I manually break the machine I don't see a status change, I don't think
[17:00] <voidspace> perrito666: I'm going to try that with maas 2 - but probably tomorrow now as I'm nearly EOD
[17:00] <voidspace> perrito666: I left a question and a comment - the question is likely to be just me being dumb
[17:01] <perrito666> voidspace: tx a lot for the tests
[17:01] <voidspace> np
[17:35] <CorvetteZR1> hi.  i got openstack up and running with openstack-base-xenial-mitaka
[17:36] <CorvetteZR1> i can log into the dashboard, but when i go to containers, i get an error:  Unable to get the Swift container listing.
[17:37] <CorvetteZR1> how do i configure this?  how do i log into the servers juju configured?
[17:37] <CorvetteZR1> ssh into them i mean
[18:12] <hml> question - how do i add a new endpoint for running the juju go tests with the openstack provider? i'm missing the piece
[18:25] <rick_h_> katco: natefinch have any hint for hml? ^
[18:25] <katco> hml: what is the juju go test? our suite of tests written in go?
[18:26] <hml> katco: looking at the contribute.md - you run go test github.com/juju/juju to test changes?
[18:27] <natefinch> yeah
[18:27] <natefinch> those won't hit a real openstack
[18:27] <katco> hml: ah ok. you should be able to just run that command; i don't know what you mean by add an endpoint. can you explain?
[18:28] <hml> katco: the code change starts to use the neutron api.  however the test environment doesn't know about an endpoint to find neutron -
[18:29] <natefinch> you can't make a test that hits a real openstack.... they have to be (more or less) self contained.
[18:29] <hml> katco: for the related goose pkg changes - in the tests i had to add code to spoof neutron
[18:29] <katco> hml: ah, as natefinch says we don't do that in juju. the test should be a unit test and only utilize things in memory
[18:30] <natefinch> the way to connect to openstack using juju normally is to use add-cloud which will prompt for the endpoint
[18:30] <hml> natefinch: hrm… so how do the juju openstack provider tests for nova run then?  they look for a novaClient.
[18:31] <natefinch> hml: there's a lot of spoofing in the tests, precisely to keep it from hitting real infrastructure.  I'm afraid I don't know the details of the openstack tests
[18:32] <katco> hml: i don't know the specifics of the openstack provider tests, but you would mock a novaClient and pass that in. any new tests should not hit anything outside of memory
[18:32] <hml> natefinch: okay - i believe that if i could find where the nova spoofing is done, i could figure it out for neutron, but i haven't been able to find it yet
[18:34] <hml> katco: i'm looking for where the novaclient is mocked, so i can do the same for a neutronclient.  but so far i'm missing how it's done.
[18:37] <natefinch> hml: probably something with this: gopkg.in/goose.v1/testservices/novaservice
[18:38] <katco> hml: natefinch: i think it's this: https://github.com/juju/juju/blob/master/provider/openstack/local_test.go#L1917
[18:38] <katco> hml: natefinch: called from here: https://github.com/juju/juju/blob/master/provider/openstack/local_test.go#L170
[18:39] <hml> natefinch: katco: cool - i'll take a look.  thanks
[18:39] <katco> hml: hth, gl
[19:33] <thumper> morning
[19:45] <alexisb> morning thumper
[19:45] <alexisb> voidspace, you still around?
[19:46] <voidspace> alexisb: kind of
[19:46] <voidspace> alexisb: :-)
[19:46] <perrito666> how can this not be working now if it was working on friday
[19:46] <alexisb> voidspace, trying to leave dimiter and andy alone
[19:46] <alexisb> voidspace, do you know if this bug is still an issue for 2.0:
[19:46] <alexisb> https://bugs.launchpad.net/juju/+bug/1560331
[19:46] <mup> Bug #1560331: juju-br0 fails to be up when no gateway is set on interface <juju:Triaged> <juju-core:Won't Fix> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1560331>
[19:47] <alexisb> oh, is dooferlad back??
[19:47] <voidspace> alexisb: he is!
[19:47] <alexisb> welcome back dooferlad!
[19:47] <alexisb> dooferlad, you could probably answer the q above as well, if you are still around
[19:47] <voidspace> alexisb: it's nearly 9pm UK time so unlikely he's around, nor the sprint people
[19:47] <voidspace> alexisb: I don't know specifically about that bug, but that area of the code has changed dramatically in recent months
[19:48] <voidspace> alexisb: we have new bugs related to "bridging all the things" for example
[19:48] <voidspace> alexisb: I can talk to dooferlad tomorrow morning and email you
[19:48] <rick_h_> alexisb: that should be corrected in rc1
[19:48] <alexisb> lol yes that is a fun topic for today
[19:48] <alexisb> rick_h_, ack will mark it so
[19:48] <voidspace> cool, sorry I couldn't be more helpful
[19:49] <voidspace> alexisb: ah, there's a comment on the bug saying "if we bridge all the things it will go away"
[19:49] <voidspace> alexisb: and that is done
[19:50] <alexisb> it's fixed so that is good :)  one less bug
[19:50] <thumper> alexisb: morning
[19:52] <voidspace> thumper: o/
[19:52] <thumper> hey voidspace
[19:52] <thumper> sprinting?
[19:52] <voidspace> thumper: nope, going downstairs to watch a movie with the wife and hope that the lad falls asleep before 1am tonight :-)
[19:53] <voidspace> thumper: have a good day
[19:53] <voidspace> see you on the other side maybe
[19:53] <thumper> ha
[19:53] <thumper> night
[19:58] <katco> rick_h_: ping
[19:58] <rick_h_> katco: pong
[19:59] <katco> rick_h_: https://github.com/juju/juju/pull/6323 https://github.com/juju/juju/pull/6324
[19:59] <katco> rick_h_: double PR into develop branch failed. what do i do? $$merge$$ the other pr?
[19:59] <katco> rick_h_: i wasn't there for the discussion on this
[19:59] <rick_h_> katco: only deal with merging from the one that deals with master
[19:59] <rick_h_> katco: but we should look at what failed in that check
[20:00] <katco> rick_h_: that merge into develop brought in like 30 commits from before mine
[20:01] <rick_h_> katco: yea, that's all good; because it's not constantly kept up with develop, that'll happen
[20:01] <katco> rick_h_: just not sure what the failure is related to. looks possibly related to my pr, but i will trust the results of $$merge$$ into master
[20:02] <rick_h_> katco: k
[20:03] <rick_h_> katco: http://juju-ci.vapour.ws/job/github-check-merge-juju/42/artifact/artifacts/trusty-out.log though, with a failure in "testAddLocalCharm", seems like it might be a real thing
[20:03] <katco> rick_h_: yes it's possible
[20:03] <rick_h_> katco: on the windows one there's an intermittent test failure that's hit before there. Might check it matches up, but not sure.
[20:03] <rick_h_> katco: but might be worth double-checking that test while the merge runs to get ahead of the game if there is something
=== natefinch is now known as natefinch-afk
[21:50] <perrito666> alexisb: having fun?
[21:52] <alexisb> always having fun
[21:54] <perrito666> you have changed the standup 5 times :p
[21:59] <menn0> thumper: phew, https://github.com/juju/juju/pull/6325
[21:59] <alexisb> perrito666, yeah I was learning something new
[22:01] <katco> alexisb: is ian on vacation or something?
[22:01] <thumper> yeah
[22:01] <alexisb> katco: yes
[22:01] <alexisb> and he did not update the calendar
[22:01] <katco> alexisb: ah ok :) ty
[22:01] <alexisb> which I will be pestering him about when he returns next week
[22:01] <alexisb> ;)
[22:02] <katco> lol no biggie
[22:12] <thumper> menn0: review done
[23:11] <veebers> thumper: a little later on today I would like to bother you again about a bug: https://bugs.launchpad.net/juju/+bug/1626784 - I have some more details regarding it
[23:11] <mup> Bug #1626784: upgrade-juju --version increments supplied patch version <juju:Incomplete> <https://launchpad.net/bugs/1626784>
[23:12] <thumper> ok
[23:18] <alexisb> axw, ping
[23:46] <axw> thumper: you said you had me pinged? (sounds a bit like "had me made")
[23:46] <thumper> :)
[23:46] <perrito666> menn0: ping me when you need me
[23:47] <thumper> I think I said pinned
[23:47] <thumper> I was really wanting the time queue as part of the provisioner
[23:47] <thumper> but as nice as it would be
[23:47] <thumper> it isn't as high a priority as many of the current fires
[23:48] <menn0> perrito666: could you please try deploying something into a container with Juju 2.0 using MAAS 2.0?
[23:48] <axw> thumper: oh right, gotcha
[23:48] <perrito666> menn0: sure, bootstrapping, gimme a moment
[23:48] <menn0> perrito666: thank you
[23:49] <perrito666> menn0: any particular formula or just placement?
[23:49] <menn0> perrito666: we don't have much detail at the moment
[23:49] <menn0> perrito666: seems the user was trying to deploy openstack using maas and all the container watchers were panicking
[23:50] <menn0> perrito666: so let's just establish whether or not container deployments work at all for you
[23:51] <perrito666> menn0: k, I'll go get a beer for the US debate while this bootstraps
[23:51] <menn0> perrito666: sounds good :)

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!