[00:19] <menn0> thumper: poke regarding http://reviews.vapour.ws/r/5543/ :)
[01:08] <thumper> menn0: sorry, got distracted
[01:08] <menn0> thumper: no worries
[01:25] <thumper> menn0: do you recall how to get the engine as a dependency?
[01:25] <menn0> thumper: only vaguely
[01:26]  * menn0 checks
[01:29] <menn0> thumper: is worker/dependency.SelfManifold what you want?
[01:29] <thumper> probably
[01:29] <thumper> I'm halfway through your review
[01:29] <thumper> wallyworld: with you shortly, just want to finish this review
[01:29] <menn0> thumper: looks like it's currently only used in some tests
[01:30] <wallyworld> sure
[01:30]  * wallyworld is doing a review too
[01:38] <menn0> thumper: regarding version.Number over the API, that's how it's done elsewhere and it's fine. version.Number has custom JSON marshalling defined which turns it into a string.
[01:43] <thumper> menn0: if we are already doing it then fine
[01:44] <thumper> drop em
[01:46] <tych0> natefinch: hi, i am now
[01:47] <axw> wallyworld: can you please take a look at http://reviews.vapour.ws/r/5534/diff/3-4/, made some changes in response to mhilton's review
[01:48] <wallyworld> sure
[01:56] <natefinch> tych0: hey.  I'm having difficulty figuring out this bug: https://bugs.launchpad.net/juju-core/1.25/+bug/1610880
[01:56] <mup> Bug #1610880: Downloading container templates fails in manual environment <juju-core 1.25:Triaged by natefinch> <https://launchpad.net/bugs/1610880>
[01:57] <natefinch> tych0: the crux seems to be that lxc-create is trying a wget for the image ... and for some reason on a manual juju environment it fails and on a normal juju environment it works
[01:58] <natefinch> tych0: I know this is only tangentially related to anything you've worked on, but I was wondering if you had any ideas
[02:05] <thumper> why is the lxd provider downloading a new ubuntu-xenial image?
[02:05] <thumper> does it auto update?
[02:06] <natefinch> new as in - it already has one and it's getting another?
[02:08] <natefinch> and the answer is.. AFAIK, LXD maintains its own list of images, what it does with those is mysterious magic.  I don't *think* we tell it to auto-update, but I could be wrong.
[02:18] <thumper> logging added, ci test running again, time to get food
[02:29] <thumper> ugh
[02:29] <thumper> ffs
[02:29] <thumper> rerunning
[02:38] <natefinch> oh interesting.... so, in a manual environment, we're not adding the cloud-local address for the machine to the cert
[02:44] <menn0> wallyworld: I like the new bootstrap output. looks much better
[02:45] <menn0> wallyworld: shouldn't these 2 things be the other way around though:
[02:45] <menn0> Installing curl, cpu-checker, bridge-utils, cloud-utils, tmux
[02:45] <menn0> Running apt-get update
[03:06] <axw> pjdc: do you have an env that's currently exhibiting CPU/mem spikes as in https://bugs.launchpad.net/juju/+bug/1587644? I'm after a CPU profile in addition to what's been provided already
[03:06] <mup> Bug #1587644: jujud and mongo cpu/ram usage spike <canonical-bootstack> <canonical-is> <juju:Fix Released> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1587644>
[03:07] <pjdc> axw: not right now. but if you can tell us how to capture what you'd like captured, i'll add it to our ticket
[03:08] <axw> pjdc: you should just use the same command as for the heap profile, but use /debug/pprof/profile instead of /debug/pprof/heap
[03:08] <pjdc> axw: righto - will update the ticket
[03:08] <axw> pjdc: which is described here for 1.25: https://github.com/juju/juju/wiki/pprof-facility
[03:08] <axw> pjdc: thanks
[03:15] <pjdc> axw: just testing it here; does this look right? https://pastebin.canonical.com/164042/plain/
[03:18] <axw> pjdc: hrm, nope, that doesn't look right. should be much bigger. odd, it worked for me just now
[03:18] <axw> pjdc: looks like the right command line invocation though ...
[03:19] <pjdc> the single quote character seems pretty odd
[03:31] <wallyworld> menn0: sorry, was at lunch. you could be right, i'll have to check
[03:38] <anastasiamac> thumper: since u r in the manual area, ci also seems to have observed long bootstrap (>45min)... if u have any thoughts on this would be awesome to have them in this bug
[03:38] <anastasiamac> https://bugs.launchpad.net/juju/+bug/1617137
[03:38] <mup> Bug #1617137: Timing out fetching agent (xenial/s390x)  <bootstrap> <ci> <regression> <juju:Triaged> <https://launchpad.net/bugs/1617137>
[03:42] <wallyworld> anastasiamac: that's not related, that's due to closed network
[03:42] <wallyworld> curtis needs to reimport the images
[03:42] <anastasiamac> wallyworld: if u can comment in the bug, m sure QA and veebers will appreciate
[03:43] <wallyworld> anastasiamac: curtis already knows - he and i discussed
[03:43] <wallyworld> he has more detail than i do
[03:43] <wallyworld> as to exactly what needs to be done
[03:43] <wallyworld> as he did it originally
[03:43] <veebers> wallyworld, anastasiamac ah ok, seems I misfiled that then. I'll get that fixed
[03:43] <anastasiamac> wallyworld: sure, veebers filed the bug and this info is useful for anyone who was not in on ur discussion :D
[03:43] <anastasiamac> awesome! tyvm :D
[03:44] <wallyworld> but i don't have all the detail
[03:44] <wallyworld> i only know generalities, i'd rather get the person who knows to comment
[03:45] <wallyworld> sounds like the QA folks need to talk to each other more :-)
[03:45] <veebers> anastasiamac, wallyworld: If we can keep the bug for now (maybe not marked critical) I've updated the rule/issue to indicate it's a ci infra issue and we'll get those in the know to comment/remove/etc. the bug
[03:45] <wallyworld> no worries
[03:45] <wallyworld> having a bug means it can be tracked
[03:45] <anastasiamac> veebers: feel free to adjust priority on it :D
[03:49] <veebers> anastasiamac: sweet, have done. also added affects ci with a (vague) comment about network
[03:49] <anastasiamac> veebers: wallyworld \o/
[03:51] <wallyworld> the bug should be retargeted to CI, as curtis confirmed it's not juju
[03:53] <veebers> wallyworld: it has been
[03:53] <wallyworld> awesome, tyvm
[03:53] <veebers> wallyworld: oh no wait, I didn't remove juju, just added juju-ci
[03:54] <veebers> perhaps I should have lied just now and said that I had removed juju too :-)
[03:54] <wallyworld> we can add juju back if needed, but for now the best info is that it's a CI issue - "someone" needs to import LXD images
[03:54] <menn0> veebers: with the changes landed today, there are a bunch more prechecks in place
[03:54] <veebers> wallyworld: removed now
[03:54] <wallyworld> so that LXD uses those imported instead of calling out to cloud-images
[03:54] <wallyworld> \o/
[03:54] <veebers> menn0: oh neat :-) Does it break any assumptions the test makes currently?
[03:55] <menn0> veebers: the main one you'll be interested in is that it isn't possible to upgrade if the target controller tools version is less than the model tools version
[03:55] <menn0> veebers: no it shouldn't break the existing tests
[03:56] <menn0> veebers: the tests to ensure the source and target aren't controller are done too (mostly landed)
[03:56] <menn0> veebers: i'm implementing the prechecks to ensure that the source controller, model and target controller machines are healthy now
[03:58] <veebers> menn0: nice. Might look at getting a test for between versions going next week
[03:58] <menn0> veebers: sounds good.
[03:58] <menn0> veebers: just keeping you informed :)
[03:59] <veebers> menn0: :-)
[04:16] <natefinch> wallyworld: is it me, or does this seem dangerous? https://github.com/juju/juju/blob/master/worker/certupdater/certupdater.go#L119  We're updating the saved addresses before knowing if we've actually successfully updated the cert.
[04:17] <wallyworld> yeah, seems suboptimal
[04:18] <natefinch> wallyworld: I'm looking at this bug about lxc containers on manual provider, and it seems like we're not adding the cloud-local address to the cert for some reason
[04:19] <wallyworld> we use c.addresses to short circuit any future updates, i guess if things fail, that means we'll never process those addresses again
[04:19] <natefinch> wallyworld: I don't think that's the actual problem
[04:19] <natefinch> wallyworld: it just looks suspicious
[04:20] <wallyworld> i can't recall enough about the code to know why cloud local addresses are not arriving at the cert updater
[04:20] <wallyworld> they must be filtered out upstream somewhere
[04:20] <wallyworld> the machine address setting code is a bit gnarly
[04:21] <natefinch> wallyworld: yeah... we call state.APIHostPorts() when the cert updater starts, and add any local-cloud addresses... on manual, it gets no addresses, on gce, it gets the correct local-cloud address
[04:21] <wallyworld> oh, manual
[04:21] <wallyworld> we won't get any
[04:22] <natefinch> why not?
[04:22] <wallyworld> the cloud local addresses come from the instance poller from memory
[04:22] <wallyworld> and there's no such thing as an instance poller for manual IIANM
[04:22] <natefinch> ug
[04:22] <wallyworld> this is a bit hand wavy
[04:23] <wallyworld> i could be wrong
[04:23] <wallyworld> but generally the instance poller is a major source of our knowledge of machine addresses
[04:23] <natefinch> you're at least slightly wrong... I see juju.state address.go:137 setting API hostPorts: [[104.196.3.75:17070 10.142.0.2:17070 127.0.0.1:17070 [::1]:17070]] on manual
[04:23] <wallyworld> with manual, we will get machine local addresses
[04:24] <wallyworld> right, but it depends on how those are classified
[04:24] <natefinch> that 10.142 address is what lxc-create is trying to wget the image from
[04:24] <wallyworld> by our address heuristics
[04:24] <natefinch> hmm
[04:24] <wallyworld> we label addresses as machine local, cloud local, public etc
[04:25] <wallyworld> what is the machine for which those host ports are being set above?
[04:25] <wallyworld> a controller or a worker machine?
[04:25] <natefinch> controller
[04:25] <wallyworld> anyways, 127.0.0.1 looks wrong
[04:26] <wallyworld> cause if that is handed out as a controller address, it can't work
[04:26] <natefinch> works from that machine ;)
[04:26] <wallyworld> might not be the same issue as you're seeing, but looks wrong to me
[04:27] <natefinch> it gets set that way on my gce controller as well, so probably not the problem
[04:27] <wallyworld> so from memory, set addresses happens and we filter somehow and then hand out to cert updater, but i can't really recall exactly
[04:28] <wallyworld> maybe the filtering takes care of localhost
[05:33] <wallyworld> axw: i've updated http://reviews.vapour.ws/r/5533/ if you could PTAL, a bit of code was moved around
[05:34] <axw> okey dokey
[05:34] <wallyworld> axw: added a fix and a test for the transitive permission etc
[05:38] <thumper> well, with menn0's help, debugging is progressing
[05:38] <thumper> it is in the proxy updater where it is trying to set the lxd proxies
[05:38] <thumper> it is just blocking forever
[05:38] <thumper> holding workers up
[05:38] <thumper> anyway
[05:39] <thumper> more debugging for monday
[05:39]  * thumper is done for now
[05:39] <thumper> laters folks
[05:43] <axw> wallyworld: found an existing bug, but otherwise LGTM
[05:44] <wallyworld> ta, looking
[05:46] <wallyworld> axw: yeah, that was existing behaviour before I started this PR. it does seem wrong doesn't it
[05:46] <wallyworld> actually
[05:47] <wallyworld> it is as per the all users case above, but it makes more sense there
[05:47] <wallyworld> i'll change to errperm
[05:59] <wallyworld> axw: ah, the existing client code will bail out with an error if *any* of the requested users results in an error
[05:59] <axw> wallyworld: :/
[05:59] <wallyworld> so that's why the server side was skipping over unpermitted users
[06:00] <axw> wallyworld: we should do that filtering on the client, not on the server
[06:00] <wallyworld> agreed
[06:01] <wallyworld> axw: for now, how about i just modify the api caller to skip err perms
[06:01] <wallyworld> but error for other things
[06:01] <axw> wallyworld: sounds OK I guess
[06:01] <wallyworld> or the other things is we could print the errors for tabular output
[06:02] <wallyworld> as well as the users it can find
[06:02] <wallyworld> we don't have such a good pattern for this
[06:05] <wallyworld> bah, too much churn, i'll add a todo
[06:09] <wallyworld> and the CLI only passes one at a time anyway
[08:47] <axw_> mhilton: thanks for the review
[09:10] <mhilton> axw_: np
[12:11] <tych0> natefinch: replied on the bug
[12:11] <tych0> but it looks like wget is refusing to download something because the certs don't match?
[12:54] <lazyPower> interesting new format of juju debug-log in beta16, it appears it rolled over to json formatting by default?
[13:20] <babbageclunk> mgz: around?
[13:20] <mgz> babbageclunk: yo
[13:21] <babbageclunk> Trying to cleanup instances from running CI tests with --keep-env
[13:22] <babbageclunk> but I can't find my AWS credentials - I may have lost them when my machine died.
[13:22] <mgz> so, what I do is JUJU_DATA=~/cloud-city/jes-homes/gz-test-env-name $GOPATH/bin/juju destroy-controller
[13:23] <babbageclunk> oh, actually, I have the user name and password, just don't have the URL for the canonical console
[13:23] <mgz> depends if you have the env on disk still? you don't need seperate creds unless you wiped the env from disk
[13:23] <mgz> which creds were you using to test?
[13:23] <babbageclunk> I don't think that'll work - the api isn't up because the restore failed.
[13:23] <mgz> the shared dev ones, or CI's?
[13:24] <babbageclunk> Hmm - I guess the CI's ones.
[13:24] <mgz> babbageclunk: then fall back to kill-controller - which again should get the aws details out of JUJU_DATA
[13:24] <babbageclunk> opk
[13:24] <babbageclunk> ok
[13:24] <mgz> try that, yell if you have problems
[13:24] <mgz> the aws console details for ci are in the consoles.txt file
[13:25] <mgz> be careful poking around there, if you do use
[13:25] <babbageclunk> ok, thanks mgz
[13:50] <ram___> andrey-mp: Hi. w.r.t your information yesterday, I created "cinder-storagedriver" charm which will pass config data to relation to modify the configuration. I have taken your scaleio-openstack as reference. How do you certify scaleio-openstack charm? Please provide me the juju charm certification requirements.
[14:01] <katco> natefinch: standup time
[14:02] <natefinch> katco: oops, thanks
[14:23] <admcleod> looks like juju has overwritten authorized_keys rather than appending to it, is that normal? (1.25)
[14:24] <mgz> admcleod: yes
[14:24] <admcleod> mgz: oh.
[14:25] <admcleod> mgz: thanks
[14:25] <mgz> admcleod: juju expects to manage the file, so it has list/add/delete/import
[14:25] <mgz> to manage
[14:28] <admcleod> right fair enough
[14:29] <mgz> that is a surprise if you've edited it manually on a machine yourself perhaps :)
[14:30] <mgz> babbageclunk: yell if you have any questions about my ssh key branch
[14:42] <katco> frobware: dimitern: thank you for all your hard work for ivoks
[14:43] <dimitern> katco: :) cheers
[14:57] <babbageclunk> mgz: LGTM!
[14:58] <mgz> babbageclunk: merci
[15:06] <katco> dimitern: lmk when you have that hotfix ready so i can pass it on
[15:08] <dimitern> katco: will do
[15:08] <katco> dimitern: ta
[15:19] <mattyw> is someone able to talk to me about the new upload-tools logic?
[15:20] <natefinch> mattyw: it just works, as long as jujud is in the same directory as the juju you're running and they have the exact same version number
[15:21] <mattyw> natefinch, that's the thing - for me it didn't just work
[15:21] <mattyw> natefinch, so I'm trying to work out why
[15:29] <natefinch> mattyw: is it uploading when it's not supposed to, or vice versa?
[15:30] <babbageclunk> perrito666: ping?
[15:31] <perrito666> babbageclunk: pong
[15:31] <dimitern> katco: I've sent this patch to ivoks - tested locally and seems to work, waiting on feedback later: http://paste.ubuntu.com/23093530/
[15:31] <babbageclunk> perrito666: I'm trying to understand backup and restore! :)
[15:31] <katco> dimitern: yay! ty! hope the feedback is good
[15:31] <babbageclunk> perrito666: various people said you were the one to talk to.
[15:31] <perrito666> babbageclunk: sadly I am
[15:32] <alexisb> dimitern, frobware you guys are rockstars thank you!
[15:33] <katco> perrito666: don't you wish you had a... backup?
[15:33] <katco> perrito666: it would... restore your sanity
[15:33]  * babbageclunk lols
[15:33]  * katco groans
[15:33] <perrito666> katco: you should do that for a living
[15:33] <dimitern> :) let's see if it'll be any good on site
[15:33] <alexisb> babbageclunk, perrito666 I actually wanted to talk to both of you about the bugs you're working on
[15:34] <alexisb> do you guys have time for a quick HO?
[15:34] <perrito666> alexisb: sure
[15:34] <babbageclunk> alexisb: sure!
[15:34] <alexisb> sorry got distracted by some other fires
[15:34] <alexisb> perrito666, babbageclunk https://hangouts.google.com/hangouts/_/canonical.com/a-team-standup
[15:44] <perrito666> it's a trap
[15:47] <perrito666> could anyone http://reviews.vapour.ws/r/5546/ ?
[15:47] <perrito666> it's a quick short one
[15:47] <natefinch> I can look
[15:48] <perrito666> natefinch: just added QA steps
[15:50] <natefinch> ahh crap, forgot about QA steps
[15:50] <natefinch> well that quintuples the amount of time I have to spend reviewing it :/
[16:02] <marcoceppi> why does the Juju client still hard code support for OS?
[16:07] <marcoceppi> https://bugs.launchpad.net/juju/+bug/1616531
[16:07] <mup> Bug #1616531: "panic: unknown OS for series" when running client on Fedora <juju:Triaged> <https://launchpad.net/bugs/1616531>
[16:29] <natefinch> lol fedora
[16:31] <perrito666> natefinch: oh, that is not nice
[16:31] <perrito666> fedora is a fine distro
[16:32] <natefinch> I am sure it is.
[16:32] <natefinch> also, why in the everloving hell does the client give a crap what OS it's running on?
[16:33] <perrito666> I thought we had fixed that bug ages ago
[16:33] <natefinch> no
[16:33] <natefinch> we talked about it in december and never did it
[16:34] <marcoceppi> :sadpanda:
[19:08] <alexisb> perrito666, is this still an issue: https://bugs.launchpad.net/juju/+bug/1530840
[19:08] <mup> Bug #1530840: juju status-history too noisy with update-status <landscape> <juju:In Progress by alexis-bruemmer> <https://launchpad.net/bugs/1530840>
[19:11] <perrito666> alexisb: priv
[19:59]  * redir lunches
[20:28] <redir> so quiet on fridays:)
[20:34] <natefinch> yep
[20:38] <natefinch> ok, I give up
[20:38] <natefinch> see you all next week
[23:31] <redir> I didn't miss anyone at the stand-up did I?
[23:32] <redir> I mean I just didn't join this week because there hasn't been anyone there since we rearranged
[23:44] <redir> ok
[23:44]  * redir goes EoW soon-ish
[23:57] <redir> see you next week juju