[00:04] <voidspace> thumper: nice work
[00:04] <voidspace> thumper: and I knew we'd hit subnets pretty soon
[00:04] <thumper> voidspace: it is coming along nicely
[00:04] <voidspace> thumper: that should be very straightforward to do
[00:05] <thumper> voidspace: I'm doing blockdevice, filesystem, and partition in gomaasapi
[00:05] <voidspace> thumper: anyway, goodnight - see you tomorrow if you're around
[00:05] <voidspace> thumper: cool, thanks
[00:05] <thumper> voidspace: night
[00:05] <voidspace> o/
[00:20] <wallyworld> axw_: does bug 1539684 ring any bells for you?
[00:20] <mup> Bug #1539684: storage-get unable to access previously attached devices <canonical-bootstack> <storage> <juju-core:Triaged> <https://launchpad.net/bugs/1539684>
[00:25] <sinzui> wallyworld: do you have a moment to review http://reviews.vapour.ws/r/4605/
[00:25] <wallyworld> i do
[00:26] <wallyworld> sinzui: this is a merge of next into master right?
[00:26] <sinzui> wallyworld: yes, sorry. I seem to have missed a whole sentence.
[00:26] <wallyworld> np, thought it was, just checking
[00:34] <wallyworld> sinzui: i've eyeballed the changes, looks ok
[00:35] <wallyworld> and i am landing
[00:36]  * menn0 is back (had a visitor)
[00:37] <menn0> wallyworld: this is what's in the logs of the controller hosts added after enable-ha:
[00:37] <menn0> 2016-04-14 10:22:55 INFO juju.mongo open.go:125 dialled mongo successfully on address "127.0.0.1:37017"
[00:37] <menn0> 2016-04-14 10:22:55 DEBUG juju.worker.dependency engine.go:479 "state" manifold worker stopped: cannot connect to mongodb: no reachable servers
[00:37] <menn0> 2016-04-14 10:22:55 ERROR juju.worker.dependency engine.go:526 "state" manifold worker returned unexpected error: cannot connect to mongodb: no reachable servers
[00:39] <wallyworld> menn0: yeah, mongo is great
[00:40] <menn0> wallyworld: we really need those mongodb logs
[00:40]  * menn0 updates the bug
[00:40] <wallyworld> yes
[00:42] <wallyworld> alexisb: you still working on bug 1506225 ?
[00:42] <mup> Bug #1506225: Failed bootstrap does not clean up failed environment w/o --force and error message is unhelpful <bootstrap> <destroy-environment> <jujuqa> <juju-core:In Progress by alexis-bruemmer> <https://launchpad.net/bugs/1506225>
[00:44] <alexisb> wallyworld, nope, I haven't looked at it in ages
[00:44] <wallyworld> ok, np, will update
[00:51] <mup> Bug # opened: 1570651, 1570654, 1570657, 1570660
[01:03] <thumper> once master has next merged
[01:03] <thumper> we need to remove the next branch
[01:29] <wallyworld> thumper: i am rebooting to try and fix my camera, will be a little late
[01:29] <thumper> wallyworld: ack
[01:38] <axw_> wallyworld: sorry didn't see message before. I think I've seen the storage bug in LP before, but haven't witnessed the bug first hand
[01:38] <wallyworld> np
[01:39] <wallyworld> axw_: i am just triaging bugs so was curious
[01:59] <wallyworld> axw: running late, otp, be a minute or 2
[01:59] <axw> wallyworld: sure, ping when ready
[02:02] <axw> sinzui: can the beta4 stabilisation bug be closed now?
[02:03] <sinzui> axw: soon, I need to bump the version. CI will reject all branches that claim to be 2.0-beta4
[02:03] <axw> sinzui: ah ok
[02:06] <sinzui> wallyworld: can you review http://reviews.vapour.ws/r/4606/
[02:06] <wallyworld> sure
[02:06] <wallyworld> sinzui: lgtm. i have been naughty and snuck in a landing prior to the stampede
[02:07] <sinzui> wallyworld: I better make sure CI is paused then because it will fail that revision
[02:07] <wallyworld> sinzui: sorry
[02:07] <wallyworld> i can abort
[02:07] <sinzui> wallyworld: no need
[02:08] <sinzui> wallyworld: Since CI makes the versions we release, like agents, it assumes something terrible has happened if it is asked to make and test a version it has already made and released.
[02:09] <wallyworld> ah
[02:09] <sinzui> wallyworld: CI needs to prevent someone tampering with the agents in streams
[02:09] <wallyworld> makes sense
[02:09] <wallyworld> i was being impatient
[02:09] <wallyworld> axw: ready now
[02:10] <sinzui> wallyworld: all is fine. CI is paused; when all is merged I will unpause and remove the block
[02:10] <wallyworld> tyvm
[02:20] <sinzui> wallyworld: and everyone, looks like I need to find some disk space to fix merges.
[02:20] <wallyworld> ah bollocks
[02:24] <sinzui> why is mongo on this host
[03:13] <mup> Bug #1568943 changed: Juju 2.0-beta4 stabilization <blocker> <juju-core:Fix Released> <https://launchpad.net/bugs/1568943>
[03:22] <mup> Bug #1541536 changed: Deployer and Quickstart failed setting annotations because of socket or json parsing <api> <blocker> <ci> <deployer> <quickstart> <regression> <juju-core:Invalid> <juju-core 1.25:Fix Released> <https://launchpad.net/bugs/1541536>
[03:22] <mup> Bug #1564791 changed: 2.0-beta3: LXD provider, jujud architecture mismatch <lxd> <juju-core:Fix Released by natefinch> <https://launchpad.net/bugs/1564791>
[03:34] <mup> Bug #1541536 opened: Deployer and Quickstart failed setting annotations because of socket or json parsing <api> <blocker> <ci> <deployer> <quickstart> <regression> <juju-core:Invalid> <juju-core 1.25:Fix Released> <https://launchpad.net/bugs/1541536>
[03:34] <mup> Bug #1564791 opened: 2.0-beta3: LXD provider, jujud architecture mismatch <lxd> <juju-core:Fix Released by natefinch> <https://launchpad.net/bugs/1564791>
[03:43] <mup> Bug #1541536 changed: Deployer and Quickstart failed setting annotations because of socket or json parsing <api> <blocker> <ci> <deployer> <quickstart> <regression> <juju-core:Invalid> <juju-core 1.25:Fix Released> <https://launchpad.net/bugs/1541536>
[03:43] <mup> Bug #1564791 changed: 2.0-beta3: LXD provider, jujud architecture mismatch <lxd> <juju-core:Fix Released by natefinch> <https://launchpad.net/bugs/1564791>
[04:16] <natefinch> wallyworld: got a few minutes to talk?
[04:17] <wallyworld> natefinch: sure, am outside, give me a sec to move
[04:17] <natefinch> wallyworld: np
[04:18] <wallyworld> natefinch:  https://plus.google.com/hangouts/_/canonical.com/tanzanite-stand
[06:04] <bradm> should I be able to bootstrap juju2 on ppc64el?
[06:05] <bradm> I get an error about "ERROR invalid constraint value: arch=ppc64el ; valid values are: [amd64 arm64]"
[06:08] <frobware> bradm: I have done so in the past when looking at various ppc64el bugs.
[06:10] <bradm> ah, I see my issue now that I say that
[06:11] <bradm> my images didn't sync right for some reason
[06:12] <bradm> yup, booting fine after kicking the sync a bit harder
[07:23] <axw> wallyworld: would you please take a look at the last diff here, http://reviews.vapour.ws/r/4533/diff/5-6/
[07:23] <wallyworld> sure
[07:23] <axw> wallyworld: the rest has been reviewed, the last rev is to remove models immediately
[07:23] <wallyworld> awesome
[07:32] <wallyworld> axw: a couple of questions
[07:34] <axw> wallyworld: thanks
[07:40] <axw> wallyworld: I'll add some more tests around the refcount, please see my reply to your first issue
[07:40] <wallyworld> sure
[07:41] <wallyworld> axw: ah sorry, i misread the command
[07:45] <axw> wallyworld: the increment for controller model would have always silently failed, because it was being run before the document is created
[07:45] <axw> without txn.DocExists
[07:45] <axw> woohoo
[07:45] <wallyworld> win!
[07:45] <wallyworld> luckily we are adding a test :-)
[07:46] <wallyworld> naughty thumper :-)
[08:45] <voidspace> thumper-afk:  frobware: dimitern: a really difficult one to start the morning off http://reviews.vapour.ws/r/4611/
[08:47] <frobware> voidspace: shipit
[08:47] <voidspace> frobware: thanks :-)
[08:47] <dimitern> voidspace: +1
[08:47] <voidspace> dimitern: o/
[08:48] <dimitern> frobware: it turned out surprisingly difficult to get rid of AC only in MAAS
[08:49] <dimitern> it would've been a lot easier to get rid of all bits of AC
[08:49] <frobware> dimitern: patience... :)
[08:50] <dimitern> frobware: yeah, but still - I'm not quite happy with what I came up with eventually
[08:51] <voidspace> dimitern: not just removing the branches where we check for the flag?
[08:52] <dimitern> voidspace: removing the code in MAAS was easy, but convincing the provisioner/brokers to still work with and w/o AC on AWS, while not breaking MAAS at the same time, has proven to be nasty
[08:53] <voidspace> dimitern: ah
[08:54] <voidspace> dimitern: so sometimes use the flag but sometimes not
[08:54] <voidspace> I see
[08:54] <dimitern> the price we pay for dirty PoC-style hacks ..
[08:55] <dimitern> voidspace: we do, but a lot of places in the code assume that having the feature flag on is sufficient to use the legacy AC approach (iptables, allocateaddress, etc.)
[08:55] <voidspace> dimitern: oh yes, I was just understanding - that does sound much harder than "just remove some code"
[08:55] <dimitern> whereas now just the flag is not sufficient, as we also need to check if SupportsAddressAllocation returns true or NotSupported
[08:57] <voidspace> frobware: have you been to Vision Express?
[08:57] <frobware> not yet
[08:57] <voidspace> ok
[08:57] <frobware> voidspace: going to go around lunchtime
[08:57] <voidspace> frobware: ah, cool
[08:58] <voidspace> frobware: couldn't bear to miss our daily banter :-)
[08:58] <frobware> voidspace: makes your eyes bleed :-D
[08:58] <dimitern> frobware: but on the upside, I figured out how to untangle the contention around discoverspaces still going on when trying to add a container to the bootstrap node
[09:03] <thumper-afk> babbageclunk, voidspace: meeting...
[09:03] <frobware> voidspace: standup
[09:03] <voidspace> kk
[09:30] <frobware> dimitern, voidspace, dooferlad, babbageclunk: PTAL @ http://reviews.vapour.ws/r/4609/
[09:30] <voidspace> MAAS2 bootstrap and deploy seems to work with my branch!!!
[09:31] <mup> Bug #1570759 opened: apt-get install juju does not install /usr/bin/juju <juju-core:New> <https://launchpad.net/bugs/1570759>
[09:34] <frobware> voidspace: congrats all round !!
[09:56] <dimitern> frobware: you've got a review
[10:35] <babbageclunk> dimitern: Do you think I can put a $$merge$$ on that gomaasapi PR?
[10:35] <babbageclunk> https://github.com/juju/gomaasapi/pull/40
[10:35] <babbageclunk> dimitern: I can see the extra bit that needs adding in controller.AllocateMachine after that.
[10:36] <dimitern> babbageclunk: let me have a look
[10:36] <babbageclunk> dimitern: you already did! :)
[10:37] <dimitern> babbageclunk: ah, yes - I'm ok with landing this - esp. if the storage tests pass in maas?
[10:37] <babbageclunk> dimitern: ?
[10:38] <dimitern> babbageclunk: make check passed I presume?
[10:38] <babbageclunk> dimitern: Ok - I'll check it out and run them
[10:38] <dimitern> babbageclunk: cheers!
[10:40] <dimitern> babbageclunk: :) I'm not trying to be difficult, but folks familiar with storage are not around, so hopefully it has good tests that can show possible regressions
[10:40] <babbageclunk> dimitern: No, that makes sense
[10:43] <babbageclunk> dimitern: I think its tests are mostly in the canned JSON mould, so there's not anything that would really show regressions, unfortunately.
[10:44] <dimitern> babbageclunk: I see, well - we'll take what we can get
[10:46] <babbageclunk> dimitern: :)
[10:46] <mup> Bug #1570791 opened: ERROR wait: no child processes with juju run on ppc64el <juju-core:New> <https://launchpad.net/bugs/1570791>
[10:47] <babbageclunk> dimitern: I think we're really the only clients of this part of the api, so if it's wrong it only hurts us anyway.
[10:52] <dimitern> babbageclunk: yeah, the feature flag gives at least some peace of mind to fix stuff
[11:04] <frobware> dimitern: thanks. got sidetracked by GCE, but can now bootstrap there too.
[11:04] <mup> Bug #1570796 opened: container startup issue when juju network management disabled <juju-core:New> <https://launchpad.net/bugs/1570796>
[11:06] <dimitern> frobware: nice!
[11:10] <babbageclunk> voidspace, dimitern: trying to work out what the storage stuff will look like in the constraint map JSON coming back from the MAAS api.
[11:10] <babbageclunk> Is there any way to specify storage constraints from the CLI?
[11:11] <dimitern> babbageclunk: yeah, sure - although I haven't done it with MAAS 2.0 CLI
[11:13] <dimitern> babbageclunk: in 1.0 I'd use `maas 19-root nodes acquire storage='...' dry_run=True verbose=True`
[11:13] <dimitern> or something like that
[11:13] <babbageclunk> dimitern: Ooh, dry_run is handy!
[11:13] <dimitern> as for the format of the storage argument, have a look at how's it constructed in maas/constraints.go
[11:14] <babbageclunk> dimitern: ok, will do - thanks
[11:33] <babbageclunk> dimitern: I can get it to refuse to give me machines if I ask for too much storage, but it doesn't seem to matter what I put for networks (or not_networks), it always allocates me the node.
[11:33] <babbageclunk> dimitern: Am I specifying this right?
[11:33] <babbageclunk> networks=ip:192.168.200.5
[11:35] <babbageclunk> dimitern: argh - looks like I'm being bitten by docs not being up to date again - I think this is interfaces now.
[11:38] <babbageclunk> dimitern: ok, if I specify interfaces=default:space=0 that works.
[11:38] <babbageclunk> dimitern: I don't really understand what that means, but it seems to do something.
[11:49] <dimitern> babbageclunk: 'interfaces' takes a list of items, separated by ; - each item can have a "<label>:" and a list of key-value comma-separated attributes, like space=0 (0 is the ID; but the name should work as well there)
[11:51] <dimitern> babbageclunk: so 'interfaces=default:space=0;admin:space=admin-api' means pick a machine with 2 or more NICs, which have addresses from the subnets in 'space-0' and 'admin-api'
[11:52] <dimitern> babbageclunk: the not_networks on the other hand apply to the machine as a whole - i.e. not_networks=cidr:10.20.30.0/24 will mean none of the NICs on the machine have access to the subnet with cidr 10.20.30.0/24
[11:52]  * dimitern needs to step out for ~30m
[11:54] <babbageclunk> dimitern: thanks, that helps a lot!
[11:56] <babbageclunk> dimitern: And the labels in the interfaces specifications - what do they do?
[11:57] <babbageclunk> dimitern: Oh - do those become the labels in the constraints_by_type map that come back? So you can tie the ids back to the constraints you specified?
[11:58] <babbageclunk> dimitern: (I know you're not there, just asking while it occurs to me.)
[11:59] <dimitern> babbageclunk: labels are user-defined; in juju we use binding names as labels (i.e. when you do `juju deploy mysql --bind 'server=db-space cluster=internal-api'`, juju will construct 'interfaces=server:space=42;cluster:space=62', assuming 42 and 62 are the maas provider ids of the 'db-space' and 'internal-api' spaces)
[11:59] <dimitern> babbageclunk: I was almost out, but couldn't resist :)
[11:59] <babbageclunk> dimitern: gotcha.
[12:00] <dimitern> babbageclunk: anyway - bbs; we could get on a HO later if you want
[12:01] <babbageclunk> dimitern: o/
[12:35] <mup> Bug #1466514 opened: apiserver has a race in the using of the port number <ci> <intermittent-failure> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1466514>
[12:48]  * dimitern is back
[12:59] <perrito666> frobware: I just privmsgd you
[13:00] <perrito666> and by frobware I meant fwereade
[13:17] <mup> Bug #1547741 changed: Cannot build on armhf with go1.2 <2.0-count> <armhf> <packaging> <juju-core:Fix Released by gz> <https://launchpad.net/bugs/1547741>
[13:35] <wallyworld> frankban: any chance of a small review to fix an issue with local charm series detection https://github.com/juju/bundlechanges/pull/21
[13:43] <abentley> sinzui: I've got the updated public-clouds.yaml up, but I'm thinking I need to delete clouds.yaml before I'm done done.
[13:44] <sinzui> abentley: agreed
[13:46] <frankban> wallyworld: lgtm
[13:46] <wallyworld> frankban: tyvm
[13:47] <mup> Bug #1567594 changed: upgrade-gui command isn't in the juju tab complete <help> <ui> <juju-core:Invalid> <https://launchpad.net/bugs/1567594>
[13:47] <mup> Bug #1567938 changed: juju bootstrap requires network ID as config option on command line although it's specified in clouds.yaml <config> <juju-core:Invalid> <https://launchpad.net/bugs/1567938>
[14:05] <mup> Bug #1570883 opened: imageSuite.TestEnsureImageExistsCallbackIncludesSourceURL fails on centos go 1.6 <centos> <go1.6> <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1570883>
[14:06] <babbageclunk> dimitern, voidspace: Can you please look at this? https://github.com/juju/gomaasapi/pull/41
[14:07] <dimitern> babbageclunk: sure, looking
[14:08] <babbageclunk> dimitern: thanks! Was pretty easy to implement by just following what thumper had already done, but took a while for me to understand the constraints and response structure first.
[14:19] <abentley> sinzui, mgz_: I've got my keystone3 test passing using a fake juju client, but I think I'm at the stage where I need a real maas to test with.  Would it make sense to use parallel-maas17?
[14:20] <dooferlad> frobware: I think you are in the best place to review http://reviews.vapour.ws/r/4613/
[14:20] <sinzui> abentley: no maas 1.7 is unsupported and juju2 doesn't know 1.7 or 1.8
[14:21] <abentley> sinzui: Should I wait for munna to be idle, then?
[14:21] <sinzui> abentley: I have run small concurrent loads
[14:22] <abentley> sinzui: I don't think the openstack bundle counts as a small load, does it?
[14:23] <frobware> dooferlad: looking
[14:23] <mgz_> abentley: munna would be best, but can you use 1.9 on finfolk as well?
[14:23] <abentley> mgz_: I'm not picky.
[14:25] <abentley> mgz_: Here's what I'm doing: https://pastebin.canonical.com/154458/
[14:25] <dimitern> babbageclunk: LGTM
[14:25] <babbageclunk> dimitern: sweet
[14:25] <abentley> (Yes, I know I'm using the 2.0 endpoint)
[14:26] <mgz_> abentley: that all looks reasonable
[14:27] <abentley> mgz_: Okay, I'll try it on finfolk.
[14:27] <sinzui> abentley: 1. no, and liberty has never deployed on our vmaas
[14:27] <abentley> sinzui: Sorry, I don't understand.
[14:29] <sinzui> abentley: some OS bundles don't deploy on our vmaas 1.9. You will learn if yours does, and if it does, we gain new OS testing
[14:29] <abentley> sinzui: Okay.
[14:30] <perrito666> uh, if you have a bundle that deploys in vmaas I would like to have it too
[14:30]  * fwereade just had an interesting new life experience: rescuing a cat from a tree. only one scratch drew blood.
[14:31] <rogpeppe3> fwereade: :)
[14:31] <rogpeppe> fwereade: you're now one step on the road to becoming a burly fireman
[14:32] <sinzui> perrito666: I will pass it on when we have one
[14:32] <perrito666> fwereade: hint for the future, if the cat is mature, just tip it and it will fall ok :p
[14:33] <rogpeppe> fwereade: random question: i just did: "juju bootstrap --upload-tools ec2 aws"; go install github.com/juju/juju/...; juju upgrade-juju --upload-tools; should that have worked OK or is it something you shouldn't do?
[14:33] <sinzui> perrito666: We have landscape and wikimedia bundles that work with native deploy on maas 1.9
[14:34] <rogpeppe> fwereade: 'cos i tried to deploy a unit and it's failing because it can't fetch the tools ("no matching tools available")
[14:35] <rogpeppe> anyone else know about --upload-tools vs upgrade-juju ?
[14:36] <natefinch> rogpeppe: that should be fine. In fact, I think that's the only way you can upgrade after doing the initial upload-tools on bootstrap
[14:36] <rogpeppe> natefinch: ok, well then it looks like a bug
[14:38] <rogpeppe> natefinch: FWIW upgrade-juju appeared to work (zero exit status) but it did print something about "available-tools...\nbest version...\n" which confused me a bit
[14:39] <frobware> dooferlad: ignore my github comments, doing it in RB
[14:39] <dooferlad> frobware: ack
[14:39] <natefinch> rogpeppe: I haven't upgraded a juju environment.... uh, possibly ever.  So, no idea what it's supposed to actually look like.
[14:39] <rogpeppe> natefinch: i used to do it all the time - very useful if you don't wanna wait for ec2 instances to start up again
[14:39] <rogpeppe> natefinch: pity it seems to be broken now
[14:40] <abentley> mgz_: Do we have a config for maas19 on finfolk?
[14:40] <sinzui> alexisb: bug 1570035 is causing master failures. We need an engineer to fix the test or juju.
[14:40] <mup> Bug #1570035: Race in api/watcher/watcher.go <ci> <race-condition> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1570035>
[14:41] <natefinch> rogpeppe: well, it really should work, so we should fix that
[14:46] <ericsnow> rogpeppe: don't forget that you must use --upload-tools to upgrade the admin model and then separately upgrade any other models, explicitly providing the version of your newly upgraded tools
[14:46] <katco> ericsnow: rogpeppe: you can no longer upgrade anything but the admin model with --upload-tools
[14:47] <rogpeppe> katco: so if i use upgrade-juju on the admin model, it breaks all my models?
[14:48] <ericsnow> rogpeppe: you can upgrade the others separately but not using --upload-tools
[14:48] <katco> rogpeppe: it shouldn't? why do you think that's what would happen?
[14:48] <rogpeppe> katco: that's what happened for me
[14:48] <katco> rogpeppe: how did they break?
[14:49] <rogpeppe> katco: i did upgrade-juju, then deployed a unit in the default model and it failed 'cos it couldn't get tools
[14:49] <rogpeppe> katco: am just writing up a bug report now
[14:49] <katco> rogpeppe: kk
[14:49] <fwereade> rogpeppe, yeah, I thought that should work
[14:51] <rogpeppe> katco: tbh i didn't know that different models in the same controller *could* have different sets of tools. seems like it might be easy to introduce problems there.
[14:52] <katco> rogpeppe: well if you're using --upload-tools, you're off a production path anyway. all bets are off
[14:53] <rogpeppe> katco: well, even if you're not
[14:53] <katco> rogpeppe: i think different models having different agents satisfies the dev, test, prod use-case
[14:55] <alexisb> sinzui, ack
[14:56] <rogpeppe> katco, fwereade, ericsnow, natefinch: https://bugs.launchpad.net/juju-core/+bug/1570917
[14:57] <mup> Bug #1570917: upgrade-juju: success but then deploy fails <juju-core:New> <https://launchpad.net/bugs/1570917>
[14:57] <katco> rogpeppe: what's interesting is that every shop i've been a part of would test the entire installation of juju -- controller and all -- in a dev, test, prod
[14:57] <katco> rogpeppe: ta for the bug report
[14:57] <rogpeppe> katco: yes, because the controller is an integral part of the behaviour of the system
[14:58] <katco> rogpeppe: yep
[14:58] <rogpeppe> katco: you can't really usefully test a dev version without testing the server part too
[14:59] <katco> rogpeppe: i suppose in a multi-user world, you could look at the controller as a separately managed thing
[14:59] <katco> rogpeppe: to draw an analogy, it is like the OS upon which your app (another model) runs
[14:59] <katco> rogpeppe: ops people would test the controller in a separate env. app-devs would use dev and test models to test their workloads
[15:00] <abentley> mgz_, sinzui: http://10.0.30.100/MAAS/ is giving me "Service unavailable" even after a reboot.  I think it's forgotten that it was our parallel-maas19
[15:00] <katco> rogpeppe: app-devs might not have access to nor care about the admin model
[15:00] <katco> rogpeppe: uncharted waters here i suppose
[15:00] <rogpeppe> katco: it depends what reasons one might have for using different juju versions
[15:01] <katco> rogpeppe: btw upgrade-juju has a -m flag; no need to switch first
[15:01] <rogpeppe> katco: ah
[15:01] <abentley> rogpeppe: If you're interested in an alternative to --upload-tools that uses actual simplestreams, I'd be happy to work with you.  I've got lots of experience with simplestreams.
[15:02] <rogpeppe> abentley: tbh this is all overhead on top of what i'm trying to do atm
[15:02] <rogpeppe> abentley: but i'd be interested to see your alternative
[15:03] <katco> ericsnow: standup time
[15:04] <abentley> rogpeppe: What I have right now is code that generates simplestreams json that can then be formatted as actual simplestreams metadata.  lp:juju-release-tools make_agent_json.py
[15:05] <abentley> rogpeppe: I think it would be good to make this as convenient as --upload-tools, so that devs don't need to use --upload-tools any longer.
[15:05] <rogpeppe> abentley: if it was, i'd use it
[15:05] <rogpeppe> abentley: if only simplestreams was simple :)
[15:06] <icey> how do we configure apt-http-proxy in juju2 now by default for a specific cloud?
[15:07] <abentley> rogpeppe: If you're up for working with me, I'm happy to try.  I need dev input to get it to the point where it does what everyone needs.
[15:07] <babbageclunk> frobware, dooferlad, dimitern: spaces sync? Am I in the wrong place, or is it not happening?
[15:08] <rogpeppe> abentley: it's all a bit fraught around here currently, but i'll let you know if i get a few minutes free :)
[15:08] <dooferlad> babbageclunk: I don't remember that meeting happening recently...
[15:08] <babbageclunk> I mean, I just see meeting notifications and click on them, I don't know nothing.
[15:09] <fwereade> sinzui, alexisb, I have to go out in a sec so I can't test in detail but I'm reasonably confident that http://paste.ubuntu.com/15850768/ will fix lp:1570035
[15:09] <alexisb> fwereade, you still around?
[15:09] <alexisb> lol
[15:09] <fwereade> alexisb, ha :)
[15:09] <alexisb> fwereade, you read my mind
[15:09] <alexisb> I leave you to it
[15:09] <alexisb> thanks!
[15:09] <fwereade> that was serendipitous :)
[15:09] <alexisb> I will assign the bug to you
[15:10] <fwereade> alexisb, sinzui: yeah, I think that's crack actually, I misread something
[15:10] <fwereade> alexisb, sinzui: will poke at it while I can and update with what I know
[15:11] <alexisb> fwereade, if you can take it, great; if you need to hand it off please unassign yourself and send me a note
[15:11] <fwereade> alexisb, will do
[15:11] <voidspace> alexisb: I can bootstrap and deploy with MAAS2!
[15:12] <voidspace> alexisb: actually, to be fair I *could*, I've completed the implementation of Subnets but not yet tested, and I'm about to try a manual check that I can *still* bootstrap and deploy...
[15:12] <alexisb> SWEET!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
[15:13] <voidspace> alexisb: if it works I'll point you at the branch
[15:13] <alexisb> voidspace, awesome thank you
[15:23] <mup> Bug #1067213 changed: race: concurrent deployments corrupt secret token <deploy> <race-condition> <juju-core:Fix Released> <https://launchpad.net/bugs/1067213>
[15:23] <mup> Bug #1086236 changed: environs/ec2: concurrent deployments fail creating the s3 bucket. <ec2-provider> <race-condition> <test-needed> <juju-core:Fix Released> <https://launchpad.net/bugs/1086236>
[15:23] <mup> Bug #1570917 opened: upgrade-juju: success but then deploy fails <juju-core:New> <https://launchpad.net/bugs/1570917>
[15:43] <fwereade> alexisb, I have to stop; proposed a quick solution in the bug
[15:43] <fwereade> and unassigned myself
[15:44] <alexisb> fwereade, awesome thank you
[15:44] <alexisb> katco, ^^^ given this is a blocker it is high priority for the team
[15:47] <alexisb> well with katco out :)
[15:47] <alexisb> natefinch, perrito666, ericsnow what are you guys up to atm??
[15:48] <natefinch> alexisb: finishing up a bugfix.. should be proposing soon
[15:48] <ericsnow> alexisb: knee deep in 2 bugs
[15:48] <alexisb> natefinch, can you pick up the bug fwereade was working on
[15:48] <alexisb> lp:1570035
[15:48] <alexisb> once you are done
[15:48] <natefinch> sure, I can pick up #1570035
[15:48] <mup> Bug #1570035: Race in api/watcher/watcher.go <ci> <race-condition> <regression> <test-failure> <juju-core:In Progress> <https://launchpad.net/bugs/1570035>
[15:48] <alexisb> awesome, thank you!
[15:50] <lazyPower> http://paste.ubuntu.com/15851796/
[15:50] <lazyPower> Has anyone seen a model controller refuse to serve a charm due to error code 400?
[15:50] <lazyPower> if it were 401 i'd reasonably think it were auth based, (charms are everyone read), 404 not found, but 400? wat?
[15:53] <natefinch> lazyPower: weird
[15:53] <lazyPower> natefinch - indeed. i have an active env if thats helpful
[15:53] <alexisb> lazyPower, not one i have seen
[15:53] <alexisb> lazyPower, natefinch is busy
[15:54] <lazyPower> ack
[15:54] <alexisb> lazyPower, a bug is a good place to start
[15:54] <alexisb> as always :)
[16:12]  * alexisb changes location
[16:22] <frobware> dooferlad: ping
[16:22] <frobware> dooferlad: I just tried your latest change live against a node: http://pastebin.ubuntu.com/15852945/
[16:23] <frobware> dooferlad: it could have been there on your first PR -- I didn't try live, it was just a review
[16:32] <mup> Bug #1570963 opened: Model Controller refuses deployment with error code 400 <juju-core:New> <https://launchpad.net/bugs/1570963>
[16:40] <perrito666> alexisb: delayed answer, having lunch
[16:49] <frobware> dooferlad: fyi - https://bugs.launchpad.net/juju-core/+bug/1564397/comments/1
[16:49] <mup> Bug #1564397: MAAS provider bridge script deletes /etc/network/if-up.d/ntpdate during bootstrap <bootstrap> <network> <juju-core:Triaged> <https://launchpad.net/bugs/1564397>
[17:50] <mup> Bug #1570994 opened: deploy fails to download updated local charm <juju-core:New> <https://launchpad.net/bugs/1570994>
[17:54] <bogdanteleaga> is there something out there for creating aliases between commands? I want command A = command B + some args
[17:54] <natefinch> bogdanteleaga: you mean like command line aliases, or something else?
[17:55] <bogdanteleaga> natefinch, yes
[17:56] <bogdanteleaga> for example: "juju show-run-status" = "juju show-action-status --name juju-run"
[17:56] <natefinch> bogdanteleaga: linux or windows?
[17:56] <bogdanteleaga> natefinch, I don't think it matters
[17:56] <natefinch> bogdanteleaga: it does if I tell you to put something in your bashrc :)
[17:57] <bogdanteleaga> natefinch, hehe, there's PS aliases too
[17:57] <bogdanteleaga> natefinch, I guess that's a solution, but I thought having it built in would be nicer
[17:57] <natefinch> bogdanteleaga: actually... thumper was doing something with aliases
[17:58] <bdx> what's going on everyone? I'm currently having issues when trying to add ssh keys to my environment using `juju add-ssh-keys` command - using juju2 beta4 ... any insight or recommendations here? thx
[17:59] <natefinch> bogdanteleaga: I don't see anything in juju help about aliases though, so maybe it never made it in
[18:00] <bdx> when I enter 'juju import-ssh-keys `cat id_rsa.pub` --model lxd-share', the command completes successfully, but following the command I don't see any new ssh keys in my environment ....
[18:00] <bogdanteleaga> natefinch, yeah there's other kinds of aliases, but not ones that support args
[18:00] <bogdanteleaga> natefinch, I think I'll just add it to run's docstring for now
[18:00] <natefinch> bdx: try putting --model lxd-share before the call to cat
[18:01] <bdx> natefinch: negatory
[18:02] <natefinch> bdx: well, that's sort of good, it's not supposed to matter where the flag goes, but occasionally I've seen problems with specific commands.
[18:03] <natefinch> bdx: not sure what's going on there.
[18:03] <natefinch> bdx: file a bug, if you would, and include machine-0 logs if you can.
[18:03] <bdx> natefinch:  'juju add-ssh-keys --model lxd-share `cat id_rsa.pub`' fails with an error everytime too .... I can't seem to get the command to complete with success
[18:04] <bdx> natefinch: alright .... do you experience the same behavior?
[18:05] <natefinch> bdx: lemme give it a try
[18:10] <natefinch> bdx: what error are you getting?  I'm getting "cannot add key <my email>: invalid ssh key: <my email>"
[18:10] <bdx> natefinch: using `juju add-ssh-keys --model lxd-share `cat id_rsa.pub` - yea
[18:11] <bdx> natefinch: does 'juju import-ssh-keys `cat id_rsa.pub`' work for you?
[18:12] <natefinch> bdx: nope, still says invalid key ... man that command needs a flag or something to take a filename
[18:13] <natefinch> alexisb: I see your name on add_sshkeys.go .... how is that supposed to work?
[18:14] <alexisb> natefinch, I flattened the command
[18:14] <alexisb> when I tested it I just put the key on the cl
[18:15] <natefinch> weird, I can't get it to accept my key
[18:16] <bdx> natefinch: so I don't get the error with 'juju import-ssh-keys `cat id_rsa.pub`', but I also don't get a key added
[18:16] <alexisb> natefinch, bdx none of this sounds good
[18:18] <alexisb> bdx, can you please open a bug and I will get someone on it asap
[18:18] <bdx> totally, omp
[18:20] <alexisb> so bdx I am able to import a key
[18:21] <alexisb> but I had to actually copy the key in the CL
[18:21] <alexisb> just the key
[18:21] <alexisb> not the full output of id_rsa.pub
[18:22] <bdx> alexisb: nice! does that work for 'juju add-ssh-keys' too?
[18:22] <bdx> https://github.com/juju/juju/issues/5187
[18:22] <natefinch> interesting... I think it's something wrong with cat'ing the file
[18:23] <natefinch> I got it
[18:23] <natefinch> you gotta "`cat ~/.ssh/id_rsa.pub`"
[18:23] <natefinch> note the extra quotes
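A minimal sketch (not juju-specific) of why those extra quotes matter: an OpenSSH public key is a single line containing spaces (key type, base64 blob, comment), so an unquoted `` `cat ...` `` substitution gets word-split by the shell into several arguments, and the command sees only a fragment of the key.

```shell
#!/bin/sh
# A public key line has three space-separated fields: type, blob, comment.
# (Shortened dummy key, for illustration only.)
key='ssh-rsa AAAAB3NzaC1yc2E user@host'

# Unquoted substitution: the shell splits the key into separate arguments.
set -- $key
echo "unquoted: $# args"

# Quoted substitution: the key arrives as one argument, intact.
set -- "$key"
echo "quoted: $# args"
```

The same splitting happens with `` `cat id_rsa.pub` ``, which is why `"`cat ~/.ssh/id_rsa.pub`"` works where the unquoted form fails with "invalid ssh key".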
[18:24] <bdx> oooooh NICE!!!!
[18:26] <bdx> natefinch: so ^ got `juju add-ssh-keys` to work, but `import-ssh-keys` still has no effect on anything
[18:27] <natefinch> bdx: I honestly don't know the difference between add and import
[18:28] <bdx> natefinch: alright
[18:28] <bdx> I updated the bug to address that too
[18:28] <bdx> natefinch: thanks for your help
[18:38] <natefinch> bdx: you're adding the key to the lxd-share model... are you also checking for the key in a machine from that model?
[18:42] <bdx> natefinch: yes
[18:42] <natefinch> bdx: ok, cool, just double checking :)
[18:43] <natefinch> bdx: the new model stuff always screws me up
[18:43] <bdx> natefinch: yea, it took me a minute to adjust too, good looking out!
[18:54] <natefinch> huzzah, rogpeppe just deployed a charm with resources from the charmstore!
[18:54] <natefinch> ericsnow, katco ^^
[18:54]  * natefinch does a dance
[18:59] <redir_lunch> bbiab
[19:03] <ericsnow> natefinch: \o/
[19:39] <mup> Bug #1571053 opened: container networking lxd 'Missing parent for bridged type nic' <ci> <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1571053>
[20:20] <natefinch> alexisb: bing https://github.com/juju/juju/pull/5189
[20:21] <mup> Bug #1571065 opened: Panic on bundle with local charm and no series <ci> <deploy> <usability> <juju-core:Triaged> <https://launchpad.net/bugs/1571065>
[20:21] <natefinch> anyone got time for a quick review of a fix for a CI blocker? http://reviews.vapour.ws/r/4617/  ericsnow?
[20:21] <ericsnow> natefinch: sure
[20:25] <natefinch> ericsnow: I apologize in advance for the use of patching, export_test, etc.  I'm doing what fwereade_ recommended in his review of the bug earlier.
[20:27] <ericsnow> natefinch: np :)
[20:28] <ericsnow> natefinch: ship-it-ish
[20:28] <mgz_> natefinch: you ran with -race locally with your fix?
[20:28] <perrito666> wow ericsnow turned into ned flanders
[20:28] <perrito666> :p
[20:28] <natefinch> mgz_: indeed
[20:29] <natefinch> $ go test -race -check.f=TestWatchForProxyConfigAndAPIHostPortChanges
[20:29] <natefinch> OK: 1 passed
[20:29] <natefinch> PASS
[20:29] <natefinch> ok  	github.com/juju/juju/api/proxyupdater	1.165s
[20:31] <mgz_> well, that may save us doing dodgy hacks to get the weekend tests running then
[20:51] <mup> Bug #1571082 opened: autopkgtest lxd provider tests fail for 2.0 <lxd-provider> <packaging> <juju-core:Triaged> <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1571082>
[22:00] <redir> pretty sure this test is gonna timeout
[22:02] <perrito666> redir: it is ugly when you are not sure whether a test is just slow or it's going to time out
[22:03] <redir> pretty sure this usually doesn't take so long...usually
[22:03] <redir> getting used to the ones that pause my life
[22:04] <perrito666> I have two gopaths with different branches of juju so I can do something in the other one while running tests
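A sketch of the two-GOPATH setup perrito666 describes (directory names are hypothetical): each workspace holds its own checkout of the juju tree, so a long test run in one doesn't block edits or builds in the other.

```shell
# Terminal 1: run the slow test suite against one checkout.
export GOPATH="$HOME/gopath-master"      # contains src/github.com/juju/juju on master
cd "$GOPATH/src/github.com/juju/juju"
go test ./...

# Terminal 2: keep working on a feature branch in a separate workspace.
export GOPATH="$HOME/gopath-feature"     # contains its own src/github.com/juju/juju
cd "$GOPATH/src/github.com/juju/juju"
go build ./...
```

Because GOPATH is a per-shell environment variable, the two terminals resolve imports and build caches independently.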
[22:04] <redir> speak of the devil...
[22:04] <redir> hah
[22:05] <redir> only two?
[22:05] <redir> jk
[22:17] <mgz_> anyone know how to configure the lxd provider to use daily images?
[23:12] <alexisb> evening all, have a great weekend!
[23:12] <perrito666> alexlist: you too
[23:12] <perrito666> lool, too late