[02:24] babbageclunk: small juju/description fix https://github.com/juju/description/pull/24
[02:30] or thumper ^^^^^
[02:31] wallyworld: looking
[02:32] I'm not sure that's right
[02:34] why?
[02:34] added comments
[02:34] well, it'll work, but I think it needs added description, and cleaner defaults
[02:35] thumper: defaults is nil
[02:35] as returned from v1Schema
[02:35] ugh... ok
[02:35] i guess I could return empty map
[02:35] well...
[02:35] v1Schema should never change
[02:36] so you don't need to iterate over a nil map
[02:36] i mean empty map for defaults
[02:36] right, i could
[02:36] i just didn't want to assume
[02:36] but maybe i could
[02:38] you should
[02:39] thumper: changes pushed
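A minimal Go sketch of the nil-vs-empty-defaults point above — not the actual juju/description code; the field names and the use of the juju/schema package are illustrative only. Ranging over a nil map is a harmless no-op in Go, but returning an explicit empty defaults map from v1Schema is the cleaner option being asked for:

```go
package example

import "github.com/juju/schema"

// v1Schema returns the field checkers and defaults for version 1 of a
// hypothetical document. Returning an initialised empty Defaults map rather
// than nil is the "cleaner defaults" suggestion: callers never have to
// reason about nil.
func v1Schema() (schema.Fields, schema.Defaults) {
	fields := schema.Fields{
		"name": schema.String(),
		"uuid": schema.String(),
	}
	return fields, schema.Defaults{} // empty, not nil
}

// v2Schema builds on v1 by copying its fields and defaults and adding the
// new field. The range loops below would be harmless no-ops even if
// v1Schema returned nil maps, but with an explicit empty map there is
// nothing left to second-guess.
func v2Schema() (schema.Fields, schema.Defaults) {
	fields, defaults := v1Schema()
	v2Fields := make(schema.Fields, len(fields)+1)
	for name, checker := range fields {
		v2Fields[name] = checker
	}
	v2Defaults := make(schema.Defaults, len(defaults)+1)
	for name, value := range defaults {
		v2Defaults[name] = value
	}
	v2Fields["suspended"] = schema.Bool() // illustrative new v2 field
	v2Defaults["suspended"] = false
	return v2Fields, v2Defaults
}
```

The copy-then-extend pattern also keeps v1Schema frozen ("v1Schema should never change") while letting each later version add its own fields and defaults.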
[02:55] ugh...
[02:55] I'm at that phase where I'm wondering how this ever worked
[02:58] thumper: here's the small juju 2.2 pr https://github.com/juju/juju/pull/7847
[02:59] wallyworld: why add a remote application in the test?
[02:59] because of older migrations
[02:59] older exports
[02:59] i want to still check that we refuse to import
[02:59] a model with remote apps
[03:00] this bit of the change won't be forward ported
[03:02] can you add a comment to it then plz?
[03:02] ok
[03:03] * thumper headdesks
[03:18] JaaS errors when launching OpenStack on Amazon
[03:18] "Bootstrapping MON cluster" Ceph-MON is stuck on this error. Neutron Gateway is stuck on "hook failed: 'config-changed'".
[03:19] Have tried to launch many times, I don't know if I'm doing something wrong or if there is a bug.
[04:00] wallyworld: https://github.com/juju/juju/pull/7848
[04:01] looking after tech board
[04:37] wallyworld: Does this sound ok? https://docs.google.com/a/canonical.com/document/d/1DFNU3lxeItsPLOTeQ4uAG_JpZUT1sNcrg7sDQHDAvqg/edit?usp=sharing
[04:38] babbageclunk: one sec, in meeting
=== frankban|afk is now known as frankban
[07:04] axw: i've pushed changes to the PR which implement a new relation suspended flag. it will be part 1 of N. for now, setting the suspended flag also sets suspended status etc. that allows uniter and watchers to operate. i haven't done an assert that offer relations don't change as I'm not sure how - the issue is that the collection docs have their DocID as the relation Id. so the check would need to count the records in the collection matching a certain
[07:04] query criteria and I'm not sure we can do that with asserts?
[07:05] wallyworld: you would need to record the IDs of relations on some other doc, like the application
[07:06] with push/pull ops
[07:06] axw: you mean we'd need to actually change the data model?
[07:06] wallyworld: actually we already have relationcount on application. so you could just assert the life of all the relations you know about, and then check that relationcount <= current known value
[07:06] to allow for this assert to work
[07:07] that would work but be inexact
[07:08] but relation count can change, just not for the user in question
[07:08] so i'm not sure it would work
[07:08] ie fred could lose permission but mary could add a remote relation
[07:08] so relation count would increase
[07:09] the only way i can see is to check after the fact and if there's more to do, try again for N times
[07:09] wallyworld: sure, then you loop around and build the txn ops again
[07:10] ok, i'll look at doing that
[07:11] wallyworld: in case it wasn't clear, by that I just mean use the regular "buildTxn" approach
[07:11] yeah, understood
[07:11] coolies
[07:11] doing it outside of the build loop would be the only way to not try again unnecessarily
[07:11] but would be ick
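A minimal sketch of the "buildTxn" retry approach being described — collection names, field names and document shapes here are illustrative, not juju's actual schema. The transaction ops are rebuilt from a fresh read on every attempt, and an assertion on the application's relation count aborts the transaction if relations were added or removed concurrently, at which point the loop retries against the new state:

```go
package example

import (
	"fmt"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
	"gopkg.in/mgo.v2/txn"
)

const maxAttempts = 3

// suspendRelations marks every relation currently recorded for appName as
// suspended. Ops are rebuilt on each attempt; the relationcount assertion
// makes the transaction abort if the relation set changed underneath us.
func suspendRelations(db *mgo.Database, appName string) error {
	runner := txn.NewRunner(db.C("txns"))
	for attempt := 0; attempt < maxAttempts; attempt++ {
		// Read the current application doc and its relations.
		var app struct {
			RelationCount int `bson:"relationcount"`
		}
		if err := db.C("applications").FindId(appName).One(&app); err != nil {
			return err
		}
		var relations []struct {
			ID string `bson:"_id"`
		}
		err := db.C("relations").Find(bson.M{"application": appName}).All(&relations)
		if err != nil {
			return err
		}

		// Assert nothing changed underneath us, then update each relation
		// we know about.
		ops := []txn.Op{{
			C:      "applications",
			Id:     appName,
			Assert: bson.M{"relationcount": app.RelationCount},
		}}
		for _, rel := range relations {
			ops = append(ops, txn.Op{
				C:      "relations",
				Id:     rel.ID,
				Assert: txn.DocExists,
				Update: bson.M{"$set": bson.M{"suspended": true}},
			})
		}

		switch err := runner.Run(ops, bson.NewObjectId(), nil); err {
		case nil:
			return nil
		case txn.ErrAborted:
			continue // an assertion failed; rebuild the ops and retry
		default:
			return err
		}
	}
	return fmt.Errorf("suspending relations for %q: too much contention", appName)
}
```

This accepts the occasional unnecessary retry mentioned at [07:11]; deciding outside the build loop whether anything is left to do would avoid that, but as noted it gets ugly.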
[07:33] wallyworld: I need to take Charlotte to school tomorrow (Michelle has an early work function), so I'll miss standup. status: I've got the vsphere stuff all working, need to write all the tests now
[07:34] awesome, ty
[07:44] axw: i need to go out for dinner, but i've pushed the change to add the relation change check. so no rush, as i'll be afk for a bit
[07:45] i'll look to land this and then the next pr will redo the watchers etc
[07:45] wallyworld: ok, have a nice evening
[07:49] Hi juju people! We have an issue opened against CDK from someone who is using the localhost environment. His IP on his machine changed and all juju ssh attempts go through his old IP: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/412
[07:49] Any hints on what might be wrong?
[08:00] kjackal: I think they need to update ~/.local/share/juju/controllers.yaml to update the IP of their controller
[08:09] axw: since juju status queries the controller, the controller endpoint should be correct. It's the ssh that goes against this http://:8443/1.0 that is failing
[08:10] kjackal: ok, wasn't sure if status was immediately before or some time after. /me thinking
[08:13] good point. Let me cross check that with him.
[08:17] kjackal: looks like there's a faulty assumption in the lxd provider, which won't be easy to fix without a code change/new release. it's fixable with mongo surgery, by changing the cloud definition: the endpoint needs to be updated to the host machine's new IP
[08:19] kjackal: the lxd provider was written to assume that the lxdbr0 address on the host won't ever change (derp). so either mongo surgery, or force the host back to the old IP
=== frankban is now known as frankban|afk
=== frankban|afk is now known as frankban
=== frankban is now known as frankban|afk
[23:46] @team, tried to deploy a bundle in multiple models ... when I go to destroy my models they just hang with "waiting on model to be removed..." http://paste.ubuntu.com/25530827/
[23:46] I've run `juju remove-machine {range of machine ids} --force`
[23:48] all the machines show terminated in the aws console
[23:48] I've a few models in this state
[23:48] I'll file a bug and check in tomorrow
[23:48] thx
[23:48] bdx: the issue is that the machines never started
[23:49] I see
[23:49] juju won't clean up machines unless it knows they are running correctly
[23:49] so you need the --force
[23:49] you need to diagnose why the cloud didn't start the machines
[23:50] or why they couldn't report to juju that they were started
[23:50] wallyworld_: https://imgur.com/a/kd793
[23:50] the terminated machines show the same model
[23:50] wallyworld_: oooh, nm
[23:51] I have models under two separate users with the same name garrrh
[23:52] that is allowed
[23:52] models are namespaced
[23:52] yeah ... I just got confused thinking the model name I'm seeing in aws was for the other user
[23:52] so ... looks like one user had a working deploy and the other had instances that wouldn't start
[23:52] hmmm ... sounds like credentials or something poss
[23:53] I'll do some more debugging here
[23:53] yeah, could be
[23:53] ok