[00:34] <wallyworld> axw: hml: babbageclunk: burton-aus: i'll be right back
[00:56] <babbageclunk> wallyworld: aw man, I didn't see that you'd requested the review on GH - I guess that's close enough to voluntelling. I'll have a look this afternoon once I get charms uploading.
[01:05] <wallyworld> babbageclunk: no worries, you are busy, totally understand
[02:06] <thumper> FARK
[02:11] <anastasiamac> thumper: ?
[02:17] <thumper> just trying to look at how mongo counts records in a collection
[02:17] <thumper> it is non-trivial
[02:32] <babbageclunk> thumper: Can we use stats instead of counting? Might be accurate enough and much faster.
[02:32] <thumper> what stats?
[02:32] <thumper> I have a theory on the count
[02:32] <babbageclunk> db.<collection>.stats()
[02:33] <thumper> but need access to a big controller to test
[02:33] <babbageclunk> https://docs.mongodb.com/manual/reference/method/db.collection.stats/
[02:33] <thumper> babbageclunk: you would think that db.coll.find().count() would use that
[02:33] <thumper> but perhaps it doesn't
[02:34] <thumper> can we get access to the stats through mgo?
[02:34] <babbageclunk> not sure
[02:34] <babbageclunk> mmm, toasted sandwich for lunch
[02:35] <wallyworld> axw: if you have a chance, here's a PR for review. sadly it doesn't reflect 2 file renames so counts the full contents towards the +/- count :-( https://github.com/juju/juju/pull/7808
[02:35] <thumper> var stats bson.D
[02:35] <thumper> session.DB("mydb").Run(bson.D{{"collStats", "mycollection"}}, &stats)
[02:35] <thumper> fmt.Println("Stats:", stats)
[02:36] <thumper> so... maybe
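A runnable sketch of that collStats call via mgo (gopkg.in/mgo.v2); the dial address, database, and collection names are placeholders, and note that collStats reads collection metadata rather than scanning documents, so it's fast but the count can drift after an unclean shutdown:

    package main

    import (
        "fmt"
        "log"

        "gopkg.in/mgo.v2"
        "gopkg.in/mgo.v2/bson"
    )

    func main() {
        // Dial a local mongod; point this at a real controller to test.
        session, err := mgo.Dial("localhost:27017")
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        // Run the collStats command and decode the reply into a map.
        var stats bson.M
        err = session.DB("mydb").Run(bson.D{{Name: "collStats", Value: "mycollection"}}, &stats)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("count:", stats["count"])
        fmt.Println("size in bytes:", stats["size"])
    }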
[02:38] <axw> wallyworld: ok, looking
[02:38] <wallyworld> ta
[02:39] <axw> wallyworld: is relationnetworks.go otherwise unchanged?
[02:40] <wallyworld> axw: sadly not - there's some tweaks to accommodate the egress stuff - same core structs but with a direction attribute - "ingress" or "egress"
[02:40] <wallyworld> pita the renames aren't better handled
[02:40] <wallyworld> reviewboard did it better
[02:42] <axw> wallyworld: perhaps do the rename and change in separate commits next time. anyway, not too big, reviewing...
[02:42] <wallyworld> that would have worked, yeah
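The usual way to keep GitHub's +/- count honest is to land the rename as its own commit, sketched here with the file named above and a hypothetical new name:

    # "relationrules.go" is a stand-in for the real new name
    git mv relationnetworks.go relationrules.go
    git commit -m "rename relationnetworks.go (no content changes)"
    # edit the renamed file, then commit the content changes separately:
    git commit -am "add ingress/egress direction attribute"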
[02:49] <axw> wallyworld: looks good
[02:49] <wallyworld> tyvm
[03:00] <thumper> anyone know what needs to be done to be able to run the mgo tests?
[03:25] <babbageclunk> thumper: I've never got them running locally, but I think I had them running in travis against my github fork
[03:35] <babbageclunk> thumper: that was pretty straightforward to set up
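For reference, a minimal .travis.yml sketch that runs Go tests against a Travis-provided mongod (purely illustrative, not mgo's actual config):

    language: go
    go:
      - 1.8
    services:
      - mongodb
    script:
      - go test ./...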
[03:57] <babbageclunk> wallyworld: approved
[03:57] <wallyworld> babbageclunk: yay, ty
[05:07] <babbageclunk> axw: just realised I need something from 2.2.3 in the upgrade juju2 tree (client side of the machine sanity check) - do you think I should just spot patch that in from 2.2.3, or update the tree to get it?
[05:07] <babbageclunk> axw: While typing that up I think I decided, I'll just update that client.
[05:08] <babbageclunk> axw: thanks for the help!
[05:47] <axw> babbageclunk: hah :)  FWIW I was going to say the same
[05:47] <babbageclunk> ha, lucky!
[05:59] <babbageclunk> axw: Can you take a look at https://github.com/juju/1.25-upgrade/pull/20 please?
[05:59] <axw> babbageclunk: ok. just finishing up a PR, then will take a look
[06:00] <babbageclunk> axw: thanks - no rush, I'm close to end of day so I'll look at comments tomorrow.
[06:00] <axw> babbageclunk: okey dokey
[06:29] <axw> jam: when you're free, can you please take a look at https://github.com/juju/juju/pull/7803?
[10:35] <wallyworld> jam: wpk: you free now for a HO? or soon?
[10:36] <jam> wallyworld: I was just about to make coffee and then I'm available, i'm not sure about wkp
[10:36] <jam> wpk
[10:36] <wallyworld> ok, just ping whenever
[10:41] <jam> wallyworld: is it worth just you and I meeting? I feel like we're mostly in sync
[10:42] <wallyworld> jam: we are, we can do it again tomorrow with everyone
[10:42] <jam> I'm off tomorrow, but maybe I'll be around the house (local 'vacation')
[10:42] <jam> It's a national holiday, vs a planned vacation
[10:43] <wallyworld> or it can wait. we just need to ensure we're fully synced up and plan the next most important bit to do for network-get
[11:19] <wpk> wallyworld: jam: sorry, I'm free now if you're available
[11:20] <wallyworld> i can be in 10
[11:20] <wpk> k
[11:35] <wallyworld> wpk: jam: i am free now if you guys are
[11:35] <jam> wallyworld: we're on the hangout that wpk linked on the other channel
[20:52] <hml> is there a way to get juju upgrade-juju --build-agent to work on a model other than controller?
[20:55] <thumper> hml: no
[20:56] <thumper> hml: you need to upgrade the controller first
[20:56] <thumper> then specify the controller version for the hosted model
[20:56] <thumper> the hosted model then gets the same version as the controller
[20:58] <hml> thumper: how do i do the 2nd bit “specify the controller version for the hosted model”?
[20:58] <thumper> hml: ok, let's say you have upgraded the controller
[20:58] <thumper> and do a status on the controller
[20:58] <thumper> and it says version 2.2.3.2, say
[20:58] <thumper> you then go
[20:58] <thumper> juju upgrade-juju other-model --agent-version=2.2.3.2
[20:59] <thumper> probably a -m in there too
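Putting thumper's steps together, the whole sequence looks roughly like this (the model name and version number are illustrative):

    juju upgrade-juju -m controller --build-agent
    juju status -m controller        # note the agent version, e.g. 2.2.3.2
    juju upgrade-juju -m other-model --agent-version=2.2.3.2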
[20:59] <hml> thumper: okay, i’ll give it a shot
[22:24] <bdx> I'm experiencing some odd behavior using JAAS + AWS + network spaces
[22:25] <thumper> bdx: what sort of odd behaviour?
[22:25] <bdx> at a high level, things work (charms deploy successfully) if deployed in the default vpc, with no juju spaces defined
[22:28] <bdx> on the other hand, if I create a model in the non default vpc, add some spaces, and deploy the exact same bundle except to the defined space I get odd failures throughout
[22:28] <bdx> grabbing some material to support this omp
[22:30] <bdx> it might be specific to the charm, in this case, elasticsearch
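For context, a space-constrained deploy of the sort bdx describes looks roughly like this (space name and CIDR are illustrative):

    juju add-space dmz 172.31.100.0/24
    juju deploy cs:trusty/elasticsearch-19 --constraints spaces=dmz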
[22:36] <bdx> ok
[22:37] <bdx> this is making me feel a bit crazy
[22:37] <bdx> but it's entirely reproducible
[22:37] <bdx> so
[22:38] <bdx> I don't know what to say .... I think it might be a charm specific thing though and not juju or network related ..... I just don't get it
[22:38] <bdx> using upstream elasticsearch in the charmstore (cs:trusty/elasticsearch-19)
[22:39] <bdx> what happens is elasticsearch will be deployed and running on the machine deployed to the default vpc, but when elasticsearch is deployed with a spaces constraint, the process just doesn't exist
[22:40] <bdx> all the dirs are there, the binary is there
[22:42] <bdx> this model https://pastebin.com/NaLcqBFy is configured to a default vpc, no spaces added, and no constraints used
[22:43] <bdx> from that elasticsearch node, http://paste.ubuntu.com/25434884/
[22:43] <bdx>  <- `ps aux | grep elasticsearch`
[22:43] <bdx> we can see that elasticsearch is running
[22:44] <bdx> next, a model created with a non-default vpc and spaces added, and all the machines deployed to the spaces
[22:44] <bdx> http://paste.ubuntu.com/25434893/
[22:44] <bdx> http://paste.ubuntu.com/25434894/
[22:45] <bdx> heres the full yaml status http://paste.ubuntu.com/25434903/
[22:45] <bdx> and now, same as before, I'll ssh into elasticsearch box and ps aux
[22:46] <bdx> http://paste.ubuntu.com/25434906/
[22:46] <bdx> boom
[22:46] <bdx> java process for elasticsearch exists
[22:47] <bdx> odd because the machine has elasticsearch and all the deps (java) installed
[22:47] <bdx> lol
[22:47] <bdx> I've been ramming my head on this for days now .... just thinking my fork of elasticsearch is borked
[22:47] <bdx> lol
[22:49] <bdx> ^^ important correction s/java process for elasticsearch exists/java process for elasticsearch ceases to exist/
[22:50] <bdx> it's quite baffling, really
[23:02] <bdx> thumper: ^that kind
[23:06] <babbageclunk> bdx: can you see logs for elasticsearch? It sounds like it might be trying to start and failing when you have space constraints.
[23:06] <bdx> babbageclunk: I was just about to report back with that
[23:07] <bdx> babbageclunk: the java process appears just for a second then disappears
[23:08] <bdx> the logs are clean though
[23:08] <bdx> because it looks as if it actually does start
[23:09] <bdx> so there is no error being logged or anything
[23:10] <bdx> I have this juju log for the failing instance http://paste.ubuntu.com/
[23:11] <bdx> wow ... the logs are entirely different
[23:12] <bdx> and this log for the working instance 2017-08-30 22:19:45 INFO juju.cmd supercommand.go:63 running jujud [2.2.2 gc go1.8]
[23:12] <bdx> 2017-08-30 22:19:45 DEBUG juju.cmd supercommand.go:64   args: []string{"/var/lib/juju/tools/unit-elasticsearch-0/jujud", "unit", "--data-dir", "/var/lib/juju", "--unit-name", "elasticsearch/0", "--debug"}
[23:12] <bdx> 2017-08-30 22:19:45 DEBUG juju.agent agent.go:543 read agent config, format "2.0"
[23:12] <bdx> 2017-08-30 22:19:45 INFO juju.jujud unit.go:151 unit agent unit-elasticsearch-0 start (2.2.2 [gc])
[23:12] <bdx> 2017-08-30 22:19:45 DEBUG juju.worker runner.go:319 start "api"
[23:12] <bdx> oh no
[23:12] <bdx> my bad - jeeze
[23:13] <bdx> and this log for the working instance http://paste.ubuntu.com/25435003/
[23:13] <thumper> weird
[23:15] <bdx> possibly one of my models has different logging config
[23:16] <bdx> geh
[23:16] <bdx> yep
[23:17] <bdx> logging-config was turned up on the model with spaces (the non-working one)
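For reference, that setting can be inspected and reset per model like this (model name illustrative):

    juju model-config -m mymodel logging-config
    juju model-config -m mymodel logging-config="<root>=INFO"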