[00:34] axw: hml: babbageclunk: burton-aus: i'll be right back
[00:56] wallyworld: aw man, I didn't see that you'd requested the review on GH - I guess that's close enough to voluntelling. I'll have a look this afternoon once I get charms uploading.
[01:05] babbageclunk: no worries, you are busy, totally understand
[02:06] FARK
[02:11] thumper: ?
[02:17] just trying to look at how mongo counts records in a collection
[02:17] it is non-trivial
[02:32] thumper: Can we use stats instead of counting? Might be accurate enough and much faster.
[02:32] what stats?
[02:32] I have a theory on the count
[02:32] db..stats()
[02:33] but need access to a big controller to test
[02:33] https://docs.mongodb.com/manual/reference/method/db.collection.stats/
[02:33] babbageclunk: you would think that db.coll.find().count() would use that
[02:33] but perhaps it doesn't
[02:34] can we get access to the stats through mgo?
[02:34] not sure
[02:34] mmm, toasted sandwich for lunch
[02:35] axw: if you have a chance, here's a PR for review. sadly it doesn't reflect 2 file renames so counts the full contents towards the +/- count :-( https://github.com/juju/juju/pull/7808
[02:35] var stats bson.D
[02:35] session.DB("mydb").Run(bson.D{{"collStats", "mycollection"}}, &stats)
[02:35] fmt.Println("Stats:", stats)
[02:36] so... maybe
[02:38] wallyworld: ok, looking
[02:38] ta
[02:39] wallyworld: is relationnetworks.go otherwise unchanged?
[02:40] axw: sadly not - there's some tweaks to accommodate the egress stuff - same core structs but with a direction attribute - "ingress" or "egress"
[02:40] pita the renames aren't better handled
[02:40] reviewboard did it better
[02:42] wallyworld: perhaps do the rename and change in separate commits next time. anyway, not too big, reviewing...
[02:42] that would have worked, yeah
[02:49] wallyworld: looks good
[02:49] tyvm
[03:00] anyone know what needs to be done to be able to run the mgo tests?
[03:25] thumper: I've never got them running locally, but I think I had them running in travis against my github fork
[03:35] thumper: that was pretty straightforward to set up
[03:57] wallyworld: approved
[03:57] babbageclunk: yay, ty
[05:07] axw: just realised I need something from 2.2.3 in the upgrade juju2 tree (client side of the machine sanity check) - do you think I should just spot patch that in from 2.2.3, or update the tree to get it?
[05:07] axw: While typing that up I think I decided, I'll just update that client.
[05:08] axw: thanks for the help!
[05:47] babbageclunk: hah :) FWIW I was going to say the same
[05:47] ha, lucky!
[05:59] axw: Can you take a look at https://github.com/juju/1.25-upgrade/pull/20 please?
[05:59] babbageclunk: ok. just finishing up a PR, then will take a look
[06:00] axw: thanks - no rush, I'm close to end of day so I'll look at comments tomorrow.
[06:00] babbageclunk: okey dokey
=== frankban|afk is now known as frankban
[06:29] jam: when you're free, can you please take a look at https://github.com/juju/juju/pull/7803?
[10:35] jam: wpk: you free now for a HO? or soon?
[10:36] wallyworld: I was just about to make coffee and then I'm available, i'm not sure about wkp
[10:36] wpk
[10:36] ok, just ping whenever
[10:41] wallyworld: is it worth just you and I meeting? I feel like we're mostly in sync
[10:42] jam: we are, we can do it again tomorrow with everyone
[10:42] I'm off tomorrow, but maybe I'll be around the house (local 'vacation')
[10:42] It's a national holiday, vs a planned vacation
[10:43] or it can wait. we just need to ensure we're fully synced up and plan the next most important bit to do for network-get
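
[editor's note] A minimal standalone sketch of the collStats idea floated above, assuming mgo (gopkg.in/mgo.v2) and reusing the placeholder names "mydb" and "mycollection" from the pasted snippet; whether the metadata-based count is accurate enough for the collection-counting problem discussed here was left open in the conversation:

    package main

    import (
        "fmt"
        "log"

        mgo "gopkg.in/mgo.v2"
        "gopkg.in/mgo.v2/bson"
    )

    func main() {
        // "localhost", "mydb" and "mycollection" are placeholders from the
        // snippet above; substitute the real server address and collection.
        session, err := mgo.Dial("localhost")
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        // Run the collStats command instead of find().count(). The result
        // includes a "count" field read from collection metadata, which is
        // cheap to fetch but only approximate in some cases (e.g. after an
        // unclean shutdown), so it may or may not be accurate enough here.
        var stats bson.M
        cmd := bson.D{{Name: "collStats", Value: "mycollection"}}
        if err := session.DB("mydb").Run(cmd, &stats); err != nil {
            log.Fatal(err)
        }
        fmt.Println("collStats count:", stats["count"])
    }
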
[11:19] wallyworld: jam: sorry, I am now if you're available
[11:20] i can be in 10
[11:20] k
[11:35] wpk: jam: i am free now if you guys are
[11:35] wallyworld: we're on the hangout that wpk linked on the other channel
=== petevg_ is now known as petevg
=== frankban is now known as frankban|afk
[20:52] is there a way to get juju upgrade-juju --build-agent to work on a model other than controller?
[20:55] hml: no
[20:56] hml: you need to upgrade the controller first
[20:56] then specify the controller version for the hosted model
[20:56] the hosted model then gets the same version as the controller
[20:58] thumper: how do i do the 2nd bit "specify the controller version for the hosted model"?
[20:58] hml: ok, let's say you have upgraded the controller
[20:58] and do a status on the controller
[20:58] and it says version 2.2.3.2 say
[20:58] you then go
[20:58] juju upgrade-juju other-model --agent-version=2.2.3.2
[20:59] probably a -m in there too
[20:59] thumper: okay, i'll give it a shot
[22:24] I'm experiencing some odd behavior using JAAS + AWS + network spaces
[22:25] bdx: what sort of odd behaviour?
[22:25] at a high level, things work (charms deploy successfully) if deployed in the default vpc, with no juju spaces defined
[22:28] on the other hand, if I create a model in the non-default vpc, add some spaces, and deploy the exact same bundle except to the defined space, I get odd failures throughout
[22:28] grabbing some material to support this omp
[22:30] it might be specific to the charm, in this case, elasticsearch
[22:36] ok
[22:37] this is making me feel a bit crazy
[22:37] but it's entirely reproducible
[22:37] so
[22:38] I don't know what to say .... I think it might be a charm specific thing though and not juju or network related ..... I just don't get it
[22:38] using upstream elasticsearch in the charmstore (cs:trusty/elasticsearch-19)
[22:39] what happens is elasticsearch will be deployed and running on the machine deployed to the default vpc, and when elasticsearch is deployed with a spaces constraint, the process just doesn't exist
[22:40] all the dirs are there, the binary is there
[22:42] this model https://pastebin.com/NaLcqBFy is configured to a default vpc, no spaces added, and no constraints used
[22:43] from that elasticsearch node, http://paste.ubuntu.com/25434884/
[22:43] <- `ps aux | grep elasticsearch`
[22:43] we can see that elasticsearch is running
[22:44] next, a model created with a non-default vpc and spaces added, and all the machines deployed to the spaces
[22:44] http://paste.ubuntu.com/25434893/
[22:44] http://paste.ubuntu.com/25434894/
[22:45] here's the full yaml status http://paste.ubuntu.com/25434903/
[22:45] and now, same as before, I'll ssh into the elasticsearch box and ps aux
[22:46] http://paste.ubuntu.com/25434906/
[22:46] boom
[22:46] java process for elasticsearch exists
[22:47] odd because the machine has elasticsearch and all the deps (java) installed
[22:47] lol
[22:47] I've been ramming my head on this for days now .... just thinking my fork of elasticsearch is borked
[22:47] lol
[22:49] ^^ important correction s/java process for elasticsearch exists/java process for elasticsearch ceases to exist/
[22:50] it's quite baffling, really
[23:02] thumper: ^that kind
[23:06] bdx: can you see logs for elasticsearch? It sounds like it might be trying to start and failing when you have space constraints.
[23:06] babbageclunk: I was just about to report back with that
[23:07] babbageclunk: the java process appears just for a second then disappears
[23:08] the logs are clean though
[23:08] because it looks as if it actually does start
[23:09] so there is no error being logged or anything
[23:10] I have this juju log for the failing instance http://paste.ubuntu.com/
[23:11] wow ... the logs are entirely different
[23:12] and this log for the working instance 2017-08-30 22:19:45 INFO juju.cmd supercommand.go:63 running jujud [2.2.2 gc go1.8]
[23:12] 2017-08-30 22:19:45 DEBUG juju.cmd supercommand.go:64 args: []string{"/var/lib/juju/tools/unit-elasticsearch-0/jujud", "unit", "--data-dir", "/var/lib/juju", "--unit-name", "elasticsearch/0", "--debug"}
[23:12] 2017-08-30 22:19:45 DEBUG juju.agent agent.go:543 read agent config, format "2.0"
[23:12] 2017-08-30 22:19:45 INFO juju.jujud unit.go:151 unit agent unit-elasticsearch-0 start (2.2.2 [gc])
[23:12] 2017-08-30 22:19:45 DEBUG juju.worker runner.go:319 start "api"
[23:12] oh no
[23:12] my bad - jeeze
[23:13] and this log for the working instance http://paste.ubuntu.com/25435003/
[23:13] weird
[23:15] possibly one of my models has different logging config
[23:16] geh
[23:16] yep
[23:17] logging-config was turned up on the model with spaces/non-working one