[00:18] <redir> perrito666: I think a little while back I had to kill and recreate my lxd bridge. I don't recall exactly why, but not being able to deploy to lxd seems familiar.
[00:38] <perrito666> redir: remember how to do that?
[00:42] <redir> perrito666: well that was last year
[00:43] <redir> I think something like `sudo ip link set lxdbr0 down && sudo ip link delete lxdbr0 type bridge` followed by `sudo lxd init`
[00:44] <redir> but IIRC I prolly also had to remove all the lxd containers and images I had before initing again
[00:44] <redir> I don't recall if there's an init only bridge command
[00:44] <redir> perrito666: ^
[00:47] <redir> perrito666: looks like there's a non-interactive option now -- `lxd init --auto`
[00:48] <redir> oh, and you prolly have to kick lxd with sysctl too
[00:48] <redir> erm, systemctl
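Putting redir's half-remembered steps together, a minimal sketch of the bridge rebuild (bridge name, command order, and the systemctl restart are all assumptions from the chat, not a verified procedure). The function only prints the commands so they can be reviewed before piping to `sudo sh`:

```shell
# Sketch of the rebuild redir describes; bridge name and ordering are
# assumptions. Prints the commands rather than running them, so you can
# review first, then run e.g.:  rebuild_lxd_bridge lxdbr0 | sudo sh
rebuild_lxd_bridge() {
    bridge="${1:-lxdbr0}"
    cat <<EOF
ip link set $bridge down
ip link delete $bridge type bridge
systemctl restart lxd
lxd init --auto
EOF
}

rebuild_lxd_bridge lxdbr0
```

`lxd init --auto` is the non-interactive option redir mentions below; whether existing containers and images survive the rebuild likely depends on your lxd version.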
[02:28] <redir> babbageclunk: were you still reviewing my pr from last year?
[02:29] <babbageclunk> redir: oh sorry - I haven't picked it back up - have you solved the timing problem you were talking about in the meeting?
[02:30] <redir> yep, not really timing, just order of apt installs
[02:30] <redir> and updated the QA steps with a note about fixing a kvm breaking network change
[02:31] <redir> for QA at least
[02:31] <redir> but I need 2 +1s since it is over 500 lines
[02:32] <babbageclunk> redir: ok, taking another look
[02:32] <redir> babbageclunk: much  obliged
[02:44]  * redir goes afk for a bit. I'll check back later
[04:15]  * redir is unclear about the failures on http://juju-ci.vapour.ws/job/github-check-merge-juju/485/
[04:15] <redir> if they look familiar to anyone let me know
[04:22] <babbageclunk> jam: could you take another look at https://github.com/juju/juju/pull/6735?
[04:22] <babbageclunk> jam: Also, happy new year!
[04:23] <babbageclunk> Hmm, I'm confused - why is my irc client completing jam when he's not here?
[04:28] <wallyworld> babbageclunk: ah, i just sent an email, thought you were EOD. when you start tomorrow, there's another PR to look at, sorry
[04:29] <babbageclunk> wallyworld: ok, will take a look soon - still going through redir's one at the moment!
[04:29] <jam> babbageclunk: I'm afk, but I have an IRC forwarder.
[04:29] <jam> babbageclunk: so technically I'm always here. :)
[04:30] <wallyworld> babbageclunk: ty. once we get this landed and in the hands of the GUI guys, we may see some nice progress for next week hopefully
[04:31]  * wallyworld needs to do a coffee run, state of emergency here, bbiab
[04:31] <babbageclunk> jam: Ah - I didn't see you in the list because you're an op!
[04:32] <jam> I am OP :)
[04:32] <jam> so technically I'm off this week, I'm only around because I'm sitting here doing my personal budgeting. but I can try and give it a look sometime
[04:34] <jam> babbageclunk: so what is DefaultCloudName() vs Cloud.Name? It isn't quite clear from the diff why you need both.
[04:34] <jam> is the latter the variable where you store the result of the former?
[04:35] <babbageclunk> jam: yes
[04:36] <jam> babbageclunk: I will fully admit to not doing as full of a review as last time, but LGTM
[04:36] <babbageclunk> jam: good enough for me! ;)
[04:36] <babbageclunk> jam: thanks - sorry to interrupt your holiday!
[07:09] <mup> Bug #1653888 opened: juju-db service using 30GB+ memory <juju-core:New> <https://launchpad.net/bugs/1653888>
[11:00] <voidspace> frobware: macgreagoir: if you have a chance https://github.com/juju/juju/pull/6761
[11:12] <macgreagoir> voidspace: Testing it now.
[11:21] <voidspace> macgreagoir: thanks
[11:44] <macgreagoir> voidspace: LGTM, fwiw
[11:44] <voidspace> macgreagoir: thanks
[11:45] <voidspace> macgreagoir: I'm going to merge then
[11:46] <macgreagoir> voidspace: ianagr :-) but OK by me.
[11:46] <voidspace> macgreagoir: gah, you should be by now
[12:02] <rick_h> macgreagoir: not a 'gr'?
[12:02] <rick_h> oh, reviewer
[12:02] <rick_h> heh
[12:02] <macgreagoir> :-)
[12:07] <perrito666> does anyone know how to nuke the lxd bridge and re-create it?
[12:12] <macgreagoir> perrito666: Does `lxd init ...` give you what you need?
[12:23] <perrito666> macgreagoir: meh, it says I have containers, which I don't
[12:23] <perrito666> macgreagoir: but most likely its what I need
[12:26] <macgreagoir> `brctl delbr` et cetera
[12:27] <voidspace> rick_h: desperate for coffee - will be 2 mins late!
[12:29] <perrito666> macgreagoir: oh I get it, it can't just run init on the networking alone, that is kind of sad :(
[13:45] <perrito666> man, apt-get purge has lost some of its touch
[13:48] <natefinch> perrito666: I have a vsphere question for you
[13:49] <perrito666> natefinch: oh, for me? you shouldn't have bothered
[13:49] <natefinch> perrito666: well, just trying to start off the new year on the right foot ;)
[13:50] <natefinch> perrito666: I am testing my provider Ping method, which is really just calling govmomi.NewClient with the given URL... but I am getting back 400 bad request for some reason
[13:50] <perrito666> natefinch: well shoot, in a moment I'll need to go fetch my wife's birthday cake
[13:51] <natefinch> perrito666: I'll try to be quick
[13:51] <perrito666> you might be doing a bad request??
[13:51] <natefinch> perrito666: also, happy birthday to your wife :)
[13:51] <natefinch> perrito666: is there a specific path I need to give the NewClient function other than https://<someip> ?  I'm running against the vsphere that QA uses, in oil
[13:52]  * perrito666 checks code
[13:53] <perrito666> natefinch: first of all, remember you need to close that because they never time out
[13:53] <natefinch> perrito666: yeah, doing that
[13:53] <natefinch> perrito666: oh, hmm, looks like it might be /sdk
[13:54] <perrito666> look for providers/vsphere/client.go line 79
[13:54] <perrito666> yep
[13:54] <perrito666> I wonder if that is going to answer to you without credentials
[13:55] <perrito666> natefinch: on second thought, 40X is a good indicator of ping
[13:55] <perrito666> it means something is up there and can tell you "no"
[13:55] <natefinch> perrito666: well, almost anything will respond with 400 bad request, though
[13:56] <perrito666> true, I am not sure how accurate you need your ping to be
[13:57] <natefinch> perrito666: as accurate as possible.... with /sdk I get ServerFaultCode: Cannot complete login due to an incorrect user name or password.   which is probably good enough
[14:11] <perrito666> k bbl
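The fix natefinch stumbles onto above reduces to URL handling: vSphere's SOAP API answers under `/sdk`, so probing the bare host gets a generic 400 while `/sdk` returns a vSphere fault, which at least proves the right thing is listening. A small sketch (the endpoint IP is hypothetical):

```shell
# Hypothetical endpoint; the point is only the /sdk suffix. Against the
# bare host you get 400 Bad Request; against /sdk you get a vSphere fault
# like "Cannot complete login due to an incorrect user name or password".
endpoint="https://10.0.0.1"
sdk_url="${endpoint%/}/sdk"   # strip any trailing slash, then append /sdk
echo "$sdk_url"
# curl -ks "$sdk_url"         # uncomment to probe a real endpoint
```

As perrito666 notes, a 40X from `/sdk` is arguably a reasonable ping result: something vSphere-shaped is up and able to say "no".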
[14:36] <frobware> macgreagoir, voidspace: updated https://github.com/juju/juju/pull/6758 with some unit tests. Would appreciate a look so that I can address any issues and try to get a CI run before tonight.
[14:36] <frobware> rick_h: ^^
[14:37]  * frobware bbiab
[14:55] <mup> Bug #1651260 changed: landscape bundle error when deployed via gui (KeyError in config changed hook in haproxy charm) <matrix> <juju:New> <https://launchpad.net/bugs/1651260>
[15:07] <mup> Bug #1651260 opened: landscape bundle error when deployed via gui (KeyError in config changed hook in haproxy charm) <matrix> <juju:New> <https://launchpad.net/bugs/1651260>
[15:10] <mup> Bug #1651260 changed: landscape bundle error when deployed via gui (KeyError in config changed hook in haproxy charm) <matrix> <juju:New> <https://launchpad.net/bugs/1651260>
[15:18] <perrito666> back
[15:32] <macgreagoir> voidspace: If you have a wee sec, please: https://github.com/juju/juju/pull/6762/
[15:35] <voidspace> macgreagoir: looks straightforward to me
[15:36] <macgreagoir> 'Tis :-)
[15:37] <voidspace> macgreagoir: LGTM
[15:38] <macgreagoir> Cheers
[15:38]  * voidspace lunches
[15:58] <natefinch> rick_h: lol, so, I was looking at my old change, and I think I know why it's causing a failure now.... because I stopped throwing away the error that we get from lxd init
[15:58] <rick_h> natefinch: heh, one step forward...
[15:59] <natefinch> rick_h: so I think we were always hitting this error, we just were logging it and ignoring it before.  The easy fix is to go back to ignoring it.  The harder fix is to try to only run lxd init if it's needed (or handle the "you already ran this, dummy!" error)
[16:01] <natefinch> probably #3 is easiest - still always run lxd init, but check for the "you already ran it" error and ignore just that one.  That way if it fails the first time, we actually fail early, rather than try to continue on even though there's no lxd running
[16:02] <frobware> natefinch: Logging an error causes too many false positives, IMO. I've seen bug reports / mail / IRC saying, "lxd is broken, it reports an error, see this error in the logs..."
[16:03] <natefinch> frobware: right, my point is, let's not log the error in this known case, since it's basically like the error when you try to create a file that already exists.
[16:04] <natefinch> frobware: but for other unknown errors, let's actually return them, not just log them and effectively ignore them
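natefinch's option #3 above -- always run `lxd init`, swallow only the already-initialised error, and surface everything else -- might be sketched like this. The `*already*` pattern is an assumption about lxd's error text (the chat itself laments having to string-match); check what your lxd version actually prints:

```shell
# init_lxd runs the given init command and treats an "already ran this"
# style failure as success, so callers fail fast only on real errors.
# The "already" substring match is an assumption about lxd's message.
init_lxd() {
    out=$("$@" 2>&1) && return 0
    case "$out" in
        *already*) echo "lxd already initialised; ignoring: $out"; return 0 ;;
        *)         echo "lxd init failed: $out" >&2;               return 1 ;;
    esac
}

# Real usage would be something like:  init_lxd lxd init --auto
```

This keeps frobware's concern in mind too: the known benign case is not logged as an error, so it can't show up in bug reports as "lxd is broken, see this error in the logs".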
[16:06] <frobware> macgreagoir: ping
[16:06] <macgreagoir> frobware: pong
[16:07] <frobware> macgreagoir: do you have some time to look through https://github.com/juju/juju/pull/6758
[16:07] <macgreagoir> frobware: I have it loaded, just kicking a couple of other things first, sorry.
[16:08] <frobware> macgreagoir: ty
[16:32] <redir> happy wednesday juju-dev
[16:42] <frobware> redir: only 2 to go. :)
[16:47] <macgreagoir> G'day redir
[16:57] <macgreagoir> frobware: Review and comment. No show-stopper.
[16:57] <frobware> macgreagoir: wall clock in tests...
[16:58] <macgreagoir> :-)
[16:58] <macgreagoir> I have one like that and I'd like to see you solve it :-D
[16:59] <frobware> macgreagoir: ah, you found a bug.
[17:05] <frobware> macgreagoir: all the tests pass 0 for a timeout, which means indefinite, and therefore no timer/clock is used.
[17:06] <frobware> macgreagoir: there is one test that checks for timeout and passes 1s.
[17:06] <frobware> macgreagoir: because of this I chose not to use the testing clock.
[17:06] <macgreagoir> frobware: Maybe worth a comment so we don't scoop it up next time there's a testing sprint?
[17:12] <frobware> macgreagoir: done
[17:14] <frobware> voidspace: you about?
[17:14] <voidspace> frobware: yep
[17:14] <macgreagoir> frobware: acked in PR
[17:14] <frobware> voidspace: could you also take a look over https://github.com/juju/juju/pull/6758
[17:14] <voidspace> frobware: yep
[17:15] <voidspace> coffee first though
[17:15] <frobware> macgreagoir: you've gone all recursive on me.
[17:18] <redir> sinzui: yt?
[17:18] <sinzui> redir: yes
[17:19] <redir> happy new year:)
[17:19] <rick_h> natefinch: is it not something where we can check if lxd is configured and fail fast that way? I guess, what counts as failure here? I mean if it's been run you're good and if it's not been run you need to run it.
[17:20] <redir> sinzui: you wrote me a while back about a power8 host. Can I use that for live testing kvm? Are the details in the usual place?
[17:21] <natefinch> rick_h: maybe there's a lxd command we can run that'll tell us if it is initialized?  I definitely don't want to write any heuristics outside of what lxd itself tells us though
[17:21] <sinzui> redir: I don't think I can help. QA has ppc64el guests. I am not sure we can run kvm on them.
[17:22] <redir> sinzui: OK. Good to know, I won't spend time trying
[17:22] <sinzui> redir: well maybe. this can be done because I know a bit about borbein and its vms
[17:22]  * redir backs up
[17:23] <natefinch> tych0: is there an lxd command that we can run that'll tell us if we need to run lxd init?
[17:23] <rick_h> natefinch: I guess the error you're getting *is* telling you that in some sense
[17:23] <sinzui> redir: I can get you on to a Juju QA host and on to a charm testing host. borbein has a special kernel to support libvirt. Juju won't just work because the ubuntu kernel needs to be special
[17:24] <natefinch> rick_h: yeah, just wondering if there's a better way.  Ideally something that doesn't require string matching on the lxd output for a specific error message
[17:26] <sinzui> redir: So borbein is your best chance to verify that juju can drive kvm, but surely you need a cloud that is ppc64el with the right kernel to test that juju can deploy kvm containers to an instance
[17:27]  * sinzui writes email with instructions.
[17:28] <redir> sinzui: yeah that sounds right -- that I need both. I just had a note to ask when it came time to live test on a different arch. I'll just work with the arm and ppc folks so they can test it.
[17:28] <redir> sinzui: the code wants same host/arch to work
[17:29] <sinzui> redir: I understand. QA has been asking for proper ppc64el resources for more than a year
[17:29] <sinzui> redir: arm is a different, but also painful story
[17:30] <redir> sinzui: tyvm
[17:31] <sinzui> redir: We have a powerful arm64 host, but have never gotten libvirt working to build a vmaas. I really want to do this, but qemu/libvirt don't like the arch
[17:37] <frobware> The GIL is gone: https://opensource.googleblog.com/2017/01/grumpy-go-running-python.html
[17:37] <frobware> And C extensions too. But...
[17:43] <perrito666> frobware: lol, yeah, I think without C extensions it's a bit useless though
[17:43] <perrito666> isnt ssl in python a C extension?
[17:43] <frobware> perrito666: it cuts out a LOT OF STUFF
[17:44] <perrito666> frobware: I presume it's a "we just don't want to re-write all this code base in Go"
[17:44] <frobware> perrito666: I think it just shows that if you are running your own stuff on your own infra you can do whatever makes sense _for_ _you_
[17:44] <perrito666> yep
[17:45] <natefinch> ahh, it's a transpiler... interesting
[17:46] <frobware> voidspace: any feedback? I need to EOD soon and want to merge before we scream "RC1 - ship it!"
[17:46] <voidspace> frobware: looks good to me I think
[17:47] <frobware> voidspace: when we rewrite the script in Go (^^) I think a lot of the testing friction will begin to disappear w.r.t. the bridge.py.
[17:47] <voidspace> I like the scriptrunner
[17:47] <voidspace> frobware: ok
[17:47] <voidspace> frobware: I struggle to believe that Go is *really* easier to test ;-)
[17:47] <voidspace> frobware: but possibly
[17:47] <voidspace> the impedance mismatch goes away
[17:48] <frobware> voidspace: I think it's still the fact the script "does it all". Some of the invocation is in python, wrapped by Bash, invoked from Go.
[17:48] <voidspace> yep
[17:53] <frobware> voidspace: I would like to not expose dry-run too. :/
[17:53] <voidspace> frobware: yeah, if possible
[17:54] <frobware> voidspace:  that is purely an implementation detail.
[17:54] <voidspace> frobware: anyway, LGTM
[17:54] <frobware> voidspace: I'm happy to follow-up, but I think it's important to know get a CI test run on this _feature_ branch.
[17:54] <frobware> s/know/now
[17:54] <voidspace> frobware: yep, cool
[17:54] <voidspace> frobware: not sure if I'll get mine in today
[17:55] <voidspace> struggling
[17:55] <frobware> voidspace: want a review or something else?
[17:55] <voidspace> frobware: no, still trying to find the bug
[17:55] <voidspace> frobware: or at least find whereabouts user config are passed into lxd
[17:56] <voidspace> frobware: just getting back to it really, I'll give it a couple of hours or so before EOD
[17:56] <voidspace> be nice to find it
[17:56] <frobware> SetConfig()?
[17:59] <voidspace> frobware: not sure
[17:59] <voidspace> I don't think so - or at least I'd like to see all the layers
[18:13] <perrito666> we should get  a free day every time we change something in the output of status
[20:09]  * perrito666 considers hacking his  desktop to support a 3rd monitor
[20:18] <natefinch> does it not?  My laptop supports integrated screen + 2 external
[20:22] <perrito666> natefinch: the furniture I meant :p
[20:22] <perrito666> the computer card supports like 8 screens
[20:30] <natefinch> perrito666: lol,
[20:47] <natefinch> rick_h: should this fix go to develop?
[20:47] <natefinch> rick_h: for the LXD init thing?
[20:47] <rick_h> natefinch: into the 2.1-dynamic-bridges branch and the 2.1 and develop
[20:47] <natefinch> okie
[20:47] <rick_h> natefinch: actually just the 2.1-dynamic-bridges and develop
[20:47] <rick_h> the 2.1 will get merged in
[20:52] <natefinch> https://github.com/juju/juju/pull/6764 and https://github.com/juju/juju/pull/6765
[20:53] <alexisb> babbageclunk, perrito666 is one of you around
[20:53] <perrito666> alexisb: I am
[20:53] <babbageclunk> alexisb: yup
[20:53] <babbageclunk> alexisb: perrito666 is.
[20:53] <babbageclunk> ;)
[20:53] <alexisb> babbageclunk, rick_h has a request
[20:53] <rick_h> babbageclunk: can you please look gat ^
[20:53] <rick_h> look at that is
[20:53] <rick_h> babbageclunk: and help natefinch get his PR landed
[20:54] <perrito666> babbageclunk: I can do it so the start of your day is not so rough
[20:54] <babbageclunk> rick_h: yup yup
[20:54] <babbageclunk> perrito666: ok, thanks
[20:54] <rick_h> natefinch: can you put a bit more background in that PR description please?
[20:55] <perrito666> natefinch: that is 6764 and 6765 right?
[20:57] <rick_h> perrito666: correcty
[20:57] <perrito666> k
[20:58] <natefinch> QA steps and description added
[20:59] <rick_h> ty natefinch
[21:01] <perrito666> natefinch: interesting "sudo dpkg-reconfigure -p medium lxd" <-- that error by lxd is also a lie
[21:05] <perrito666> natefinch: ship them
[21:08] <natefinch> merging
[21:21] <rick_h> natefinch: ty, balloons ^ once that hits we can start a full test run of the feature branch please
[21:27] <perrito666> since we are in it, could anyone review this? https://github.com/juju/juju/pull/6763
[21:30] <natefinch> perrito666: looking
[21:32] <natefinch> perrito666: why isn't this using model status
[21:32] <natefinch> perrito666: surely controller upgrading should cause the model to become busy until it's finished
[21:32] <perrito666> natefinch: you would think
[21:33] <perrito666> but no, status actually answers even while controller is upgrading
[21:33] <perrito666> I believe this patch will make us discover a whole new world of bugs where juju says it's upgrading when it's not
[21:34] <natefinch> right but, what I mean is, the model-status is going to still say "available" when it's really not
[21:35] <natefinch> i.e. you can't juju deploy something while the controller is upgrading
[21:35] <natefinch> (presumably)
[21:37] <perrito666> natefinch: interesting, the rules governing statuses are a bit heavier than this
[21:39] <natefinch> perrito666: I'm not sure what that means
[21:41] <perrito666> natefinch: there is a logic behind when each status is set
[21:41] <perrito666> and we don't have a status that correctly reflects "upgrading" afaik, nor did we think to add one; also, adding a status might not be backwards-compatible
[21:42] <natefinch> perrito666: model-status is the correct place to put this.  We only recently created it, so there's no backward compatibility problem
[21:42] <natefinch> perrito666: that may be why the powers that be were thinking of just adding a boolean rather than using the status we already have in place.
[21:42] <natefinch> rick_h: ^
[21:43]  * rick_h reads back
[21:43] <perrito666> natefinch: nope, there was not much thought behind it
[21:44] <natefinch> rick_h: basically... we added a new boolean to Model status, but probably should just reuse the relatively new model-status field
[21:44] <rick_h> natefinch: perrito666 +1 to using the field. We don't want to make decisions based on checking several different fields
[21:44] <natefinch> re: https://github.com/juju/juju/pull/6763
[21:44] <rick_h> natefinch: perrito666 adding a new "is-controller-doing-X" isn't ideal
[21:47] <perrito666> mm, I think I agree with both of you
[21:48] <perrito666> rick_h: what do you reckon is the status we should set? I presume I should be adding an "upgrading" message too
[21:49] <alexisb> does model-status only show up in yaml?
[21:51] <rick_h> natefinch: do you have a link to your PR from the sprint?
[21:51] <natefinch> rick_h: I can get it
[21:51] <natefinch> rick_h: https://github.com/juju/juju/pull/6661
[21:52] <perrito666> natefinch: that seems to also work on json right?
[21:52] <natefinch> alexisb: it's shown for juju models and juju status --format=yaml/json
[21:52] <rick_h> perrito666: ^ shows how the model status was updated to be able to be used generically and might help point to where we're thinking.
[21:53] <rick_h> perrito666: right, so the idea is that during an actual upgrade we'd update that as "upgrading" and so it could be "migrating" or "upgrading" or ...
[22:01] <natefinch> rick_h, perrito666: gotta run for dinner.... but I recommend Status: status.Busy and StatusInfo: "controller is upgrading"
[22:02] <perrito666> sounds like a fair start
[22:02] <rick_h> ty natefinch
[22:02] <perrito666> rick_h: ok there is some room for thought there but I believe the rule should be something like: if status != error && upgrading, return Status: status.Busy and StatusInfo: "controller is upgrading"
[22:03] <rick_h> perrito666: k
[22:03] <perrito666> I would not set status for upgrading because we risk having other statuses shadowing that in the future
[22:03] <perrito666> sorry for rubber ducking you btw
[22:03] <rick_h> hmmm, yea thinking
[22:04] <rick_h> so...you're marking it busy though right?
[22:07] <perrito666> yes, but I am sort of hijacking that model.Status call and returning a status that is not the one set, because the fact that an upgrade is in progress supersedes whatever the model thinks it's doing, makes sense?
[22:07] <perrito666> rick_h: ^
[22:07] <rick_h> perrito666: well, not really. I mean can the model be migrating and upgrading?
[22:08] <rick_h> perrito666: I'm concerned that going that route leaves us open to a non-finite state machine of what is going on
[22:08] <perrito666> should not
[22:08] <perrito666> I mean upgrade is a non-concurrent task, nothing else should be happening during an upgrade
[22:08] <rick_h> perrito666: so I'd rather that it was actually set and that other code could rely on that information. e.g. a requested migration call should check that it's available before migrating and it then sets migrating?
[22:09] <rick_h> perrito666: and the same is true of upgrading, a requested upgrade would fail if not available and if it starts it updates status and is tracked with "since..." and such
[22:09] <rick_h> perrito666: like any other long running activity like that?
[22:09] <perrito666> rick_h: if you request anything that makes a change on juju while upgrading, juju will say no
[22:09] <perrito666> not very politely
[22:09] <rick_h> perrito666: right, but by setting a firm status we're allowing juju to get more polite?
[22:10] <perrito666> rick_h: we are, in this case we should set this status for all models before upgrading
[22:10] <rick_h> perrito666: and without tracking the actual change by setting status we don't have real visibility as to when it started/etc like we do with migration? I guess it feels like we're doing this one differently than migration/etc without something I see as really different
[22:10] <perrito666> then set.... mm I am not sure what to set after finishing
[22:10] <rick_h> perrito666: but you can upgrade a model at a time
[22:10] <rick_h> perrito666: only some models may be upgrading
[22:10] <rick_h> perrito666: available?
[22:10] <rick_h> since nothing else can take that while you own it
[22:11] <rick_h> perrito666: I understand where you're coming from, I'm asking for a bigger promise that Juju works
[22:11] <perrito666> rick_h: I am thinking in the not-so-long term, I mean if the model was in some other status and I mark it upgrading I would be losing that status right?
[22:11] <perrito666> also, upgrade cannot happen for a model only, can it?
[22:11] <perrito666> In a regular juju I mean
[22:12] <perrito666> in a regular juju install upgrading upgrades the controller binary and the mongodb for all models in that controller
[22:13] <rick_h> perrito666: yes, because the controller is upgraded on its own, then the model is migrated at the same version it was, and then can be upgraded I thought
[22:13] <rick_h> perrito666: but the whole rule of "the model can't be on a version > than the controller" and migrations and all this comes to play
[22:14] <rick_h> nope
[23:00] <voidspace> rick_h: I'm going to have to rethink my approach to bug #1631254
[23:00] <mup> Bug #1631254: [2.0rc3] lxd containers do not autostart <rteam> <juju:In Progress by mfoord> <https://launchpad.net/bugs/1631254>
[23:00] <voidspace> rick_h: however, given that the way to reproduce it is now to *manually* stop the container
[23:00] <voidspace> rick_h: as the "hard shutdown" failure mode has been fixed in lxd
[23:00] <voidspace> rick_h: I think it can be demoted from critical and need not hold up 2.1.0
[23:00] <voidspace> rick_h: ok for me to change it from critical to high?
[23:01] <voidspace> rick_h: (basically we need to keep the user namespace and special case the ones that don't need it I think - which is the opposite way to how I was doing it)
[23:03] <voidspace> rick_h: I haven't changed it from critical to high myself yet, and I'm now EOD (11pm) - g'night
[23:06] <mup> Bug #1492237 opened: juju state server mongod uses too much disk space <canonical-bootstack> <mongodb> <oil-2.0> <uosci> <juju:Triaged> <juju-core:New> <https://launchpad.net/bugs/1492237>
[23:06] <mup> Bug #1634390 opened: jujud services not starting after reboot when /var is on separate partition  <uosci> <juju:Triaged by rharding> <juju-core:New> <https://launchpad.net/bugs/1634390>