[00:23] <sinzui> hi davecheney, I see a 1.14 branch and series was created shortly after 1.13.3. Will the two fixes be merged into both 1.14 and trunk so that the release can be cut?
[00:25] <zradmin> I'm trying to deploy charms with juju 1.13.3 to a maas environment, and while the first charm allocates resources from maas, the next charm i try to deploy doesn't, even though there are nodes in the ready state
[00:26] <zradmin> has anyone seen this before?
[00:31] <kurt_> zradmin: you may have clock/timing issues with oauth
[00:32] <zradmin> so time between the bootstrap node and the maas region controller is off?
[00:32] <kurt_> zradmin: does the debug-log complain of oauth problems?
[00:33] <kurt_> yes, it could be
[00:33] <kurt_> do both nodes have direct access to the internet?
[00:33] <zradmin> yes
[00:33] <zradmin> they should be set to pool.ntp.org
[00:34] <kurt_> what does debug log say?
[00:35] <zradmin> its still scrolling, i had the bug in 1.12 where the all-machines.log grew exponentially
[00:35] <kurt_> right, don't use 1.12
[00:36] <zradmin> yeah, i just found all the threads on that today it was driving me crazy
[00:36] <kurt_> I got off that as soon as I could.
[00:36] <kurt_> 1.13.3 is what I'm working with
[00:37] <zradmin> yeah i'm on that as well
[00:37] <zradmin> just a second, i deleted the log file and am reattempting to deploy the charm to get fresh data
[00:38] <kurt_> you don't need to delete the log.  I've never worried about that
[00:38] <kurt_> start up another terminal and do "watch debug-log" from your root node
[00:39] <kurt_> I run 3 terminals when I'm deploying
[00:39] <kurt_> 1 for commands, 1 doing a "watch juju status" and 1 doing "watch juju debug-log"
[00:39] <kurt_> that works well for me
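[00:39] <kurt_> roughly this, if you want a sketch (the 2-second interval is just my preference):
[00:39] <kurt_> # terminal 1: stays free for deploy/add-relation/etc. commands
[00:39] <kurt_> # terminal 2: watch -n 2 juju status
[00:39] <kurt_> # terminal 3: watch juju debug-log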
[00:39] <zradmin> cool
[00:40] <zradmin> this is a sample of the garbage im getting in the log
[00:40] <kurt_> I sometimes have a 4th window to juju ssh to whatever node I'm working with
[00:40] <zradmin> ceph3:2013-09-12 00:38:09 ERROR juju runner.go:211 worker: exited "api": websocket.Dial wss://juju.unity:17070/: dial tcp 10.10.33.1:17070: connection refused
[00:40] <zradmin> ceph3:2013-09-12 00:38:09 INFO juju runner.go:245 worker: restarting "api" in 3s
[00:40] <zradmin> ceph3:2013-09-12 00:38:12 INFO juju runner.go:253 worker: start "api"
[00:40] <zradmin> ceph3:2013-09-12 00:38:12 INFO juju apiclient.go:106 state/api: dialing "wss://juju.unity:17070/"
[00:40] <zradmin> ceph3:2013-09-12 00:38:12 ERROR juju apiclient.go:111 state/api: websocket.Dial wss://juju.unity:17070/: dial tcp 10.10.33.1:17070: connection refused
[00:40] <zradmin> ceph3:2013-09-12 00:38:12 ERROR juju runner.go:211 worker: exited "api": websocket.Dial wss://juju.unity:17070/: dial tcp 10.10.33.1:17070: connection refused
[00:40] <zradmin> ceph3:2013-09-12 00:38:12 INFO juju runner.go:245 worker: restarting "api" in 3s
[00:40] <zradmin> ceph3:2013-09-12 00:38:06 INFO juju runner.go:253 worker: start "api"
[00:40] <zradmin> ceph3:2013-09-12 00:38:06 INFO juju apiclient.go:106 state/api: dialing "wss://juju.unity:17070/"
[00:40] <zradmin> ceph3:2013-09-12 00:38:06 ERROR juju apiclient.go:111 state/api: websocket.Dial wss://juju.unity:17070/: dial tcp 10.10.33.1:17070: connection refused^CConnection to juju.unity closed.
[00:40] <kurt_> ARRGGGH
[00:40] <kurt_> don't paste in here...
[00:40] <zradmin> whoops
[00:40] <zradmin> sorry about that
[00:40] <kurt_> use pastebin
[00:40] <zradmin> ok
[00:40] <kurt_> pastebin.ubuntu.com
[00:41] <zradmin> http://pastebin.ubuntu.com/6094948/
[00:41] <zradmin> my apologies to everyone else in the room
[00:41] <kurt_> you are having connection issues
[00:42] <zradmin> the ceph service in the log deployed correctly and seems to be checking in to juju
[00:42] <zradmin> but that was deployed under 1.12 and then i upgraded the tools
[00:43] <zradmin> should i just destroy it again and start clean from 1.13?
[00:43] <kurt_> I think you may need to destroy-env - but I can't answer for certain
[00:43] <zradmin> ok, I'll try that first
[00:43] <kurt_> also - from your node, make sure you can ping out to some internet hosts
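[00:43] <kurt_> something like this, from the node (hostname taken from your paste, substitute your own):
[00:43] <kurt_> # ping -c 3 archive.ubuntu.com      <- basic internet reachability
[00:43] <kurt_> # nc -zv juju.unity 17070           <- the state-server port the worker can't dial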
[00:44] <zradmin> why is 1.12 listed under juju/stable? I've had nothing but issues since testing with it
[00:44] <zradmin> the python client never gave me too many issues
[00:45] <kurt_> I'm not with canonical, can't help you with that
[00:45] <kurt_> sorry
[00:45] <sarnold> zradmin: the juju team has chosen to do evens for stable, odd for unstable, and as I understand they just haven't done a 1.14 release yet..
[00:45] <zradmin> ah yeah, I meant to ask "do you know why"
[00:45] <kurt_> sarnold: isn't 1.14 for saucy?
[00:45] <sarnold> .. never mind the results of annoying bugs in the stable series :) but the intention was for 1.13 to be less stable than 1.12 because it was under more active development
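The even/odd convention sarnold describes can be expressed as a tiny sketch (the version strings below are just examples):

```python
def is_stable_series(version: str) -> bool:
    """juju's numbering convention: even minor => stable series,
    odd minor => development series."""
    minor = int(version.split(".")[1])
    return minor % 2 == 0

# 1.12 / 1.14 are stable series; 1.13 is a development series
print(is_stable_series("1.12.0"), is_stable_series("1.13.3"))
```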
[00:46] <zradmin> ah ok
[00:46] <kurt_> 1.12 is dead though I think I heard
[00:46] <sarnold> yeah probably best to think that :)
[00:46] <sarnold> 1.13.3 is still most recent in saucy, I hope they can fix that up before too much longer
[00:47] <kurt_> its been working pretty well for me so far
[00:47] <zradmin> cool, im hoping i can get my test openstack environment up and running soon
[00:48] <zradmin> ok now when trying to destroy the environment im getting a 409 conflict message
[00:49] <kurt_> did you juju resolve <service> before trying to do so?
[00:49] <kurt_> ah…wait
[00:49] <kurt_> you are destroying your environment completely
[00:54] <zradmin> yeah im destroying each service individually right now and then seeing if it will let me destroy the environment
[00:56] <zradmin> its not
[00:57] <zradmin> getting the mass 409 error still
[00:57] <zradmin> ok after removing all the nodes from maas it let me destroy the environment
[00:59] <zradmin> ok well this is going to take a bit while the environment bootstraps itself again, but thank you kurt_ and sarnold for the assistance!
[00:59] <zradmin> (and the lesson in pastebin)
[01:00] <kurt_> zradmin: I figured you may need to do that.  you may be having some strange connectivity issues
[01:01] <sarnold> good luck zradmin :)
[01:02] <kurt_> sarnold: what does "agent-state-info: 'hook failed: "relation-changed"' mean?
[01:02] <kurt_> getting that from nova-cloud-controller after having deployed some other services
[01:03] <sarnold> kurt_: eek, no idea, sorry
[01:03] <kurt_> sarnold: ok, thnx anyways
[01:04] <zradmin> i had an issue with that under a .7, but got past it. what service are you relating to nova-ccc when it gives you that message?
[01:06] <kurt_> zradmin: it happened sometime after deploying nova-compute and adding the relations
[01:08] <zradmin> kurt_: was it connected to keystone/rabbitmq etc. already? Also are you following this guide: https://wiki.ubuntu.com/ServerTeam/OpenStackHA
[01:08] <kurt_> zradmin: I'm loosely following that guide
[01:09] <zradmin> kurt_: if you're following the HA guide, my biggest problem stemmed from ceph not actually setting up the osd's properly, but for some reason it let me stand up the rest of the services with no noticeable errors. The root of that was essentially that the services never started on the VIPs, so Nova-CCC was the first service to report a problem for me
[01:10] <kurt_> yes, that's a common problem a few of us have run in to.  its about topology too
[01:11] <kurt_> I'm actually trying to consolidate services down to as few nodes as possible
[01:11] <kurt_> and I'm definitely not doing HA, and I'm 100% on VMs in Vmware Fusion
[01:12] <zradmin> yeah i was running alot of the core api services on vms as well
[01:13] <kurt_> I've gotten very close, but not having the topology right has bitten me more than once
[01:13] <kurt_> I've gotten everything done minus the ability to spin up VMs
[01:13] <kurt_> in openstack I mean
[01:13] <kurt_> I wasn't using ceph though...
[01:16] <zradmin> i see. I was able to upload images on my last attempt and start creating instances, but they would never finish deploying - quantum was my issue i think.
[03:52] <melmoth> with juju-core i cannot bootstrap on a maas install when i'm behind a proxy. It used to work (after some change in MAAS) with pyjuju
[03:52] <melmoth> cloud-init error: https://pastebin.canonical.com/97399/
[03:52] <melmoth> any idea what it could be and if there s a way to fix it ?
[03:56] <thumper> melmoth: that log is from pyjuju not juju-core
[03:58] <melmoth> hmmm, so, i ended up with juju py on my bootstrap node...
[03:58] <melmoth> hu, actually, nope
[03:58] <melmoth> but there s nothing like juju installed on the bootstrap node
[03:59] <thumper> I can tell by the log format, and that it mentions py files not go files
[04:00] <melmoth> well, it mention 2013-09-12 03:19:23 ERROR juju supercommand.go:235 command failed: no reachable servers
[04:01] <melmoth> but still, no version of juju seems to be installed, and i did not spot any error (like an apt-get install failing) before that one
[04:01] <melmoth> ahhh
[04:01] <melmoth> Sep 12 03:08:00 bootstrap [CLOUDINIT] cc_apt_update_upgrade.py[WARNING]: Source Error: ppa:juju/stable:add-apt-repository failed
[04:01] <thumper> ah so it did
[04:02] <thumper> ah, it is the cloud init python failure
[04:02]  * thumper sighs
[04:03] <melmoth> it was not able to install the ppa for juju-core, most probably because the gpg key stuff failed behind a proxy (i had to change that in maas, it used to work with pyjuju)
[04:05] <melmoth> ahhh, i think i know, my previous change only added the ppa:juju/pkgs and here i think it's trying ppa:juju/devel
[04:51] <melmoth> where can i find the list of commands that cloud-init feeds to the bootstrap node ?
[04:59] <thumper> melmoth: on the machine or in the code?
[04:59] <melmoth> in the code, so i can change it.
[04:59] <thumper> melmoth: mostly in juju-core/environs/cloudinit/cloudinit.go
[04:59] <melmoth> thanks
[05:00] <melmoth> is it like python, compiled on the fly so i can change it without repackaging the whole stuff ?
[05:00] <thumper> no
[05:00] <thumper> go is a compiled language
[05:00] <melmoth> grumble
[05:00] <thumper> and it creates a statically linked executable
[05:01] <thumper> I know that there is effort around making sure that juju works in private clouds
[05:01] <thumper> with firewalls etc
[05:01] <thumper> so please document your issues to the juju mailing list
[05:45] <bradm> any charmers about who feel like reviewing my squid-reverseproxy fixes?  they're pretty minor, but actually let the charm work on juju > 0.7
[06:10] <davecheney> bradm: this one ?
[06:10] <davecheney> https://code.launchpad.net/~charmers/charms/precise/squid-reverseproxy/trunk/+merge/185202
[06:11] <davecheney> diff, he is empty
[06:14] <bradm> davecheney: huh
[06:15] <bradm> davecheney: I did the merge proposal against http://bazaar.launchpad.net/~brad-marshall/charms/precise/squid-reverseproxy/http-port-config/revision/42, I thought
[06:15] <bradm> davecheney: I must have screwed it up somehow
[06:15]  * bradm retries.
[06:17] <bradm> davecheney: https://code.launchpad.net/~brad-marshall/charms/precise/squid-reverseproxy/http-port-config/+merge/185204
[06:19] <bradm> davecheney: looks better?
[06:20]  * davecheney looks
[06:52] <bradm> davecheney: let me know if there's a problem with that merge
[06:53] <davecheney> bradm: change looks good
[06:53] <davecheney> i'll have to wait for marcoceppi
[06:53] <davecheney> i'm just a baby charmer
[06:53] <davecheney> i still have my training wheels attached
[06:53] <bradm> davecheney: cool
[06:53] <bradm> davecheney: I'm redoing python-moinmoin charm in python too, using charmhelpers
[06:53] <davecheney> noice
[06:55] <bradm> davecheney: https://code.launchpad.net/~brad-marshall/charms/precise/squid-reverseproxy/fixed-ports-path/+merge/184023 is still open too, if you can get someone to merge it that'd be great :)
[06:57] <davecheney> bradm: looking
[06:57] <davecheney> small changes are good
[06:57] <davecheney> lets do more of those
[06:57] <davecheney> ok, same deal
[06:57] <davecheney> need marcoceppi to show me the ropes
[06:57] <bradm> davecheney: I figure small, bite sized chunk changes make it easier on everyone
[06:58] <bradm> davecheney: since its obvious what they're doing, and I'm new at this too :)
[07:35] <gnuoy> I have a charm which populates data into a database when the db-relation-{joined,changed} hooks are fired. The process of loading the data can take > 30mins. The hook does not return until the load completes however juju status reports that the hook has successfully completed well before it actually has. Currently the charm doesn't log anything while the load is running, if that’s relevant. Is this the expected behaviour?
[07:37] <wellsb> I'm looking into implementing multiplayer functionality into an Ubuntu Touch game.  What charms should I be looking at?
[07:55] <mgz> gnuoy: I wondered whether a timeout might be involved, but as far as I can see, it's just Cmd.Wait when running hooks and no other logic
[07:59] <mgz> wellsb: what kind of thing are you after? I don't think there are any charms for game servers, unless nyancat counts. Looking at any charm which uses expose should give you some ideas though.
[08:01] <wellsb> mgz: I'm thinking about something like the channels api provides for google app engine.  Perhaps I can use node.js in tandem w/ haproxy to create a socket server?  Then my clients can connect to that?  I really don't have much experience in this area
[08:04] <wellsb> I guess without a pomelo or maple charm, this isn't really possible
[08:12] <mgz> wellsb: personally, I'd have a charm specific to your game server, rather than looking for a generic game-server charm that you then customise
[08:13] <mgz> so, you'd write a charm that installs and uses nodejs/pomelo, rather than trying to have a generic pomelo charm with enough configurability to work with any game
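[08:13] <mgz> a minimal metadata.yaml for that kind of charm would look something like this (names are illustrative, not an existing charm):
[08:13] <mgz> # name: mygame-server
[08:13] <mgz> # summary: multiplayer backend for mygame, built on node.js/pomelo
[08:13] <mgz> # provides:
[08:13] <mgz> #   website:
[08:13] <mgz> #     interface: http
[08:13] <mgz> then the install hook pulls in nodejs/pomelo, and you expose the service so clients can connect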
[08:18] <stub> gnuoy: This is Bug #1200267, or at least a facet of it.
[08:18] <_mup_> Bug #1200267: Expose when stable state is reached <canonical-webops> <papercut> <juju-core:Triaged> <https://launchpad.net/bugs/1200267>
[08:19] <gnuoy> stub, thats the badger by the looks of it
[08:19] <stub> gnuoy: juju status just tells you that the -joined hook has successfully run, which it probably has (given it probably needed to wait until the db's -joined hook had run and databases exist etc.)
[08:21] <gnuoy> stub, thanks
[09:41] <yolanda> hi, i'm trying to add some nagios functionality to a gerrit charm, and i need some advice. I see other charms like memcached, postgres... that are using nagios plugins for it, but there isn't a nagios plugin for gerrit, what should be the best way to proceed?
[09:43] <lifeless> yolanda: you can monitor gerrits basic availability using the http plugin
[09:44] <mgz> that sounds like a good starting point at least
[09:45] <yolanda> lifeless, that for http, and if i want to check the ssh port maybe i use the check_tcp one?
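[09:46] <lifeless> yolanda: roughly, yes - something along these lines (ports are gerrit's defaults, adjust for your deploy):
[09:46] <lifeless> # check_http -H <gerrit-host> -p 8080     <- web UI availability
[09:46] <lifeless> # check_tcp -H <gerrit-host> -p 29418     <- gerrit's ssh daemon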
[10:55] <marcoceppi> davecheney: you around?
[12:29] <fwereade_> evilnickveitch, ping
[12:31] <evilnickveitch> fwereade_, hi
[12:34] <fwereade_> evilnickveitch, I wanted to check in quickly about the docs I proposed to see whether they were seeming sane?
[12:34] <evilnickveitch> fwereade_, it looks good so far, I haven't finished them all yet :)
[12:34] <evilnickveitch> you did quite a bit of work there
[12:35] <fwereade_> evilnickveitch, cool, I feel like I should do more really
[12:35] <mgz> there are a few XXXX type bits
[12:35] <mgz> but really the current needs landing I think
[12:35] <fwereade_> mgz, fuck, yeah, I just remembered I didn't do sample charm metadata and config files
[12:35] <fwereade_> mgz, were there more you spotted?
[12:35] <mgz> just the examples I think
[12:37] <fwereade_> mgz, thanks, well spotted
[12:37] <fwereade_> evilnickveitch, but also to say that there's an effort underway to get all useful-for-developers docs collected in one place
[12:38] <evilnickveitch> fwereade_, yeah, i know that, we are working out how that can be done
[12:38] <fwereade_> evilnickveitch, and that initial indications suggest that something like restructured text docs in juju-core itself may be the most suitable source format
[12:38] <fwereade_> evilnickveitch, but, yeah, the important thing is that you're aware
[12:39] <fwereade_> evilnickveitch, anyway I tried to cover all the stuff I could think of that's relevant for charm authors
[12:40] <fwereade_> evilnickveitch, the major holes are the subordinates page (which I think I recognise as basically the original spec document) and the implicit relations page (which I couldn't really make head or tail of)
[12:40] <fwereade_> evilnickveitch, but I didn't touch those for fear of never finishing
[12:41] <evilnickveitch> heh, I think m_3 already went over subordinates, but once the dust settles on all the new bits we should reappraise it
[12:46] <fwereade_> evilnickveitch, ah, cool, I may be talking about the docs as of a few versions ago
[13:52] <wedgwood> stub: I'm also curious if run should be in its own module
[13:53] <wedgwood> which sort of goes along with your API stability question
[13:54] <stub> wedgwood: yeah, it does look a little lonely
[13:56] <stub> it should go in with the fixture - nothing else about it is charm specific so its only purpose in charm-helpers is to support the fixture.
[13:57] <wedgwood> stub: ok, so while I'm doing a proper review...
[13:58] <wedgwood> in keeping with the 1.0 goals, I think both modules can be combined and they need more docs. An example in the module docstring would be excellent.
[14:04] <wedgwood> stub: ah, and you'll also need to handle python-fixtures installation.
[14:06] <stub> wedgwood: I need to declare the dependency if it is in contrib? Or is this because you want it moved to core?
[14:07] <wedgwood> stub: I think that there will be things in core that handle their own dependency installation. like the fetch and archive modules. I want to keep the actual dependencies (as in setup.py) down
[14:09] <wedgwood> stub: see charmhelpers.fetch.bzrurl
[14:09] <stub> I don't think there is any sane way I can help with the python-fixtures dependency apart from documenting it.
[14:11] <wedgwood> stub: ^^ and also, I don't mean for it to be in charmhelpers.core.testing, just at charmhelpers.testing
[14:12] <stub> oh, yeah. that is better.
[14:19] <wedgwood> stub: If I understand the use well enough, I *think* the API is solid. Adding additional kwargs to handle variations on placement shouldn't break anything.
[14:23] <AskUbuntu> juju - how to set environment variable before running script inside hooks/install | http://askubuntu.com/q/344687
[14:53] <stub> wedgwood: Ta. I'll do those changes tomorrow.
[14:54] <wedgwood> stub: cool. don't know if you noticed that I commented on the MP. thanks man and have a good night.
[14:55] <stub> wedgwood: yes, just saw the notification come through.
[14:56] <stub> o/
[15:47] <ahasenack> does anybody know if relation-ids can return relation ids in a broken state?
[15:47] <ahasenack> or are all relation ids that are returned guaranteed to be in a working state?
[15:47] <marcoceppi> ahasenack: there was talk on the list about this.
[15:48] <ahasenack> marcoceppi: I was wondering if I could rely on relation-ids to know if a relation is established or not
[15:48] <marcoceppi> ahasenack: I'm not 100% sure, looking in the archives
[16:11] <avoine> do you guys have any idea how I could end up with this error on the agent of a lxc machine:
[16:11] <avoine> ERROR juju machine.go:286 running machine 1 agent on inappropriate instance: machine-0:26b172cc-.....
[16:12] <avoine> where machine-0:26b172cc is the Nonce that I got
[16:15] <marcoceppi> avoine: how can we reproduce that error?
[16:18] <avoine> I guess you should have the error when deploying using lxc
[16:19] <ahasenack> marcoceppi: I have a service that will only start (initscript-wise) after a db relation is joined
[16:19] <ahasenack> marcoceppi: was wondering what's the best way to track that
[16:19] <ahasenack> marcoceppi: touch a file at the end of db-relation-changed for example?
[16:19] <ahasenack> marcoceppi: the problem is actually in config-changed, it tries to start the service at the end. But the hook execution order at deploy time is
[16:20] <ahasenack> marcoceppi: install hook and then config-changed hook
[16:20] <ahasenack> so it's that run of config-changed where the start will fail, because it's not related to the db yet
[16:21] <avoine> ahasenack: you can list relations and check if there is a db one
[16:21] <marcoceppi> ahasenack: that's how I do it, touch files, etc
[16:21] <ahasenack> avoine: the question then becomes, will relation-ids only return established relations, or does it include broken ones in its output? Relations with errors
[16:21] <marcoceppi> ahasenack: I touch a file, then run config-changed at the end of every hook, which will run hooks/start
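The touch-file gate marcoceppi describes can be sketched like this (paths and messages are illustrative; a real charm would exec its start hook rather than echo):

```shell
# Toy model of the pattern: db-relation-changed touches a flag file,
# and config-changed only starts the service once the flag exists.
flag="${TMPDIR:-/tmp}/db-related.flag"

maybe_start() {
    if [ -f "$flag" ]; then
        echo "starting service"
    else
        echo "db relation not ready; deferring start"
    fi
}

rm -f "$flag"
first=$(maybe_start)    # config-changed runs before any db relation exists
touch "$flag"           # ...what db-relation-changed would do
second=$(maybe_start)   # re-running config-changed now starts the service
rm -f "$flag"
```

At deploy time the install and config-changed hooks fire before any relation hook, so the first call defers; after db-relation-changed touches the flag, the next config-changed run starts the service.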
[16:21] <ahasenack> marcoceppi: ok
[16:27] <avoine> relation_get_all() in the haproxy charm loop over relations and fetch relation information for each ones
[16:27] <avoine> then use that to configure haproxy.cfg
[16:27] <avoine> marcoceppi: I'll dig up more and come back with a bug report if I found something
[16:28] <marcoceppi> avoine: cool, I've not seen that error, but if you run everything with like --debug and -v you should get plenty of information
[16:28] <avoine> ok
[16:28] <avoine> thanks
[18:40] <kurt_> marcoceppi:  can you tell me how multihomed AND statically assigned IP address systems are achieved with maas and juju (ie. for openstack)?
[18:40] <kurt_> I was considering putting this out there to ask ubuntu, but thought maybe someone here could answer this
[18:45] <jcastro> kurt_: did you ever sort your quantum thing?
[18:45] <kurt_> jcastro: not yet.  
[18:47] <kurt_> jcastro: did you get everything working yesterday?
[18:49] <jcastro> no, it's on my todo this weekend
[18:49] <kurt_> jcastro: do you have any ideas on my question?  should I put it out there to ask ubuntu?  I'll bet jamespage could answer it easily.
[18:50] <jcastro> yeah
[18:50] <jcastro> marcoceppi: 33 unanswered `juju` questions
[18:50] <jcastro> oh hey sinzui
[18:51] <jcastro> we should put unanswered questions from askubuntu tagged with "juju" in the review queue as well
[18:51] <jcastro> just like a link to: http://askubuntu.com/questions/tagged/juju?sort=unanswered&pageSize=50
[18:51] <marcoceppi> jcastro: i've got the review queue bearing down on me. Questions will have to wait
[18:52] <sinzui> jcastro, ack, jcsackett , can you report a bug about that so that we can include it in your efforts
[18:52] <jcastro> marcoceppi: I was mentioning you as sort of "just nod and validate my idea!"
[18:52] <jcastro> sinzui: rock, on ... lp:charmworld?
[18:52]  * marcoceppi nods to jcastro
[18:52] <sinzui> yep
[19:01] <kurt_> jcastro: was the problem you saw yesterday in adding a relationship between nova-compute and nova-cloud-controller?  I keep seeing "        agent-state-info: 'hook failed: "relation-changed"'"
[19:02] <kurt_> both can be deployed without issues, but as soon as I try to join them, nova-cloud-controller errors out
[19:03] <kurt_> http://pastebin.ubuntu.com/6098340/
[19:03] <lifeless> kurt_: there is a debug thing
[19:03] <lifeless> kurt_: I think you'll need to do that to see whats failing
[19:03] <jcastro> I'm not even getting past install hooks on some of them, but remember I'm on the local provider, there's a bunch of issues there left to resolve
[19:03] <jcastro> this weekend I'm going to try to fire up the bundle on HP
[19:04] <kurt_> lifeless: you are talking about the debug hooks, right?
[19:04] <kurt_> I was considering trying that next
[19:04] <lifeless> kurt_: no, the watch-all-the-logs and related drop-into-a-shell-when-a-hook-fires thing.
[19:04] <lifeless> jcastro will know what I'm blathering about
[19:05] <jcastro> juju debug-logs
[19:05] <kurt_> yes, I do that
[19:05] <kurt_> output is above in pastebin :)
[19:06] <jcastro> kurt_: we're in your neck of the woods next month, I am wondering if we should just get together for beers with your stuff
[19:06] <jcastro> and get you sorted for real
[19:06] <kurt_> ah yeah sure :)
[19:06] <kurt_> where and when are you talking?
[19:08] <kurt_> lifeless: I was referring to debug-hooks - I am wondering how useful that will be here
[19:08] <jcastro> yikes, debug-hooks, not -logs
[19:08] <jcastro> sorry, long day!
[19:09] <kurt_> ah…ok, that makes more sense :D
[19:09] <jcastro> kurt_: week of 21 October, though hopefully it won't be this same issue, heh
[19:11] <jcsackett> sinzui: there's a card on our kanban for askubuntu now.
[19:12] <sinzui> thank you jcsackett
[19:34] <zradmin> is there a way in 1.13 to destroy subordinate services yet?
[19:36] <marcoceppi> zradmin: yeah, you've been able to destroy subs for some time
[19:36] <marcoceppi> zradmin: just remove the relation
[19:37] <zradmin> marcoceppi: odd, i removed the relation and the subordinate service and the main service have been stuck in a dying state for hours now
[19:38] <marcoceppi> zradmin: can I see the juju status output?
[19:39] <zradmin> marcoceppi: here it is http://pastebin.ubuntu.com/6098492/
[19:40] <marcoceppi> zradmin: it says the agents are stopped
[19:41] <marcoceppi> zradmin: anyways, run `juju resolved mysql-hacluster/0; juju resolved mysql-hacluster/1`
[19:41] <marcoceppi> zradmin: that should finish the removal of the subs, and then the final cleanup of the juju status
[19:41] <marcoceppi> zradmin: whenever a unit (or sub) is in an error state all future events for that unit are queued and event processing is stopped, even on a destroy-service or removal of a sub
[19:41] <marcoceppi> zradmin: you need to mark the error as resolved in order for juju to process the next event
[19:42] <marcoceppi> zradmin: that's why you see it in life: dying but it's not dead, because it's stuck with an error
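[19:42] <marcoceppi> zradmin: so the recipe for a stuck-dying sub is basically (unit names from your paste, substitute your own):
[19:42] <marcoceppi> # juju resolved mysql-hacluster/0
[19:42] <marcoceppi> # juju resolved mysql-hacluster/1
[19:42] <marcoceppi> once the queue drains, the units finish dying and drop out of status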
[19:42] <lifeless> zombie!
[19:43] <marcoceppi> I would yell at evilnickveitch because this isn't in the docs yet, but he just quit
[19:44] <zradmin> marcoceppi: ah ok, that makes sense - still relearning all the new changes in the go rewrite. in .7 it just seemed to do everything instantly
[19:45] <zradmin> marcoceppi: that worked btw, so tyvm!
[19:45] <marcoceppi> zradmin: yeah, that was a bug (technically) that has been fixed in the rewrite
[19:45] <marcoceppi> zradmin: you're welcome!
[20:11] <kurt_> Do debug-hooks work in 1.13.3?  It appears I need to manually set a bunch of environment variables.  Maybe its not intended to work with the add-relations hooks?
[20:22] <zradmin> trying to deploy an haproxy for mysql im getting an error now where corosync isn't starting because its missing some principal? i've got the VIP etc set in my config file so i don't know what is missing to finish starting the service properly. here's the debug-log section thats relevant http://pastebin.ubuntu.com/6098666/
[21:46] <marcoceppi> zradmin: haproxy for mysql, I don't think those two play with each other
[21:49] <AskUbuntu> Juju debug-hooks for add-relation? | http://askubuntu.com/q/344862
[21:52] <zradmin> marcoceppi: its worked in the past, but it configures in active/passive mode (its also whats in the public documentation im following on https://wiki.ubuntu.com/ServerTeam/OpenStackHA)
[21:53] <zradmin> marcoceppi: i got a little farther with it, apparently maas and juju now deploy the nics as bridges (for lxc support maybe?) so i had to adjust the config for that but the VIP didn't come up