[00:21] <miken> Hrm... I'm seeing "error trying to stop watcher: connection is shut down" consistently when trying to upgrade a charm (rabbitmq). That is, it happened 2 out of 2 times on the initial upgrade-charm, and resolved itself 2 out of 2 on a resolved --retry. Does anyone see anything in this paste that could indicate an issue with the charm? http://paste.ubuntu.com/11380563/ (or a way I could avoid it)
[01:25] <lazyPower> miken: that's odd, and doesn't indicate something is awry with the charm that I can see. it looks like something in the env tanked briefly and resolved itself on the next go
[01:26] <miken> lazyPower: yeah, that's what I thought too initially, which is why I was surprised when I re-bootstrapped to test again and had exactly the same behaviour the second time (ie. upgrade-charm failed initially with that error, then resolved --retry resolved without issue)
[01:26] <lazyPower> hmm...
[01:26] <lazyPower> config-get is what failed though
[01:26] <lazyPower> or rather, returned > 1
[01:27] <lazyPower> can you attach to the unit after a deploy, then run the upgrade-charm sequence, and call config-get and examine the output?
[01:27] <lazyPower> debug-hooks in a consistent failure case like that should get us enough of a reproducible env to see what's happening
[01:28] <miken> Sure - let me redeploy. My guess is that I'll see a similar api connection error.
[01:28]  * miken does that.
[01:28] <lazyPower> ta miken
[01:47] <thumper> miken: anything you see with something like "error trying to stop watcher" is a juju bug, not a charm bug AFAICT
[01:47] <thumper> miken: which version of juju?
[01:51] <miken> 1.23.2-trusty-amd64
[01:52] <miken> Yes, as above, I did assume it was a juju bug, but given that it reproduced itself twice only for that charm, I didn't know if there was something specific to the charm triggering it. I'm trying for a third time now, with debug-hooks.
[01:53] <miken> Right, I should have said s/issue with the charm/something in the charm that might trigger/
[02:04] <miken> thumper, lazyPower: Didn't trigger the error on the third redeploy with debug-hooks. Oh well :/ (if it was important, I'd redeploy again and try for a 4th time without debug-hooks, but otherwise I'll just move on)
[02:04] <lazyPower> miken: without being able to capture the env and debug whats happening - i think you're fine to move on for now. but if it crops up, by all means please capture logs and file a bug
[02:05] <thumper> agreed
[02:05] <lazyPower> miken: appreciate the time spent attempting to reproduce for us. *hattip*
[02:24] <redelmann> hi there, someone online for a quick help?
[02:25] <lazyPower> redelmann: we can try, whats the trouble?
[02:25] <redelmann> lazyPower, hi!
[02:25] <redelmann> lazyPower, i recently installed docker inside a juju unit
[02:26] <redelmann> lazyPower, docker creates a docker0 bridge network
[02:26] <redelmann> lazyPower, now all relations to that unit get the private-address of the docker0 bridge
[02:26] <redelmann> lazyPower, this happens in MAAS but not in AWS
[02:27] <redelmann> lazyPower, is it a known bug? or something i'm doing wrong?
[02:27] <lazyPower> thumper: hey have you seen this before? MAAS unit-agent is picking up on a bridge interface and not the host device
[02:27] <lazyPower> redelmann: this is most definitely a networking bug in the juju unit agent if it's picking up the wrong interface to provide as a private address
[02:27] <thumper> lazyPower: oh... yeah...
[02:27] <thumper> rings a bell
[02:27] <lazyPower> ok, so if you've heard of it chances are we have a bug somewhere
[02:27] <lazyPower> let me fish this up and see if there's anything listed as a work around until it gets patched redelmann
[02:28] <thumper> the network selection logic is a bit fubared on maas right now
[02:28] <lazyPower> 1 moment
[02:28] <thumper> I know it is being worked on
[02:28] <thumper> but I don't know the status
[02:28] <thumper> one horrible thing you could do...
[02:28] <redelmann> lazyPower, i search a lot, but i couldn't find anything
[02:28] <thumper> is to name it alphabetically after 'eth0'
[02:28] <thumper> that *might* work
[02:29] <lazyPower> thumper: seriously?
[02:29]  * thumper shrugs
[02:29] <redelmann> thumper, i tried disabling the docker0 bridge, but now i don't get any private-address in relations :P
[02:29] <thumper> I didn't write any of that
[02:29] <lazyPower> i'm not blaming you, i swear
[02:29] <lazyPower> not this time anyway
[02:29] <thumper> IIRC it choses the first network adapter
[02:29] <thumper> chooses
[02:30] <thumper> i may be wrong
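(For context: thumper's "first network adapter" guess can be illustrated with a tiny sketch. This is a hypothetical illustration of why alphabetical selection would prefer a docker bridge, not juju's actual source — "docker0" simply sorts before "eth0".)

```python
# Hypothetical sketch (NOT juju's real selection code): if the unit agent
# picks the first non-loopback interface in alphabetical order, a "docker0"
# bridge sorts before "eth0" and becomes the private-address source.
def pick_private_interface(interfaces):
    """Return the first non-loopback interface name, sorted alphabetically."""
    candidates = sorted(name for name in interfaces if name != "lo")
    return candidates[0] if candidates else None

print(pick_private_interface(["eth0", "lo", "docker0"]))   # docker0 wins
print(pick_private_interface(["eth0", "lo", "zdocker0"]))  # renaming the bridge past "eth0" sidesteps it
```

This also explains thumper's "name it alphabetically after 'eth0'" workaround a few lines up.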
[02:30] <lazyPower> redelmann: so you disabled the docker bridge, it no longer shows up in ifconfig -a, but now it's no longer sending an address on the wire for private-address?
[02:30] <thumper> wallyworld: do you know the status of the maas networking stuff?
[02:30] <redelmann> thumper, lazyPower: networks are in alphabetical order (eth0: no ip, juju-br0: private ip, etc.)
[02:30] <thumper> ah...
[02:30] <thumper> phoey
[02:30] <wallyworld> thumper: not in any detail
[02:30] <redelmann> lazyPower, exactly.
[02:30] <lazyPower> redelmann: ok, i'm not finding a bug searching launchpad either. Can you file a bug detailing this issue?
[02:31] <lazyPower> we need to get it triaged and on the docket for someone to look at it
[02:31] <thumper> if in doubt, file a bug
[02:31] <redelmann> lazyPower, it's like it tries to get the address of the first interface
[02:31] <lazyPower> i know there's a whole herd of networking stuff incoming, and this may be on that ticket, but without a bug we won't know.
[02:31] <thumper> the juju team can always either mark as a dupe or invalid
[02:31] <thumper> but better to have the bug than not
[02:31] <lazyPower> https://bugs.launchpad.net/juju-core/+filebug
[02:31] <thumper> lazyPower: cheers
[02:32] <thumper> someone should make a bot that files bugs for us :)
[02:32] <thumper> an irc bot
[02:32]  * lazyPower kicks mup
[02:32] <thumper> probably need a valid person list
[02:32] <thumper> with some auth
[02:32]  * thumper thinks:
[02:32] <redelmann> lazyPower, ok. i'm at home right now, but tomorrow at work i will create an issue.
[02:32] <redelmann> lazyPower, thumper: thank you very much
[02:33] <lazyPower> redelmann: awesome. if you need me to take a look at it feel free to ping me here and i'll look it over and route some eyes at it.
[02:33] <lazyPower> redelmann: sorry you ran into that :( Not a very stellar experience to have a unit just drop a private-address once a new bridge pops up
[02:33] <redelmann> lazyPower, for the record, it happen in KVM enciroment.
[02:33] <redelmann> lazyPower, environment. Sorry my english :P
[02:33] <lazyPower> that'll be good info to have in the bug, so if it happens in KVM but not on a BM host we can isolate it :)
[02:34] <lazyPower> make sure you include juju-version, maas version, and steps to reproduce
[02:34] <redelmann> lazyPower, OK, thanks
[02:34] <lazyPower> cheers redelmann
[02:37] <lazyPower> thumper: are you going to make this bot your new friday freetime project?
[02:38] <thumper> lazyPower: ha, nah
[02:40] <lazyPower> :) just teasin anyway. Its 11, i'm outy5000. See you in the am o/
[09:40] <jamespage> Tribaal, morning - we have an issue with http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/revision/373?start_revid=373
[09:40] <jamespage> it's breaking anything to do with the hacluster charm due to the way that the data is passed for that relation
[09:55] <jamespage> Tribaal, I've reverted it for now - under bug 1459175
[09:55] <mup> Bug #1459175: relation_set is broken <Charm Helpers:Fix Committed> <https://launchpad.net/bugs/1459175>
[10:39] <stub> jamespage, Tribaal : That will affect Bug #1458546 and the corresponding MP (different issue probably - py3 vs. settings being lost)
[10:39] <mup> Bug #1458546: relation_set broken under Python3 <Charm Helpers:In Progress by stub> <https://launchpad.net/bugs/1458546>
[10:43] <stub> I didn't see any dataloss with 1.23, so not sure what went wrong with the hacluster charm
[10:44] <stub> (unless it is py3 and you were swallowing the AttributeError?)
[12:51] <jamespage> coreycb, hey - minor feedback on that charm-helpers merge
[12:51] <jamespage> stub, the hacluster charm uses some fairly horrid serialization techniques for the data that gets passed
[12:52] <jamespage> I suspect this is what breaks
[12:54] <coreycb> jamespage, good catch! fixing
[12:56] <Tribaal> jamespage: we'll need this to be in at some point however, without this the size of arguments passed to a relation is limited to the system's command-line size limit
[12:57] <jamespage> Tribaal, don't doubt that
[12:57] <jamespage> Tribaal, we have a horrid workaround in nova-cc due to that issue
[12:57] <Tribaal> jamespage: ok, so we're not the only ones to need the --file thing to work at least :)
[12:57] <jamespage> Tribaal, maybe it could be opt-in rather than on by default?
[12:58] <Tribaal> we need it because we set relatively complex errorpage stanzas on the haproxy charm
[12:58] <Tribaal> jamespage: whatever works. We could make it another charmhelpers method (set_relation_with_file) or something
[12:58] <Tribaal> but that feels dirty
[12:59] <Tribaal> ideally, there should be only one obvious way to do this :)
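(The limit Tribaal is describing can be made concrete with a small sketch. This is a hypothetical helper, not the real charm-helpers relation_set; an opt-in file path, as jamespage suggests, could key off a check like this.)

```python
# Hypothetical sketch of the trade-off above (NOT the real charm-helpers API):
# passing settings as `relation-set key=value ...` arguments is capped by the
# kernel's command-line size limit (ARG_MAX), so large payloads -- like
# haproxy errorpage stanzas -- have to go through `relation-set --file <path>`.

ARG_MAX_GUESS = 128 * 1024  # conservative stand-in for the real limit

def needs_file(settings, limit=ARG_MAX_GUESS):
    """True if key=value arguments would risk exceeding the size limit."""
    total = sum(len("%s=%s" % (k, v)) + 1 for k, v in settings.items())
    return total >= limit

print(needs_file({"private-address": "10.0.0.2"}))  # False: tiny payload fits on the command line
print(needs_file({"errorpages": "x" * 200000}))     # True: would need the file-based path
```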
[12:59] <Tribaal> what breaks for the hacluster charm?
[13:14] <coreycb> jamespage, fixed c-h and the rest of the charms, and re-sent you an email with MP links since I'd changed the branch names, which changed the MP names
[13:51] <mbruzek> good morning jcastro
[13:55] <jcastro> hi!
[13:55] <jcastro> how was your break?
[13:56] <mbruzek> jcastro: Good, I went camping and got rained on
[13:56] <mbruzek> jcastro: how was Canada?
[13:59] <jcastro> mbruzek: lovely
[14:48] <lazyPower> jcastro: Nice! re: shirts
[16:07] <redelmann> lazyPower, hi there. i reported this https://bugs.launchpad.net/juju-core/+bug/1459327
[16:07] <mup> Bug #1459327: Juju MAAS netwoking with custom bridge inside service <juju-core:New> <https://launchpad.net/bugs/1459327>
[16:07] <redelmann> lazyPower, from the last chat with you
[16:07] <lazyPower> redelmann: perfect, thanks for following up. i'm in a meeting, give me a moment to take a look
[16:08] <redelmann> lazyPower, ok. tell me if more info is needed
[16:15] <jcastro> kwmonroe: heya, so the texaslinuxfest guys just pinged me reminding us the deadline for talks is tomorrow, and we haven't submitted anything.
[16:17] <kwmonroe> wat jcastro?!?  http://2015.texaslinuxfest.org/content/call-papers says June 28, 2015 11:59 p.m.: Deadline for proposals
[16:17] <jcastro> oh shit
[16:17] <jcastro> that's 30 days away.
[16:17] <jcastro> my bad
[16:18] <kwmonroe> did they tell you the 28th and you just assumed may, or did they say may?
[16:18] <jcastro> well they sent me the mail today with a reminder, I saw 28th and basically panicked.
[16:19] <jcastro> kwmonroe: but hey, if you do it by tomorrow it's a full 30 days out of the way!
[16:19] <kwmonroe> i see what you're doing there
[18:36] <arosales> upper--, hello
[18:36] <upper--> arosales: :)
[19:07] <nevermam> Hi Antonio... I am an IBM charmer. Can you please share with me the charm review checklist, which we can use as guidelines/best practices while creating charms?
[19:08] <tvansteenburgh> arosales: ^
[19:12] <tvansteenburgh> nevermam: i'd start here https://jujucharms.com/docs/stable/authors-charm-store
[19:12] <tvansteenburgh> nevermam: also read "Charm store policy" and "Best practices"
[19:16] <arosales> nevermam, hello and welcome :-0
[19:21] <arosales> nevermam, in addition to the links tvansteenburgh gave you I am working on creating a checklist I would like to propose to the juju list and get feedback on.
[19:21] <arosales> nevermam, that may be the list you are after, and once I get a +1 from folks I propose adding it to the review docs.
[20:09] <jcastro> also if anyone has any things to add to those doc pages I'd be happy to integrate them if it helps the next person
[21:10] <lazyPower> hey marcoceppi, i think i know why clearwater-bono didn't make it into the revq. There was no branch attached - it's a pointer to their GH repository
[21:11] <lazyPower> iirc, attaching a branch was part of the requirements
[21:12] <lazyPower> redelmann: i apologize for taking so long to get back to you, but this is exactly what we were looking for. Thanks for the bug report
[21:12] <redelmann> lazyPower, great. is there a workaround?
[21:13] <lazyPower> Nothing as of yet, but it's now in the queue for review by the core team. If someone on the core team knows how to work around this bug, they should be chiming in on the bug soon; otherwise we'll get an official patch landed to address it soon
[21:13] <redelmann> lazyPower, just setting docker0 to some strange network like "128.1.0.0" and juju started working again
[21:13] <lazyPower> redelmann: really? thats interesting...
[21:13] <redelmann> lazyPower, a little research from here: https://github.com/juju/juju/blob/master/network/network.go
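(redelmann's workaround amounts to pinning the docker0 bridge to an address range the agent won't prefer. On Ubuntu that could look like the fragment below — a sketch assuming the /etc/default/docker config file and the 128.1.0.0/16 range mentioned in the chat, not a tested fix.)

```
# /etc/default/docker -- sketch of the workaround described above; the
# 128.1.0.0/16 range is the one redelmann mentioned, adjust for your network
DOCKER_OPTS="--bip=128.1.0.1/16"
```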
[21:13] <lazyPower> redelmann: have you tried using something like the docker core bundle to deploy your docker + networking solution?
[21:14] <lazyPower> not the most ideal situation, but adding flannel networking will reconfigure the docker0 bridge with an IP from etcd, and that may resolve the issue in a juju-centric manner
[21:14] <lazyPower> https://jujucharms.com/u/lazypower/docker-core/4
[21:15] <redelmann> lazyPower, we already have our docker scripts for the projects, it was easier to use docker from the same project
[21:16] <lazyPower> ack, just curious
[21:16] <redelmann> lazyPower, but that's too much infrastructure for us right now.
[21:16] <redelmann> right
[21:17] <redelmann> lazyPower, we are using docker to isolate processes from the network
[21:17]  * lazyPower nods