[01:22] <ZonkedZebra> Is there a way to modify the config of a charm from within a hook?
[01:23] <davecheney> ZonkedZebra: no, there is no config-set hook command
[01:24] <ZonkedZebra> davecheney: best approach to auto load new code from a remote repo? cron? hooks? config bool that you set back and forth?
[01:25] <davecheney> ZonkedZebra: why not
[01:25] <davecheney> juju set revision=XXX
[01:26] <davecheney> which will fire the config-changed hook on your units
[01:26] <ZonkedZebra> davecheney: if I juju set to the value it already is does that trigger the config-changed hook?
[01:27] <ZonkedZebra> (The goal is to have production and staging that track the appropriate git branch with minimal intervention)
[01:29] <davecheney> ZonkedZebra: no
[01:29] <davecheney> Buuuuuut, remember as a charm author, we do not guarantee that hooks will be run only once
[01:29] <davecheney> so they should be written to expect this
[01:31] <ZonkedZebra> davecheney: yep, that's fine. I've got all the appropriate checks to make sure multiple runs don't cause issues. Just looking for the easiest way to poke it a little to have it pull the new changes from git
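The "hooks may run more than once" point can be sketched as a stamp-file guard in config-changed; the `STAMP` location, the `rev` option name, and the `git -C /srv/app pull` deploy step are all hypothetical stand-ins, not the actual charm's code:

```shell
#!/bin/sh
# Sketch of an idempotent config-changed hook (all names hypothetical).
# Juju may run a hook more than once, so guard the deploy step behind a
# check of what is already deployed.
STAMP=$(mktemp)    # in a real charm: a stamp file in the charm's data dir

deploy_needed() {
    # true when the requested rev differs from the recorded one
    [ ! -s "$STAMP" ] || [ "$(cat "$STAMP")" != "$1" ]
}

# In the real hook the value would come from: rev=$(config-get rev)
rev="abc123"
if deploy_needed "$rev"; then
    # git -C /srv/app pull      # the actual deploy step would go here
    echo "$rev" > "$STAMP"      # record it so reruns become no-ops
fi
```

Rerunning the hook with the same rev then skips the pull, which is exactly the multiple-run safety being discussed.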
[01:32] <davecheney> ZonkedZebra: why do you want config-set (sic?)
[01:32] <davecheney> it smells like you are trying to tell someone else that the charm changed something
[01:35] <ZonkedZebra> i just created a config property that I was going to set to true, and then reset to false in config-changed
[01:36] <davecheney> what would it resetting to false mean ?
[01:36] <ZonkedZebra> That i could consistently do "juju set app pull=true"
[01:36] <davecheney> i'm not trying to troll btw, just trying to understand your problem to fit it into the (sometimes limiting) model that Juju offers
[01:36] <davecheney> what does pull=true do ?
[01:37] <davecheney> ie, why not just setup a cron on the unit ?
[01:37] <ZonkedZebra> it would fire config-changed (where git pull happens) and then set back to false to be ready to get pull=true set again
[01:38] <davecheney> ZonkedZebra: so, something would be cron'd on the client to setup pull=true ?
[01:38] <davecheney> why not just setup the cron on the unit ?
[01:38] <ZonkedZebra> No, I would do that by hand
[01:38] <davecheney> ok, in that case i'd recommend
[01:38] <davecheney> not pull=true
[01:38] <ZonkedZebra> do work, commit, do work, commit, juju set app pull=true
[01:38] <davecheney> but revision=XXXX
[01:39] <ZonkedZebra> Would also work, but I would like the units associated with branches, not a particular rev, and as we discussed, if branch is already dev then set branch=dev would not trigger the pull
[01:40] <davecheney> sure, have two config settings
[01:40] <davecheney> branch=... rev=...
[01:40] <davecheney> if you just need a trigger
[01:41] <davecheney> juju set app rev=$(pwgen 100)
[01:41] <davecheney> just to set it to some random garbage
[01:41]  * ZonkedZebra nods
[01:41] <ZonkedZebra> davecheney: that will do, thanks
[01:41] <davecheney> np
[01:42] <davecheney> pwgen 100 may be overkill
[01:42] <davecheney> maybe call the config value nonce or trigger or something
[01:45] <ZonkedZebra> davecheney: timestamp so at least it will be slightly useful
[01:47] <davecheney> sure
[01:47] <davecheney> date +%N
[01:47] <davecheney> maybe
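The trigger pattern settled on above boils down to writing a value that is always new; the service name `app` and the option name `trigger` are hypothetical, and `date +%s` stands in for the "slightly useful" timestamp (`date +%N` would give nanoseconds instead):

```shell
# Force config-changed to fire by setting a config value that always changes.
# A timestamp is unique enough and doubles as a record of when it fired.
nonce=$(date +%s)
echo "juju set app trigger=$nonce"   # the command you would run by hand
```

The unit's config-changed hook then does the git pull, guarded so that reruns are harmless.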
[02:03] <julianwa_> davecheney:  hi, I use juju add-machine and OS installation failed due to some post install script failure. now I can't juju terminate-machine. the life-cycle is dying...  what can I do here?
[03:07] <jose> hey marcoceppi, is it possible to get the postfix charm on the store before oct 1st?
[03:17] <davecheney> julianwa_: i'm sorry you got bit by this
[03:17] <davecheney> this is an open bug/feature request
[03:18] <davecheney> your best bet is to delete the machine using aws or whatever you use
[03:18] <davecheney> then ignore the broken record in the juju status
[03:18] <davecheney> it's been a known issue for a long time
[03:18] <davecheney> i'm trying to get it bumped up the priority list
[03:18] <davecheney> but please don't take that as a forward looking statement
[03:21] <julianwa_> davecheney: you mean leave the dying machine there? but the dying server will have same maas-name. is that ok?
[03:24] <davecheney> oh, you're using maas
[03:24] <davecheney> hmm
[03:25] <freeflying> lol
[03:25] <davecheney> julianwa_: how did the machine get killed ?
[03:25] <davecheney> did you use maas to kill it when the terminate-machine failed ?
[03:26] <julianwa_> davecheney:  not killed. one post-install script failed when juju add-machine. Then I execute terminate-machine
[03:26] <julianwa_> davecheney: it's still in MAAS
[03:27] <davecheney> julianwa_: are you Canonical ?
[03:27] <julianwa_> davecheney:  yes...
[03:27] <davecheney> lets talk in that other channel
[03:27] <davecheney> sorry folks, i'll post a wrap up
[03:27] <davecheney> when I figure out the problem
[07:30] <fwereade> marcoceppi, jcastro, ping
[08:31] <gnuoy> hi, I've added a new unit to an existing juju deployment and when I do a juju status all the other machines report in fine but the new unit reports "agent-state-info: '(error: failed to list contents of container: juju-stagingstack-geonames"
[08:31] <gnuoy> I can query the juju-stagingstack-geonames
[08:32] <gnuoy> bucket fine and have downloaded bootstrap-verify, provider-state and the tools tgz without a problem
[08:32] <gnuoy> I tried removing the unit and adding another one but I get the same error
[10:30] <bloodearnest> heya all - I've been getting "cannot log in to admin database" immediately after a bootstrap on Openstack
[10:30] <bloodearnest> this is with 1.14.1 on raring
[11:54] <marcoceppi> fwereade: pong
[11:55] <fwereade> marcoceppi, hey, I was pinging you as a possible evilnickveitch proxy, but I see he's online now
[11:55] <marcoceppi> gnuoy: I've not seen that error before, what provider are you using? What version of Juju?
[11:55] <marcoceppi> fwereade: ack
[11:56] <fwereade> evilnickveitch, ping
[11:56] <evilnickveitch> fwereade, hi
[11:56] <gnuoy> marcoceppi, openstack and 1.13.2-1 bzr revno 1670
[11:56] <marcoceppi> bloodearnest: is this /immediately/ after bootstrap, or after you can verify that the bootstrap is running via juju status
[11:56] <fwereade> evilnickveitch, I was wondering if there was anything I could do to ease the passage of the docs I gave you a while back?
[11:56] <gnuoy> marcoceppi, what object is it trying to access ?
[11:56] <fwereade> evilnickveitch, a casual look seemed to indicate they weren't up yet
[11:56] <marcoceppi> gnuoy: I have no idea, did the machine come online?
[11:57] <fwereade> evilnickveitch, if the problem is, say, that they're crap, I'd like to help make them less so :)
[11:57] <evilnickveitch> fwereade, oh, thanks for the offer - they are not, because I have taken the opportunity to include them in more of a restructure, but they will be up later today
[11:57] <gnuoy> marcoceppi, nova thinks its active, I'll try ssh'ing to it
[11:57] <fwereade> evilnickveitch, ok, that's awesome, tyvm
[11:57] <marcoceppi> gnuoy: if you can ssh in to it, then you can get the /var/log/juju/machine-*.log file - should help shed some light
[11:58] <evilnickveitch> fwereade, no, they aren't crap at all :)
[11:58] <evilnickveitch> I will let you know when they go up, would be good to get your feedback
[11:58] <fwereade> evilnickveitch, jolly good, i felt obliged to check ;p
[11:58] <fwereade> evilnickveitch, cheers
[11:59] <gnuoy> marcoceppi, well it looks like I was looking in the wrong machine and the new machine never came. I'll have a dig around and see if  I can see why. thanks
[12:01] <marcoceppi> gnuoy: also, you might want to consider moving to 1.14.1 as it's the latest "stable"
[12:01] <gnuoy> marcoceppi, absolutely
[12:01] <marcoceppi> I noticed the environment was named "staging", but the latest dev will be 1.15, so 1.14.1 is truly the latest
[12:02] <marcoceppi> it should also be easier to upgrade between stable versions than dev releases when doing in-place upgrades
[12:02]  * marcoceppi is so giddy about in-place juju upgrades
[12:04] <bloodearnest> marcoceppi: I get that running juju status
[12:04] <bloodearnest> marcoceppi: it times out after about 7min with "Unable to connect to environment "openstackshredder""
[12:05] <marcoceppi> bloodearnest: that's interesting. Can you destroy then bootstrap again with `--debug -v` options, then run `juju status -v --debug`
[12:05] <bloodearnest> marcoceppi: ack
[12:05] <marcoceppi> bloodearnest: also, you have admin-secret set, correct?
[12:05] <marcoceppi> bloodearnest: in your environments.yaml
[12:06] <bloodearnest> marcoceppi: yep - freshly generated with generate-config
[12:06] <marcoceppi> bloodearnest: excellent, if you could pastebin those when you get them that should help shed some light
[12:09] <bloodearnest> marcoceppi: destroy fails: https://pastebin.canonical.com/98138/
[12:09] <bloodearnest> marcoceppi: the --debug points at opendns issues
[12:10] <bloodearnest> some kinda redirect issues
[12:10] <marcoceppi> bloodearnest: that's annoying
[12:11] <bloodearnest> marcoceppi: yeah - am trying from a canonistack instance I use for dev, but I'm having similar problems there too
[12:24] <bloodearnest> marcoceppi: bootstrap output: https://pastebin.canonical.com/98139/
[12:25] <bloodearnest> marcoceppi: status output: https://pastebin.canonical.com/98140/
[12:26] <marcoceppi> bloodearnest: yeah, it's successfully connecting to the bootstrap, just not logging in for some reason :\
[12:27] <bloodearnest> marcoceppi: for completeness, destroy output (3 lines): https://pastebin.canonical.com/98141/
[12:27] <bloodearnest> all this is done from another vm on the same OS environment
[12:28] <marcoceppi> bloodearnest: I've not encountered this, not quite sure how to debug past here. You might find more information on the bootstrap node in /var/log/juju/
[12:35] <bloodearnest> marcoceppi: don't know if it's related, but I can't ssh into the bootstrap node - publickey denied
[12:35] <marcoceppi> bloodearnest: well, that's also interesting
[12:41] <bloodearnest> marcoceppi: in another env, I seem to be able to ssh in
[14:15] <Nelson111> Assuming i joined the right ubuntu catch up..... hello geeks nerds and all :)
[16:40] <jamespage> marcoceppi, charm-tools uploaded to saucy - got accepted
[16:40] <sylvaing> hi jcastro i just send mail about bluemind and juju ;-)
[16:40] <marcoceppi> jamespage: \o/ Thank you!
[16:41] <jamespage> marcoceppi, hey np
[16:53] <marcoceppi> jamespage: the next step is to get charm-tools in to backports for precise, when I run requestbackport it says no published binaries in saucy. Is this just a waiting game?
[17:01] <jcastro> Charm Championship submission charm school on http://ubuntuonair.com in a few minutes!
[17:08] <jamespage> marcoceppi, give it a chance to get into the release pocket
[17:09] <marcoceppi> jamespage: ack, figured
[17:09] <marcoceppi> jcastro: you need me there, or is this a Mims and you thing?
[17:11] <arosales> Hello, we are getting kicked off on the charm school, "How to enter the Charm Championship."
[17:11] <marcoceppi> jcastro: oh bugger. The new package removes charm-helper-sh, which is provided in saucy. I suppose that's going to be a problem during the backport req
[17:11] <marcoceppi> jamespage: ^^, not jcastro
[17:12] <marcoceppi> bah, that whole sentence is wrong
[17:12] <marcoceppi> jamespage: oh bugger. The new package removes charm-helper-sh, which is provided in the precise version. I suppose that's going to be a problem during the backport req
[17:14] <arosales> if you would like to follow along for the charm school it is at http://ubuntuonair.com/
[17:15] <arosales> YouTube direct link is @ https://www.youtube.com/watch?v=c6wTtWDyXsc
[17:24] <m_3> sound went out completely :-(
[17:24] <m_3> I can't hear anything... gonna try to reconnect... sorry for the technical difficulties
[17:40] <ktubilgisayar> hi
[19:36] <zradmin> has anyone else been trying to set up HA openstack with Juju? I've been following the guides posted and have my setup 90% there.... instances are launching and running but are not getting an IP from quantum at all. When I check the logs they just show AMQP messages successfully crossing. anyone have similar issues?
[20:33] <marcoceppi> zradmin: a few people have been setting up openstack, let me see if I can recall their names
[20:38] <zradmin> marcoceppi: thanks!
[20:38] <marcoceppi> kurt_: were you the one working on deploying openstack?
[20:39] <kurt_> yup
[20:43] <marcoceppi> kurt_: did you ever get far enough to experience quantum not assigning IP addresses?
[20:43] <kurt_> I'm finished and it all works for me.
[20:43] <kurt_> I had to manually configure quantum.
[20:45] <kurt_> marcoceppi: I could never get the charm to work out of the box
[20:45] <kurt_> I only allowed the charm to do the basic install, but did all post-configuration myself
[20:54] <zradmin> kurt_: what was the post configuration? in the charm i specified eth1 as extnet but it looks like something in juju changed so it uses a lot of lxc bridges
[20:54] <zradmin> kurt_: I have nodes with 2 nics, one on the "internal" switch and one on the "external" switch
[20:55] <kurt_> right - one nic should connect to your oam net, the other to you external lan
[20:55] <kurt_> zradmin - do you do evernote?  I've put it all in to that format so you can see
[20:56] <zradmin> kurt: not currently but I can create an account real fast
[20:56] <kurt_> do that and I'll share the note
[20:56] <zradmin> kurt: evernote username is zradmin :)
[20:58] <kurt_> k, hang on a sec
[20:59] <kurt_> Actually you may not need account
[20:59] <kurt_> see if you can see this
[20:59] <kurt_> https://www.evernote.com/shard/s244/sh/37674b81-51af-4579-9579-8058b4cf3a9a/aca1835adea4ca6cb52e7d0091ced91c
[20:59] <zradmin> yup got it
[21:00] <kurt_> There you go
[21:00] <kurt_> that should answer your questions
[21:00] <zradmin> thanks, I'll let you know how it turns out :)
[21:00] <kurt_> good stuff
[21:01] <kurt_> FYI - the process was shamelessly borrowed from Kentb on the security team
[21:30] <zradmin> kurt_: hmmm it looks like it does the same thing I was doing in horizon to configure the ext_net, I created a new project and created the networks via commandline and am still having the same issue
[21:32] <zradmin> kurt: this is what I get on the instance http://pastebin.ubuntu.com/6164559/
[21:44] <kurt_> zradmin: is your ext_net hooked up to a separate network?
[21:44] <kurt_> it looks like its trying to bring it up on eth0 instead of eth1 too
[21:45] <zradmin> yeah it is set to configure to eth1, but when I do an ifconfig on the nova-compute node it doesn't show anything configured on eth1
[21:45] <kurt_> you need to specify eth1 for the quantum charm
[21:45] <zradmin> yeah i did that
[21:46] <kurt_> no ip address
[21:46] <zradmin> this is in the syslog on the node m7q49 dnsmasq-dhcp[2221]: DHCP packet received on qvo6a26ce05-ae which has no address
[21:46] <kurt_> are you certain your eth1 is alive and connected to a network other than your eth0?
[21:47] <kurt_> are you doing this with physical hosts or virtual hosts?
[21:48] <zradmin> physical, building on an m1000e blade
[21:49] <kurt_> so ensure your eth1 is actually wired to a second network.  you may need to test that part - because I think that's where your problem is
[21:49] <kurt_> also - are you specifying precise:grizzly?
[21:50] <kurt_> here - have a look at my local.yaml - make sure yours is similar
[21:50] <kurt_> http://pastebin.ubuntu.com/6164613/
[21:51] <kurt_> mine goes for a single node rather than multinode installation
[21:51] <kurt_> well, let me rephrase...
[21:51] <kurt_> I am going the non-HA route for now as proof of concept
[21:51] <kurt_> I installed on 6 virtual hosts
[21:52] <zradmin> ah i see it, my switch stack is messing up my vlans
[21:52] <zradmin> its tagging the traffic on the port
[21:53] <kurt_> ;)
[21:54] <kurt_> I need to take off for a while - I'll be back in an hour or so.
[21:54] <kurt_> ping me if you are still having problems after you figure out your tagging problem
[22:02] <_mup_> Bug #1232282 was filed: maas provider: bucket download failures not handled well <theme-oil> <juju:New> <https://launchpad.net/bugs/1232282>
[22:16] <ZonkedZebra> Is there a one liner to obtain the public address of a unit?
[22:21] <marcoceppi> ZonkedZebra: uh, kind of
[22:21] <ZonkedZebra> juju status api/0 | grep public-address | cut -d ":" -f 2 | tr -d " "
[22:21] <ZonkedZebra> Something like that?
[22:21] <marcoceppi> basically, I would have used awk, but that's because I <3 awk
[22:21]  * ZonkedZebra is not an awk user
[22:21] <ZonkedZebra> awk cleaner?
[22:22] <marcoceppi> ZonkedZebra: that's highly subjective :P
[22:22] <ZonkedZebra> what would it be in awk?
[22:24] <marcoceppi> ZonkedZebra: juju status wordpress/0 | grep -m1 public-address | awk '{print $2}'
[22:24] <marcoceppi> -m1 is to only do the first match, in case there are subordinates
[22:24] <ZonkedZebra> Worth learning I guess, After all, everything has awk
[22:25] <marcoceppi> ZonkedZebra: awk is its own language, but at the surface it's pretty easy to use
[22:25] <sarnold> it's awesome for one-liners. beyond that I lose interest, hehe
[22:25] <marcoceppi> it's like vi/vim. You learn one basic flag and life is good, then when you need to dig deeper, you can
[22:26] <marcoceppi> (that flag is -F)
[22:26] <sarnold> lol
[22:26] <sarnold> yes, -F is awesome. :)
[22:28] <marcoceppi> I only learned of awk's amazing depth about two years ago, up until then it was my go-to "cut"
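Both variants of the public-address one-liner can be sanity-checked against a canned status excerpt (the addresses below are made up); the `-m1` marcoceppi added matters when a subordinate contributes a second public-address line:

```shell
# Two equivalent ways to pull the first public-address out of juju status
# output; $status is a stand-in excerpt for real `juju status api/0` output.
status='agent-state: started
  public-address: 10.0.3.42
  subordinate:
    public-address: 10.0.3.99'

with_cut=$(printf '%s\n' "$status" | grep -m1 public-address | cut -d ":" -f 2 | tr -d " ")
with_awk=$(printf '%s\n' "$status" | grep -m1 public-address | awk '{print $2}')
echo "$with_awk"   # → 10.0.3.42
```

Without `-m1`, both pipelines would emit the subordinate's address on a second line.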
[23:08] <kurt_> sed + awk are awesome
[23:09] <kurt_> old as the hills but still as good as gold
[23:14] <ZonkedZebra> This is a new error for me, "error: no relation id specified". Seems to be triggered by a call to relation-get. Ideas?
[23:15] <ZonkedZebra> If only the hook tools had real documentation....
[23:18] <ZonkedZebra> Probably because I'm calling a script shared by multiple hooks
[23:19] <ZonkedZebra> Whats the common practice for sharing code/functionality between hooks?
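On both questions above: relation-get has no implicit relation context outside a relation hook, so a shared script called from config-changed must pass an explicit `-r` id (obtained from `relation-ids`), and shared code conventionally lives in a file next to the hooks that each hook sources. A sketch, with the relation name `website` and the `common.sh` filename as hypothetical examples:

```shell
#!/bin/sh
# Outside a relation hook, relation-get must be told which relation, e.g.:
#
#   for rid in $(relation-ids website); do
#       for unit in $(relation-list -r "$rid"); do
#           relation-get -r "$rid" private-address "$unit"
#       done
#   done
#
# Shared-code pattern: keep common functions in hooks/common.sh and source
# it from every hook. Demonstrated here with a throwaway file:
dir=$(mktemp -d)
cat > "$dir/common.sh" <<'EOF'
log_msg() { echo "charm: $1"; }   # stand-in for a shared helper function
EOF
. "$dir/common.sh"
log_msg "shared code loaded"
```

The "no relation id specified" error is exactly what relation-get reports when it is run without `-r` and without a relation context.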
[23:33] <kurt_> marcoceppi: ping
[23:38] <zradmin> kurt_: i fixed the switch issue and restarted the compute node... but it still doesn't seem like traffic is going across the bridge here's my interfaces: http://pastebin.ubuntu.com/6164893/
[23:40] <kurt_> zradmin: you've got a whole lot more on my quantum-gateway than I do
[23:40] <kurt_> did you use my local.yaml as your deployment template?
[23:41] <kurt_> here are my interfaces
[23:41] <kurt_> http://pastebin.ubuntu.com/6164899/
[23:42] <zradmin> yeah my settings for those charms match up
[23:43] <zradmin> hmm I don't have a br-ext
[23:53] <kurt_> again - you must be having some issues with your eth1
[23:53] <kurt_> a bridge can't be created
[23:54] <kurt_> look for hints in /var/log/syslog or dmesg
[23:54] <zradmin> yeah im still looking into it... i hope something isn't wrong with that test node
[23:54] <kurt_> try to manually create the bridge
[23:54] <zradmin> ty
[23:55] <kurt_> once you can manually create the bridge, you should be golden
[23:55] <kurt_> maybe you have some spanning tree issues?
[23:58] <kurt_> zradmin: look in dmesg to see what's happening - look at my entries around br-ex
[23:58] <kurt_> http://pastebin.ubuntu.com/6164928/
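The manual bridge check kurt_ suggests can be sketched as below, assuming an Open vSwitch setup where the external bridge is br-ex on eth1 (names taken from the conversation; adjust to your deployment). Only the command string is built and echoed here, since the real calls need root and openvswitch-switch installed:

```shell
# Manually recreating the external bridge to rule out eth1 problems.
BRIDGE=br-ex
EXT_IF=eth1
# On the quantum-gateway node you would run (as root):
#   ovs-vsctl --may-exist add-br "$BRIDGE"
#   ovs-vsctl --may-exist add-port "$BRIDGE" "$EXT_IF"
#   ip link set "$EXT_IF" up
#   ovs-vsctl show                 # confirm the bridge and port exist
# while watching dmesg / /var/log/syslog for errors, as suggested above.
cmd="ovs-vsctl --may-exist add-br $BRIDGE"
echo "$cmd"
```

If the bridge can be created and the port added by hand, the remaining problem is likely in the charm config or the switch, not the node.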