[00:01] <blackboxsw> hi folks, anyone know why a config-get cmdline call from inside a debug-hooks would not match the values returned from an external call to "juju get haproxy"?
[00:02] <blackboxsw> I'm finding all "type: int"   values are being replaced with an empty unicode string for the haproxy charm when I'm inside a debug-hooks config-changed
[00:03] <marcoceppi> blackboxsw: that's odd, what version of juju?
[00:03] <blackboxsw> marcoceppi, 1.15.0-saucy-amd64
[00:03] <blackboxsw> used --upload-tools on my bootstrap
[00:04] <blackboxsw> deploying to cloud currently, will try lxc and see if I can reproduce the problem.
[00:04] <blackboxsw> string type variables seem to be intact.
[00:15] <blackboxsw> yeah will have to play in lxc land for a bit to see if the error is reproducible thx marcoceppi
[00:18] <blackboxsw> marcoceppi, also, I was working off of juju trunk, heading back to the ppa
[00:37] <blackboxsw> marcoceppi, yep problem solved... was versionitis of an old trunk juju updating to the PPA in saucy fixed it: 1.16.3-saucy-amd64
[00:44] <lazypower> Is it safe to assume we will always be using ubuntu in a charm? or should I decidedly do sanity checks on environment for custom configurations?
[00:45] <lazypower> To scope this question specifically, rsyslog has been the default for a while now. Should I assume that's the default or should I build in detection for other systems like legacy syslog, syslog-ng?
[00:48] <sarnold> lazypower: well, there's charms published to the juju charmstore and then there's the charms that other groups might write; I could imagine a sles, rhel, or centos user liking what juju has to offer but wanting to build upon their existing tools, and choosing to write charms that use e.g. systemd journal as their logging service of choice
[00:52] <lazypower> Ok, that's insightful. So to promote adoption I should do sanity checks for additional daemons and provide those utilities as well - this just changed scope. Thank you sarnold.
[01:20] <sarnold> lazypower: well, that's up to you to define the scope of what you'll support with your charms.
[01:21] <sarnold> lazypower: some shops might be all-fedora and want to embrace the systemd way of life; other shops might be heterogeneous and want to deploy charms on top of anything. They'll have more work than shops with tighter goals..
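The detection lazypower is weighing could be factored as a small helper in a charm hook. A minimal sketch, assuming the common Debian/Ubuntu daemon binary names (the function name and the "none" fallback are made up for illustration):

```shell
#!/bin/sh
# Hypothetical sketch: probe for whichever syslog implementation is
# installed, so a hook can configure the right one instead of
# assuming rsyslog. Daemon names are the usual binaries.
detect_syslog() {
    for daemon in rsyslogd syslog-ng syslogd; do
        if command -v "$daemon" >/dev/null 2>&1; then
            echo "$daemon"
            return 0
        fi
    done
    echo "none"
    return 1
}
```

A config-changed hook could then branch on the result rather than hard-coding one logging daemon.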
[01:22] <lazypower> I'm in fairly constant contact with upstream about the charm I'm working on. They support everything under the sun - and if the guys on the other side of the fence want to help report bugs, I'll help them maintain it - to an extent. I don't want to try to make one charm to rule them all; I'm currently running that mentality at my 9-5 and it's exhausting.
[01:22] <lazypower> but I see no reason to pigeonhole anyone because I'm lazy and don't want to read a configuration guide.
[01:23] <sarnold> sounds like you'll have happy users :)
[01:23] <lazypower> Let's hope, here's to keeping my promises
[01:23] <sarnold> :)
[01:28] <lazypower> Is there a good place for offtopic juju chat or have the IRC channels boiled down to business over pleasure?
[02:01] <lazypower> looks like the hooks documentation just turned into chunky salsa after an edit or deployment - https://juju.ubuntu.com/docs/authors-hook-kinds.html
[05:37] <lazypower> So, I've been tasked with finding an alternative to the remote_syslog rubygem. I found nxlog as a viable alternative but the repositories don't exactly provide it for 12.04 LTS. What would an acceptable alternative be for the charm? Is maintaining a PPA a viable alternative or is PPA use frowned upon in charming?
[05:40] <sarnold> hey lazypower :) I'd again say this is up to you as a charmer to decide the scope. I'd say it is probably fine to use a PPA if the software is clearly not in the archive or your PPA version provides significant functionality beyond the version in the archive; it'd be best to document in the README which PPA is used and perhaps make archive/ppa/compile-from-source options -- or allow specifying which ppa to use...
[05:41] <lazypower> ... choices... I don't know what to do with myself.
[05:42] <sarnold> it's tough; if you make too many choices available to the end user they might wonder what benefit you provide compared to doing it all themselves. There are definitely times when it helps to be opinionated -- I love reading advice from opinionated people, even if I don't follow the advice. :)
[05:46] <lazypower> Well played sir.
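sarnold's archive-first, PPA-fallback suggestion could be structured like this in an install hook. A sketch only: `pick_source` is a hypothetical helper, and the `ppa:example/nxlog` PPA name is made up.

```shell
#!/bin/sh
# Hypothetical helper: given the exit status of an apt-cache lookup,
# name where the package should come from -- the archive if the
# lookup succeeded, a PPA otherwise.
pick_source() {
    if [ "$1" -eq 0 ]; then
        echo "archive"
    else
        echo "ppa"
    fi
}

# An install hook might then do (commented out -- needs apt and root):
# apt-cache show nxlog >/dev/null 2>&1
# case $(pick_source $?) in
#     archive) apt-get install -y nxlog ;;
#     ppa)     add-apt-repository -y ppa:example/nxlog  # PPA name assumed
#              apt-get update && apt-get install -y nxlog ;;
# esac
```

As sarnold notes, whichever PPA is chosen belongs in the README.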
[13:31] <X-warrior> I destroyed a service but it stays in the juju status list marked as 'life: dying' and never goes away. How to proceed?
[13:41] <X-warrior> http://pastebin.com/z3Mbi8HJ
[13:45] <marcoceppi> X-warrior: are any of the elasticsearch units in an error state?
[13:46] <X-warrior> nope, "agent-state: started"
[13:49] <X-warrior> marcoceppi: nope, there is no machine/service on error state as far as I can see
[14:18] <X-warrior> marcoceppi: did you say something? my internet dropped
[14:35] <ev> Following https://bugs.launchpad.net/juju-core/+bug/1257705 would someone mind kicking off a new upload of juju to brew?
[14:38] <marcoceppi> X-warrior: nope. Can you show the full juju status output?
[14:39] <marcoceppi> ev: as soon as a new juju is out? sinzui did we have another release?
[14:39] <sinzui> marcoceppi, juju 1.16.4 in a few hours
[14:40] <X-warrior> http://pastebin.com/B11PkKZt
[14:40] <X-warrior> marcoceppi:
[14:40] <marcoceppi> sinzui: excellent! thanks
[14:40] <ev> cool, thanks
[14:40] <marcoceppi> ev: homebrew will be updated in a few hours
[14:40] <ev> whoop
[14:58] <X-warrior> marcoceppi: did you find anything wrong?
[14:58] <marcoceppi> X-warrior: nothing immediate, can you juju ssh elasticsearch/0 and run ps -aef | grep juju
[14:59] <marcoceppi> wonder if there's a long running hook or something
[15:00] <X-warrior> marcoceppi: I don't think so. http://pastebin.com/UU63aFqK
[15:01] <marcoceppi> X-warrior: correct. What happens if you run juju destroy-service again?
[15:01] <X-warrior> it just runs normally
[15:02] <marcoceppi> X-warrior: actually, what machine was logstash-indexer on?
[15:02] <X-warrior> 11
[15:02] <marcoceppi> X-warrior: juju ssh 11; ps -aef | grep juju
[15:02] <X-warrior> ok
[15:03] <X-warrior> marcoceppi:  http://pastebin.com/ifZGMSt6
[15:29]  * X-warrior tired
[15:38] <iri-> I tried to `juju set-environment access-key=ASDF secret-key=GHJK` and I get `ERROR The AWS Access Key Id you provided does not exist in our records.`. Ping @rogpeppe
[15:39] <X-warrior> marcoceppi: so any idea about this?
[15:39] <iri-> (I did also change it in the environment.yaml first, which may have been a mistake)
[15:41] <rogpeppe> iri-: changing it in environments.yaml first *shouldn't* have made a difference
[15:41] <rogpeppe> iri-: where do you see the error being printed?
[15:41] <iri-> rogpeppe: in the terminal where I ran juju set-environment
[15:42] <rogpeppe> iri-: do you have a go dev environment on your local machine?
[15:42] <iri-> rogpeppe: yes
[15:43] <rogpeppe> iri-: perhaps you could try this:
[15:43] <rogpeppe> iri-: go get code.google.com/p/rog-go/cmd/ec2
[15:43] <rogpeppe> (that fetches a little utility i wrote for dealing with ec2 stuff)
[15:43] <rogpeppe> iri-: then:
[15:44] <rogpeppe> iri-: export AWS_ACCESS_KEY_ID=ASDF
[15:44] <rogpeppe> iri-: export AWS_SECRET_ACCESS_KEY=GHJK
[15:44] <rogpeppe> iri-: ec2 instances
[15:44] <rogpeppe> iri-: if your key is ok, that should print your current set of running instance ids
[15:45] <iri-> rogpeppe: indeed it does.
[15:45] <rogpeppe> iri-: hmm
[15:46] <rogpeppe> iri-: what output do you get if you run juju status?
[15:48] <rogpeppe> iri-: actually, that was probably a bad suggestion, as status takes ages
[15:49] <rogpeppe> iri-: what do you see if you add --debug to the set-environment flags?
[15:49] <iri-> rogpeppe: nothing out of the ordinary
[15:50] <iri-> INFO juju.provider.ec2 ec2.go:193 opening environment "ec2" and the usual line containing a giant json object
[15:53] <rogpeppe> iri-: so it didn't fail then?
[15:53] <iri-> it said "ERROR" (as in the first line I said to the channel) and then didn't seem to update the environment
[15:54] <rogpeppe> iri-: no other info on the ERROR line?
[15:54] <iri-> rogpeppe: nope, it's as I showed
[16:04] <rogpeppe> iri-: have you deprecated the old keys already?
[16:05] <iri-> I made them inactive, yes
[16:05] <iri-> (@ rogpeppe)
[16:06] <rogpeppe> iri-: could you try reactivating them and then doing the set-environment again?
[16:07] <rogpeppe> iri-: unfortunately we can't tell *which* access key id doesn't exist in their records
[16:07] <iri-> rogpeppe: that worked! Damn good job I didn't delete the old credentials..
[16:07] <iri-> rogpeppe: I wouldn't have expected to need the old one to work in order to revoke it..
[16:07] <rogpeppe> iri-: indeed
[16:08] <rogpeppe> iri-: i'm not quite sure why it does (the latest version of juju definitely does not)
[16:09] <rogpeppe> iri-: i've managed to duplicate your error anyway
[16:09] <iri-> rogpeppe: great.
[16:10] <rogpeppe> iri-: does this file exist for you: ~/.juju/environments/ec2.jenv ?
[16:10] <iri-> yes
[16:10] <rogpeppe> iri-: ah, ok (i thought you were using an earlier juju version)
[16:11] <rogpeppe> iri-: in that case, *that* is the place that's consulted for current environment keys by the client
[16:11] <iri-> rogpeppe: so.. I should have edited that file? I'm not following..
[16:11] <rogpeppe> iri-: it has an entry "bootstrap-config" containing all the keys that the environment was bootstrapped with
[16:12] <rogpeppe> iri-: it shouldn't be necessary, but yes, editing that file would have fixed the problem
[16:12] <rogpeppe> iri-: i'm just looking to find out why it failed the way it did
[16:18] <rogpeppe> iri-: ah, i understand now
[16:18] <rogpeppe> iri-: it does need the provider credentials to read the s3 bucket that contains the details of how to find the bootstrap instance
[16:18] <rogpeppe> iri-: in the future we will be caching that instance's address locally, but in general there's no easy way around it
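For reference, the `bootstrap-config` entry rogpeppe mentions lives in `~/.juju/environments/ec2.jenv`. A rough sketch of the relevant fragment, with the layout assumed from the discussion and the placeholder values used above (not a complete `.jenv`):

```yaml
# Sketch only. The client consults these cached credentials, which is
# why stale keys here can break set-environment.
bootstrap-config:
  type: ec2
  access-key: ASDF
  secret-key: GHJK
```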
[17:05] <yolanda> jamespage, i updated heat charm with different auth key generation, and adding more tests, coverage is now 85%
[17:21] <jamespage> yolanda, better!
[17:21]  * jamespage looks
[19:59] <ashipika> hi all..  any idea why mongo would fail to start on bootstrap?
[20:02] <ashipika> Starting MongoDB server (juju-db)
[20:02] <ashipika> Connection to 10.0.0.1 closed
[20:02] <ashipika> ERROR juju.environs.manual bootstrap.go:105 bootstrapping failed, removing state file: exit status 1
[20:19] <lazypower> ashipika: is there any other relevant information in the unit log?
[20:20] <lazypower> I've experienced this behavior a few times when I've hastily tried to add shards to the cluster before the master node is spun up, but I attribute that to user error.
[20:20] <ashipika> looking at it.. it's on a livecd.. so might be it ran out of disk space..
[20:20] <ashipika> but i doubt it..
[20:21] <lazypower> I won't be able to help much aside from pointing you in places to look; in about 3 hours I'll be at home and can try to reproduce what you're seeing if that helps.
[20:22] <ashipika> ok.. i'll try to pastebin the mongodb log
[20:22] <ashipika> if that helps
[20:23] <lazypower> certainly. I'd be happy to look over the output
[20:23] <ashipika> any other logs that might help?
[20:25] <lazypower> Can you include the juju controller log for completeness? I'd like to see the communication between the nodes
[20:25] <ashipika> http://paste.ubuntu.com/6521644/
[20:26] <ashipika> hmm.. i'm doing all this on the same machine
[20:26] <ashipika> same VM that is.. and there are no juju logs... strange
[20:27] <lazypower> LXC?
[20:27] <ashipika> just /var/log/juju/all-machines.log, which is  empty..
[20:27] <ashipika> vmware player
[20:27] <lazypower> ok, well looking at your mongodb log - this line itself http://paste.ubuntu.com/6521644/
[20:27] <lazypower> there's something going on in one of the hooks that is restarting the daemon possibly?
[20:28] <ashipika> this: got kill or ctrl c or hup signal 15 (Terminated), will terminate after current cmd ends ?
[20:28] <lazypower> how about any logs in $HOME/.juju/local/logs?
[20:28] <lazypower> Correct. That line says the daemon process received an interrupt
[20:28] <ashipika> no such logs :(
[20:28] <lazypower> hmm. What environment are you running juju in?
[20:29] <ashipika> juju version: 1.17.0-precise-amd64
[20:29] <ashipika> precise livecd
[20:29] <ashipika> null environment
[20:29] <lazypower> ah ok. I have not started experimenting with the null environment
[20:29] <lazypower> the behavior changes a bit when using the null provider
[20:29] <ashipika> i know.. experimental :)
[20:30] <ashipika> that's what i'm doing.. experimenting..
[20:30] <ashipika> trying to create a custom livecd that would boot a bootstrapped juju host
[20:30] <lazypower> nice
[20:30] <ashipika> crazy :)
[20:30] <lazypower> it can be both :D
[20:30] <ashipika> seems so, yes :)
[20:31] <ashipika> hmm mongo version 2.0.4
[20:31] <ashipika> latest stable mongo version should be 2.4.8
[20:31] <ashipika> but that should not be an issue
[20:32] <ashipika> lazypower: do you know, perhaps, who is working on null environment?
[20:46] <dpb1> Is local provider not working currently on 1.16.3 and .4?   My machines stay in pending.
[20:47] <dpb1> I get this in the cloud-init.log: http://paste.ubuntu.com/6521479/
[20:47] <dpb1> It also seems like the instances don't get addresses assigned, and the console spins and waits for the network to come up.
[20:51] <marcoceppi> ashipika: are you using mongodb from the cloud-tools archive?
[21:03] <ashipika> sorry.. had to grab a bite.. I ran: juju bootstrap --upload-tools.. so whatever is used there
[22:48] <hazinhell> dpb1, you need the cloud-init-output.log to see what's actually happening
[22:50] <hazinhell> dpb1, lxc seeds cloud-init with a direct file inject for userdata
[23:03] <dpb1> hazinhell: I found the answer: http://curtis.hovey.name/2013/11/16/restoring-network-to-lxc-and-juju-local-provider/
[23:03] <dpb1> basically, an old dhcp package
[23:03] <dpb1> removing the cached images fixed it
[23:04] <hazinhell> ah.. 12.04.2 in lxc cache
[23:04] <hazinhell> yeah.. that's bitten me before
[23:04] <hazinhell> there really isn't a good way to tell what the heck version is in the cache because it's a totally generic name
[23:04] <dpb1> it was from February!
[23:04] <dpb1> ya.
[23:05] <dpb1> I think maybe juju should ship with a maintenance job for it or something, but there are concerns with that approach too. :)
[23:05] <hazinhell> really it should be lxc's responsibility..
[23:06] <dpb1> ya, you are probably right
[23:06] <hazinhell> it's the one keeping the cache, it should be responsible for sanely updating it
[23:06] <dpb1> true.  agreed.
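Since the cached image carries no version stamp, one crude workaround is to flag cache files by age before reusing them. A hypothetical helper (`is_stale` is made up, and the cache path in the comment varies by setup):

```shell
#!/bin/sh
# Hypothetical: report whether a cached file is older than a given
# number of days, so a months-old lxc template can be flagged before
# it is reused.
is_stale() {
    # $1: path to the cached file, $2: maximum age in days
    [ -n "$(find "$1" -maxdepth 0 -mtime "+$2" 2>/dev/null)" ]
}

# e.g. warn before reusing a stale cached image (path assumed):
# is_stale /var/cache/lxc/cloud-precise/image.tar.gz 30 && echo "cache is stale"
```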