[09:34] <cargill_> hi, trying to set up juju on debian, and I'm getting "INFO juju.environs.sync sync.go:235 built 1.16.5.1-unknown-amd64 (4540kB); ERROR supercommand.go:282 invalid series "unknown""
[09:35] <cargill_> using the saucy ppa, since the env is mostly sid
[09:36] <cargill_> where does it get the "unknown" tag?
[10:11] <eagles0513875> hey all
[10:11] <eagles0513875> hey fwereade :)
[11:42] <bloodearnest> heya all - having issues with my openstack setup (on canonistack)
[11:42] <bloodearnest> juju status tells me it cannot connect, I need to bootstrap
[11:42] <bloodearnest> juju bootstrap tells me already bootstrapped
[11:43] <bloodearnest> nova boot works fine, and I'm well within my quota
[11:43] <melmoth> bloodearnest, when i use juju on canonistack, i use a ... canonistack vm to run juju
[11:43] <bloodearnest> melmoth: me too
[11:43] <melmoth> if it tells me it's already bootstrapped, just destroy environment
[11:44] <bloodearnest> melmoth: already tried that
[11:44] <melmoth> maybe it bootstrapped but the bootstrap node vm was not able to run
[11:44] <bloodearnest> ...and it's working
[11:45] <melmoth> so the bootstrap node is up and running ?
[11:45] <melmoth> ssh into it, and try to find some juju-related logs
[11:45] <bloodearnest> melmoth: looks like
[11:47] <bloodearnest> melmoth: I did multiple destroy-environments already, I'm sure
[11:47] <bloodearnest> and juju status looks like it's timing out
[11:47] <melmoth> ssh into the node
[11:47] <melmoth> when you bootstrap, it creates a vm.
[11:47] <bloodearnest> yeah
[11:47] <melmoth> ssh into it
[11:48] <melmoth> if you can't, that explains why juju is not working
[11:48] <melmoth> if you can, you need to find out what's wrong in it.
[11:48] <melmoth> but i have no clue how the internals work
[11:48] <melmoth> so apart from looking at /var/log randomly, not sure
[11:48] <melmoth> (well, randomly... cloud-init.log, cloud-init-output.log and anything that looks like juju)
[11:50] <bloodearnest> couple of request failures from swift in the logs
[11:50] <bloodearnest> melmoth: /var/log/juju/all-machines.log
[11:51] <bloodearnest> but status is working again
[11:51] <bloodearnest> trying a deploy
[11:52] <bloodearnest> so it seems to be working
[11:52] <noodles775> rogpeppe: jfyi, I updated juju-core and retried an amulet test, I still see bug 1269519 : http://paste.ubuntu.com/6791133/
[11:52] <_mup_> Bug #1269519: Error on allwatcher api <juju-core:In Progress by rogpeppe> <juju-deployer:Confirmed> <https://launchpad.net/bugs/1269519>
[11:52] <rogpeppe> noodles775: bother.
[11:53] <rogpeppe> noodles775: i shall recheck.
[11:53] <noodles775> Yeah, sorry (It's not blocking me or anything, so don't prioritise it unless it's affecting others)
[12:02] <teknico_> fyi, filed bug #1271144
[12:02] <_mup_> Bug #1271144: br0 not brought up by cloud-init script <juju-core:New> <https://launchpad.net/bugs/1271144>
[13:11] <cargill_> hi, trying to set up juju on debian, and I'm getting "INFO juju.environs.sync sync.go:235 built 1.16.5.1-unknown-amd64 (4540kB); ERROR supercommand.go:282 invalid series "unknown""
[13:11] <cargill_> that's juju-local
[13:14] <mgz> cargill_: you probably need to teach juju about debian series names
[13:17] <eagles0513875> cargill_: i was trying to make a push for the use of juju for the Document Foundation but was asked if it works with debian lol
[13:18] <eagles0513875> cargill_: would be nice if juju was a bit more platform neutral, be it for debian or any debian derivatives
[13:19] <mgz> cargill_: see updateDistroInfo in environs/simplestreams/simplestreams.go - you'll want to make that read debian csv as well, and go from there
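The updateDistroInfo pointer above refers to juju's Go source; as a rough illustration only (not juju's actual code), here is how a distro-info style CSV maps a release version to its series name. The inline Debian rows and the `load_series` helper are illustrative assumptions mirroring the layout of `/usr/share/distro-info/*.csv`:

```python
import csv
import io

# Illustrative snippet in the distro-info CSV layout
# (version,codename,series,...); rows are examples, not juju's data.
DEBIAN_CSV = """version,codename,series,created,release,eol
6.0,Squeeze,squeeze,2009-02-14,2011-02-06,2014-05-31
7,Wheezy,wheezy,2011-02-06,2013-05-04,2016-04-26
"""

def load_series(csv_text):
    """Map each version string to its series (codename) name."""
    series = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        series[row["version"]] = row["series"]
    return series

mapping = load_series(DEBIAN_CSV)
print(mapping["7"])  # wheezy
```

Without such a mapping for Debian, the host release falls through to "unknown", which is the invalid series cargill_ is seeing.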
[13:27] <rogpeppe1> noodles775: i can't reproduce the problem any more with latest juju-core trunk
[13:27] <rogpeppe1> noodles775: i see "The rabbitmq-server passed this test."
[13:28] <noodles775> rogpeppe1: let me check the repro instructions and see how it differs from my amulet test.
[13:29] <rogpeppe1> noodles775: previously, i got 100% failure rate, so i think that *something* has been fixed
[13:30] <noodles775> rogpeppe1: yeah - you were always checking with the rabbitmq-server instructions right? (as my instructions didn't work because you didn't have python-requests installed).
[13:30]  * noodles775 can try with the other instructions too.
[13:32] <rogpeppe1> noodles775: i was trying with the basic_deploy_test.py in bzr branch lp:~mbruzek/charms/precise/rabbitmq-server/tests
[13:33]  * noodles775 does the same.
[13:38] <rogpeppe1> noodles775: you *did* "go install" the latest juju, didn't you?
[13:38] <rogpeppe1> noodles775: (and do a fresh bootstrap with it)
[13:40] <noodles775> rogpeppe1: I did this: http://paste.ubuntu.com/6791620/
[13:40] <noodles775> (and yes, I'm rebootstrapping for every run)
[13:42] <rogpeppe1> noodles775: and i presume that "which juju" prints "/home/michael/golang/bin/juju" ?
[13:43] <rogpeppe1> noodles775: could you paste me the contents of ~/.juju/local/logs/machine-0.log?
[13:44] <noodles775> rogpeppe1: first, here's the run with the rabbitmq log: http://paste.ubuntu.com/6791637/
[13:44] <noodles775> er, rabbit-mq steps to repeat.
[13:45] <noodles775> rogpeppe1: and here's the machine-0.log - http://paste.ubuntu.com/6791655/
[13:47] <rogpeppe1> noodles775: that doesn't look like output from the latest version
[13:48] <rogpeppe1> noodles775: the latest version prints "connection from" and "connection terminated" lines when API connections are made and dropped
[13:48] <noodles775> rogpeppe1: excellent - so, let me find out how I could be possibly running the old version given the steps taken.
[13:49] <rogpeppe1> noodles775: i didn't see your entire terminal log - for example, i didn't see the bootstrap step, or the contents of your $PATH, so i can't be sure
[13:50] <noodles775> They're what you'd expect, I'll paste with a re-run (while trying to find out what else juju may still have running)
[13:50] <rogpeppe1> noodles775: try destroying the environment and re-bootstrapping with "--debug"
[13:51] <noodles775> hrm, old tools?
[13:51] <noodles775> 2014-01-21 13:51:17 INFO juju.provider.local environ.go:473 tools location: /home/michael/.juju/local/storage/tools/releases/juju-1.17.0.1-saucy-amd64.tgz
[13:52] <noodles775> Right, even the binary version is 1.17.0, let me paste.
[13:52] <rogpeppe1> noodles775: the --debug flag should induce bootstrap to say where it's getting the jujud binary from
[13:54] <noodles775> rogpeppe1: Right - lots of 1.17.0 in there... http://paste.ubuntu.com/6791688/
[13:54] <cargill_> mgz: that means rebuilding juju, right? where are the sources located?
[13:54] <rogpeppe1> noodles775: ah, i know the problem
[13:55] <rogpeppe1> noodles775: it's frickin' sudo
[13:55] <rogpeppe1> noodles775: which ignores your $PATH
[13:55] <noodles775> Urg... right :/
[13:55] <mgz> cargill_: I assumed you had, which would be how you got the -unknown- there
[13:55] <mgz> cargill_: but, lp:juju-core and see README
[13:56]  * noodles775 sudo bootstraps with an explicit path.
[13:56] <rogpeppe1> noodles775: i've aliased sudo to http://paste.ubuntu.com/6791714/
[13:57] <noodles775> Thanks rogpeppe1
[13:58] <rogpeppe1> noodles775: i consider it a real problem that sudo doesn't use the same PATH, although i'm aware the writers of sudo consider it a feature
[13:58] <cargill_> it's a debian machine with a saucy ppa set up, I'm working on getting an ubuntu lxc container, but don't have that ready yet
[13:59] <noodles775> Yeah, I'd agree that it should be different, ideally we shouldn't need sudo to bootstrap, but there's a plan for that I guess.
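The bug hunt above came down to sudo's env_reset/secure_path behaviour: the child process doesn't inherit the caller's PATH, so `sudo juju` can resolve to an older system binary. A minimal Python sketch of that failure mode and of the fix (the effect of rogpeppe1's `sudo env PATH=$PATH ...` alias), not juju-specific:

```python
import os
import subprocess

my_path = os.environ["PATH"]

# Child launched with a scrubbed environment (roughly what sudo's
# secure_path does): it does not see the caller's PATH.
scrubbed = subprocess.run(
    ["/bin/sh", "-c", "echo $PATH"],
    env={}, capture_output=True, text=True,
).stdout.strip()
print("scrubbed child PATH:", repr(scrubbed))

# Passing PATH through explicitly restores the caller's lookup order,
# which is what the alias achieves for `sudo juju bootstrap`.
preserved = subprocess.run(
    ["/bin/sh", "-c", "echo $PATH"],
    env={"PATH": my_path}, capture_output=True, text=True,
).stdout.strip()
print(preserved == my_path)  # True
```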
[15:09] <Ming> Does charm still use 'revision' file to maintain the version?
[15:10] <Ming> Docs said deprecated but don't know what is alternative
[15:16] <lazypower> Ming, from what I understand it still uses the revision file to maintain the currently deployed version, otherwise I do believe it uses the bzr revision information. (needs citation)
[15:16] <Ming> Thx
[15:17] <marcoceppi> Ming: not quite
[15:17] <marcoceppi> Ming: revision is only used for local deployments
[15:17] <marcoceppi> Ming: otherwise, the charmstore maintains a separate revision of the charm
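To sketch marcoceppi's point: for a *local* deployment, a charm's `revision` file is just a single integer that you bump to force an upgrade, while the charm store tracks its own revision. The `bump_revision` helper below is illustrative only, not juju's implementation:

```python
from pathlib import Path

def bump_revision(charm_dir):
    """Increment the single-integer `revision` file in a charm dir."""
    rev_file = Path(charm_dir) / "revision"
    current = int(rev_file.read_text().strip()) if rev_file.exists() else 0
    rev_file.write_text("%d\n" % (current + 1))
    return current + 1
```

Calling it twice on a fresh charm directory yields revisions 1 and then 2.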
[15:52] <marcoceppi> dpb1: is this ready for review? https://code.launchpad.net/~davidpbritton/charms/precise/haproxy/fix-service-entries/+merge/202387
[16:01] <dpb1> marcoceppi: yup.  I just addressed all of sideni's comments
[16:06] <lazypower> is there something specific i need to add to my juju environment to get the `charm test` command to work properly? I've got tests in $CHARM_PATH/tests, yet when i run charm test, it complains about 'None does not exist in ~/.juju/environments.yaml'
[16:06] <lazypower> is this related to the null provider?
[16:31] <eagles0513875> hey guys question if i use the --to flag do i need to specify the ip address or can a domain name which has a dns entry for that server work as well
[16:33] <lazypower> eagles0513875, let me try and find out 1 moment while i bootstrap an AWS environment
[16:33] <eagles0513875> thanks lazypower
[16:34] <eagles0513875> lazypower: reason i'm asking is i'm planning on potentially using juju to help ease deployments to my vps provider.
[16:34] <eagles0513875> so far my tests getting used to the command line are quite nice and successful :)
[16:34] <lazypower> Thats great news :) I've been doing the same in my time out of work.
[16:34] <eagles0513875> lazypower: right now just using the local provider
[16:34] <eagles0513875> on my laptop which is very nice for testing and development of charms etc
[16:35] <lazypower> local has really streamlined the process. The "null provider" or manual provisioning stuff is still fairly experimental.
[16:35] <jamespage> marcoceppi, we never really talked about default: "" again
[16:35] <jamespage> marcoceppi, what is the reasoning behind doing that?
[16:36] <lazypower> jamespage, to offer a "sane default", that passes the lint test. charm linting via charm proof throws a W: if there isn't a default provided.
[16:36] <marcoceppi> lazypower: right, but that was a recent addition to charm proof
[16:36]  * marcoceppi looks through the commit history
[16:37] <jamespage> lazypower,  yes - but it changes the way config-get --format=json behaves
[16:37] <jamespage> and empty string is not the same as unset
[16:37] <lazypower> ah, i had not considered that.
[16:41] <Ming> Thx Marco, just back from a meeting
[16:43] <marcoceppi> jamespage: I think this came from a discussion on the list about empty strings and unset options
[16:43] <lazypower> eagles0513875, so the workflow as I'm seeing it, you have to juju add-machine(dnsname) then you can specify --to with the machine ID that the orchestration node assigns the machine.
[16:43] <marcoceppi> ultimately there was no juju unset and no way to have an unset item, so empty strings should be preferred
[16:43] <lazypower> eagles0513875, trying to specify --to with a DNS Name results in an error. error: invalid --to parameter "ec2-54-205-81-94.compute-1.amazonaws.com"
[16:43] <eagles0513875> ok
[16:44] <eagles0513875> i think the --to should work with either a dns name or ip address
[16:44] <marcoceppi> eagles0513875 lazypower --to should be a machine number
[16:44] <marcoceppi> from juju status output
[16:44] <lazypower> marcoceppi, thats what I just validated :)
[16:44] <marcoceppi> ah, missed that line
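What lazypower validated above can be sketched as a tiny placement check: a `--to` target is a machine number from `juju status` (optionally a container placement like lxc:0), never a DNS name or IP. This is an illustrative validator, not juju's actual parser, and the container prefixes are assumptions:

```python
import re

# Accept "0", "12", or container placements like "lxc:0" / "kvm:1";
# reject hostnames and addresses.
PLACEMENT = re.compile(r"^(?:(?:lxc|kvm):)?\d+$")

def valid_to(target):
    return bool(PLACEMENT.match(target))

print(valid_to("0"))      # True
print(valid_to("lxc:2"))  # True
print(valid_to("ec2-54-205-81-94.compute-1.amazonaws.com"))  # False
```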
[16:45] <Ming> Can juju help map Amazon instance private and public IPs?
[16:45] <jamespage> marcoceppi, how do I unset a non-string item then?
[16:45] <lazypower> Ming, yes. it aggregates all of the above.
[16:45] <marcoceppi> jamespage: there are only int and booleans
[16:45] <marcoceppi> jamespage: outside of strings, iirc
[16:46] <marcoceppi> jamespage: since NULL is technically not a string, it's a mismatched type/value
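Jamespage's objection can be shown in JSON terms: in `config-get --format=json` output, an empty string and an absent/null value are distinct, so `default: ""` is not the same as leaving the option unset. The `vip` option name below is hypothetical:

```python
import json

# A string option defaulted to "" versus one left unset (null).
set_to_empty = json.loads('{"vip": ""}')
unset = json.loads('{"vip": null}')

print(json.dumps(set_to_empty))   # {"vip": ""}
print(json.dumps(unset))          # {"vip": null}
print(set_to_empty == unset)      # False: empty string != unset
```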
[17:40] <vila> I'm encountering issues trying to 'juju bootstrap' on canonistack lcy02  with 1.17.0-saucy-amd64
[17:42] <vila> In one case it times out after 10 mins failing to connect to node 0 in the other I got: https://pastebin.canonical.com/103325/
[17:55] <mgz> vila: if you `nova --debug list` does it succeed in talking to the same api endpoint?
[17:55] <vila> mgz: that was then, nova list is empty right now
[17:56] <vila> mgz: there seems to be something fishy with lcy02 as the bootstrap succeeded in lcy01
[17:56] <mgz> right, it's probably an ask-IS moment
[17:57] <vila> mgz: also, is there a way to put --constraints "mem=1G" somewhere below ~/.juju ?
[17:57] <mgz> nope.
[17:57] <mgz> it should be the default though, something is a little borked
[17:58] <vila> mgz: damn 'nova list' is still empty, yet juju says I'm already bootstrapped >-/
[17:59] <mgz> just do `juju destroy-environment` and try again
[18:00] <vila> ok, going further after destroy --force/bootstrap 1G
[18:00] <vila> mgz: well, further... back at 'Attempting to connect to ...:22'
[18:06] <vila> mgz: that connection attempt is quite early in the bootstrap process right ?
[18:07] <mgz> yeah, did you run with -v?
[18:07] <vila> mgz: --show-log ? (Neither -v nor --show-log appears in juju help bootstrap AFAICS)
[18:08] <mgz> no, they're top level juju things
[18:08] <mgz> like bzr flag things
[18:08]  * vila coughs
[18:08] <vila> at least bzr displays them under --help, no ?
[18:09] <vila> hmm, may be not, irrelevant anyway ;)
[18:10] <vila> mgz: https://pastebin.canonical.com/103330/ times out after 10 mins
[18:10] <vila> mgz: and 'juju status' won't work until sshuttle is started right ?
[18:12] <mgz> right
[18:14] <vila> mgz: thanks, I was confused about which command was requiring the tunnel, sounds obvious that it's status in retrospect...
[18:16] <vila> mgz: so, any idea on what is going on ?
[18:17] <mgz> looks like the nova endpoint for lcy02 is down, that's why I wondered what `nova --debug list` showed
[18:18] <vila> mgz: urgh, you know who to ping about that ?
[18:19] <mgz> #is vanguard and ask, if that shows the dns as unreachable like your juju log did
[18:20] <vila> mgz: but that occurred only once and doesn't seem to be the case right now, will check #is but they bounced us to another channel ;)
[18:20] <vila> mgz: rats, bad timing, Vanguard just switched to: None :-/
[21:11] <arosales> bloodearnest, ping
[21:42] <Ming_> in the departed hook, which variable tells whether this is the leaving node?
[21:45] <lazypower> Ming_, example?
[21:46] <Ming_> I have a cluster, let's say 5 nodes. one node is destroyed by "juju destroy-unit"; all nodes will run *-departed hooks
[21:46] <lazypower> ahh, i dont know. let me find out for you.
[21:46] <Ming_> but the leaving node will handle this differently than others
[21:47] <Ming_> k. thx
[21:48] <_sEBAs_> hey!
[21:48] <marcoceppi> o/ _sEBAs_
[21:50] <lazypower> Ming_, i'm not getting a quick response, but i'll keep doing the legwork until I get an answer and will ping you when I have an update.
[21:50] <Ming_> no problem
[21:52] <Ming_> another question, is there a way to hide config variables so they're not exposed to the user in the charm GUI? We have some purely internal variables in config.yaml
[21:57] <lazypower> Ming_, is this charm going to be for internal use only? or are you writing a charm that will eventually be submitted for charm store review?
[21:58] <Ming_> yes. very soon
[22:00] <lazypower> Ok, since this will be submitted for the charm store, I can't think of a good way to hide configuration options per se - as it's not readily apparent to the end user. You can alternatively provide a sane default for the configuration options that suits 90% of the use cases, and that would satisfy the requirement.
[22:00] <lazypower> Otherwise, if you are going to be using this extensively internally, read from an external file - or fork the config and maintain one for internal use, and maintain another for public releases.
[22:01] <lazypower> Those are a few of your options.
[22:03] <Ming_> k
[22:41] <_sEBAs_> thank you all!