[07:18] <freeflying> why I specify auth_url in environments.yaml as http, but juju bootstrap still tries to connect to the url over ssl
[07:18] <freeflying> is it a feature of juju to force use ssl
[07:19] <freeflying> provider is openstack
[07:35] <davecheney> freeflying: force juju to use or not use ssl ?
[07:36] <freeflying> davecheney, I wanna use normal http, but it was forced to connect to https, even though I have it configured as http://
[07:37] <davecheney> freeflying: sorry, we only support ssl urls
[07:38] <freeflying> davecheney, ok, that explains it, thanks for clarifying
[07:38] <davecheney> freeflying: we do support self signed certificates
[07:39] <davecheney> if that helps
[07:41] <freeflying> davecheney, not sure
[07:43] <freeflying> davecheney, I think default keystone charm doesn't provide such thing
[08:18] <ashipika> hi all.. manual provisioning... when i bootstrap a host and look at /var/log/juju/machine-0.log i see the following repeating over and over:
[08:18] <ashipika> worker: start "lxc-provisioner"
[08:18] <ashipika> worker: exited "lxc-provisioner": no state server machines with addresses found
[08:18] <ashipika> worker: restarting "lxc-provisioner" in 3s
[08:19] <axw> ashipika: are you using the null provider?
[08:19] <ashipika> yes (manual provisioning)
[08:20] <axw> ashipika: sorry, it may sound like a dumb question - there are two parts to manual provisioning (one of which isn't supported). but you're not using that, so it's ok
[08:20] <axw> anyway
[08:20] <axw> which version of juju?
[08:20] <ashipika> 1.17.0-saucy-amd64
[08:21] <ashipika> axw: sorry.. total beginner with juju. love the idea so i try to follow the documentation for null provider.. i really do appreciate all the help
[08:22] <axw> ashipika: no worries, just wanted to make sure I understand what you're doing
[08:22] <axw> ashipika: would you mind pastebinning your log file? is it small enough?
[08:23] <ashipika> axw: sure.. you want the machine-0.log or something else?
[08:23] <axw> yes, machine-0.log please
[08:24] <ashipika> axw: http://paste.ubuntu.com/6472787/
[08:26] <ashipika> axw: just a stray thought.. during bootstrap i saw some problems with locale (python warnings)... which i believe is due to ssh-ing into a host..
[08:26] <ashipika> axw: sorry sorry.. perl warning.. where is my head today..
[08:27] <ashipika> perl: warning: Falling back to the standard locale ("C").
[08:27] <axw> I don't think that's a problem
[08:28] <axw> ashipika: dumb question- have you done a "juju status"?
[08:28] <ashipika> environment: "null"
[08:28] <ashipika> machines:
[08:28] <ashipika>   "0":
[08:28] <ashipika>     agent-state: started
[08:28] <ashipika>     agent-version: 1.17.0.1
[08:28] <ashipika>     dns-name: ubuntu.d.xlab.lan.
[08:28] <ashipika>     instance-id: 'manual:'
[08:28] <ashipika>     series: precise
[08:28] <ashipika>     hardware: arch=amd64 cpu-cores=1 mem=987M
[08:28] <ashipika> services: {}
[08:29] <axw> had you done that before you pasted the log?
[08:29] <axw> I ask because the act of doing "juju status" finalises the bootstrap process
[08:29] <ashipika> oh.. no. i have not..
[08:30] <axw> take a look at the log file now, it should have stopped logging that error
[08:30] <ashipika> http://paste.ubuntu.com/6472815/
[08:30] <axw> yikes, what is going on there
[08:31] <ashipika> lxc-ls executable missing
[08:31] <axw> indeed
[08:31] <axw> not sure why it wants it
[08:31] <ashipika> should i destroy the environment and try bootstrapping again with a new VM? just to see if i can reproduce the issue?
[08:31] <axw> ashipika: there are some changes going on that will make this problem go away, but I suspect you could just "apt-get install lxc" on that machine for now to make it be quiet
[08:32] <ashipika> axw: installing...
[08:32] <axw> ashipika: I think it's just a matter of the manual provider not installing lxc (it shouldn't need to, but there's a bug there that will be fixed soon)
[08:33] <ashipika> axw: yaay! Starting up provisioner task machine-0
[08:34] <axw> cool :)
[08:34] <ashipika> axw: now on to new frontiers.. adding new machines :)
[08:34] <axw> good luck!
[08:37] <ashipika> axw: oh.. stuck on an issue that the bootstrapped host needs a hostname that can be resolved in the DNS
[08:37] <ashipika> all i have are IPs
[08:38] <ashipika> dialing "wss://ubuntu.d.xlab.lan.:17070/
[08:40] <axw> ashipika: we'll probably want a bug for that one
[08:40] <axw> for now you'll probably have to hack /etc/hosts :(
[08:41] <ashipika> is that a known bug?
[08:41] <ashipika> i'm ok hacking /etc/hosts for now..
[08:41] <axw> ashipika: we don't have anything in for it at the moment. there's a vaguely similar one in that the CLI attempts to connect to the reverse lookup of bootstrap-host
[08:42] <axw> which can fail for various reasons
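For reference, the /etc/hosts hack axw suggests would look something like this on the client machine (the IP address here is hypothetical; use the bootstrap host's actual address):

```
# /etc/hosts on the machine running the juju CLI (hypothetical IP)
192.168.1.10    ubuntu.d.xlab.lan
```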
[08:53]  * ashipika does a little jig: Provisioned machine 1
[08:53] <axw> woohoo :)
[08:58] <ashipika> axw: trying to deploy juju-gui..  in status i get: agent-state-info: 'hook failed: "start"'
[08:58] <axw> ashipika: pastebin machine-1.log please?
[08:59] <ashipika> i see the error already.. :) again.. wss://ubuntu.d.xlab.lan... need a bit more /etc/hosts magic
[09:00] <freeflying> ashipika, I'd rather you set up a local dns server
[09:01] <ashipika> freeflying.. roger that... will restart everything from scratch.. maybe a good idea to put this into the documentation..
[09:02] <freeflying> ashipika, and use ddns to update your dns record
[09:03] <axw> I'll raise a bug and we will either document a requirement or change it to not require DNS
[09:07] <freeflying> axw, dnsmasq worked with local provider to resolve dns name I remember
[09:12] <mgz> morning!
[09:17] <ashipika> axw: removed everything from the bootstrapped host..
[09:17] <ashipika> tried bootstrapping again.. now i am again on : restarting "lxc-provisioner" in 3s
[09:17] <axw> ashipika: did you do juju status again?
[09:18]  * ashipika stupid
[09:19] <ashipika> ok..
[09:20] <ashipika> dns working.. but when i provision another machine i get:  http://paste.ubuntu.com/6472942/
[09:21] <ashipika> the xmaas-1.d.xlab.lan is resolvable via dns
[09:22] <axw> any errors in machine-0.log?
[09:22] <ashipika> last log message on machine-0: juju.provisioner provisioner_task.go:243 machine 1 already started as instance "manual:xmaas-2.d.xlab.lan"
[09:22] <ashipika> sda1: WRITE SAME failed. Manually zeroing..
[09:23] <ashipika> oh.. and just before these two lines: WARNING juju.worker.addressupdater updater.go:219 cannot get addresses for instance "manual:xmaas-2.d.xlab.lan": no instance found
[09:25] <ashipika> and juju status says: status missing for the machine-1
[09:28] <axw> hold on... 37017... that's the mongo port
[09:28] <axw> I think someone broke the code. how was it working for you before though? are you working off the source tree?
[09:29] <axw> ashipika: ^^
[09:29] <ashipika> go get -v launchpad.net/juju-core/...
[09:29] <axw> I see
[09:29] <ashipika> go install -v launchpad.net/juju-core/...
[09:29] <ashipika> :)
[09:29] <axw> ok, just a moment - you're going to have to patch a file manually I'm afraid
[09:29] <ashipika> sure :)
[09:31] <axw> ashipika: http://bazaar.launchpad.net/~go-bot/juju-core/trunk/view/head:/environs/manual/provisioner.go#L185
[09:31] <axw> please locate that on disk, and modify StateAddrs to be APIAddrs
[09:31] <axw> (only on that line)
[09:32] <ashipika> on which host?
[09:33] <axw> ashipika: on whichever host you built juju on
[09:33] <ashipika> ok
[09:33] <axw> afterwards, you'll have to rebuild juju and rebootstrap
[09:34] <ashipika> L185: 		Addrs:    configParameters.APIAddrs,
[09:34] <ashipika> correct?
[09:34] <axw> let me just confirm before I waste your time
[09:34] <axw> ashipika: yes that is correct
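The one-line fix axw describes, shown as a diff against environs/manual/provisioner.go (surrounding whitespace approximate):

```
-		Addrs:    configParameters.StateAddrs,
+		Addrs:    configParameters.APIAddrs,
```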
[09:34] <ashipika> axw: you're not wasting my time.. you're helping.. thnx!
[09:34] <axw> no problems
[09:35] <axw> manual provisioning is my baby ;)
[09:35] <axw> an ugly baby, but my baby nonetheless
[09:44] <ashipika> damn.. still the same error.. maybe i did not rebuild the entire juju.. how do i clean the previous installation?
[09:44] <ashipika> ok.. have to go to a meeting.. be back in 20m
[09:45] <axw> ashipika: "go get -v launchpad.net/juju-core/..." is all you should need to do. make sure your target env is totally clean before reattempting. I may not be here in 20m, but I'll be back online at the same time tomorrow
[10:21] <ashipika> axw: reinstalled, re-bootstrapped
[10:23] <ashipika> still: machine-1 -> juju status: instance-state: missing
[10:24] <ashipika> tried deploy of mongodb to machine-1: 'hook failed: "install"'
[10:40] <davecheney> ashipika: juju ssh 1
[10:40] <davecheney> less /var/log/juju/unit-*
[10:49] <ashipika> ah, sorry.. destroyed my environment.. reinstalling VMs, retrying from 0
[10:53] <aktau> Hey guys!
[10:53] <aktau> Looking to parse some YAML in go with your goyaml package
[10:54] <aktau> So to be flexible I unpack a yaml file into a map[string]interface{}
[10:54] <aktau> But it appears goyaml decides to unpack hashes into map[interface{}]interface{}
[10:54] <aktau> Which makes me unable to marshal it to JSON
[10:55] <aktau> What would you guys recommend for me to get around this?
[11:10] <ashipika> davecheney: on mongodb deploy -> HOOK ImportError: No module named yaml
[11:11] <ashipika> davecheney: HOOK File: /var/lib/juju/agents/unit-mongodb-0/charm/hooks/install
[14:21] <jcastro> sinzui, thanks for putting that OSX bash completion in the release, that's classy!
[14:22] <sinzui> jcastro, np, I was desperate to get some code landed in anyone's project to raise my self-esteem
[16:01] <bloodearnest> hey all. I'm hitting an issue with the lxc provider about git not being installed. I think I had this problem some time ago, and it turned out to be my ISP returning matches for invalid DNS
[16:02] <bloodearnest> something in juju I think looks up a particular DNS name at some point in container start up? And does something if it's not found?
[16:07] <marcoceppi> bloodearnest: this might also have to do with an apt proxy
[16:08] <marcoceppi> do you have a proxy set up for your machine's apt?
[16:09] <bloodearnest> marcoceppi: hmm so I do
[16:09]  * bloodearnest has no memory of this place
[16:09] <marcoceppi> bloodearnest: if your proxy on your machine is set up to read from 127.0.0.1 or another address
[16:09] <marcoceppi> that address needs to be reachable in the containers
[16:09] <marcoceppi> if it's not, apt will fail and git won't install
[16:10] <marcoceppi> the local provider automatically inherits your proxy settings from the host machine
[16:10] <marcoceppi> if you install squid-deb-proxy or another package, it may have automatically created the rules for you bloodearnest
[16:10] <marcoceppi> either way, either update the rules so that lxc can use them or remove them and re-bootstrap
[16:11] <bloodearnest> marcoceppi: does lxc provider still require an apt proxy?
[16:11] <marcoceppi> it doesn't require a proxy at all
[16:11] <marcoceppi> bloodearnest: it will simply re-use the one on your host machine, as most host machines with a proxy often have it because they can't access the archives directly
[16:11] <marcoceppi> so if you have a caching service set up on your host machine, the rule is usually 127.0.0.1, that won't work inside LXC
[16:12]  * bloodearnest nukes apt-cacher-ng from orbit
[16:12] <marcoceppi> no proxy is required, it's just a feature that exists in juju, where the local provider will inherit those settings
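What marcoceppi describes would look roughly like this in an apt proxy config (a hedged sketch: the filename, the lxcbr0 bridge address 10.0.3.1, and apt-cacher-ng's default port 3142 are typical values, not taken from this conversation):

```
# /etc/apt/apt.conf.d/01proxy — hypothetical example
# Broken inside containers: 127.0.0.1 resolves to the container itself,
# not to the host running the caching proxy.
# Acquire::http::Proxy "http://127.0.0.1:3142";

# Reachable from containers: point at the host's lxcbr0 address instead.
Acquire::http::Proxy "http://10.0.3.1:3142";
```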
[16:12] <bloodearnest> right
[16:12] <marcoceppi> jcastro: we should probably document that caveat on the local provider page
[16:13] <jcastro> huh
[16:13] <jcastro> I am using a proxy and I don't have that issue
[16:23] <marcoceppi> jcastro: depends on the address for the proxy
[16:24] <stub> Which reminds me, I need to tune apt-cacher-ng to be more aggressive. apt is still the slowest part of spawning new lxc instances.
[16:28] <bloodearnest> marcoceppi: ok, so I removed apt-cacher-ng altogether, but still get this problem
[16:29] <bloodearnest> marcoceppi: I think it's probably related to my crappy ISP rerouting DNS
[16:35] <bloodearnest> marcoceppi: hm, so I can resolve archive.ubuntu.com from inside the container
[16:38] <bloodearnest> ah, it was a canonical vpn issue, it seems
[16:45] <bloodearnest> unrelated question - juju-core doesn't like deploying from local symlinks (pyjuju was ok with that). Is there a workaround for this?
[17:09] <jcastro> evilnickveitch, bundle doc MP incoming from me!
[17:41] <jcastro> bloodearnest, ok so you have charms in a directory somewhere
[17:41] <jcastro> and you have those symlinked?
[17:42] <bloodearnest> jcastro: yeah, the specific case is a mini test repository for a charm
[17:42] <jcastro> huh I didn't even know we supported that in the first case
[17:42] <jcastro> can you file a bug on it on juju-core?
[17:43] <bloodearnest> jcastro: can do
[17:43] <jcastro> I am not sure if we supported symlinks on purpose or by accident, heh
[17:44] <bloodearnest> jcastro: it may be a security feature - I get a message like 'ERROR cannot bundle charm: symlink "." links out of charm: ".." '
[17:45] <bloodearnest> yeah, we used to use it for testing/dev with pyjuju; with gojuju we've had to move to developing out of a 'precise' parent dir
[17:45] <bloodearnest> which is cumbersome
[17:46] <bloodearnest> jcastro: it's particularly useful when developing a subordinate charm, as you have to have a real charm as well in order to test it at all
[17:46] <jcastro> well, you had me at "we use it", so if it's useful for you then I figure might as well file it
[17:47] <bloodearnest> so you can have both your subordinate and a dummy test charm in a local repository
[17:47] <jcastro> I don't like forcing it to have series in the path anyway. *shakes fist*
[17:47] <jcastro> juju deploy <any directory and who cares about the structure>
[17:50] <bloodearnest> jcastro: +1000
[17:52] <bloodearnest> jcastro: so I think it may be related to https://bugs.launchpad.net/juju-core/+bug/1129319
[17:52] <_mup_> Bug #1129319: Local charm deployment not working if symlinks are used <juju-core:Fix Released by fwereade> <https://launchpad.net/bugs/1129319>
[17:53] <jcastro> hey so I guess we can just reopen this
[17:53] <jcastro> what version of juju core are you on?
[17:54] <bloodearnest> jcastro: 1.16.3-saucy-amd64
[17:54] <jcastro> ok, leave a comment there and I'll reopen it!
[17:55] <bloodearnest> jcastro: will do thanks
[18:00] <evilnickveitch> jcastro, ok, i fixed it, should go live in 30 minutes
[18:00] <jcastro> evilnickveitch, heh, what was wrong with it?
[18:00] <evilnickveitch> jcastro, it was quite a good effort for you! I just rewrote some bits in English
[18:01]  * jcastro claps slowly
[18:02] <evilnickveitch> jcastro, i liked the video, but i kept thinking you were going to go "ta dah!" at some point
[18:02] <jcastro> I was going to make it an animated gif
[18:02] <jcastro> but the results were crappy
[18:02] <jcastro> I had intended it to not have audio at all
[18:03] <evilnickveitch> there will be no GIFs in the docs!
[23:01] <zradmin> does anyone know if the havana release of openstack moved neutron into the nova-ccc charm instead of it being in the quantum gateway charm?
[23:37] <negronjl> zradmin, still the quantum-gateway charm