=== CyberJacob is now known as CyberJacob|Away
=== freeflying is now known as freeflying_away
=== freeflying_away is now known as freeflying
[07:18] why I specify auth_url in environment.yaml as http, but juju bootstrap still tries to connect to the url over ssl
[07:18] is it a feature of juju to force use of ssl
[07:19] provider is openstack
[07:35] freeflying: force juju to use or not use ssl?
[07:36] davecheney, I wanna use normal http, but it was forced to connect to https, even though I have it configured as http://
[07:37] freeflying: sorry, we only support ssl urls
[07:38] davecheney, ok, that explains it, thanks for clarifying
[07:38] freeflying: we do support self-signed certificates
[07:39] if that helps
[07:41] davecheney, not sure
[07:43] davecheney, I think the default keystone charm doesn't provide such a thing
[08:18] hi all.. manual provisioning... when i bootstrap a host and look at /var/log/juju/machine-0.log i see the following repeating over and over:
[08:18] worker: start "lxc-provisioner"
[08:18] worker: exited "lxc-provisioner": no state server machines with addresses found
[08:18] worker: restarting "lxc-provisioner" in 3s
[08:19] ashipika: are you using the null provider?
[08:19] yes (manual provisioning)
[08:20] ashipika: sorry, it may sound like a dumb question - there are two parts to manual provisioning (one of which isn't supported). but you're not using that, so it's ok
[08:20] anyway
[08:20] which version of juju?
[08:20] 1.17.0-saucy-amd64
[08:21] axw: sorry.. total beginner with juju. love the idea so i try to follow the documentation for the null provider.. i really do appreciate all the help
[08:22] ashipika: no worries, just wanted to make sure I understand what you're doing
[08:22] ashipika: would you mind pastebinning your log file? is it small enough?
[08:23] axw: sure.. you want machine-0.log or something else?
[08:23] yes, machine-0.log please
[08:24] axw: http://paste.ubuntu.com/6472787/
[08:26] axw: just a stray thought.. during bootstrap i saw some problems with locale (python warnings)... which i believe is due to ssh-ing into a host..
[08:26] axw: sorry, sorry.. perl warning.. where is my head today..
[08:27] perl: warning: Falling back to the standard locale ("C").
[08:27] I don't think that's a problem
[08:28] ashipika: dumb question - have you done a "juju status"?
[08:28] environment: "null"
[08:28] machines:
[08:28]   "0":
[08:28]     agent-state: started
[08:28]     agent-version: 1.17.0.1
[08:28]     dns-name: ubuntu.d.xlab.lan.
[08:28]     instance-id: 'manual:'
[08:28]     series: precise
[08:28]     hardware: arch=amd64 cpu-cores=1 mem=987M
[08:28] services: {}
[08:29] had you done that before you pasted the log?
[08:29] I ask because the act of doing "juju status" finalises the bootstrap process
[08:29] oh.. no, i have not..
[08:30] take a look at the log file now, it should have stopped logging that error
[08:30] http://paste.ubuntu.com/6472815/
[08:30] yikes, what is going on there
[08:31] lxc-ls missing executable
[08:31] indeed
[08:31] not sure why it wants it
[08:31] should i destroy the environment and try bootstrapping again with a new VM? just to see if i can reproduce the issue?
[08:31] ashipika: there are some changes going on that will make this problem go away, but I suspect you could just "apt-get install lxc" on that machine for now to make it be quiet
[08:32] axw: installing...
[08:32] ashipika: I think it's just a matter of the manual provider not installing lxc (it shouldn't need to, but there's a bug there that will be fixed soon)
[08:33] axw: yaay! Starting up provisioner task machine-0
[08:34] cool :)
[08:34] axw: now on to new frontiers.. adding new machines :)
[08:34] good luck!
[08:37] axw: oh.. stuck on an issue that the bootstrapped host needs a hostname that can be resolved in DNS
[08:37] all i have are IPs
[08:38] dialing "wss://ubuntu.d.xlab.lan.:17070/
[08:40] ashipika: we'll probably want a bug for that one
[08:40] for now you'll probably have to hack /etc/hosts :(
[08:41] is that a known bug?
[08:41] i'm ok hacking /etc/hosts for now..
[08:41] ashipika: we don't have anything in for it at the moment. there's a vaguely similar one in that the CLI attempts to connect to the reverse lookup of bootstrap-host
[08:42] which can fail for various reasons
[08:53] * ashipika does a little jig: Provisioned machine 1
[08:53] woohoo :)
[08:58] axw: trying to deploy juju-gui.. in status i get: agent-state-info: 'hook failed: "start"'
[08:58] ashipika: pastebin machine-1.log please?
[08:59] i see the error already.. :) again.. wss://ubuntu.d.xlab.lan... need a bit more /etc/hosts magic
[09:00] ashipika, I'd rather you set up a local dns server
[09:01] freeflying.. roger that... will restart everything from scratch.. maybe a good idea to put this into the documentation..
[09:02] ashipika, and use ddns to update your dns record
[09:03] I'll raise a bug and we will either document a requirement or change it to not require DNS
[09:07] axw, dnsmasq worked with the local provider to resolve dns names, I remember
[09:12] morning!
[09:17] axw: removed everything from the bootstrapped host..
[09:17] tried bootstrapping again.. now i am again on: restarting "lxc-provisioner" in 3s
[09:17] ashipika: did you do juju status again?
[09:18] * ashipika stupid
=== freeflying is now known as freeflying_away
[09:19] ok..
[09:20] dns working.. but when i provision another machine i get: http://paste.ubuntu.com/6472942/
[09:21] the xmaas-1.d.xlab.lan is resolvable via dns
[09:22] any errors in machine-0.log?
[09:22] last log message on machine-0: juju.provisioner provisioner_task.go:243 machine 1 already started as instance "manual:xmaas-2.d.xlab.lan"
[09:22] sda1: WRITE SAME failed. Manually zeroing..
[09:23] oh.. and just before these two lines: WARNING juju.worker.addressupdater updater.go:219 cannot get addresses for instance "manual:xmaas-2.d.xlab.lan": no instance found
[09:25] and juju status says: status missing for machine-1
[09:28] hold on... 37017... that's the mongo port
[09:28] I think someone broke the code. how was it working for you before though? are you working off the source tree?
[09:29] ashipika: ^^
[09:29] go get -v launchpad.net/juju-core/...
[09:29] I see
[09:29] go install -v launchpad.net/juju-core/...
[09:29] :)
[09:29] ok, just a moment - you're going to have to patch a file manually I'm afraid
[09:29] sure :)
[09:31] ashipika: http://bazaar.launchpad.net/~go-bot/juju-core/trunk/view/head:/environs/manual/provisioner.go#L185
[09:31] please locate that on disk, and modify StateAddrs to be APIAddrs
[09:31] (only on that line)
[09:32] on which host?
[09:33] ashipika: on whichever host you built juju on
[09:33] ok
[09:33] afterwards, you'll have to rebuild juju and re-bootstrap
[09:34] L185: Addrs: configParameters.APIAddrs,
[09:34] correct?
[09:34] let me just confirm before I waste your time
[09:34] ashipika: yes, that is correct
[09:34] axw: you're not wasting my time.. you're helping.. thnx!
[09:34] no problems
[09:35] manual provisioning is my baby ;)
[09:35] an ugly baby, but my baby nonetheless
[09:44] damn.. still the same error.. maybe i did not rebuild the entire juju.. how do i clean the previous installation?
[09:44] ok.. have to go to a meeting.. be back in 20m
[09:45] ashipika: "go get -v launchpad.net/juju-core/..." is all you should need to do. make sure your target env is totally clean before reattempting. I may not be here in 20m, but I'll be back online at the same time tomorrow
=== freeflying_away is now known as freeflying
[10:21] axw: reinstalled, re-bootstrapped
[10:23] still: machine-1 -> juju status: instance-state: missing
[10:24] tried deploy of mongodb to machine-1: 'hook failed: "install"'
=== CyberJacob|Away is now known as CyberJacob
[10:40] ashipika: juju ssh 1
[10:40] less /var/log/juju/unit-*
[10:49] ah, sorry.. destroyed my environment.. reinstalling VMs, retrying from 0
[10:53] Hey guys!
[10:53] Looking to parse some YAML in go with your goyaml package
[10:54] So to be flexible I unpack a yaml file into a map[string]interface{}
[10:54] But it appears goyaml decides to unpack hashes into map[interface{}]interface{}
[10:54] Which makes me unable to marshal it to JSON
[10:55] What would you guys recommend for me to get around this?
[11:10] davecheney: on mongodb deploy -> HOOK ImportError: No module named yaml
[11:11] davecheney: HOOK File: /var/lib/juju/agents/unit-mongodb-0/charm/hooks/install
=== gary_poster|away is now known as gary_poster
[14:21] sinzui, thanks for putting that OSX bash completion in the release, that's classy!
[14:22] jcastro, np, I was desperate to get some code landed in anyone's project to raise my self-esteem
=== freeflying is now known as freeflying_away
=== freeflying_away is now known as freeflying
[16:01] hey all. I'm hitting an issue with the lxc provider about git not being installed. I think I had this problem some time ago, and it turned out to be my ISP returning matches for invalid DNS
[16:02] something in juju, I think, looks for a particular DNS name at some point in container start-up? and does something if it's not found?
[16:07] bloodearnest: this might also have to do with an apt proxy
[16:08] do you have a proxy set up for your machine's apt?
[16:09] marcoceppi: hmm, so I do
[16:09] * bloodearnest has no memory of this place
[16:09] bloodearnest: if the proxy on your machine is set up to read from 127.0.0.1 or another address
[16:09] that address needs to be reachable in the containers
[16:09] if it's not, apt will fail and git won't install
[16:10] the local provider automatically inherits your proxy settings from the host machine
[16:10] if you install squid-deb-proxy or another package, it may have automatically created the rules for you, bloodearnest
[16:10] either way, either update the rules so that lxc can use them, or remove them and re-bootstrap
[16:11] marcoceppi: does the lxc provider still require an apt proxy?
[16:11] it doesn't require a proxy at all
[16:11] bloodearnest: it will simply re-use the one on your host machine, as most host machines with a proxy have it because they can't access the archives directly
[16:11] so if you have a caching service set up on your host machine, the rule is usually 127.0.0.1, and that won't work inside LXC
[16:12] * bloodearnest nukes apt-cacher-ng from orbit
[16:12] no proxy is required, it's just a feature that exists in juju, where the local provider will inherit those settings
[16:12] right
[16:12] jcastro: we should probably document that caveat on the local provider page
[16:13] huh
[16:13] I am using a proxy and I don't have that issue
=== adam_g_afk is now known as adam_g
[16:23] jcastro: depends on the address of the proxy
[16:24] Which reminds me, I need to tune apt-cacher-ng to be more aggressive. apt is still the slowest part of spawning new lxc instances.
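(Editor's note: the proxy problem marcoceppi describes above comes down to the host's apt proxy address not being routable from inside a container. A sketch of the two variants, assuming apt-cacher-ng on its default port 3142 and a stock lxc install where containers reach the host at the lxcbr0 address 10.0.3.1 — verify both on your own machine:)

```
# /etc/apt/apt.conf.d/01proxy on the host, as a caching tool might create it.
# 127.0.0.1 is the host's loopback, which does NOT resolve to the host
# from inside an LXC container, so apt in the container fails:
Acquire::http::Proxy "http://127.0.0.1:3142";

# Reachable variant: point at the host's lxcbr0 bridge address instead
# (10.0.3.1 on a default install; check with `ip addr show lxcbr0`):
Acquire::http::Proxy "http://10.0.3.1:3142";
```

Since the local provider copies the host's proxy settings into containers, fixing the rule on the host (or removing it and re-bootstrapping, as suggested above) resolves the failed git install.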
=== freeflying is now known as freeflying_away
[16:28] marcoceppi: ok, so I removed apt-cacher-ng altogether, but still get this problem
[16:29] marcoceppi: I think it's probably related to my crappy ISP rerouting DNS
[16:35] marcoceppi: hm, so I can resolve archive.ubuntu.com from inside the container
[16:38] ah, it was a canonical vpn issue, it seems
=== freeflying_away is now known as freeflying
[16:45] unrelated question - juju-core doesn't like deploying from local symlinks (pyjuju was ok with that). Is there a workaround for this?
[17:09] evilnickveitch, bundle doc MP incoming from me!
[17:41] bloodearnest, ok so you have charms in a directory somewhere
[17:41] and you have those symlinked?
[17:42] jcastro: yeah, the specific case is a mini test repository for a charm
[17:42] huh, I didn't even know we supported that in the first place
[17:42] can you file a bug on it on juju-core?
[17:43] jcastro: can do
[17:43] I am not sure if we supported symlinks on purpose or by accident, heh
[17:44] jcastro: it may be a security feature - I get a message like 'ERROR cannot bundle charm: symlink "." links out of charm: ".."'
[17:45] yeah, we used to use it for testing/dev with pyjuju; with gojuju we've had to move to developing out of a 'precise' parent dir
[17:45] which is cumbersome
[17:46] jcastro: it's particularly useful when developing a subordinate charm, as you have to have a real charm as well in order to test it at all
[17:46] well, you had me at "we use it", so if it's useful for you then I figure we might as well file it
[17:47] so you can have both your subordinate and a dummy test charm in a local repository
[17:47] I don't like forcing it to have series in the path anyway. *shakes fist*
[17:47] juju deploy
[17:50] jcastro: +1000
[17:52] jcastro: so I think it may be related to https://bugs.launchpad.net/juju-core/+bug/1129319
[17:52] <_mup_> Bug #1129319: Local charm deployment not working if symlinks are used
[17:53] hey, so I guess we can just reopen this
[17:53] what version of juju core are you on?
[17:54] jcastro: 1.16.3-saucy-amd64
[17:54] ok, leave a comment there and I'll reopen it!
[17:55] jcastro: will do, thanks
[18:00] jcastro, ok, i fixed it, should go live in 30 minutes
[18:00] evilnickveitch, heh, what was wrong with it?
[18:00] jcastro, it was quite a good effort for you! I just rewrote some bits in English
[18:01] * jcastro claps slowly
[18:02] jcastro, i liked the video, but i kept thinking you were going to go "ta dah!" at some point
[18:02] I was going to make it an animated gif
[18:02] but the results were crappy
[18:02] I had intended it to not have audio at all
[18:03] there will be no GIFs in the docs!
=== CyberJacob is now known as CyberJacob|Away
=== gary_poster is now known as gary_poster|away
[23:01] does anyone know if the havana release of openstack moved neutron into the nova-cc charm instead of it being in the quantum-gateway charm?
[23:37] zradmin, still the quantum-gateway charm
=== freeflying is now known as freeflying_away