[04:25] Hi all. I'm trying to learn enough amulet to get my charm accepted: https://bugs.launchpad.net/charms/+bug/999439 Are there any examples of charms which get the "charmers seal of approval" for good amulet tests?
[04:25] Bug #999439: Need charm for quassel-core
[04:27] And a related question: Is there a recommended workflow for getting local results from my tests rather than having to wait for them to churn through the various cloud testing setups?
=== JoshStrobl is now known as JoshStrobl-AFK
=== vahid is now known as Guest31053
[05:38] blahdeblah: Set up a makefile so that 'make test' runs your tests against an environment you already have bootstrapped (with JUJU_ENV set if it is not your default).
[05:39] blahdeblah: Or there is a new docker image that can be used to get the same environment as the jenkins setup. Similar process, except you run the tests inside the VM.
[05:44] stub: Thanks - I might have a look into those next time. Next Q: if I want to run the same set of tests against a precise deploy and a trusty deploy, what's the best way to do it? I don't want to duplicate whole modules.
[05:46] I stick "SERIES := $(juju get-environment default-series)" at the top of my Makefile, then reference that environment variable.
[05:47] So it uses the default series, unless I override it on the make command line.
[05:50] The jenkins setup configures the juju environment with a default series corresponding to your branch, so you unfortunately need to push your branch to two locations to get the tests run against two series.
[05:50] Cool - thanks stub
[06:02] blahdeblah: Were you doing ntp charming? I have a charm where the units require clock synchronization. At the moment I'm just installing the ntp package, but was wondering if that is good enough.
[06:02] stub: I was helping bradm with it a bit
[06:03] I could say 'you must also use this subordinate', but it would be nice if that was optional.
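The series-selection pattern stub describes at [05:46]-[05:47] can be sketched in plain shell. This is a minimal sketch, not stub's actual Makefile: the "trusty" fallback and the `2>/dev/null` guard are added here only so it runs without a bootstrapped environment. (In a GNU Makefile, note that shelling out needs the `$(shell ...)` function, e.g. `SERIES ?= $(shell juju get-environment default-series)`.)

```shell
#!/bin/sh
# Pick up the environment's default series unless SERIES was set by the
# caller (mirrors overriding a make variable on the command line).
# The "trusty" fallback is illustrative only, for when juju is absent.
detected=$(juju get-environment default-series 2>/dev/null || echo trusty)
SERIES="${SERIES:-$detected}"
echo "running tests against series: $SERIES"
```

Invoked as `SERIES=precise ./test.sh`, the caller's value wins, analogous to `make test SERIES=precise`.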
[06:03] stub: If you really want it to work, you need to deploy a few ntp masters, then deploy ntp clients (which will auto-relate to the masters) on the hosts whose clocks you want synchronized.
[06:04] Because, now that I think of it, just installing the package will fail under some egress restrictions.
[06:04] NTP egress restrictions, or something else?
[06:04] NTP egress restrictions
[06:05] Is there a reason I can't rely on the default NTP masters you get when you install the ntp package? Apart from egress?
=== lp|sprint is now known as lazyPower
=== Murali_ is now known as Murali
[08:32] gnuoy`: hi
[08:32] where does the nova-api-metadata server run on a default juju openstack deployment?
[08:33] apuimedo: nova-cloud-controller, I believe
[08:33] apuimedo: Though Calico installs and runs it as part of its neutron-calico subordinate charm
[08:33] lukasa: I can't find it there
[08:33] I think it's part of nova-api, which does run there
[08:33] I saw it only in the quantum-gateway charm
[08:33] Oh interesting
[08:34] This isn't a problem we had, because we distribute nova-api-metadata, so maybe it is only part of quantum-gateway
[08:34] ;-)
[08:34] When it comes to the quantum-gateway charm I know almost nothing about it, because we don't use it. =D
[08:34] lukasa: I saw that you put it as a required service for Calico in the charm-helpers ;-)
[08:35] (nova-api-metadata, I mean)
[08:35] Yeah. =) Wherever we can, we distribute services as widely as possible; nova-api-metadata is one of them.
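The "curl the endpoint" check that comes up in this thread might look like the sketch below. Everything here is a placeholder assumption: `NCC_ADDR` stands in for your nova-cloud-controller unit's address, and 8775 is the conventional nova-api-metadata listen port; the actual request is left commented out because it needs a live deployment.

```shell
#!/bin/sh
# Hypothetical probe of nova-api-metadata on the cloud controller.
# NCC_ADDR is a placeholder address; 8775 is the stock listen port.
NCC_ADDR="${NCC_ADDR:-10.0.0.10}"
METADATA_URL="http://${NCC_ADDR}:8775/latest/meta-data/"
echo "would probe: $METADATA_URL"
# Against a live deployment:
# curl -sf --connect-timeout 5 "$METADATA_URL"
```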
[08:35] ;-)
[08:35] No reason to have metadata queries wandering all over the network if we can avoid it
[08:35] true
[08:36] But having it in quantum-gateway sounds plausible too
[08:36] Though I expect that if you don't add neutron-api, it runs on nova-cloud-controller
[08:36] Regardless, try making a metadata query to nova-cloud-controller and see what happens; I wouldn't be surprised if it responds
[08:36] (I don't have a juju deployment up at the minute)
[08:36] celebdor@nx02 ~/code/nova-cloud-controller $ bzr grep nova-api-metadata
[08:36] hooks/charmhelpers/contrib/openstack/neutron.py: 'nova-api-metadata'],
[08:36] hooks/charmhelpers/contrib/openstack/neutron.py: 'nova-api-metadata']],
[08:37] =) I mean literally spinning one up and then curling the endpoint. =)
[08:37] and I believe those two mentions are Calico's :P
[08:37] I think they are too. =D
[08:38] However, IIRC nova-api-os-compute also includes nova-api-metadata
[08:38] makes sense
[08:39] And that *is* deployed by nova-cloud-controller
=== scuttle|afk is now known as scuttlemonkey
=== scuttlemonkey is now known as scuttle|afk
=== JoshStrobl-AFK is now known as JoshStrobl
=== Murali_ is now known as Murali
[12:50] bug 1431286
[12:51] Bug #1431286: juju bootstrap fails when http_proxy is set in environments.yaml
=== zz_CyberJacob is now known as CyberJacob
[15:12] hi ctlaugh, fyi that change has landed in the nova-compute "next" (development) charm. https://code.launchpad.net/~clark-laughlin/charms/trusty/nova-compute/arm64-patch-1/+merge/252682 ps thanks, gnuoy`
[15:13] np, thanks for the contribution ctlaugh
[15:20] gnuoy`, https://code.launchpad.net/~openstack-charmers/charm-helpers/0mq/+merge/254590
[15:20] if you have 5 secs
[15:22] jamespage, approved
=== brandon is now known as Guest37764
=== Guest37764 is now known as web
[16:04] Is there a variable I can call in the `config-changed` hook that gets the name of a service as-is, like `mongo-master` rather than `mongo-master/0`?
Or will I need to regex the line?
[16:05] oops, I mean relation change
[16:06] on `relation-changed`. Thought this would do it, but it doesn't: hostservicename=`relation-get $JUJU_RELATION`
[17:01] web: You need to extract the service name from the unit name, as you suggest. It is not available as a separate variable.
[17:03] stub: thank you. So I would use `relation-get $JUJU_UNIT_NAME`, correct?
[17:04] sorry to ask and not test myself. Don't want to spin up a server just to test a value. I'm being lazy... :)
[17:12] web: Yes, $JUJU_UNIT_NAME
[17:13] web: But it is an environment variable, so don't use relation-get
[17:15] `echo $JUJU_UNIT_NAME | sed -e 's|/.*||'`
[17:15] stub: if I need the relation unit name, then?
[17:17] that's more elegant than mine: `expr "$host_service_name_string" : '^\(.[a-z]*\)'`
[17:22] oh, i see it: $JUJU_REMOTE_UNIT
[17:23] stub: thank you again
[17:35] never mind, I was wrong, not right
[17:41] now i see )
[17:41] now i feel stupid
[18:32] web: you can also do ${JUJU_UNIT_NAME%/*}
[18:33] it's bash substitution, but as long as your hooks are /bin/bash and not /bin/dash, that will work
[18:36] When running openstack-install, what kind of name do I set JUJU_BOOTSTRAP_TO to so that it creates an lxc container on the maas server?
[18:38] marcoceppi_: Always improving things!
[18:41] drbidwell: link to the openstack-install instructions/download?
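The extraction approaches from the thread above (sed, expr, and parameter expansion) can be compared side by side. A minimal sketch: `JUJU_UNIT_NAME` is normally set by juju inside a hook environment, so it is faked here; the expr pattern is adjusted to `[^/]*` so that hyphenated service names like `mongo-master` survive intact (the `.[a-z]*` pattern quoted above would stop at the hyphen). Note that `${JUJU_UNIT_NAME%/*}` is POSIX parameter expansion, so it works in dash as well as bash.

```shell
#!/bin/sh
# JUJU_UNIT_NAME is set by juju in a real hook context; faked here.
JUJU_UNIT_NAME="mongo-master/0"

via_sed=$(echo "$JUJU_UNIT_NAME" | sed -e 's|/.*||')        # strip /N suffix
via_expr=$(expr "$JUJU_UNIT_NAME" : '\([^/]*\)')            # capture up to first /
via_bash="${JUJU_UNIT_NAME%/*}"                             # POSIX expansion

echo "$via_sed $via_expr $via_bash"
```

All three print `mongo-master`; the parameter expansion is the cheapest, since it forks no extra process.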
i'm not familiar with the software, but I can probably figure it out
=== brandon is now known as Guest24534
=== Guest24534 is now known as web
[18:50] haha http://www.newegg.com/Product/Product.aspx?Item=9SIA6N42616299&nm_mc=KNC-GoogleMKP-PC&cm_mmc=KNC-GoogleMKP-PC-_-pla-_-All+Cases+%26+Covers-_-9SIA6N42616299&gclid=Cj0KEQjw6OOoBRDP9uG4oqzUv7kBEiQA0sRYBAv_3qEC_e-rZ2s-jRzIYjGQiso_DaDj3RMnUuMEhx4aAphS8P8HAQ&gclsrc=aw.ds
[18:55] jcastro: marcoceppi_: I think I broke juju
[18:56] trying to update my canonistack deployment
[18:56] juju debug-log's last lines are:
[18:56] machine-0: 2015-03-17 16:36:33 ERROR juju.rpc server.go:554 error writing response: EOF
[18:56] machine-0: 2015-03-17 16:36:36 ERROR juju.state.apiserver debuglog.go:101 debug-log handler error: write tcp 10.172.65.78:57110: connection timed out
[18:56] and no juju command I issue seems to have any impact on the actual instances
[18:57] oh, wait... darnit, I'm in the wrong environment
[18:58] you need to check your environment variables and make sure you're connecting to the correct API
[18:58] i think
[18:58] haha
[18:58] or that
[18:59] also, it seems to be doing things now (even in the wrong environment), so maybe it was just slow to respond
[19:00] mhall119: when you run commands, use --debug and -v to get more verbose information
[19:01] marcoceppi_: will keep that in mind, thanks
[19:01] it's all running now, and at least this environment is just another canonistack instance of the same app, so minimal damage done
[19:04] is there a way to see which uvt- commands juju runs when creating kvm machines? when add-machine for kvm fails, I can only see that it failed; nothing in the logs clearly shows which command failed (I assume a uvt-kvm create, but you can't tell).
[19:05] rharper: did you check the machine-*.log?
[19:05] I'll check this time; I did before, but didn't see anything uvt-related
[19:05] and of course it works this time, since I removed all of the uvt-kvm images first
[19:06] well, I'll look there next time
=== bic2k_ is now known as bic2k
[19:22] okay, here is a fun question I wish I'd thought of before. Can I deploy a charm from git (e.g. github)?
[19:30] web: there's a plugin for that
[19:31] web: https://pypi.python.org/pypi/juju-git-deploy
[19:31] what plug-ins now? I need to read those docs again, don't I.
[19:31] web: :) lots of cool plugins. Just wait until they show you the dhx video
[19:33] :x I wish I could focus on researching new stuff right now :( one month left to finish graduate work, no time :( so behind, and a baby on the way
[19:33] have to use what I know
[23:06] Is MAAS, juju, or the charm responsible for ssh-keygen on nodes? | http://askubuntu.com/q/603317
=== brandon is now known as Guest58519