[02:44] thomi: which juju version are you using?
[02:45] lazypower: 1.25.0-0ubuntu1~15.10.1
[02:45] thomi which substrate is this against? local, aws, openstack, et al?
[02:45] lazypower: I destroyed the environment and re-created it. I no longer see that error (I see other errors instead)
[02:46] lazypower: this is all against local
[02:46] well, that's progress
[02:46] Are the new errors blocking you?
[02:47] lazypower: they are, but I'm at my EOD, so I'll attack them again tomorrow with a fresh brain
[02:47] thanks for your help tho
[02:48] Sorry i wasn't more help. Cheers thomi
=== stub` is now known as stub
=== Guest78115 is now known as jose
=== jose is now known as Guest38423
=== Guest38423 is now known as jose
=== axino` is now known as axino
[08:26] gnuoy, http://paste.ubuntu.com/13640557/
[08:26] was the diff I have locally for generating a list of related unit private-addresses
[08:26] it also switches the scope to SERVICE from GLOBAL
[08:27] that said I think that the interface stub should provide methods that return primitives such as lists as a core - joining with ',' is really an openstack specific bit
=== Guest32965 is now known as CyberJacob
[10:23] jamespage, got a moment for a layers review? https://code.launchpad.net/~gnuoy/charms/+source/interface-rabbitmq/+git/interface-rabbitmq/+merge/279417
[13:22] How to study: http://qr.ae/RbCtvx
[13:23] Oops, wrong channel! bye.
[14:55] gnuoy, reviewed - we should drop 'hostname' from the interface - it's legacy and should not be used afaict
[14:56] jamespage, it opens an interesting question about what other aspects of the charm could be considered 'legacy'
[15:31] charmers: I don't think that I can fix these errors, they seem to come from the test environment. Can someone give me suggestions? http://paste.ubuntu.com/13632412/
[15:31] tvansteenburgh: ^?
[15:31] tvansteenburgh: disregard
[15:32] jrwren: that's an issue with OSCI, beisner is your best bet
[15:32] We have two CI services running, one for general testing and one for OpenStack (OSCI) testing
[15:32] beisner: help. :)
[15:33] hi jrwren - undercloud woes hit ya there. serverstack is back in black, i can re-trigger that now.
[15:33] jrwren, can you link me to your MP?
[15:33] beisner: oh please retrigger, thank you. https://code.launchpad.net/~evarlast/charms/trusty/mongodb/fix-dump-actions/+merge/277191
[15:36] jrwren, you have a passing result which ran after that failure
[15:36] beisner: I do? I think I didn't get the email.
[15:36] jrwren, see comment chain @ https://code.launchpad.net/~evarlast/charms/trusty/mongodb/fix-dump-actions/+merge/277191
[15:37] * beisner thought he already retriggered this one ;-)
[15:37] beisner: I see the comment. Sorry for the false request. Any reason it didn't email me the success message?
[15:37] jrwren, not sure. our bot doesn't email you. launchpad does.
[15:38] beisner: ah! I guess I never realized those messages were just LP messages on the MP. Thanks!
[15:38] jrwren, ps thanks for the work on beating those tests into shape
[15:38] beisner: i'll be giving charmers lots of fun pokes of fake greif if I ever see 'em all IRL again :)
[15:39] *grief
[15:39] ha!
[15:39] I think the next request is to rerun these: http://review.juju.solutions/review/2357 ?
[15:40] jamespage, I have banished hostname
[15:47] marcoceppi, hrm, can't figure out how to retrigger tests @ http://review.juju.solutions/ it seems like there used to be a button. it could just be not-enough-coffee..
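To make the interface-layer change gnuoy describes above ([08:26]: building a list of related units' private-addresses and switching the relation scope to SERVICE) a bit more concrete, here is a minimal sketch assuming the charms.reactive RelationBase API of that era; the class, interface, and state names are illustrative, not the actual interface-rabbitmq code.

```python
# Hypothetical interface layer sketch (requires.py); names are illustrative,
# not the real interface-rabbitmq implementation.
from charms.reactive import RelationBase, scopes, hook


class ExampleRequires(RelationBase):
    # One conversation per remote service rather than one global one,
    # matching the GLOBAL -> SERVICE scope switch discussed above.
    scope = scopes.SERVICE

    @hook('{requires:example}-relation-{joined,changed}')
    def changed(self):
        self.set_state('{relation_name}.connected')

    @hook('{requires:example}-relation-{broken,departed}')
    def broken(self):
        self.remove_state('{relation_name}.connected')

    def private_addresses(self):
        """Return related units' private-addresses as a plain list.

        Joining the list with ',' is left to the consuming charm, since
        that is an OpenStack-specific convention per the discussion above.
        """
        addresses = []
        for conv in self.conversations():
            addr = conv.get_remote('private-address')
            if addr:
                addresses.append(addr)
        return addresses
```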
[15:47] beisner jrwren retriggered
[15:47] marcoceppi, woot thx man
[15:48] marcoceppi: thank you.
[15:51] gnuoy, \o/
[15:51] jamespage, what are we celebrating?
[15:52] hostname banishment
[15:59] gnuoy, fwiw I think we're too aggressive on removing the 'available' and 'connected' state - the handler is applicable for 'broken' but not 'departed' which would apply when a single unit exits the relationship
[16:00] gnuoy, but please merge your proposal for now
[16:00] jamespage, good point
[16:00] jamespage, ack
[16:46] tvansteenburgh: I think this is you from whom I need help. lxc test env broken? http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1652/console ERROR there was an issue examining the environment: cannot use 37017 as state port, already in use
[16:47] jrwren: ugh. thanks for the heads-up, fixing...
[16:55] jrwren: new test running http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1654/console
[16:56] tvansteenburgh: thank you.
=== verterok is now known as verterok-away
=== verterok-away is now known as verterok
[19:17] hey #juju
[19:18] running up maas/juju/openstack first time ever
[19:18] following the canonical guide - got to the part where we're to be bootstrapping a node
[19:18] juju quickstart
[19:18] juju quickstart v1.3.1
[19:19] stuck at bootstrapping the maas environment (type: maas)
[19:19] :(
[19:19] little help?
[19:19] just sits there and does nothing
[19:19] TheJeff: has it acquired any of the nodes in your maas cluster?
[19:19] yes it did
[19:19] as in, do any of the nodes in maas have a powered on logo?
[19:20] says deployed
[19:20] ok, so the curtin process can take a bit of time. has the node come online fully after curtin?
[19:20] with the owner as the maas
[19:20] curtin?
[19:20] and its been a solid 15 minutes of no action
[19:20] 4th attempt
[19:20] 15 minutes does seem excessive
[19:20] let it wait 30 before
[19:21] typically its done within 2 or 3 minutes in most cases
[19:21] so theres 1 of 2 things happening
[19:21] there's a networking issue
[19:21] or we've found a bug, but i'm apt to point a finger at a networking issue
[19:21] when you deploy a node in maas, can your workstation reach the instance that its deployed?
[19:22] no, but the maas box is the one running juju
[19:22] and it can hit ssh
[19:22] but actually i tested that and it just kicks back pub key nogo
[19:22] so the daemon is reachable
[19:22] could that be related? do i need to get an ssh key in?
[19:22] Did you put your pubkey in maas for the juju user?
[19:22] oo
[19:22] nope, doc did not mention that
[19:23] cat ~/.ssh/id_rsa.pub and place that in your maas user's ssh keys
[19:23] also TheJeff - which version of juju? 1.25 i assume?
[19:24] 1.24.7-trusty-amd64
[19:27] TheJeff: once that ssh key is added, tear down the unit that was in progress and retry. That ssh key will be added during the curtin process w/ cloud-init
[19:27] and you *should* be g2g from there
[19:27] yep all that makes sense
[19:27] doing now
[19:27] really really appreciate the tip
[19:27] TheJeff: additionally which guide were you following? I'd like to file a bug against it if it's missing the pubkey instructions.
[19:28] coreycb: are there restrictions on what things should be called in the ./reactive subdir of a layered charm? like does ./reactive/myStuff have to end in .py or .sh or anything?
[19:28] er, cory_fu ^^
[19:28] lazypower: after searching it does mention the key
[19:28] but doesn't actually state 'do this'
[19:28] https://help.ubuntu.com/lts/clouddocs/en/Installing-Juju.html
[19:28] the only mention of it is "This key will be associated with the user account you are running in at the time, so should be the same account you intend to run Juju from."
[19:29] doesn't explicitly mention to add it to the user
[19:29] kwmonroe: Anything with .py is imported as a Python source file. Anything that's executable is considered an ExternalHandler. Anything else should be ignored
[19:30] TheJeff: ah - yeah. looks like we have a new feature here that allows you to inline the authorized key path for the maas provider - https://jujucharms.com/docs/stable/config-maas
[19:30] i say new as its new to me, it may have existed for a while :)
[19:30] hm well I actually did that
[19:30] and the key wasn't present in the user profile
[19:32] in the interest of one thing at a time, if that doesn't resolve your woes, we'll dive into deeper debugging. What may be helpful is to run juju bootstrap with the --debug flag so you get more verbose output
[19:32] neat that's a great tip too
[19:32] its still in a deploying state, fingers crossed
=== ericsnow is now known as ericsnow_afk
[19:47] marcoceppi o/
[19:47] \o lazypower
[19:47] do you have a second to expand on charms.docker #3?
[19:47] i'm not sure i understand what you were asking me
[19:47] https://github.com/juju-solutions/charms.docker/pull/3
[19:48] lazypower: I'm curious how this works. Do you still need to increment setup.py version when you tag a release or does travis just do that?
[19:48] From what I understood in the docs, you still have to rev setup.py's version number. The GH tag doesn't have to necessarily reflect what the version is, but it will trigger travis to run the deploy routine.
[19:49] lazypower: cool, thanks for the clarification
[19:49] so, if i leave it as v0.0.1, push an update, tag v0.0.1-1, it'll still publish to pypi as 0.0.1 (assuming you can publish the same version multiple times, i have not tried that)
[19:49] lazypower: you can't
[19:49] ok, so the build should fail then :)
[19:49] interesting
[19:50] it'll try to deploy and the build should fail
[19:50] wanna test it?
[19:50] merge that and lets make some tags
[19:50] but if you were to move the tag you'd have to force push on master
[19:50] which is bad
[19:50] since its semver, just increment the minor rev
[19:50] wait, I'm not 100% on that actually
[19:50] but it's annoying that you've burnt the 0.0.1-1 release
[19:51] annoying i can deal with because that's problems for me
[19:51] not for the consumers using pypi
[19:51] i'm not sure that force updated tags will collide though
[19:52] git push origin --tags --force doesn't seem to do anything that would collide, it just moves the hash reference
[19:52] ah but i see what you're saying
[19:52] the version that's in setup.py, and the now moved tag, would not reflect what's in pypi potentially.
[19:53] yeah, i think i understand. seems annoying, and is worth investigating
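Expanding on cory_fu's answer above about the ./reactive directory (anything ending in .py is imported as Python, anything executable is treated as an ExternalHandler, the rest is ignored): a minimal Python handler sketch, with the file, state, and message names made up for illustration.

```python
# reactive/my_stuff.py -- picked up because it ends in .py; an executable
# script in reactive/ would instead run as an ExternalHandler, and any
# other file is ignored.  State names below are illustrative.
from charms.reactive import when, when_not, set_state
from charmhelpers.core import hookenv


@when_not('mystuff.installed')
def install():
    hookenv.log('installing my stuff')
    set_state('mystuff.installed')


@when('mystuff.installed')
def report():
    hookenv.status_set('active', 'my stuff is ready')
```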
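And on the charms.docker release question above: the version PyPI publishes comes from setup.py, not from the git tag, so the version field has to be bumped before tagging a release. A minimal sketch, with illustrative metadata rather than the project's real setup.py:

```python
# Minimal setup.py sketch; the metadata here is illustrative.
from setuptools import setup, find_packages

setup(
    name='charms.docker',
    # PyPI rejects re-uploading an existing version, so this needs a bump
    # before each release tag; the tag itself only triggers the Travis
    # deploy step, it does not set the published version.
    version='0.0.2',
    packages=find_packages(),
)
```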
[19:54] well, also, if you force push on master, you potentially mess up other people's repos by rewriting history
[19:55] not with tags
[19:55] but code, yes
[19:56] yeah, i realized as I said it earlier I wasn't 100% on that since a tag is just a ref
[19:56] I don't have a problem with the merge, I was more curious how it worked
[19:56] I'm not 100% on that either
[19:56] ok, so after trying again with the ssh key in there, and failing again I've run it with --debug
[19:56] but wanted to investigate and possibly use this as a nicer method to keep it updated.
[19:56] lazypower: 2015-12-03 19:54:13 DEBUG juju.provider.common bootstrap.go:254 connection attempt for shy-yam.maas failed: ssh: Could not resolve hostname shy-yam.maas: Name or service not known
[19:57] looks like name resolution failing?
[19:57] how would it know that name?? do I need to create a host entry?
[19:57] TheJeff: yes, this is a common problem
[19:57] TheJeff: not at all
[19:57] TheJeff: the one way I've gotten around this is to set my nameserver to the maas-master server
[19:57] TheJeff: Maas runs a bind server which updates w/ the hostnames as they are deployed and allocated addressing
[19:58] TheJeff: so you'll need to add an entry to the resolv.conf templates, to point to itself. nameserver 127.0.0.1 \n search maas
[19:58] and that should get you sorted w/ hostname resolution
[19:58] lazypower: wait, where are you recommending that?
[19:58] the issue is from the juju client -> dns, not from the nodes AFAIU
[19:59] marcoceppi: TheJeff is running juju on the maas master
[19:59] on the region controller i believe
[19:59] yep
[19:59] it's all on a sanitized network...
[19:59] no extra hosts or workstation access
[19:59] TheJeff: nslookup/dig will answer you where it's looking for the hostname :)
[19:59] yep!
[20:00] oh, oops, sorry. Didn't see that
[20:00] i'm 90% certain that you just need to add the entry for looking up the .maas tld on the region controller, and you should be sorted
[20:00] yeah, adding `search maas` to the bottom of /etc/resolv.conf to test. If that works, it won't survive reboot, but you can edit the templates like lazypower suggested and have it persist
[20:06] yeah I can resolve the name now, will be a few minutes to reboot the box and redeploy
[20:06] (blasted server bioses are snails!)
[20:06] so many boot roms :(
[20:07] TheJeff: i felt your pain yesterday standing my 2u back up after an 8 month resting phase :P
[20:07] 8 months not bad
[20:07] from what i hear, every time you reboot a linux server a kitten dies.
[20:08] https://bugs.launchpad.net/maas/+bug/1522566
[20:08] Bug #1522566: MAAS TLD names are not resolveable by maas-region-controller by default
[20:08] marcoceppi ^
[20:10] +1
[20:11] 2015-12-03 20:08:46 DEBUG juju.provider.common bootstrap.go:254 connection attempt for shy-yam.maas failed: Warning: Permanently added 'shy-yam.maas,10.7.1.102' (ECDSA) to the list of known hosts.
[20:11] /var/lib/juju/nonce.txt does not exist
[20:11] :(
[20:11] getting trolled by a shy-yam, what a thursday adventure
[20:12] so i'm googling this
[20:12] i booted via the host's console but it pxe booted from maas
[20:13] ^ 's/booted/powered\ on/'
[20:29] TheJeff - seems like you're running into this https://bugs.launchpad.net/juju-core/+bug/1314682
[20:29] Bug #1314682: Bootstrap fails, missing /var/lib/juju/nonce.txt (containing 'user-admin:bootstrap')
[20:35] lazypower: totally
=== natefinch is now known as natefinch-afk
[20:36] TheJeff: there's a few work arounds listed in there. Highly suggested that you click the link "This bug affects me", and give some of those a try. I'll ping @jcastro to bring it up as an item to look at on the next meeting as there's a few users here that have hit this bug and it seems to be long running
[20:36] looking
[20:37] going to check it out - maybe I'm going about things wrong? The hosts boot from PXE (but we kick the boot off via remote bios)
[20:37] afaik cloud-init generally working, as maas was able to spin these hosts up and make them nodes in the first place
[20:37] I'll holler this over to the core team, seems to have plenty of data attached
[20:43] so whats the contents of this nonce.txt supposed to be?
[20:43] I can manually slap it in I suppose
[20:44] TheJeff: we're getting out of my depth of knowledge here :( Finding the bug report was due to googling and deep linking. I wish i were more help
[20:44] Best I can offer at this juncture is to advise you to watch that bug and experiment
[20:45] yeah that's what I'm going to do
[20:45] seems it just contains (from comments) just one line:
[20:45] user-admin:bootstrap
[20:45] if anyone has a working juju bootstrapped box that can check /var/lib/juju/nonce.txt would greatly appreciate it
[20:46] is user-admin a real username? or is that a placeholder by the poster? are there other lines?
[20:46] or is it a key?
[20:48] TheJeff bootstrapping now, will have an answer for you shortly
[20:49] lazypower: you are a hero among men (or women)
[20:49] You're too kind :)
[20:49] TheJeff: are you planning on attending the juju charmer summit? Best place to get your hands/feet wet with juju development
[20:51] where / when?
[20:51] TheJeff: http://summit.juju.solutions
=== lazypower changed the topic of #juju to: Welcome to Juju! || jujucharms.com back up || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || Charmer Summit: http://summit.juju.solutions
=== marcoceppi changed the topic of #juju to: Welcome to Juju! || Juju Charmer Summit: http://summit.juju.solutions || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
[21:01] derp
[21:02] whoops, wrong window focus, nvm
[21:07] TheJeff: something is going hinky on my end, this is taking longer than expected :/
[21:16] TheJeff: it just has the string user-admin:bootstrap
[21:19] jose: ping
[21:19] lazypower: had fires, trying now
[21:19] thank you for checking
[21:19] np
[21:19] re: summit - Belgium is very far from Toronto!
but we will see we will see
[21:30] lazypower: 2015-12-03 21:28:03 DEBUG juju.utils.ssh ssh.go:249 using OpenSSH ssh client
[21:30] Logging to /var/log/cloud-init-output.log on remote host
[21:30] Running apt-get update
[21:30] Running apt-get upgrade
[21:30] manually slapping that string in worked around fine
[21:30] +1 that bug
[21:30] we got you unblocked! nice
[21:30] * lazypower fistpumps
[21:31] sorry about the bug though. it's on the list to be talked about at the next team standup
[21:31] o/ lazypower
[21:31] yo yo thumper
[21:31] lazypower: my team lead will be at cfgmgmtcamp
[21:32] so that's pretty neat
[21:32] I wish we had the time to allocate a dev to just follow each eco person for a month and fix all the shitty niggly bugs that get in the way
[21:32] he was going to /j and sing praises earlier but i told him to hold off dox'ing us lol
[21:32] thumper: we file bugs, and triage accordingly :)
[21:32] :)
[21:32] but now that this is working i think he can dox us just fine
[21:32] and thank you very very very very very much sir
[21:32] or madam
[21:32] TheJeff: well, you can always have him join us at the summit (and convince him to take you along ;)
[21:33] Anytime TheJeff
[21:33] where should layered charm authors commit source in launchpad? i know the resultant charm must live at lp:~user/charms/trusty/foo/trunk, but what about the source -- lp:~user/charms/layers/foo/trunk, or lp:~user/charms/trusty/foo/source?
[21:33] kwmonroe: they can put it anywhere, there's no need for it to be in the charms project
[21:34] kwmonroe: i heard the cool kids were using git on launchpad these days
[21:34] kwmonroe: lazypower's answer is the best
[21:34] lazypower: he just followed you on twitter
[21:34] lol
[21:37] can I set a config variable within a charm?
[21:38] Icey: only a user can set a configuration value
[22:04] thedac, you familiar with the neutron-api charm's templating system?
[22:04] adam_g: a bit, yes.
[22:04] What are you trying to do?
[22:08] thedac, trying to get something set in /etc/neutron/plugins/ml2/ml2_conf.ini
[22:08] it looks like the only time this file is ever managed by the charm is when it's set in the legacy manage plugin mode
[22:08] http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/neutron-api/trunk/view/head:/hooks/neutron_api_utils.py#L286
[22:08] let me take a look
[22:09] i can just push settings in via the subordinate's relation, but the principal charm isn't managing the file as a template so they'll never be injected
[22:10] ... unless the subordinate API thing supports injecting config values into arbitrary config files not managed by the templating system
[22:11] Off the top of my head the subordinate api only affects neutron.conf. Let me verify.
[22:11] yeah, that's what it looks like
[22:11] i suppose i could just manage the ml2 conf directly from the subordinate
[22:12] I think that is the intention. If we are using vanilla ovs neutron-api handles it. If not the subordinate does so.
[22:14] adam_g: That is confirmed, see lp:~openstack-charmers/charms/trusty/neutron-api-odl/vpp as an example
[22:16] thedac, what does 'vanilla ovs' mean?
[22:17] If we are using the "default" openvswitch
[22:17] Without any external config changes
[22:28] ayy
[22:28] ok we're like ... 99.9% there I think
[22:28] after working around and working around, juju quickstart ran
[22:28] juju-gui/0 deployment is pending
[22:28] machine 0 is started
[22:28] ehhhh didn't wait long enough
[22:28] it's up!
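To illustrate the neutron-api/subordinate exchange above (having the subordinate manage /etc/neutron/plugins/ml2/ml2_conf.ini itself, as in the neutron-api-odl example): a rough sketch assuming charmhelpers' OSConfigRenderer; the context class, template directory, and values are placeholders, not code from an actual charm.

```python
# Sketch of a subordinate charm rendering its own ml2_conf.ini,
# assuming charmhelpers' OSConfigRenderer; context/template names are
# placeholders rather than code from a real charm.
from charmhelpers.contrib.openstack.templating import OSConfigRenderer
from charmhelpers.contrib.openstack.context import OSContextGenerator

ML2_CONF = '/etc/neutron/plugins/ml2/ml2_conf.ini'


class MyML2Context(OSContextGenerator):
    def __call__(self):
        # Values interpolated into templates/<release>/ml2_conf.ini
        return {'mechanism_drivers': 'mydriver'}


def register_configs(release='liberty'):
    configs = OSConfigRenderer(templates_dir='templates/',
                               openstack_release=release)
    configs.register(ML2_CONF, [MyML2Context()])
    return configs


def config_changed():
    # Called from the charm's hooks; writes ml2_conf.ini from the
    # registered template and contexts.
    register_configs().write(ML2_CONF)
```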
[22:35] TheJeff: Seems like you're ready to model some deployments
[22:54] charmers, can someone please poke this http://review.juju.solutions/review/2357
=== ericsnow_afk is now known as ericsnow