=== alexisb is now known as alexisb-afk
=== natefinch-afk is now known as natefinch
=== xnox_ is now known as xnox
=== \b is now known as benonsoftware
=== aluria` is now known as aluria
[12:14] gnuoy, one minor comment on https://code.launchpad.net/~gnuoy/charms/trusty/keystone/add-dnsaas-svc/+merge/283922
[12:22] lazypower|travel: I need your slides from the summit/FOSDEM when you get a chance
=== lazypower|travel is now known as lazyPower
[12:49] jcastro - ack. lemme fish those up
[12:49] <3
[12:57] marcoceppi: aisrael: you guys need anything?
[12:57] jcastro: ?
[12:57] feel free to ask if you need some mwc prep thing done
[12:57] huh?
[12:57] you guys were working on an mwc demo no?
[12:57] yeah, still are
[13:14] Hi. For a given charm deployed on a given unit, how would I go about retrieving a list of all relation names for that unit? All the charm helpers such as relation-list seem to require a relation name to begin with. What I'm trying to do is run a script at the end of the upgrade-charm hook that looks for all existing relations for that unit and somehow trigger a relation-set so that the original relation config is reinstated.
[13:21] hi all, is it possible to get a unit out of the blocked state? similar to juju resolved for when a unit is in the error state
[13:22] it's missing relations to a unit that has already settled
[13:22] lovea: great question. i forget if it's relation-list or relation-ids that spits out all the currently established relations
[13:23] but one of those commands will give you what you're looking for - or at least the first step to start probing/inspecting
[13:23] and I've been waiting for ~20 mins or so, it's not going anywhere
[13:23] Ursinha - can you give me a little more context?
[13:23] lazyPower: I'm deploying openstack and ceph-radosgw is blocked missing relations on ceph-moin
[13:23] *ceph-mon
[13:23] lazyPower: definitely not relation-ids so I'll try relation-list
[13:24] Ursinha - blocked status messages, if not cleared properly in the charm code, can be misleading. status messages are arbitrary notices from the charm author, and it's entirely likely that a charm may be operating nominally but the status message was never cleared, giving it the appearance of still being in a blocked state
[13:24] but ceph-mon is already ready and idle
[13:24] Ursinha: and when you juju status ceph-mon, do you see the relation established in the output? (specifically when using --format=yaml with juju status ceph-mon)
[13:24] lazyPower: juju logs show in ceph-radosgw that relations are missing
[13:24] lazyPower: let me try that specifically
[13:25] icey - ping if you're around :)
[13:25] pong lazyPower
[13:25] icey - are you familiar with the ceph-radosgw charm?
[13:26] little bit, reading the conversation lazyPower
[13:26] <3 this is why you're one of my favorites
[13:27] Ursinha: just to confirm, are you using the ceph-mon charm from my personal namespace?
[13:27] lazyPower: hm no. it's not showing in the relations list -- but then I tried to add it and it failed saying the relation already existed, I tried to remove it and it said nothing, but won't allow me to add a new one
[13:27] (as far as I know, there's nothing that should have changed with the radosgw relation)
[13:27] haha yes, hi icey :)
[13:28] I think this might be a bug in the charm, but I wanted to know if it would be possible to do something about the "blocked" state in any case
[13:28] lazyPower: relation-list also expects a relation name to be passed. I can see a way of starting at the top of the tree and discovering relation names, then relation ids, then acting on the ids.
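(A minimal sketch of the walk lovea describes, not from the channel itself: recover the relation names from the charm's own metadata.yaml, then map names to live relation ids and remote units with the relation-ids and relation-list hook tools. It assumes it runs inside a hook such as upgrade-charm, so the tools and $CHARM_DIR are available, and that python with PyYAML is on the unit.)

    #!/bin/sh
    # 1) relation names come from the charm's metadata.yaml;
    # 2) relation-ids maps a name to the currently established relation ids;
    # 3) relation-list maps an id to the remote units on that relation.
    names=$(python -c "import yaml; md = yaml.safe_load(open('$CHARM_DIR/metadata.yaml')); print('\n'.join(r for s in ('provides', 'requires', 'peers') for r in (md.get(s) or {})))")
    for name in $names; do
        for rid in $(relation-ids "$name"); do
            echo "$name ($rid): $(relation-list -r "$rid")"
        done
    done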
[13:29] lovea: that sounds right
[13:29] Ursinha: when I add blocked states, it's always resolvable by the user ;-) Sometimes by adding a required relation, sometimes it needs to be retried; sadly, I didn't add the blocked to the radosgw one so I'm not sure yet
[13:29] lazyPower: typo, I can't see a way!
[13:30] lazyPower: Might just have to hardcode the charm relation names for now.
[13:30] Ursinha: the only thing that's changed with the radosgw stuff recently is adding in a broker, I can dig in in a bit to see if I can reproduce the error
[13:30] radosgw stuff in ceph/ceph-mon
[13:30] icey: hmm right, I ran a resolved --retry but it seems to work only with units in the error state
[13:31] icey: I found this bug: https://bugs.launchpad.net/charms/+source/ceph-radosgw/+bug/1517940
[13:31] Bug #1517940: workload-status is wrong
[13:31] yeah Ursinha, how to resolve blocked does depend ;-) I try to make it reasonable
[13:31] lovea: as the charm author, you can reasonably tell what relations are in the assembled charm...
[13:31] lovea: i'll bring this up at our charmer office hours though, so tune in :)
[13:31] icey: so it's basically reading the message, hoping the developer was reasonable, and trying to do something to fix that? :)
[13:31] Ursinha: Ed's report predates this last change so I'll definitely have to dig
[13:32] lazyPower: Yeah. That's what I figured. Thanks for listening.
[13:32] Ursinha: yeah, remember: state and messages are freeform so you have to hope the author did the right thing
[13:32] lovea np, sorry i didn't have a better answer, but what you're asking is going to become a more requested feature - as interface layers are really going to accelerate the consumption of connectivity
[13:32] icey: will keep that in mind, thanks for the info
[13:32] pre-fab interfaces make it trivial
[13:33] Ursinha: I'll try to take a look at reproducing this today, probably won't get to it until later
[13:33] icey: that is fine, it's not happening every time so I'm not blocked, but as I just hit it I wanted to +1 the issue
[13:33] icey: thank you :)
[13:34] yeah, the transient failure makes it much harder to repro Ursinha
[13:34] lazyPower: Indeed. Connected subordinate charms can add up too. Really liking Juju though, so many thanks.
[13:35] jamespage: ping
[13:35] apuimedo, hello
[13:35] np lovea :) love that feedback!
[13:36] is it possible to enable amulet tests for merge proposals run on my charms?
[13:36] or will that only be once they are promulgated?
[13:37] (I can see why that would be a whitelist thing)
[13:40] lazyPower: do you have swarm charmed?
[13:41] apuimedo - i have a layer for it, it's not "official" as i've run into a roadblock with the TLS generation that I haven't gotten back to
[13:41] but it's well on its way, let me fish you up a link to the layer
[13:41] cool
[13:41] https://github.com/chuckbutler/layer-swarm
[13:41] some people want to try kuryr with swarm and I was thinking I could just make a charm
[13:41] yeah buddy!
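(Since apuimedo is about to read up on layers, a hypothetical layer.yaml to make the vocabulary in the next exchange concrete; the include names below are illustrative, not copied from the layer-swarm repo above.)

    # layer.yaml -- a built charm declares the layers and interfaces it
    # is assembled from; `charm build` fetches and flattens them.
    includes:
      - 'layer:basic'
      - 'layer:docker'
      - 'interface:etcd'

Dropping one of these lines and re-running `charm build` produces the charm without that integration, which is the "one-liner fix" lazyPower mentions below for flannel.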
[13:42] Dude, let me work with you on that
[13:42] like, don't block on me, but i def want to be a fly on the wall while you work with these layer(s), and get some solid feedback on how i've got this laid out
[13:43] I have to read up on the layers first
[13:43] ping me if you need help :)
[13:43] 'cause I didn't use them yet
[13:43] but I'll probably get to that at the end of next week
[13:44] (weekend)
[13:44] if you've got time next week i can reasonably pencil you in for a quick charm school to get you up to speed with layers
[13:44] i'll have to clear it with my workload first, but i'm 80% certain that an hour is expendable for this
[13:45] lazyPower: I'll also want to take a look at the kubernetes work you have
[13:45] but that one you have completely ready, right?
[13:46] Mostly, there's still some work to be done to make it "enterprise grade" but you get a functional/scalable k8s cluster if you're building from the tip of our layers
[13:47] I want to use it to give devs a kubernetes environment to develop kuryr integration with
[13:48] so we'll tear kube-proxy down
[13:48] and flannel too
[13:48] love it, that's a one-liner fix to remove that integration
[13:48] just gank the layer:flannel directive from layer.yaml, rebuild, and you've got a vanilla k8s setup according to google's deployment guide
[13:48] cool
[13:48] you'll also want to adjust the pod/service definitions in the layer
[13:49] but that should be it
[13:49] which series is it?
[13:49] trusty
[13:50] we're currently piloting a xenial deployment, i'll know more about that by end of day today
[13:50] cool
=== mhall119_ is now known as mhall119
[14:41] So say I want to include and install a wheel of a library in my built charm, is there a way of defining that in my layer and having charm build include it, or do I have to bundle it in my layer myself?
[14:55] dosaboy, hey does https://code.launchpad.net/~hopem/charm-helpers/lp1518975/+merge/285734
[14:55] mean that you can provide the key after the | in the openstack-origin?
[14:55] wesleymason, yes - just add it to a wheelhouse.txt in your charm/layer
[14:56] layer-basic does this already
[14:56] jamespage: is that a pip requirements file?
[14:56] same format
[14:56] ta
[14:58] dosaboy, the key might be better passed as an additional config option?
[15:08] jamespage: i've tried it this way and it works quite nicely if you use the yaml multiline syntax
[15:08] no need for an extra config option
[15:08] dosaboy, pastebin?
[15:08] 1 sec
[15:12] jamespage: http://paste.ubuntu.com/15016527/
[15:15] dosaboy, ok - lgtm - +1 from gnuoy as well
[15:15] \o/
[15:15] jamespage: shall i merge the os charms as well?
[15:16] i've already synced them
[15:16] onesec
[15:16] merging ch now
[15:16] they're a clean sync ftr, no additions
[15:18] dosaboy, I was +1 on the approach - but I think we need to preserve import_key as a function
[15:18] wesleymason: is it a wheel you have built?
[15:19] wesleymason: or one on pypi?
[15:19] it's part of the api, so renaming is not so great imho
[15:19] marcoceppi: will be both
[15:19] one I have built *and* it's on pypi 😉
[15:20] dosaboy, does that make sense - import_pgp_key could just use the existing function... rather than subsuming it
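(A sketch of the API-compatibility shape jamespage is asking for, not the actual charm-helpers code: keep import_key as a public function and have the new name delegate to it.)

    # Hypothetical Python sketch; the real signatures live in charm-helpers.
    def import_key(key):
        """Existing public API -- charms in the wild already call this."""
        # ... existing implementation stays put ...

    def import_pgp_key(key):
        """Newer entry point from the merge proposal: reuse import_key
        rather than renaming and subsuming it."""
        return import_key(key)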
[15:27] jamespage: sorry yes, i'll switch it back
[15:27] jamespage: gimme a few mins to get it all synced
[15:28] wesleymason: you put pypi deps in wheelhouse.txt as you would in a requirements.txt file
[15:28] wesleymason: and for the rest, you should create a wheelhouse directory and put a source wheel in there
[15:28] wesleymason: and as I'm typing this I realize this should be documented
[15:29] :D
=== alexisb-afk is now known as alexisb
[15:42] jamespage: charm-helpers patch ready, charms on their way
[15:46] jamespage: done.
=== redelmann is now known as rudi|comida
[16:32] dosaboy, charm-helpers landed - please resync charms and let osci do its stuff
[16:32] dosaboy, +1 on landing those pending lint, unit and amulet test confirmation being OK
[16:34] jamespage: charms are all synced
[16:34] dosaboy, good-oh
[19:17] Hey juju guys! I'm back, with followup questions to my juju-local + lxc debugging :)
[19:17] I have now got a working local environment, and I notice the following oddity:
[19:17] "sudo lxc list" shows nothing. But "sudo lxc-ls --fancy" shows stuff. Um... what's up with that? (note: I'm a complete lxc noob)
[19:20] i'm mentioning this because there is nothing about this stuff on https://jujucharms.com/docs/stable/troubleshooting-local-lxc
[19:28] and on a different subject, my logs are getting spammed with:
[19:28] juju.worker.diskmanager lsblk.go:116 error checking if "fd0" is in use: open /dev/fd0: no such device or address
[19:29] I googled that, and found that allegedly restarting the whole machine should clear it up. but it didn't.
[19:35] bolthole - the log spam is a bug and should be filed http://bugs.launchpad.net/juju-core/+filebug
[19:35] bolthole - regarding lxc-ls --fancy showing data where lxc-ls is not, that's... odd....
[19:35] i'm not sure what would cause that
[19:38] correction: lxc-ls and lxc-ls --fancy both work. it's "lxc list" that does not.
[19:38] I just read why that is. but I think that info needs to be on the juju docs page mentioned above.
[19:44] bolthole - https://github.com/juju/docs/issues - a bug would be <3
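(bolthole never pastes what he read, so for the next reader who hits this, the likely explanation -- an editor's gloss, not from the log: the two commands talk to different container stacks.)

    # lxc is the LXD client: it lists containers managed by the lxd daemon.
    sudo lxc list
    # lxc-ls comes from the classic LXC 1.x tools and lists containers
    # under /var/lib/lxc, which is where juju's local provider created
    # them at the time -- hence containers visible to one and not the other.
    sudo lxc-ls --fancy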
=== rudi|comida is now known as redelmann
[20:27] alright, i filed a bug. weird that it would be in github not launchpad.
[20:27] lazypower: that was a bug for the docs. for the /dev/fd0 thing.. that is already a bug somewhere.
[20:28] bolthole - ack. Thanks for getting that filed.
[20:28] bolthole - regarding the log bug, 10/4 - those seem to get lower priority in the -dev queue, but with it filed we should get that sorted soonish :)
[20:28] https://bugs.launchpad.net/juju-core/+bug/1509747
[20:28] Bug #1509747: Intermittent lxc failures on wily, juju-template-restart.service race condition
[20:29] since oct
[20:29] wait how is it marked invalid??? WTH
[20:29] bolthole: is that the right one? shows that's a bug I opened
[20:31] yeah you did.
[20:31] i'm hitting it every time
[20:31] in azure, AND in a regular ubuntu-on-metal install
[20:32] ubuntu 15.10
[20:33] bolthole: attach the /var/lib/juju/containers//* to the bug and I can reopen it
[20:36] alternatively... rather than going through extended bug analysis... maybe you can just stop Checking For Floppy Disks, Seeing as how it is 2010+ now??? :)
[20:37] cherylj: waitaminit.. I just noticed it's only whining about machine-0.
[20:38] which, funnily enough, isn't present in that container directory
[20:38] bolthole: bug #1509747 was not opened for /dev/fd0 messages
[20:38] Bug #1509747: Intermittent lxc failures on wily, juju-template-restart.service race condition
[20:39] symptoms the same though.
[20:39] "/var/log/juju-ubuntu-local/all-machines.log just spits out a series of machine-0: 2015-10-28 08:27:55 ERROR juju.worker.diskmanager lsblk.go:116 error checking if "fd0" is in use: open /dev/fd0: no such device or address but doesn't otherwise seem to do anything."
[20:40] difference being, after a very long time... my install of juju actually stabilized and I could use it.
[20:40] but it's still spamming errors
[20:41] waitaminit... now it's spamming a different set of errors :-/
[20:42] diskmanager": cannot list block devices: lsblk failed: fork/exec /bin/lsblk: cannot allocate memory
[20:42] Which is odd, since the machine supposedly has 128 GIGABYTES of memory and is doing nothing else.
[20:42] but maybe the virtual machine has too little allocated to it?
[20:43] oops sorry, that's a different machine. wrong window. sigh.
[20:43] the 128-gig azure VM is still spamming "checking if "fd0" is in use:" errors.
[20:44] Which seems a pretty clear error, since
[20:44] open /dev/fd0: no such device or address
[20:44] If there's no dev, you would think it should shut up and stop trying to check the device.
[20:45] i'll open a new bug
[20:48] https://bugs.launchpad.net/juju-core/+bug/1544724
[20:48] Bug #1544724: repeatedly checks /dev/fd0 when it doesnt even exist
[20:49] lazyPower: do you know if there is a way to specify the security group on openstack juju environments?
[20:50] I do not... but ddellav or beisner may know
[20:52] apuimedo, afaik, juju manages creation and cleanup of secgroups and their naming, and there looks to be only one knob to turn. https://jujucharms.com/docs/1.25/config-openstack
[20:52] I only saw "use-default"
[20:52] or something like that
[20:52] apuimedo, right that's all i see too.
[20:53] it's a bit of a bummer, for some reason the open ports don't trigger anything on the security groups
[20:53] so my amulet tests fail :/
[20:53] waiiiiiittttt
[20:53] hm
[20:53] open-port should most def be poking at those sec groups
[20:54] right
[20:54] seems like that'd be a bug
[20:54] which version of juju is this apuimedo?
[20:54] 1.25
[20:54] 1.25.0 to be precise
[20:55] apuimedo - is it at all possible it's racing? and the port is getting opened in the sec group, but later than the test is expecting?
[20:55] nope
[20:55] oi
[20:55] I leave the instances on
[20:55] and even now
[20:55] 15 minutes later
[20:55] yeah it may be a regression then, and that's odd as we have test coverage around that i'm pretty sure
[20:55] no entry for 8080
[20:56] I guess I'll have to disturb my maas environment for amulet tests :(
[21:07] lazyPower: beisner: does the aws provider work well?
[21:13] apuimedo: it should
[21:14] ok
[21:14] thanks marcoceppi
=== natefinch is now known as natefinch-afk
=== wolverin_ is now known as wolverineav
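(To close the loop on apuimedo's open-port question above, a sketch of the expected flow, written from general Juju behaviour rather than the log. One easy-to-miss detail: in Juju 1.x, opened ports are generally only applied to provider firewalls/security groups once the service is exposed, so an unexposed service can legitimately show no secgroup entry. "myservice" below is a placeholder name.)

    # In a charm hook: declare the port the workload listens on.
    open-port 8080/tcp

    # From the client: expose the service so the firewaller applies the
    # opened ports to the instance's security group, then verify.
    juju expose myservice
    juju status myservice --format=yaml   # look for "open-ports"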