[12:14] <jamespage> gnuoy, one minor comment on https://code.launchpad.net/~gnuoy/charms/trusty/keystone/add-dnsaas-svc/+merge/283922
[12:22] <jcastro> lazypower|travel: I need your slides from the summit/FOSDEM when you get a chance
[12:49] <lazyPower> jcastro - ack. lemme fish those up
[12:49] <jcastro> <3
[12:57] <jcastro> marcoceppi: aisrael: you guys need anything?
[12:57] <marcoceppi> jcastro: ?
[12:57] <jcastro> feel free to ask if you need some mwc prep thing done
[12:57] <marcoceppi> huh?
[12:57] <jcastro> you guys were working on an mwc demo no?
[12:57] <marcoceppi> yeah, still are
[13:14] <lovea> Hi. For a given charm deployed on a given unit, how would I go about retrieving a list of all relation names for that unit? All the charm helpers such as relation-list seem to require a relation name to begin with. What I'm trying to do is run a script at the end of the upgrade-charm hook that looks for all existing relations for that unit and somehow trigger a relation-set so that the original relation config is reinstated.
[13:21] <Ursinha> hi all, is it possible to get a unit out of the blocked state? similar to juju resolved for when unit is in error state
[13:22] <Ursinha> it's missing relations to a unit that has already settled
[13:22] <lazyPower> lovea : great question. i forget if its relation-list or relation-ids that spits out all the currently established relations
[13:23] <lazyPower> but one of those commands will give you what you're looking for - or at least the first step to start probing/inspecting
[13:23] <Ursinha> and I've been waiting for ~20 mins or so, it's not going anywhere
[13:23] <lazyPower> Ursinha - can you give me a little more context?
[13:23] <Ursinha> lazyPower: I'm deploying openstack and ceph-radosgw is blocked missing relations on ceph-moin
[13:23] <Ursinha> *ceph-mon
[13:23] <lovea> lazyPower: definitely not relation-ids so I'll try relation-list
[13:24] <lazyPower> Ursinha - blocked status messages, if not cleared properly in the charm code, can be misleading. status messages are arbitrary notices from the charm author, and it's entirely likely that a charm may be operating nominally but the status message was never cleared, giving it the appearance of still being in a blocked state
[13:24] <Ursinha> but ceph-mon is already ready and idle
[13:24] <lazyPower> Ursinha: and when you juju status ceph-mon, do you see the relation established in the output? (specifically when using --format=yaml to juju status ceph-mon)
[13:24] <Ursinha> lazyPower: juju logs show in ceph-radosgw that relations are missing
[13:24] <Ursinha> lazyPower: let me try that specifically
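    A rough sketch of the check being suggested here (service names match this deployment; the exact status layout varies by juju version):

        # Full YAML status for ceph-mon; look under 'relations' for ceph-radosgw
        juju status ceph-mon --format=yaml
        # If the relation really is missing, (re-)add it; the endpoint names may
        # need to be spelled out explicitly depending on the charms' metadata
        juju add-relation ceph-radosgw ceph-mon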
[13:25] <lazyPower> icey - ping if you're around :)
[13:25] <icey> pong lazyPower
[13:25] <lazyPower> icey - are you familiar with the ceph-radosgw charm?
[13:26] <icey> little bit, reading the conversation lazyPower
[13:26] <lazyPower> <3 this is why you're one of my favorites
[13:27] <icey> Ursinha: just to confirm, you're using my ceph-mon charm, the one in my personal namespace?
[13:27] <Ursinha> lazyPower: hm no. it's not showing in the relations list -- but then I tried to add it and it failed saying the relation already existed, I tried to remove it and it said nothing, but won't allow me add a new one
[13:27] <icey> (as far as I know, there's nothing that should have changed with the radosgw relation)
[13:27] <Ursinha> haha yes, hi icey :)
[13:28] <Ursinha> I think this might be a bug in the charm, but I wanted to know if it would be possible to do something about the "blocked" state in any case
[13:28] <lovea> lazyPower: relation-list also expects a relation name to be passed. I can see a way of starting at the top of the tree and discovering relation names, then relation ids, then acting on the ids.
[13:29] <lazyPower> lovea: that sounds right
[13:29] <icey> Ursinha: when I add blocked states, it's always resolvable by the user ;-) Sometimes by adding a required relation, sometimes it needs to be retried; sadly, I didn't add the blocked state to the radosgw one so I'm not sure yet
[13:29] <lovea> lazyPower: typo, I can't see a way!
[13:30] <lovea> lazyPower: Might just have to hardcode the charm relation names for now.
[13:30] <icey> Ursinha: the only thing that's changed with the radosgw stuff recently is adding in a broker, I can dig in in a bit to see if I can reproduce the error
[13:30] <icey> radosgw stuff in ceph/ceph-mon
[13:30] <Ursinha> icey: hmm right, I ran a resolved --retry but it seems to work only with units in error state
[13:31] <Ursinha> icey: I found this bug: https://bugs.launchpad.net/charms/+source/ceph-radosgw/+bug/1517940
[13:31] <mup> Bug #1517940: workload-status is wrong <openstack> <sts> <ceph-radosgw (Juju Charms Collection):New> <https://launchpad.net/bugs/1517940>
[13:31] <icey> yeah Ursinha, how to resolve blocked does depend ;-) I try to make it reasonable
[13:31] <lazyPower> lovea: as the charm author, you can reasonably tell what relations are in the assembled charm...
[13:31] <lazyPower> lovea: i'll bring this up at our charmer office hours though, so tune in :)
[13:31] <Ursinha> icey: so it's basically reading the message, hoping the developer was reasonable, and trying to do something to fix that? :)
[13:31] <icey> Ursinha: Ed's report comes up before this last change so I'll definitely have to dig
[13:32] <lovea> lazyPower: Yeah. That's what I figured. Thanks for listening.
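    A minimal sketch of the hardcoded approach lovea describes, as an upgrade-charm hook fragment; the relation names and the key being re-set are placeholders, not taken from any real charm:

        #!/bin/bash
        # upgrade-charm: for each relation name we know from metadata.yaml,
        # walk every established relation id and re-assert our side of the data.
        for relname in website database cluster; do
            for relid in $(relation-ids "$relname"); do
                # relation-list -r shows the remote units on this relation id
                juju-log "re-setting data on $relid for: $(relation-list -r "$relid")"
                relation-set -r "$relid" hostname="$(unit-get private-address)"
            done
        done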
[13:32] <icey> Ursinha: yeah, remember: state and messages are freeform so you have to hope the author did the right thing
[13:32] <lazyPower> lovea np, sorry i didn't have a better answer, but what you're asking is going to become a more requested feature - as interface layers are really going to accelerate the consumption of connectivity
[13:32] <Ursinha> icey: will keep that in mind, thanks for the info
[13:32] <lazyPower> pre-fab interfaces make it trivial
[13:33] <icey> Ursinha: I'll try to take a look at reproducing this today, probably won't get to it until later
[13:33] <Ursinha> icey: that is fine, it's not happening every time so I'm not blocked, but as I just hit it I wanted to +1 the issue
[13:33] <Ursinha> icey: thank you :)
[13:34] <icey> yeah, the transient failure makes it much harder to repro Ursinha
[13:34] <lovea> lazyPower: Indeed. Connected subordinate charms can add up too. Really liking Juju though so many thanks.
[13:35] <apuimedo> jamespage: ping
[13:35] <jamespage> apuimedo, hello
[13:35] <lazyPower> np lovea :) love that feedback!
[13:36] <apuimedo> is it possible to enable amulet tests for merge proposals run on my charms?
[13:36] <apuimedo> or will that only be once they are promulgated?
[13:37] <apuimedo> (I can see why that would be a whitelist thing)
[13:40] <apuimedo> lazyPower: do you have swarm charmed?
[13:41] <lazyPower> apuimedo - i have a layer for it, it's not "official" as i've run into a roadblock with the TLS generation that I haven't gotten back to
[13:41] <lazyPower> but its well on its way, let me fish you up a link to the layer
[13:41] <apuimedo> cool
[13:41] <lazyPower> https://github.com/chuckbutler/layer-swarm
[13:41] <apuimedo> some people want to try kuryr with swarm and I was thinking I could just make a charm
[13:41] <lazyPower> yeah buddy!
[13:42] <lazyPower> Dude, let me work with you on that
[13:42] <lazyPower> like, don't block on me, but i def want to be a fly on the wall while you work with these layer(s), and get some solid feedback on how i've got this laid out
[13:43] <apuimedo> I have to read up on the layers first
[13:43] <lazyPower> ping me if you need help :)
[13:43] <apuimedo> cause I didn't use them yet
[13:43] <apuimedo> but I'll probably get to that at the end of next week
[13:44] <apuimedo> (weekend)
[13:44] <lazyPower> if you've got time next week i can reasonably pencil you in for a quick charm school to get you up to speed with layers
[13:44] <lazyPower> i'll have to clear it with my workload first, but i'm 80% certain that an hour is expendable for this
[13:45] <apuimedo> lazyPower: I'll also want to give a look at the kubernetes work you have
[13:45] <apuimedo> but that one you have completely ready, right?
[13:46] <lazyPower> Mostly, there's still some work to be done to make it "enterprise grade" but you get a functional/scalable k8s cluster if you're building from tip of our layers
[13:47] <apuimedo> I want to use it to give devs a kubernetes environment to develop kuryr integration with
[13:48] <apuimedo> so we'll tear kube-proxy down
[13:48] <apuimedo> and flannel too
[13:48] <lazyPower> love it, thats a one liner fix to remove that integration
[13:48] <lazyPower> just gank the 'layer:flannel' directive from layer.yaml, rebuild, and you've got a vanilla k8s setup according to google's deployment guide
[13:48] <apuimedo> cool
[13:48] <lazyPower> you'll also want to adjust the pod/service definitions in the layer
[13:49] <lazyPower> but that should be it
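    Roughly what that layer.yaml edit looks like; the include names below are illustrative rather than copied from the kubernetes layer:

        # layer.yaml of the charm being built
        includes:
          - 'layer:basic'
          - 'layer:docker'
          # - 'layer:flannel'   # drop this line, then run `charm build` again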
[13:49] <apuimedo> which series is it?
[13:49] <lazyPower> trusty
[13:50] <lazyPower> we're currently piloting a xenial deployment, i'll know more about that by end of day today
[13:50] <apuimedo> cool
[14:41] <wesleymason> So say I want to include and install a wheel of a library in my built charm, is there a way of defining that in my layer and having charm build include it, or do I have to bundle the wheel into my layer manually?
[14:55] <jamespage> dosaboy, hey does https://code.launchpad.net/~hopem/charm-helpers/lp1518975/+merge/285734
[14:55] <jamespage> mean that you can provide the key after the | in the openstack-origin?
[14:55] <jamespage> wesleymason, yes - just add it to a wheelhouse.txt in your charm/layer
[14:56] <jamespage> layer-basic does this already
[14:56] <wesleymason> jamespage: is that a pip requirements file?
[14:56] <jamespage> same format
[14:56] <wesleymason> ta
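    For anyone following along: a wheelhouse.txt sits at the top of the charm/layer and uses pip requirements syntax; the package names here are only examples:

        # wheelhouse.txt
        pyyaml
        requests>=2.7.0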
[14:58] <jamespage> dosaboy, the key might be better passed as an additional config option ?
[15:08] <dosaboy> jamespage: i've tried it this way and it works quite nicely if you use the yaml multiline syntax
[15:08] <dosaboy> no need for an extra config option
[15:08] <jamespage> dosaboy, pastebin?
[15:08] <dosaboy> 1 sec
[15:12] <dosaboy> jamespage: http://paste.ubuntu.com/15016527/
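    Without reproducing the pastebin, an illustrative guess only at the shape being discussed: a YAML block scalar carrying the source line, with the key after the '|' separator (the repo URL and key body are placeholders):

        openstack-origin: |
          deb http://mirror.example.com/ubuntu trusty-proposed main|-----BEGIN PGP PUBLIC KEY BLOCK-----
          (ascii-armoured key body)
          -----END PGP PUBLIC KEY BLOCK-----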
[15:15] <jamespage> dosaboy, ok - lgtm - +1 from gnuoy as well
[15:15] <dosaboy> \o/
[15:15] <dosaboy> jamespage: shall i merge the os charms as well?
[15:16] <dosaboy> i've already synced them
[15:16] <jamespage> onesec
[15:16] <jamespage> merging ch now
[15:16] <dosaboy> they're a clean sync ftr, no additions
[15:18] <jamespage> dosaboy, I was +1 on approach - but I think we need to preserve import_key as a function
[15:18] <marcoceppi> wesleymason: is it a wheel you have built?
[15:19] <marcoceppi> wesleymason: or one in pypi?
[15:19] <jamespage> it's part of the API, so renaming is not so great imho
[15:19] <wesleymason> marcoceppi: will be both
[15:19] <wesleymason> one I have built *and* it's on pypi 😉
[15:20] <jamespage> dosaboy, does that make sense - import_pgp_key could just use the existing function... rather than subsuming it
[15:27] <dosaboy> jamespage: sorry yes, i'll switch it back
[15:27] <dosaboy> jamespage: gimme few mins to get it all synced
[15:28] <marcoceppi> wesleymason: you put pypi deps in wheelhouse.txt as you would a requirements.txt file
[15:28] <marcoceppi> wesleymason: and for the rest you should create a wheelhouse directory and put the source wheel in there
[15:28] <marcoceppi> wesleymason: and as I'm typing this I realize this should be documented
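    A sketch of the layout marcoceppi describes; the layer name and package filename are placeholders:

        my-layer/
        ├── wheelhouse.txt          # pypi deps, requirements.txt syntax
        └── wheelhouse/
            └── mylib-1.0.tar.gz    # locally built source package, picked up by charm build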
[15:29] <wesleymason> :D
[15:42] <dosaboy> jamespage: charm-helpers patch ready, charms on their way
[15:46] <dosaboy> jamespage: done.
[16:32] <jamespage> dosaboy, charm-helpers landed - please resync charms and let osci do its stuff
[16:32] <jamespage> dosaboy, +1 on landing those pending lint, unit and amulet test confirmation is OK
[16:34] <dosaboy> jamespage: charms are all synced
[16:34] <jamespage> dosaboy, good-oh
[19:17] <bolthole> Hey juju guys! I'm back, with followup questions to my juju-local + lxc debugging :)
[19:17] <bolthole> I have now got a working local environment, and I notice the following oddity:
[19:17] <bolthole> "sudo lxc list" shows nothing. But "sudo lxc-ls --fancy" shows stuff.  Um... wahts up with that?  (note: I'm a complete lxc noob)
[19:20] <bolthole> i'm mentioning this, because there is nothing about this stuff on https://jujucharms.com/docs/stable/troubleshooting-local-lxc
[19:28] <bolthole> and on a different subject, my logs are getting spammed with:
[19:28] <bolthole> juju.worker.diskmanager lsblk.go:116 error checking if "fd0" is in use: open /dev/fd0: no such device or address
[19:29] <bolthole> I googled that, and found that allegedly restarting the whole machine should clear it up. but it didn't.
[19:35] <lazyPower> bolthole - the log spam is a bug and should be filed http://bugs.launchpad.net/juju-core/+filebug
[19:35] <lazyPower> bolthole - regarding lxc-ls --fancy showing data where lxc-ls is not, thats... odd....
[19:35] <lazyPower> i'm not sure what would cause that
[19:38] <bolthole> correction: lxc-ls and lxc-ls --fancy both work. it's "lxc list" that does not.
[19:38] <bolthole> I just read why that is, but I think that info needs to be on the juju docs page mentioned above.
[19:44] <lazyPower> bolthole - https://github.com/juju/docs/issues - a bug would be <3
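    For anyone else hitting this: 'lxc' is the LXD client, while the juju local provider creates plain LXC containers that only the lxc-* tools see, e.g.:

        sudo lxc-ls --fancy     # legacy LXC tools: shows the juju local containers
        sudo lxc list           # LXD client: lists LXD containers only, hence empty here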
[20:27] <bolthole> awwright i filed a bug. weird that it would be in github not launchpad.
[20:27] <bolthole> lazypower: that was a bug for the docs. for the /dev/fd0 thing.. that is already a bug somewhere.
[20:28] <lazyPower> bolthole - ack. Thanks for getting that filed.
[20:28] <lazyPower> bolthole - regarding the log bug, 10/4 - those seem to get lower priority in the -dev queue, but with it filed we should get that sorted soonish :)
[20:28] <bolthole> https://bugs.launchpad.net/juju-core/+bug/1509747
[20:28] <mup> Bug #1509747: Intermittent lxc failures on wily, juju-template-restart.service race condition <cloud-installer> <lxc> <wily> <juju-core:Invalid by cherylj> <systemd (Ubuntu):Invalid by pitti> <https://launchpad.net/bugs/1509747>
[20:29] <bolthole> since oct
[20:29] <bolthole> wait how is it marked invalid??? WTH
[20:29] <cherylj> bolthole: is that the right one?  shows that's a bug I opened
[20:31] <bolthole> yeah you did.
[20:31] <bolthole> i'm hitting it every time
[20:31] <bolthole> in azure, AND in a regular ubuntu-on-metal install
[20:32] <bolthole> ubuntu 15.10
[20:33] <cherylj> bolthole: attach the /var/lib/juju/containers/<container name>/* to the bug and I can re-open it
[20:36] <bolthole> alternatively... rather than going through extended bug analysis... maybe you can just stop Checking For Floppy Disks, Seeing as how it is 2010+ now??? :)
[20:37] <bolthole> cherylj: waitaminit.. I just noticed it's only whining about machine-0.
[20:38] <bolthole> which, funnily enough, isn't present in that container directory
[20:38] <cherylj> bolthole: bug #1509747 was not opened for /dev/fd0 messages
[20:38] <mup> Bug #1509747: Intermittent lxc failures on wily, juju-template-restart.service race condition <cloud-installer> <lxc> <wily> <juju-core:Invalid by cherylj> <systemd (Ubuntu):Invalid by pitti> <https://launchpad.net/bugs/1509747>
[20:39] <bolthole> symptoms the same though.
[20:39] <bolthole> "/var/log/juju-ubuntu-local/all-machines.log just spits out a series of  machine-0: 2015-10-28 08:27:55 ERROR juju.worker.diskmanager lsblk.go:116 error checking if "fd0" is in use: open /dev/fd0: no such device or address  but doesn't otherwise seem to do anything."
[20:40] <bolthole> difference being, after a very long time... my install of juju actually stabilized and I could use it.
[20:40] <bolthole> but it's still spamming errors
[20:41] <bolthole> waitaminit... now it's spamming a different set of errors :-/
[20:42] <bolthole> diskmanager": cannot list block devices: lsblk failed: fork/exec /bin/lsblk: cannot allocate memory
[20:42] <bolthole> Which is odd, since the machine supposedly has 128 GIGABYTES of memory and is doing nothing else.
[20:42] <bolthole> but maybe the virtual machine has too little allocated to it?
[20:43] <bolthole> oops sorry thats a different machine. wrong window. sigh.
[20:43] <bolthole> the 128gig azure VM is still spamming "checking if "fd0" is in use:" errors.
[20:44] <bolthole> Which seems a pretty clear error, since
[20:44] <bolthole> open /dev/fd0: no such device or address
[20:44] <bolthole> If there's no dev, you would think it should shut up and stop trying to check the device.
[20:45] <bolthole> i'll open a new bug
[20:48] <bolthole> https://bugs.launchpad.net/juju-core/+bug/1544724
[20:48] <mup> Bug #1544724: repeatedly checks /dev/fd0 when it doesnt even exist <juju-core:New> <https://launchpad.net/bugs/1544724>
[20:49] <apuimedo> lazyPower: do you know if there is a way to specify the security group on openstack juju environments?
[20:50] <lazyPower> I do not... but ddellav or beisner may know
[20:52] <beisner> apuimedo, afaik, juju manages creation and cleanup of secgroups and their naming, and there looks to be only one knob to turn.  https://jujucharms.com/docs/1.25/config-openstack
[20:52] <apuimedo> I only saw "use-default"
[20:52] <apuimedo> or something like that
[20:52] <beisner> apuimedo, right that's all i see too.
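    For reference, that single knob as it would appear in environments.yaml for the openstack provider (per the config-openstack doc linked above):

        my-openstack:
          type: openstack
          use-default-secgroup: true   # also assign the pre-existing 'default' group to instances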
[20:53] <apuimedo> it's a bit of a bummer, for some reason open-port doesn't trigger anything on the security groups
[20:53] <apuimedo> so my amulet tests fail :/
[20:53] <lazyPower> waiiiiiittttt
[20:53] <beisner> hm
[20:53] <lazyPower> open-port should most def be poking at those sec groups
[20:54] <beisner> right
[20:54] <beisner> seems like that'd be a bug
[20:54] <lazyPower> which version of juju is this apuimedo?
[20:54] <apuimedo> 1.25
[20:54] <apuimedo> 1.25.0 to be precise
[20:55] <lazyPower> apuimedo - is it at all possible it's racing? and the port is getting opened in the sec group, but later than the test is expecting?
[20:55] <apuimedo> nope
[20:55] <lazyPower> oi
[20:55] <apuimedo> I leave the instances on
[20:55] <apuimedo> and even now
[20:55] <apuimedo> 15 minutes later
[20:55] <lazyPower> yeah it may be a regression then, and that's odd as we have test coverage around that i'm pretty sure
[20:55] <apuimedo> no entry for 8080
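    A quick way to see the mismatch apuimedo describes, i.e. whether an open-port call made it out to the provider (the unit name and secgroup name are placeholders; the nova command assumes the OpenStack CLI is configured):

        # open the port from a hook context
        juju run --unit mycharm/0 'open-port 8080/tcp'
        # juju's view: the unit's open-ports in status
        juju status --format=yaml | grep -A 2 open-ports
        # the provider's view: rules on the juju-managed security group
        nova secgroup-list-rules juju-myenv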
[20:56] <apuimedo> I guess I'll have to disturb my maas environment for amulet tests :(
[21:07] <apuimedo> lazyPower: beisner: does the aws provider work well?
[21:13] <marcoceppi> apuimedo: it should
[21:14] <apuimedo> ok
[21:14] <apuimedo> thanks marcoceppi