=== blr_ is now known as blr
=== JoseeAntonioR is now known as jose
=== natefinch-afk is now known as natefinch
=== med_ is now known as Guest77121
[08:23] hello
[08:24] so several minutes ago I did "juju destroy-service autopkgtest-cloud-worker", and it did destroy the autopkgtest-cloud-worker/0 instance and the associated machine
[08:25] but "juju status" still keeps "autopkgtest-cloud-worker" with a status of "life: dying", and it never goes away
[08:25] http://paste.ubuntu.com/14485499/
[08:26] is that because some subordinate charms still reference this?
[08:26] I can't redeploy the charm while it's in that state
[08:30] I already tried "juju destroy-relation ksplice autopkgtest-cloud-worker", and same for landscape-client, but that doesn't help
[08:42] I tried destroying the landscape-client and ksplice services too, but that still didn't help
[08:53] pitti: taking a look
[08:53] marcoceppi: axino suggested doing "sudo restart jujud-machine-0" on machine 0, but that didn't help either
[08:54] pitti: that's also a bit... drastic
[08:55] pitti: it's odd, because it's referencing ksplice and landscape but I don't see services with those interfaces deployed
[08:55] marcoceppi: well, they are subordinate charms -- they get deployed on each "real" service
[08:55] they are not standalone services
[08:55] pitti: sure, but they're not in status at all
[08:56] pitti: right, but the relations for that service are listed with no units
[08:56] pitti: it seems the service isn't being recycled because of this relation
[08:56] because the machine is basically gone, no units exist, etc.
[08:57] does "juju remove-relation autopkgtest-cloud-worker ksplice" do anything?
[08:59] marcoceppi: nothing visible at all
[08:59] pitti: weird. obviously this isn't supposed to happen
[08:59] marcoceppi: well, they are in status, you see e.g. landscape-client/6
[08:59] pitti: right, but not on the autopkgtest service
[09:00] yeah, as that's gone after destroy-service
[09:00] pitti: does `juju destroy-service --force autopkgtest-cloud-worker` do anything?
[09:00] it's just stuck in dying, which usually means something has failed, a relation or something else
[09:00] marcoceppi: there's no such option
[09:00] juju --version: 1.24.7-trusty-amd64
[09:01] :\
[09:01] it's actually quite simple for me to destroy and re-deploy the whole env, I was just wondering, as this did work before
[09:02] I re-deployed the worker service individually two or three times
[09:03] pitti: it should work; something is stuck where Juju says it's not ready to reap the service because there's still an action pending against it
[09:03] you see this a lot when relations fail on destroy, or a stop hook fails
[09:03] but I can't see exactly what it's stuck on, and since it's a 1.22 version I don't remember if there were any oddities in that release
[09:04] 1.24
[09:04] marcoceppi: anyway, I'll re-do the whole thing; thanks for looking!
[09:05] pitti: you may have 1.24 locally, but that deployment is agent-version: 1.22.6. either way, good luck!
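For reference, the teardown order discussed above as a minimal sketch — service and relation names are the ones from this log, and exact behaviour varies by Juju 1.x release:

    # remove the subordinate relations first, then the service itself
    juju remove-relation autopkgtest-cloud-worker ksplice
    juju remove-relation autopkgtest-cloud-worker landscape-client
    juju destroy-service autopkgtest-cloud-worker
    # watch the service drop out of status; if it sits in "life: dying",
    # something (a stop hook or a relation departure) has likely failed
    juju status autopkgtest-cloud-worker --format=yaml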
[09:05] marcoceppi: ah, ok
[09:05] marcoceppi: whatever is in prodstack
[09:06] marcoceppi: but maybe they updated to 1.24 by now, and with re-deploying I'll get that too, I'll see
[09:07] \o/
[09:10] agent-version: 1.24.7
[09:10] marcoceppi: seems so
[09:11] marcoceppi: so next time I run into errors I'll have a less outdated version
[09:11] pitti: awesome, it's easier to support 1.24.7 since 1.25.0 is latest
[09:53] Anybody seen an issue where a reactive-built charm during upgrade/install continually loops around uninstalling and reinstalling wheels?
[10:04] wesleymason: interesting.
[10:04] Can't say that I have.
[10:04] wesleymason: do you have a link to the charm? more importantly, can you show me the hooks/upgrade-charm file?
[10:04] marcoceppi: charm is https://github.com/1stvamp/juju-errbot (WIP)
[10:05] wesleymason: I think I know the issue, we can patch in a few mins
[10:05] marcoceppi: upgrade-charm hook: http://pastebin.ubuntu.com/14485929/
[10:06] wesleymason: thought so, thanks, we'll patch this quickly
[10:06] marcoceppi: cheers 😃
[10:15] wesleymason: when this lands in a few mins, `charm build` again: https://github.com/juju-solutions/reactive-base-layer/pull/23
[10:20] marcoceppi: aha, tvm
[11:30] perhaps marcoceppi's PR could get a bit of love? https://github.com/juju-solutions/reactive-base-layer/pull/23 - in order to keep working on my charm I keep having to manually copy everything in and avoid a proper rebuild
[11:39] Am I correct to assume the Juju state machine VM needs to have access to the OpenStack endpoint APIs? If I use floating IPs, does the Juju state machine communicate with other VMs using floating IPs or on the tenant network?
[11:44] wesleymason: I just poked cory_fu_
[11:45] marcoceppi: ta
[11:45] marcoceppi: lgtm, but let me test it. ;)
[11:46] psh, tests ;)
[11:48] In other news I can now deploy errbot with juju 🙌 (minus nrpe, http and the ability to install from a wheelhouse rather than pypi)
[12:10] wesleymason: Ok, the wheelhouse loop on upgrade-charm is fixed, if you rebuild your charm.
[12:44] cory_fu_: marcoceppi: cheers guys
[13:46] Hi, I removed a charm via the command line but in the juju-gui I still see the charm; what is the way to solve this problem? thanks in advance
[13:51] D4RKS1D3 what's the current status when you look at the service in `juju status`?
[13:53] environment: maas  machines: {}  services: {}
[13:53] juju resolved -r neutron-api/0 ERROR unit "neutron-api/0" not found
[13:54] D4RKS1D3 and the GUI still has the service listed? Reloading the browser tab doesn't correct itself?
[13:54] if I reload the page the neutron-api service is still there
[13:54] but not the unit
[13:55] D4RKS1D3 That sometimes happens when you destroy a machine out from under a service and don't remove the service... but if your juju status output says there's nothing registered, that's bug-worthy
[13:56] http://s22.postimg.org/4aqldlaz5/Screenshot_130116_13_53_43.png
[13:56] D4RKS1D3 can you pastebin me the output from juju status --format=tabular ?
[13:56] of course
[13:58] lazyPower, http://paste.ubuntu.com/14487078/
[13:59] D4RKS1D3 ok, it appears you have removed the machine/unit, but not the service
[13:59] D4RKS1D3 juju destroy-service netron-api
[13:59] *neutron-api
[14:00] I wrote this command, yes
[14:01] seems stuck huh?
[14:01] try adding a unit to neutron-api, then running juju destroy-service neutron-api
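A sketch of that suggestion, using the names from this log and the Juju 1.x CLI seen elsewhere in it; note the caveat in the exchange that follows, where add-unit itself fails because the service is already dying:

    # nudge a lingering unit-less service by giving it a unit again,
    # then re-issue the destroy
    juju add-unit neutron-api --to lxc:3
    juju destroy-service neutron-api
    juju status --format=tabular   # confirm the service is gone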
[14:01] Okay, lazyPower
[14:02] juju add-unit neutron-api --to lxc:3 ERROR cannot add unit 1/1 to service "neutron-api": cannot add unit to service "neutron-api": service is not alive
[14:03] progress, seems that the service itself is stuck without a unit showing in status... but something's lingering in the state server keeping it around.
[14:04] D4RKS1D3 Can I get you to file a bug about this? https://launchpad.net/juju-core/ https://login.launchpad.net/LfAACUW1CApYLLfk/+decide
[14:04] Of course
[14:04] include the status output, gui screenshot, and the all-machines.log, steps to reproduce
[14:05] Yes
[14:05] what do you need
[14:05] > status output, gui screenshot, and the all-machines.log, steps to reproduce
[14:06] Hi, can someone reply to this: Am I correct to assume the Juju state machine VM needs to have access to the OpenStack endpoint APIs? If I use floating IPs, does the Juju state machine communicate with other VMs using floating IPs or on the tenant network?
[14:06] tiagogomes__ the tenant network, I do believe
[14:06] I think the same, only the tenant network
[14:07] tiagogomes__ juju uses the private interface for communicating with the nodes. I'm pretty sure floating-ip only affects the public interface
[14:07] D4RKS1D3 oh lol, I just realized I pasted a login link to you. the intended link was launchpad.net/juju-core
[14:08] ok, I see. And the Juju state machine needs to call the OpenStack APIs, right? Or is it the client that does that?
[14:08] * tiagogomes__ is trying to sort out network requirements
[14:08] The state server makes requests to the OS APIs
[14:09] depending; juju deploy openstack, juju deploy an environment into that openstack. So you've got the idea of your cloud and, as beisner says, your "undercloud"
[14:09] so it depends on which layer of cloudy goodness you're talking about :)
[14:09] lazyPower, I do not see the link in the juju-core
[14:10] D4RKS1D3 https://bugs.launchpad.net/juju-core/+filebug
[14:10] I see. I am talking about Juju bootstrapped on OpenStack. And the Juju gui makes requests to the state server, right? It doesn't call the OpenStack APIs
[14:10] Correct
[14:11] all juju interaction is routed through the model controller (state server)
[14:12] tvm lazyPower!
[14:12] np tiagogomes__
[14:17] lazyPower, all-machines.log is a huge file! hahahaha
[14:17] o/ hi all
[14:17] hi beisner
[14:22] is that mr beedy?
[14:38] I don't think so, bdx is beedy
[14:50] hi D4RKS1D3 ;-)
[15:38] lazyPower, chipping away at the rvw Q. i don't have merge perms on mysql. can you do the honors on this? https://code.launchpad.net/~barryprice/charms/trusty/mysql/add-innodb_file_per_table/+merge/281398
[15:38] hey lazyPower
[15:38] so, like marco told me, for him the LXD provider breaks like every 6 bootstraps or so
[15:39] but I have not had issues
[15:39] what would be a good way to just automate having my laptop fire up workloads in a loop?
[15:39] I figure, let it run for like 8 hours and see what happens
[15:45] beisner: did you get it merged already?
[15:46] hi jose! nope, but it's ready to land.
[15:46] beisner: ok, let me take a look
[15:49] jcastro cronjob to run a bundletester job?
[15:49] I've not used bundletester before
[15:49] is there a tldr doc somewhere?
[15:49] pip install bundletester && bundletester -F -l DEBUG -v in the charmdir
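For jcastro's eight-hour soak test, the cronjob suggestion could be as simple as a loop like the following — a rough sketch; the duration and the stop-on-first-failure behaviour are assumptions, the bundletester flags are the ones given above:

    #!/bin/sh
    # run bundletester against the charm/bundle in the current directory
    # repeatedly for ~8 hours, stopping at the first failure
    end=$(($(date +%s) + 8 * 3600))
    while [ "$(date +%s)" -lt "$end" ]; do
        bundletester -F -l DEBUG -v || break
    done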
[15:50] lemme see what we have in the docs
[15:50] i'm positive we have some stuff in the dev docs about this too
[15:51] jcastro https://jujucharms.com/docs/devel/developer-testing#bundletester
[15:51] ack, thanks
[15:51] beisner: and, pushed!
[15:51] ta jose
[15:52] seems to still use juju-deployer
[15:52] jose, great, thanks. i'll send a review mail to the list.
[15:52] also, o/ jose
[15:52] hey, lazyPower!
[15:52] jcastro that's probably the case. juju deploy is too new to have it consumed in our testing infra already
[15:52] * jcastro nods
[15:52] does the lxd provider not work with juju-deployer?
[15:53] I am not sure yet, I was just reading the github page on bundletester
[15:57] beisner have i introduced you to my personal assistant, charmbot 5000? aka jose
[15:57] >.>
[15:57] xD
[15:59] huh, juju-jitsu is still in xenial
[16:01] whoa
[16:01] i thought that was deprecated as of precise
[16:01] I'll find out how to remove it
[16:02] jcastro: on main?
[16:02] or, I mean, universe?
[16:03] hey jcastro, i have a potential TC attendee to the charmer summit. Think we have some space for floating heads in a box on a laptop?
[16:03] tc = teleconference
[16:04] jcastro: you'll need to get a MOTU to remove the package
[16:07] https://bugs.launchpad.net/ubuntu/+source/juju-jitsu/+bug/1533738
[16:07] Bug #1533738: Remove juju-jitsu package from Xenial
[16:07] they told me what to do
[16:07] we just file a bug and subscribe ubuntu-archive with an explanation
[16:32] lazyPower || jose - can you review/land this? it's a c-h resync to enable openstack mitaka (ceilometer + mongodb) deployability. https://code.launchpad.net/~1chb1n/charms/trusty/mongodb/ch-sync-mitaka/+merge/282211
[16:33] jose wanna take that one?
[16:33] beisner: I have to run an errand right now - if lazy hasn't done it by when I'm back I'll do it
[16:33] lazyPower: ^ *
[16:33] sounds good
[16:33] cool
[16:33] beisner in a meeting, gimme a few and i'm on it
[16:33] thx guys. fwiw, the code it's pulling in has already been reviewed and landed in lp:charm-helpers.
[16:34] hello. I am trying to write an amulet test for a simple subordinate. It uses the "juju-info" relation. I am trying to relate it with the cs:ubuntu charm like this: "cls.deployment.relate('ubuntu:juju-info', 'collectd:juju-info')"
[16:35] I am getting this: http://pastebin.ubuntu.com/14488256/
[16:36] any ideas what I am doing wrong?
[16:36] jacekn, are you able to make the same relation on those services outside of the test? i.e. juju add-relation X Y
[16:36] jacekn: not sure on this, but does your charm provide juju-info?
[16:36] like, explicitly stated?
[16:36] beisner: only if I do "juju add-relation ubuntu collectd"
[16:37] beisner: so it seems to me that amulet depends on juju-info being in the metadata.yaml, which it probably shouldn't
[16:38] jacekn, yeah, i've not looked at the code, but i suspect it just parses metadata.yaml to populate valid interfaces.
[16:38] see this: http://pastebin.ubuntu.com/14488278/
[16:39] so how should I test my subordinate?
[16:39] given the empty dict, i bet you're right. { u'Error': u'no relations found', u'RequestId': 1, u'Response': { }}
[16:39] i'd say it's in python-jujuclient
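jacekn's subordinate presumably carries something like the following in its metadata.yaml — a hypothetical reconstruction; the key detail is that the relation itself is named juju-info, which is what the next messages home in on:

    # metadata.yaml (hypothetical) -- the relation NAME here is juju-info,
    # not just the interface, and that turns out to be the problem
    name: collectd
    subordinate: true
    requires:
      juju-info:
        interface: juju-info
        scope: container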
[16:40] or hrm. that explicit juju add-relation fail takes me back to: juju-info is somehow super-special
[16:40] even juju itself can't add that relation (ubuntu:juju-info collectd:juju-info)
[16:44] jacekn correct, juju-info relations are bad
[16:45] using it as the implicit interface on a scope: container relation is ok
[16:45] but not as the relation name
[16:45] beisner ^
[16:46] jacekn swap that relation to be something like: "metric-host: interface: juju-info scope: container" and you should clear up that error
[16:47] in developing a layer, not a top-level layer, is it OK to use any of the @hook decorators?
[16:49] lazyPower: you mean in my subordinate's metadata.yaml?
[16:50] also, would it be a good idea to (if possible) make a layer that is not intended to be the top layer deployable?
[16:50] ie: calling charm build on the layer would make a deployable charm
[16:50] jacekn correct
[16:50] icey you can
[16:50] icey you just can't mix @hook and @when decorators
[16:51] icey however we do prefer that you not use the @hook decorators unless they are absolutely necessary. Like, set up a method that uses/provides a synthetic state
[16:51] and use those states to drive, as that's more natural when charming with consuming layers
[16:51] I'm thinking more about the install hook :)
[16:51] it's odd to try to intercept @hook('config-changed') in 3 layers, vs doing something like @when_not('dependency.installed')
[16:52] everything else I'm fine pushing to reactive state but can't figure out a non-ridiculous way to handle the install
[16:52] well
[16:52] considering states run until their state change occurs in the bus, coupled with a guarding @when_not()
[16:52] you can set the state you want to imply install, and @when() on your other 3 or 4 method decorators
[16:56] lazyPower: that did not help: http://pastebin.ubuntu.com/14488410/
[16:57] lazyPower, can you re-trigger tests for http://review.juju.solutions/review/2394 ?
[16:58] beisner aws and lxc re-added to the queue
[16:58] * lazyPower circles back
[16:58] lazyPower, thx
[16:58] inc rev/merge on above link
[17:00] lazyPower: how would I get my needed state set?
[17:00] icey charms.reactive set_state('thing.amabob')
[17:00] in an @hook('install')?
[17:00] sure
[17:00] from charms.reactive import set_state
[17:01] set_state('thing.amabob')
[17:01] I know how to set states, wondering what the recommended way to replace an install hook would be
[17:01] oh
[17:01] @when_not('thing.amabob')
[17:01] def do_something()
[17:01] # do something here
[17:01] set_state('thing.amabob')
[17:01] then in following methods
[17:01] @when('thing.amabob')
[17:01] lazyPower: on it?
[17:02] jose already merged
[17:02] just need to push
[17:02] actually the merge is hung up
[17:02] lazyPower: gotcha
[17:02] if you wanna do it go for it
[17:02] bzr push :parent
[17:02] ^^ copy and paste
[17:02] nah, it's stuck on the merge dude
[17:02] $ bzr merge lp:~1chb1n/charms/trusty/mongodb/ch-sync-mitaka
[17:02] sitting here on this, cycling
[17:02] ok let's see
[17:02] as long as it's not a fat charm I'm good
[17:03] * icey lunches
[17:05] beisner: merged
[17:06] thanks jose
[17:06] np
[17:40] jose o/ got one hot off the press. https://code.launchpad.net/~ajkavanagh/charms/trusty/mongodb/fix-unit-test-lint-lp1533654/+merge/282472
[17:41] ie. some old existing lint in the unit test file just crossed the failure line in whatever bleeding-edge version of flake8 gets pulled from pypi
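Pulled together, the install-hook replacement lazyPower dictated piecemeal above looks roughly like this; 'thing.amabob' is his placeholder state name, not a real convention:

    # reactive/layer.py -- sketch of replacing the install hook with a
    # guarded reactive handler
    from charms.reactive import when, when_not, set_state

    @when_not('thing.amabob')
    def install():
        # runs on the first dispatch of the reactive bus and never again
        # once the state is set -- effectively the install hook
        set_state('thing.amabob')

    @when('thing.amabob')
    def configure():
        # later handlers gate on the state instead of on hook names
        pass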
[17:42] beisner: lucky you, I still have that directory open
[17:42] woot
[17:45] and landed
[17:57] thanks again, jose :)
[18:01] np
[18:11] lazyPower: got a few minutes to chat more about the layers / reactive stuff, maybe on a hangout?
[18:30] Hey guys, I just reviewed a charm that uses SFTP and no crypto verification. Is that OK?
[18:31] Does using sftp remove the need to do cryptographic verification of a payload?
[18:32] mbruzek: ehhhhhhhhh
[18:32] mbruzek: it's on the fence, I'd mail the list
[18:50] icey : hey, sorry i stepped out for lunch
[18:50] marcoceppi are we doing standup in 10?
[18:50] no worries lazyPower :) I figure IRC is async
[18:51] lazyPower: probably, yes
[18:51] moving to better wifi
[18:54] icey But i can help out of band, what's up?
[18:54] just trying to wrap my head around best practice for intermediate layers and hooks
[18:55] icey well, i prototyped some of what this looks like elsewhere
[18:55] icey this is a sandwich layer with a short lifespan, it will be deprecated in favor of layer-docker-libnetwork
[18:56] https://github.com/chuckbutler/flannel-layer/blob/master/reactive/flannel.py
[18:56] but i don't bind to *any* of the default hookenv hooks
[18:56] i'm sniffing states off the base layer and setting states for the top layer to consume via unitdata.kv()
[19:13] marcoceppi: can you let me know how to deploy a local charm with a bundle
[19:15] icey: set JUJU_REPOSITORY and set charm: local:trusty/charm
[19:15] thanks, yeah the bundle export from juju gui gave me an error with the charm number after
=== jog__ is now known as jog
[21:45] hey, what's up guys? Is there a charmhelper function, or best practice anyone knows of, to generate self-signed certificates other than using subprocess and pipes?
[21:52] bdx: you should talk to lazyPower
[21:52] bdx hi there
[21:52] o/ arosales
[21:52] bdx: and mbruzek
[21:52] bdx have you looked at the TLS layer on interfaces.juju.solutions?
[21:52] oooh ... I just realized you can pass a -config option to a file
[21:53] lazyPower: I was looking at the way it's done in the apache2 charm
[21:53] bdx nope!
[21:53] we have new stuff
[21:53] lazyPower: nice, I'll check it now, thanks
[21:53] bdx https://github.com/mbruzek/layer-tls
[21:54] the tls layer is only peer-capable, so it's not server/client aware yet
[21:54] but getting there
[21:54] sets up PKI; each unit generates a CSR, the leader signs it and hands back the certificate
[21:54] consume, easy self-signed PKI
[21:55] this is great!
[21:55] I want to use it now
[21:55] do it!
[21:55] we want bugs / feedback / layers to use this
[21:55] bdx we're doing server/client key generation for kubernetes with this and it's working quite well
[21:56] bdx it is still very new, but please give us feedback
[21:56] so if you need an example to follow, we have one for ya
[21:56] i should have the prelim work done w/ swarm to hand over soon'ish too, it's about a week behind the k8s refactoring
[21:56] lazyPower, mbruzek: entirely awesome!
[21:57] bdx we aim to please
=== _thumper_ is now known as thumper
[21:58] mbruzek: it was a great idea to make this a layer ... every website/api endpoint should/could need this
[21:58] I know! Right?
[21:58] To be honest we stumbled upon that with our kubernetes work
[21:59] but yeah, I want to make it more generic and useful to any charm layer, so please advise, open bugs, or sing its praises in #juju
[22:00] of course :-)
=== natefinch is now known as natefinch-afk
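layer-tls is the answer given here, but bdx's original ask — self-signed certificates without shelling out through pipes — can also be met with a single openssl invocation; a minimal sketch, with made-up paths and subject:

    # sketch: one openssl call via subprocess, no pipes involved
    import subprocess

    def generate_self_signed(key_path='/etc/ssl/private/unit.key',
                             cert_path='/etc/ssl/certs/unit.crt',
                             cn='myservice.example.com'):
        # writes an unencrypted 2048-bit RSA key and a one-year
        # self-signed certificate for the given common name
        subprocess.check_call([
            'openssl', 'req', '-new', '-x509', '-nodes',
            '-days', '365', '-newkey', 'rsa:2048',
            '-keyout', key_path, '-out', cert_path,
            '-subj', '/CN=' + cn,
        ])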
[22:17] mysql charm mitaka prep ready for review/landing. https://code.launchpad.net/~1chb1n/charms/trusty/mysql/ch-sync-mitaka/+merge/282209 i believe this is the last of the mitaka uca blocker syncs - jamespage || gnuoy
[23:06] * D4RKS1D3 GoodNight!
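One loose end, from marcoceppi's local-charm answer at 19:15: in bundle terms it looks something like the following sketch, where the bundle and charm names are made up and the charm is expected under $JUJU_REPOSITORY/trusty/:

    # bundle.yaml (hypothetical), deployed with JUJU_REPOSITORY set, e.g.
    #   JUJU_REPOSITORY=~/charms juju-deployer -c bundle.yaml
    services:
      mycharm:
        charm: local:trusty/mycharm
        num_units: 1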