=== natefinch-afk is now known as natefinch
=== dooferlad_ is now known as dooferlad
=== scuttle|afk is now known as scuttlemonkey
=== scuttlemonkey is now known as scuttle|afk
[12:58] hey cory_fu
[12:59] so I know we talked about charms.tool for core.hookenv naming. What do you think about charms.model instead?
[13:00] charms.model.log, charms.model.relation_id, charms.model.config
[13:15] marcoceppi: I like that much better
[13:15] marcoceppi: i like charms.unit better than charms.model
[13:18] tvansteenburgh: unit is ok with log, but doesn't make as much sense as model for things like relations, config, leadership, etc
[13:19] kjackal: Hey, do you have the full unit log for that plugin instance, by chance?
[13:19] true i guess. i was thinking of it in terms of "this operates in the context of a single unit"
[13:19] tvansteenburgh: I might keep tool for things like log
[13:19] cory_fu: let me check
[13:20] kjackal: Also, what arch did you run this on? I think the lzo test might need a skipIf for arch, since it's not supported everywhere
[13:21] cory_fu: I do not have the full logs. I am sorry. Arch, it's x86_64
[13:21] Though I guess the only place it's missing right now is aarch64, and I doubt you ran the tests on that
[13:22] I would like a walkthrough of the test
[13:23] I am not sure why we expect those messages. For example we ask two slaves to upgrade and in the status messages we expect only one to report a spec mismatch
[13:23] cory_fu, there is always the chance I am reading this the wrong way
[13:23] maybe charms.model.service.config charms.model.unit.log charms.model.action charms.model.relation* ? cory_fu tvansteenburgh
[13:24] * marcoceppi will play around with it a bit
[13:24] kjackal: Yes, I wanted to confirm that if the admin forgets to upgrade a unit, the status message will let them know
[13:24] marcoceppi: i like that a lot
[13:25] marcoceppi: Yeah, me too
[14:19] tvansteenburgh: bundletester isn't juju 2 compatible yet, right?
[14:19] aisrael: it is
[14:20] tvansteenburgh: latest in pip?
[14:20] aisrael: yeah. you also need the juju2 versions of deployer and jujuclient, one sec
[14:21] aisrael: well you can install those from tips in launchpad if you want. if you want a ppa instead lmk
[14:23] tvansteenburgh: eta on landing the last few patches so we can just release those?
[14:23] marcoceppi: they need reviews
[14:23] tvansteenburgh: link?
[14:24] tvansteenburgh: a ppa would be great, as long as that's no more work for you
[14:24] marcoceppi: https://bugs.launchpad.net/juju-deployer/+bug/1575863 https://bugs.launchpad.net/juju-deployer/+bug/1576519
[14:24] Bug #1575863: .deployer-store-cache relies on ~/.juju
[14:24] Bug #1576519: Not able to deploy a local charm
[14:25] aisrael: ppa:ahasenack/python-jujuclient and ppa:ahasenack/juju-deployer-daily
[14:27] tvansteenburgh: ta!
=== scuttle|afk is now known as scuttlemonkey
[14:47] stub: Have you tested your cassandra test changes with juju 2? Specifically, it seems like juju-deployer-wrapper.py isn't fully compatible.
=== med_ is now known as Guest82506
=== Guest82506 is now known as medberry
=== medberry is now known as med_
[16:17] thedac beisner - question for you: What is the best way to make changes to python configs? like dashboard settings.py, where I need to add a value to the installed apps array... do we expose any of this for ISVs to tap into?
[16:19] lazyPower: for openstack-dashboard you want to update the local_settings.py template which should override settings.py
[16:19] thedac - pointer to where i can find that? (sorry!)
[16:20] lazyPower: https://github.com/openstack/charm-openstack-dashboard/blob/master/templates/mitaka/local_settings.py
[16:23] thedac - so for nexenta's case, they need to add a subordinate to configure this? or are there existing relations/amenities to do so for them?
this isn't as straightforward as the cinder plugin stuff i'm afraid
[16:24] lazyPower: there is a dashboard-plugin subordinate relationship with openstack-dashboard which is what you will want. I am trying to find you an example
[16:24] is there a way to see available charm-tools subcommands while in plugin mode? i.e. `juju charm ...`
[16:24] perfect, that's enough to get us started. Thanks thedac! If you do fish up the example, i'd appreciate it :)
[16:25] pmatulis: charm help plugins ?
[16:25] ok
[16:25] pmatulis: why do you care about plugins specifically?
[16:27] jamespage: gnuoy: do you know of any examples of subordinates to openstack-dashboard that use the dashboard-plugin relationship?
[16:27] marcoceppi, 'charm help plugins' doesn't give me anything. i'm reviewing existing docs. currently it says available commands can be discovered with `juju charm`
[16:29] lazyPower: looks like Nexenta already has a charm with that relation. https://jujucharms.com/u/anton-skriptsov/dashboard-nexentaedge/trusty/0
[16:30] https://api.jujucharms.com/charmstore/v5/~anton-skriptsov/trusty/dashboard-nexentaedge-0/archive/metadata.yaml
[16:30] yeah, i'm bringing this up with him now.
[16:30] coolio
[16:30] do you have a moment to run support with me thedac?
[16:30] it would be helpful to have an openstacker wingman this with me
[16:31] Sure
[16:32] pmatulis: no
[16:33] pmatulis: where are you looking at this? we re-wrote the entire charm-tools guide already
[16:33] thedac, isn't there a juju gui subordinate that uses it?
[16:33] gnuoy: thanks. I look
[16:35] s/I/I'll
[16:35] thedac, looks like you can search the charm store by relation https://jujucharms.com/requires/dashboard-plugin
[16:36] ok, that is the only one I found already. :)
[16:36] gnuoy: thanks
[16:44] marcoceppi, should this page just be deleted then?
https://jujucharms.com/docs/devel/juju-offline-charms
=== freyes_ is now known as freyes__
[16:53] pmatulis: not really, charm pull is the new command
[16:54] and charm pull-source
[17:02] marcoceppi, i see 'pull-source' only. anyway, is the juju plugin mode deprecated then?
=== freyes__ is now known as freyes
[17:04] pmatulis: yes, it's now only through the charm command itself
[17:05] rick_h_, thank you
[17:06] pmatulis: you need the latest charm command
[17:11] marcoceppi, meaning? is there a ppa?
[17:12] developer-getting-started doesn't mention one
[17:12] pmatulis: it's in xenial, or in ppa:juju/stable for trusty https://jujucharms.com/docs/devel/tools-charm-tools
[17:12] marcoceppi, perfect, i'm on xenial
[17:12] pmatulis: sudo apt install charm then
[17:12] ah, not charm-tools ?
[17:13] pmatulis: both
[17:13] http://marcoceppi.com/2016/04/charm-2-point-oh/
[17:13] interesting. maybe update the above page? or is that not for public consumption?
[17:14] ok, looks like 'charm' is a dep of 'charm-tools'
[17:15] marcoceppi, fyi I have a mongodb charm for xenial (with minimal changes to work + get amulet passing) here - https://code.launchpad.net/~billy-olsen/charms/xenial/mongodb/lp1513094 , where do I target an MP at? since the xenial series for the charm doesn't exist yet
[17:16] wolsen: it's a new charm review
[17:16] marcoceppi, also, any reason why this command takes ~8 seconds to complete? 'charm --help add'
[17:16] marcoceppi: ack
[17:16] pmatulis: it's faster to do charm add --help
[17:16] pmatulis: if you're going to overwrite that page with the auto help stuff you all do for juju charms could you not?
[17:19] marcoceppi, the waiting time is the same for me.
and i don't know what you mean with your second sentence
[17:20] pmatulis: it takes a few seconds because it has to load the plugins on each run, and you guys have a script that auto-generates the reference page for juju commands; I'm requesting we not do the same for this charm-tools page
[17:25] marcoceppi, oh, well the commands.md file is auto-generated but it only affects juju-core. anyway, we still need to explain stuff and we need to use commands here and there to do that. we also use commands in examples
[17:33] ahasenack: can you kick off a rebuild of python-jujuclient for your ppa?
[17:33] tvansteenburgh: sure
[17:34] tvansteenburgh: done
[17:34] ahasenack: thanks!
[17:34] Any thoughts on what's causing this? "update-alternatives: error: no alternatives for juju"
[17:34] aisrael: that's gone
[17:35] can't do it any more
[17:35] to switch to juju1, sudo apt install juju-1-default
[17:35] uninstall to switch back
[17:36] tvansteenburgh: Shoot. Ok. I'm reviewing a charm that's doing some juju 1-specific stuff. Have we switched the jenkins stuff to use 2.0 yet? I'm wondering if I should test on juju 1, or push for changes to the tests.
[17:37] aisrael: jenkins still using juju1
[17:38] tvansteenburgh: roger, I'll switch and test, and comment about future compatibility wrt tests. Any idea when we'd require 2.0 compat?
[17:38] aisrael: i defer to marcoceppi :)
[17:39] aisrael tvansteenburgh when 2.0 is released
[17:39] marcoceppi: tvansteenburgh ack, thanks!
[17:40] and no quickstart in xenial? boo
[17:40] juju deploy bundle.yaml
[17:40] no need for quickstart :)
[17:41] oh, juju1
[17:41] aisrael: no quickstart
[17:44] drop it like its hot tvansteenburgh
[17:44] gah i wasn't all the way at the bottom i thought you dropped the science about juju deploy
[17:44] then i see there's no quickstart for juju-1 in xenial.
d'oh
[17:48] tvansteenburgh: I need to do a dummy commit to trigger a version change, or else the builds won't upload
[17:48] this sometimes happens when a manual build is triggered, LP doesn't remember that
[17:49] ahasenack: i'm sure i can find something to improve in a trivial commit
[17:49] tvansteenburgh: oh, ok. I was going to do it in the packaging branch, but if you have something trivial at hand, go for it
[17:52] ahasenack: oh, in the packaging branch. yeah that makes more sense, go for it.
[17:53] ok
[17:53] tvansteenburgh: build requested
[17:54] ahasenack: thanks again
[18:01] ahasenack: how do the versions get updated for the juju-deployer and python-jujuclient builds? is that a manual step?
[18:02] tvansteenburgh: yes. Since the recipe uses the revno, that's a free incrementing number we get, but the actual version has to be set in the debian/changelog file
[18:03] so the revno is enough for us to get upgrades in the same version, but if ubuntu releases something with a new version, the ppa, even though having more recent code, won't upgrade that
[18:05] tvansteenburgh: hm, deployer just failed to build
[18:05] test errors
[18:05] ERROR: test_multiple_connections (deployer.tests.test_guienv.TestGUIEnvironment)
[18:06] ERROR: test_deploy_unqualified_url (deployer.tests.test_guienv.TestGUIEnvironment)
[18:06] ERROR: test_deploy (deployer.tests.test_guienv.TestGUIEnvironment)
[18:06] ERROR: test_connect (deployer.tests.test_guienv.TestGUIEnvironment)
[18:06] and
[18:06] ERROR: test_close (deployer.tests.test_guienv.TestGUIEnvironment)
[18:06] all failed with OSError: [Errno 2] No such file or directory
[18:06] Hi, I'm trying to deploy to an lxc container on maas, and I've done this for many other units that are deployed, but all the others have been on trusty. The lxc host is trusty as well, but the new one I'm trying to deploy needs to be xenial.
I never see the machine come up though, and I suspect I'm hitting a systemd-related incompatibility with trying to do
[18:06] this:
[18:06] that's when calling ["juju", "--version"], log).split('.')[0])
[18:06] https://www.irccloud.com/pastebin/rciZj6wP/
[18:06] xenial
[18:07] sorry ahasenack, didn't mean to interleave... thought you were finished :)
[18:07] I guess we need to update the dependencies to use juju-1-default or get the right juju2 one
[18:07] plars: no worries :)
[18:08] plars: never seen that error, sorry
[18:09] So - 1. Am I right in assuming I cannot deploy lxc machines onto a trusty host? and 2. Does anyone know if I could simply upgrade to a xenial base for my lxc machines and deploy trusty *and* xenial lxc units to it?
[18:09] err... s/lxc machines/xenial lxc units/
[18:10] marcoceppi, previously, there was a 'getall' tools command to grab all charms. how is that done now?
[18:11] ahasenack: i'll take a look
[18:12] tvansteenburgh: I'll just update the deps, I bet there was no /usr/bin/juju
[18:12] just the versioned ones
[18:12] tvansteenburgh: do these tests work with juju-2? Do you know?
[18:12] ahasenack: they do, yes
[18:12] ok, I'll put juju-2 in front then, so xenial uses juju2, and the rest will use juju1
[18:12] ahasenack: sounds good
[18:27] marcoceppi, i tried 'charm pull wordpress ~/charms/' and the charm's files get put directly under ~/charms and not under ~/charms/wordpress . normal?
[18:33] pmatulis: I didn't write that, you could file a bug against https://github.com/juju/charmstore-client
[18:33] that is however strange, as i've used charm-pull and it puts the code in a subdirectory named after the charm
[18:33] eg if i'm in /tmp and i run charm pull elasticsearch, it creates /tmp/elasticsearch and puts the charm there
[18:34] pmatulis - ^ it really stripped the dir and spit out charm files in $JUJU_REPOSITORY (~/charms)?
[18:38] lazyPower, but what you wrote is not what i did so i'm not sure why you say it's strange.
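[Editor's aside] The deployer build failure ahasenack diagnoses above came from shelling out to a bare `juju` binary that did not exist on the builder, which is exactly what raises `OSError: [Errno 2] No such file or directory`. A minimal, defensive sketch of that version lookup — the candidate binary names and the function are assumptions, not the real deployer code:

```python
import shutil
import subprocess

def juju_major_version(candidates=("juju-2.0", "juju-1", "juju")):
    """Return the major version of the first juju binary found, or None.

    Loosely mirrors the `["juju", "--version"] ... .split('.')[0]` call
    quoted in the chat; probing with shutil.which first avoids raising
    OSError when /usr/bin/juju is absent and only versioned binaries exist.
    """
    for name in candidates:
        path = shutil.which(name)
        if path is None:
            continue  # this binary is not installed, try the next one
        out = subprocess.check_output([path, "--version"]).decode().strip()
        return out.split(".")[0]  # e.g. "1.25.5-trusty-amd64" -> "1"
    return None
```

Probing a name that does not exist simply falls through, so on a builder with no juju at all the function returns None instead of crashing the test suite.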
i don't know how i can be clearer. why don't you try it?
[18:41] oh, i missed the path at the end
[18:41] my mistake
=== scuttlemonkey is now known as scuttle|afk
[19:28] Hi
[19:28] I am getting the following error in my debug-log
[19:28] unit-ibm-db2-0[19719]: 2016-05-05 19:27:42 ERROR juju.worker.uniter.filter filter.go:137 tomb: dying unit-ibm-db2-0[19719]: 2016-05-05 19:27:42 WARNING juju.worker.dependency engine.go:304 failed to start "uniter" manifold worker: dependency not available unit-ibm-db2-0[19719]: 2016-05-05 19:27:45 ERROR juju.worker.uniter.filter filter.go:137 tomb: dying
[19:28] How can I resolve this?
[19:35] suchvenu see this bug: https://bugs.launchpad.net/juju-core/+bug/1513667
[19:35] Bug #1513667: Better error messaging around uniter failure
[19:39] Hi lazyPower
[19:39] You restart the juju machine daemon by running `sudo restart jujud-machine-0` from machine 0
[19:39] Where should i run this from?
[19:41] machine 0 is local host, right?
[19:44] suchvenu - are you using the local provider?
[19:44] yes
[19:45] yeah but that's not the name of the service you need unfortunately. can you pastebin the output of initctl list | grep juju for me?
[19:46] sorry, sudo initctl list | grep juju
[19:48] sudo initctl list | grep juju juju-agent-charm-local start/running, process 9596 juju-db-charm-local start/running, process 9529 charm@islrpbeixv665:~$
[19:48] http://pastebin.ubuntu.com/16245865/
[19:49] restart both of those and if you could, let us know on the bug if it resolved the issue or if it gets better/worse/about-the-same?
[19:52] Restarted both and now the debug-log is moving further, I don't see the error now
[19:52] However I see this status in Juju status
[19:53] http://pastebin.ubuntu.com/16245919/
[19:55] suchvenu - ok, sounds like the agent needs to be cycled on the db-2 unit. It's lost because it hasn't checked in with the controller in a specified time, is what i recall that meaning.
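[Editor's aside] The `charm pull` confusion a little earlier is resolved by noticing the two observations are consistent: with no target argument a subdirectory named after the charm is created, while an explicit directory argument is used as-is. A hypothetical model of that destination rule, inferred only from the chat (this is not the real charmstore-client code):

```python
import os

def pull_destination(charm_name, target=None, cwd="/tmp"):
    """Assumed destination rule for `charm pull`, inferred from the chat."""
    if target is not None:
        # pmatulis's case: files land directly in the given directory
        return target
    # lazyPower's case: a subdirectory named after the charm is created
    return os.path.join(cwd, charm_name)

print(pull_destination("elasticsearch"))                    # → /tmp/elasticsearch
print(pull_destination("wordpress", target="/home/u/charms"))  # → /home/u/charms
```

So `charm pull wordpress ~/charms/` dropping files straight into ~/charms is, under this reading, the documented-argument path rather than a bug.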
[19:56] I am redeploying the service
[19:59] lazyPower, why is this happening?
[20:00] suchvenu i'm not certain :(
[20:01] suchvenu - the logs of that deployment would be helpful, and if we can capture any of the unit agent logs that are failing to start due to that uniter failure, that would help too. so if it happens again, ping me and i can step you through capturing that. I haven't been able to reproduce that bug.
[20:04] ok
[20:08] http://askubuntu.com/questions/766661/openstack-lxc-containers-missing-dns
[20:08] can anyone help with this one?
[20:08] * jcastro glances over at beisner
[20:41] whats going on everyone?
[20:41] yo yo bdx
[20:41] lazyPower: how can I generate/obtain tools for beta7?
[20:42] bdx juju bootstrap --upload-tools doesn't work?
[20:42] unfortunately not
[20:42] hmm, not certain. I'm still on beta6 here.
[20:42] thanks for the heads up that i need to recycle my env :)
[20:43] ok, thanks. It could be another issue, I'm about to start debugging it now
[20:43] yea, good luck!
[20:43] oh hey bdx , not sure if you saw last week. hattip @ your contributions to the tls-layer. Skinnied up some code i had to write for swarm :)
[20:44] nice!!!!
[22:02] https://bugs.launchpad.net/charms/+source/percona-cluster/+bug/1578838
[22:02] Bug #1578838: Services not running that should be: mysql