[00:06] stokachu: that did it. thanks!
[00:06] lutostag: np
[00:07] lutostag: it's since been fixed, i don't remember what beta release though
=== scuttlemonkey is now known as scuttle|afk
[00:34] stokachu: generating and configuring the certs is a great deal of manual overhead I go through with almost every app that uses layer-nginx, or has a frontend or endpoint of any kind for that matter
[00:35] stokachu: do you think it would be wise to add tls/ssl functionality as an option in layer-nginx?
[00:35] stokachu: ^
[00:36] bdx: yea
[00:36] think it's a good idea
[00:51] stokachu: should the target directory to store the crt/key in be specified as a config or option?
[00:51] I would think option ... bc it is something that isn't going to be modified really ...
[00:52] post deploy
[00:52] bdx: yea, should just drop the certs in the normal locations for nginx to look
[00:54] you can make it an option but just default to /etc/ssl/certs
[01:05] stokachu: like this -> http://paste.ubuntu.com/23287010/ ?
[01:05] bdx: quick glance looks good
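The paste above isn't preserved, but the idea being discussed — exposing the cert/key target directory as a layer option that defaults to /etc/ssl/certs — might look roughly like the sketch below. This is an assumption-laden illustration, not the contents of the paste: the `defines` layer.yaml schema and the `layer.options()` helper are recalled from charm-tools of that era, and the option name `ssl_cert_path` plus the file names are made up.

```python
# Sketch only: how layer-nginx might expose a cert/key target directory as a
# layer option defaulting to /etc/ssl/certs.
#
# Assumed stanza in the layer's layer.yaml:
#   defines:
#     ssl_cert_path:
#       type: string
#       default: /etc/ssl/certs
#       description: Directory the layer writes the TLS cert/key into.
import os

from charms import layer  # options helper provided by the base layer


def write_certs(cert_pem, key_pem):
    """Drop the cert/key where nginx will look, honouring the layer option."""
    opts = layer.options('nginx')  # assumed: returns this layer's options dict
    cert_dir = opts.get('ssl_cert_path', '/etc/ssl/certs')
    os.makedirs(cert_dir, exist_ok=True)
    cert_path = os.path.join(cert_dir, 'server.crt')
    key_path = os.path.join(cert_dir, 'server.key')
    with open(cert_path, 'w') as f:
        f.write(cert_pem)
    with open(key_path, 'w') as f:
        f.write(key_pem)
    os.chmod(key_path, 0o600)  # keep the private key unreadable to others
```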
=== thumper is now known as thumper-afk
=== thumper-afk is now known as thumper
=== frankban|afk is now known as frankban
[09:03] rc3 with GCE: lots of my overnight tests failed because of machines going to 'down' state and never coming back. Wonder if anyone else is seeing that? It could be a GCE problem, as much as Juju.
=== rogpeppe1 is now known as rogpeppe
[11:57] marcoceppi: Are you still the maintainer for the Nagios charm? If not, do you know who could look at https://bugs.launchpad.net/charms/+source/nagios/+bug/1605733 ?
[11:57] Bug #1605733: Nagios charm does not add default host checks to nagios
[12:24] Hi all, trying to bootstrap into an openstack environment, but getting "authentication failed" and I found this is because "certificate signed by unknown authority"...
[12:24] also tried juju --debug bootstrap --config ssh-hostname-verification=false mycontroller cloudname
[12:25] any way to disable that certificate check, similar to the "--insecure" option in curl .. ?
[13:02] the mesos docker integration just mimics docker commands in C
[13:02] how hard can it be to do the same with LXC?!
[13:02] (famous last words)
[13:03] by mimics, do you mean fork/exec/pipe?
[13:05] Hello!
[13:05] Is there anyone that has a working charm with xenial + ansible?
[13:06] xenial does not have python2 ... and basically i'm stuck cause i don't know how to run something like a "pre-install" hook
[13:07] jrwren: https://github.com/apache/mesos/blob/master/src/docker/docker.cpp#L1437
[13:07] i'm not a C coder
[13:07] but that looks like it's just running some commandline stuff
[13:08] magicaltrout: i agree. subprocess is a giveaway
[13:08] Spaulding: i don't know of any, but i know people do do it from time to time. marcoceppi or lazypower should be able to help when they're around
[13:09] indeed jrwren, so I figure fork, copy, change the commands to run LXC/LXD stuff, compile, and run LXD on Mesos ;)
[13:09] cheers magicaltrout !
[13:09] so now i need to wait ;)
[13:09] Spaulding: a two-pronged attack never hurt either, but people don't do it, dump a question on the juju mailing list as well
[13:10] as people monitor that and not irc and vice versa
[13:10] magicaltrout: hmm... i might give it a shot, but i also have an option to have direct help from juju devs
[13:11] well they are the people on the mailing list :)
[13:11] they should i guess :)
[13:11] Spaulding: a common pattern is for hooks/install to be a shell script which installs requirements and calls python2 install.real at the end.
[13:11] ok, let's give it a try with the mailing list :)
[13:12] jrwren: exactly - but basically it's like a common scenario..
[13:12] and i wanted just to use ansible to bootstrap anything i want
[13:12] but xenial is different (no python2) so basically i need to hack it dirty...
[13:17] Spaulding: sadly, I think all of the charms I knew which were using ansible moved over to reactive, so I have no good examples. They may never have been updated for xenial when they were ansible.
[13:17] hmm...
[13:17] we've got a really big ansible setup..
[13:18] and it would be much easier to use it instead of repeating the whole thing, e.g. in bash
[13:18] i also tried to search some projects in google / github
[13:18] so far - no luck...
[13:21] btw. reactive looks really promising...
[13:22] i think it might be a good idea to use reactive and invoke ansible from it
[13:28] hey, sorry about the drop off yesterday, and I've since gotten a decent client that'll log the chat for me. Is there an irc chat log somewhere I can review what was said yesterday?
[13:29] holocron: i have a decent backscroll, 2 mins i'll pastebin what I saw
[13:30] magicaltrout thanks
[13:31] https://gist.github.com/buggtb/7b96fa7f023aa3749b4c5c3cc67d3e0c
[13:32] appreciate that
[13:32] Spaulding: o/
[13:33] Spaulding: you can install python2 during bootstrap for reactive charms by adding the following to your layer.yaml
[13:35] Spaulding: https://gist.github.com/marcoceppi/8743453bfce28be97d71d5706bda0ab8
[13:35] the layer.yaml options are evaluated by the reactive framework prior to any reactive code running
[13:36] this allows you to bootstrap any deps needed for either apt packages or pip packages to run hook code
[13:36] lovely!
[13:37] nearly as lovely as marcoceppi himself....
[13:37] and then i can tell reactive to run ansible playbooks? right?
[13:37] yeah you can do stuff like @when{myplaybook.notinstalled}
[13:38] def install_the_best_playbookever:
[13:38] great!
[13:39] finally I'm out from the dark hole..
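The gists above aren't reproduced in the log, but putting marcoceppi's layer.yaml hint and the @when sketch together, a reactive charm that bootstraps its apt dependencies and then drives an ansible playbook might look roughly like this. Treat it as a sketch under assumptions: the layer.yaml keys shown are the 'basic' layer's `packages` option as I recall it (not necessarily what the gist contains), and the state and playbook names are invented; only the `when_not`/`set_state` decorator spelling follows the charms.reactive API of the time.

```python
# reactive/myapp.py -- illustrative sketch, not the actual gist contents.
#
# Assumed layer.yaml for the charm. The 'basic' layer installs these apt
# packages before any reactive handler runs, which is how python2/ansible can
# be present on a fresh xenial unit in time for the code below:
#
#   includes: ['layer:basic']
#   options:
#     basic:
#       packages: [python, ansible]
import subprocess

from charms.reactive import when_not, set_state
from charmhelpers.core.hookenv import status_set


@when_not('myapp.playbook.applied')
def apply_playbook():
    """Run the charm's bundled playbook against localhost (names are made up)."""
    status_set('maintenance', 'running ansible playbook')
    subprocess.check_call([
        'ansible-playbook',
        '-i', 'localhost,',   # one-host inline inventory
        '-c', 'local',        # no ssh, act on the unit itself
        'playbooks/site.yaml',
    ])
    set_state('myapp.playbook.applied')
    status_set('active', 'ready')
```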
[13:39] so, it seems that i've run into an odd problem with cs:xenial/rabbitmq-server. It starts up normally after deploy, but at some point the local nodename changes from "ubuntu" to "juju-..." and it starts to fail with "unable to connect to node rabbit@ubuntu: nodedown"
[13:40] holocron: is that lxc? or something else?
[13:40] yeah, lxc.. or lxd i suppose
[13:40] yeah, beisner told me they facilitate a reboot of rabbit-mq to fix that (I think, I was a bit drunk)
[13:41] also RC3 supposedly has some hostname fixes that might resolve that issue also holocron
[13:41] so if you're not on RC3, upgrade if you can
[13:41] i'm on rc3
[13:41] okay i'll try a reboot
[13:41] that's supposedly in the charm
[13:41] just fyi: the relevant log snippet https://gist.github.com/vmorris/402e946bbf8d82c1e46e1c2123d29c7e
[13:41] not some manual interaction
[13:42] ah hrm
[13:42] dunno, although beisner and some others will know. Although I'm surprised RC3 doesn't resolve it if the change log wasn't lying
[13:48] admcleod: you coming to ApacheCon then?
[14:09] magicaltrout: i'm not sure yet
[14:10] give me warning if kjackal is going
[14:10] need to pack prozac
[14:13] hi magicaltrout - hmm nope, no rmq reboots happening here. but the beer was good :-)
[14:18] beisner: are you saying that no reboots are happening in the rmq charm, or in my log snip? I have a clean model with a single rabbitmq-server deployed, and it seems to be okay (but so was the one that came in with the openstack-on-lxd bundle)
[14:19] holocron, in our CI, not rebooting rmq
[14:19] okay, any idea what might've gone wrong here? https://gist.github.com/vmorris/402e946bbf8d82c1e46e1c2123d29c7e
[14:23] beisner I spoke too soon, a simple deploy failed in the same manner, as did a scale of the original unit
[14:25] https://gist.github.com/vmorris/4020f3299134e4e8a287e233e3d18dac
[14:42] clearly too much beer
[14:43] magicaltrout: impossible
[14:43] well not "too much beer", but "too much beer... to remember the conversation properly"
[14:44] lol that's definitely possible
[14:50] cory_fu_: with the charm build thing I ran into yesterday... this fixes it for me, https://github.com/lutostag/charm-tools/commit/b41f5a584809f547adfb0db917d5e9a2cc909500
[14:51] (trying to run the charm-tools make test, but it keeps falling over, not due to me I believe)
[14:52] (although I was abusing the wheelhouse -- for application rather than charm deps, so went ahead and made my own instead)
[15:03] lutostag: The problem with using 'download' instead of 'install' is that I don't think it's available with the pip version in trusty.
[15:04] cory_fu_: ah, ok, I'll keep playing with it then
[15:04] lutostag: We may just have to put a condition on the series, though. But I would appreciate seeing that tested on trusty
[15:07] kwmonroe: https://github.com/apache/bigtop/pull/137 updated
[15:07] thanks cory_fu_!
[15:56] anybody know about juju storage (and how to use it in 1.x)?
[15:57] for instance, I have a postgresql charm that theoretically accepts storage, and it's already deployed, how would I add storage to it?
[15:57] s/charm/unit
[16:01] rick_h_: ^^ who is storage-knowledgeable?
=== frankban is now known as frankban|afk
[17:01] lutostag: if/when you find some answers, will you put them on blast?
[17:02] bdx: yeah I'll submit an askubuntu.com post for sure
[17:13] kwmonroe, petevg: You guys notice this item in the RC3 announcement? "* LXD containers now have proper hostnames set"
[17:16] cory_fu: awesome! I'm gonna fire off a test of the hadoop bundle against localhost :-)
[17:42] cory_fu: sadly, it looks like our problem might not be fixed. Got a suspicious failure in my logs: http://paste.ubuntu.com/23289933/ (This is from one of the hadoop slaves, when deployed against lxd containers on xenial.)
[17:56] petevg: what in the heck is unallocated.barefruit.co.uk? is that really the name you get from running 'hostname' on that container?
[17:57] kwmonroe: that's what was in the logs ...
[17:57] cory_fu_: remember how yesterday i was giving you grief about the slave unit status message being wrong because of the spec match? well, that was true, but you were right(er). when a charm is undergoing a long hook (like install) before -joined, the other side won't know it's .joined yet :(
[17:57] I tore down the container. Will try again in a bit, and poke at it some more.
[17:59] kwmonroe: Yeah, I knew that long-running hooks would block the .joined, but the spec issue has potential to make it inaccurate even longer. Anyway, we were both right
[18:02] cory_fu_: can you think of a way to detect a unit's relations without relying on the states being set?
[18:03] kwmonroe: No. Before the -relation-joined hook fires, I don't think there's any possible way for the charm to know about the relation. I don't think even relation-ids would work
[18:17] hi all :)
[18:18] I have a fresh xenial (16.04, server version) install in a VM
[18:18] I want to try out JUJU (yes, never used it before) so I followed these instructions
[18:18] https://jujucharms.com/docs/devel/getting-started
[18:19] when I type "groups" it does not show lxd in the list
[18:19] and when typing newgrp lxd it gives me "group lxd does not exist"
[18:20] PCdude: You might have to log out and back in, or try the `newgrp lxd` command to refresh in-place
[18:20] cory_fu_: tried both, I restarted and tried the "newgrp lxd"
[18:21] PCdude: Can you confirm that lxd is installed with, e.g., `dpkg -l lxd`?
[18:22] It should have been brought in as a dependency of Juju, though
[18:23] cory_fu_: it indeed is not installed, and as u said I thought that was automatically installed
[18:23] but apparently not haha
[18:23] manual install?
[18:23] sudo apt install lxd?
[18:24] Yep
[18:24] PCdude: what does 'juju version' say?
[18:25] cory_fu_: lxd is present now, let's continue the install and see what JUJU is capable of, thanks
[18:25] kwmonroe: let me check
[18:25] 2.0-rc3-xenial-amd64
[18:26] ok - that's good PCdude. just making sure it was of the 2.0 flavor
[18:26] kwmonroe: yeah, I was aware of the 2.0 version. I added the PPA and checked with "apt-cache" that the right version was being installed
[18:27] cool PCdude.. strange that it didn't bring lxd in as a dep
[18:27] apart from the fact that it is solved now, how can this happen?
[18:27] I always have to "usermod -aG lxd holocron" and relog to pick up the change
[18:28] personally, I think it's vmware. I had strange problems with ESXI before, maybe something strange happened there
[18:28] I used their preseed option for a change, not gonna do that again...
[18:29] oh, you didn't have LXD installed, just catching up :D
[18:31] holocron: haha np
[18:33] any amazing cool bundles or charms I have to check out as a newbie? :D
[18:34] what do you want to do?
[18:34] ghost is an okay one to poke at
[18:35] well I have openstack running on ubuntu, but I want to tweak some more. Since it uses JUJU maybe something in that field?
[18:36] holocron: from what I can see, is that something similar to wordpress?
[18:36] yeah, though it's all node.js
[18:37] PCdude, you might look into the openstack-on-lxd bundle if you're wishing to dig into openstack, juju and lxd
[18:37] ah cool, yeah let me check that out
[18:38] use these instructions: http://docs.openstack.org/developer/charm-guide/openstack-on-lxd.html
[18:39] btw, I have the strong feeling that the autopilot function (from landscape) just uses the openstack bundle from JUJU. is somebody here that can confirm that?
[18:39] holocron: thanks, will look into that link
[18:41] that's the impression i got as well PCdude, though i haven't actually used autopilot myself
[18:43] holocron: well it is quick and painless, but don't start asking questions about changing something, then u are stuck in landscape. easy=10, customization=2, but I think I can go a level lower to JUJU and configure there, but not sure if autopilot and landscape like that very much
[18:44] also for me the 10 licenses are good enough, but for something bigger u have to pay a lot
[18:47] PCdude I see. I don't use landscape either ^^ You'll find the openstack charms have a wide variety of configuration options, probably most of what you'd want to tune
[18:49] holocron: I was thinking about the following for my openstack install: rn I have 2 machines, which is of course way too little to run the whole infrastructure well, but I was planning on placing it on those 2 and, when adding machines in the future, slowly moving services from one of those 2 and placing them on the new one with JUJU. Until I have, let's say, 5-6 servers running without anything virtual. Would that be possible? and I mean moving it live, so
[18:52] i suppose it's possible PCdude.. having your machines in a MAAS cluster might make it simpler, though I'm no expert in the matter
[18:53] holocron: me neither :), we will see. what do u use JUJU for?
[18:54] pcdude: just starting to explore it myself, but specifically i'll be using it for openstack as well, considering moving some of my workload deployment automation to charms
[18:55] haha cool, u have a working install with something else now? or is this going to be ur first try?
[18:55] i have a few ibm cloud manager with openstack installations but they're rapidly going away
[18:56] a few custom-rolled installations of mitaka being maintained by the team too at the moment
[18:57] so u are moving away from something I guess, what did u use before?
[18:57] as i said: ICM
[18:58] maybe the server reboot caused the message to get lost, i'll resend
[18:58] i have a few ibm cloud manager with openstack installations but they're rapidly going away
[18:58] ah check, I see
[18:58] I got the last one
[18:58] if you're asking about what i'm using for hypervisor, it's KVM
[18:59] but really what i need is a good way to install openstack that's easy and repeatable.. juju is really attractive to me for this purpose
[18:59] yeah, I am using ESXI right now, but wanna use openstack with KVM
[19:00] and to restate again, i'm interested in migrating some of my workload deployment automation into charms
[19:00] amen... I so agree on that point. When I first opened the docs I thought, how is anyone in this world even capable of doing this once haha
[19:00] so that's on the radar
[19:02] holocron: have u looked at something else besides openstack?
[19:02] kubernetes maybe
[19:03] kubernetes doesn't really map across to openstack imo
[19:04] it's more akin to juju or docker, i have looked at docker for some things (hyperledger specifically)
[19:05] yeah I agree, but there have been some projects that kind of make it that way but with containers. I have seen some videos that put it in the grey area.
[19:05] uhm ok, let me check hyperledger
[19:05] oh hyperledger is not for workloads ;) it's smart contract stuff
[19:08] haha, I was already reading and thought uhm, that can't be right
[19:10] some servers are seriously restarting here
[19:10] they're rolling the whole freenode network
[19:11] yeah, I guess
[19:12] uhm, what is a "model" in JUJU?
[19:13] PCdude: A Juju model is an environment associated with a controller
[19:13] PCdude: https://jujucharms.com/docs/2.0/models
[19:13] roadmr: yeah, I read that too, but is it the place where the charms are fired up?
[19:14] it's like a subnet for charms?
[19:20] so when I type "juju list-controllers" I see 1 machine running. is that the controller of the models?
[19:21] PCdude: generally yep
[19:22] oh wait, i think i understand your question, you have 1 under "machines" ?
[19:23] try "juju status" and "juju show-machine 0" -- assuming that machine 0 is the one machine listed
[19:27] holocron: yeah, I think that is what it is
[19:27] deploying ghost rn
[19:54] https://lists.apache.org/thread.html/7b215705d3b222336d3989782722715e43af31af720f69db7ad19911@%3Cdev.mesos.apache.org%3E i mention juju and lxc and suddenly the thread goes dead
[19:54] it's like they know!
=== zeus is now known as Guest91639
[19:57] magicalt1out: heh, did you cause trouble?
=== Guest91639 is now known as zeus`
=== zeus` is now known as zeus
[19:58] well it's a bit weird when people ask for the use case, i give it and get crickets
[19:58] you broke some rule of fight-club
[19:58] silly people, why do people just want application containers
[20:01] this may be true rick_h_
[20:01] good afternoon - i have a juju charm whose deploy failed because the machine didn't spin up. remove-application isn't working....
[20:01] and it's causing havoc: ERROR could not filter units: could not filter units: unit "juju-gui/0" has no assigned machine: unit "juju-gui/0" is not assigned to a machine (not assigned)
[20:02] how do i get rid of it? please
=== spammy is now known as Guest28928
[20:03] hml: can you mark it resolved and then remove it?
[20:04] rick_h_: ERROR unit "juju-gui/0" is not in an error state
[20:04] hml: and when you do remove-application juju-gui it gives you the filter error?
[20:05] hml: no - it gave no error - i got the filter error when trying to do a juju status of a different application.
[20:05] hml: maybe try juju retry-provisioning X where X is the machine that should have come up but failed?
[20:05] hml: and see if you can get the application up
[20:05] hml: and then cleanly remove it
[20:06] rick_h_: the machine never came up, how do i give retry-provisioning a machine?
[20:07] hml: well is there a machine record that would show in status that it tried and is marked with a failure status?
[20:07] hml: how many machines are currently deployed? Maybe try to start with what we think it might be. 0, 1, etc?
[20:08] hml: other idea might be to try juju add-unit juju-gui
[20:08] hml: and see if you can get it to come up with a unit and help clear up the error space there
[20:08] rick_h_: i'm at 16 machines...
=== CyberJacob is now known as Guest90146
=== med_ is now known as Guest94967
[20:08] hml: k, might try the juju retry-provisioning 17 and see what it does
=== beisner- is now known as beisner
[20:09] hml: or try the add-unit trick and see if that gets things to a good place
[20:09] rick_h_: ERROR cannot add unit 1/1 to application "juju-gui": cannot add unit to application "juju-gui": application is not alive
=== Guest94967 is now known as medberry
[20:10] rick_h_: retry-provisioning on machines 16-21 - machine not found.
[20:10] hml: ok
[20:10] hml: add-unit trick is all I can think of from there then. Will need to file a bug and see if we can repro and make it more resilient
[20:12] rick_h_: so is there any way around the filtering message? i need to resolve issues on other charms, this is standing in the way
[20:13] hml: can you do juju status without any filters?
[20:13] hml: assuming that's also not working?
[20:13] hml: maybe juju status --format=yaml and see if it bypasses any of the filter work?
[20:13] rick_h_: status without filters is working
[20:14] hml: ok, so there's nothing I can think of to bypass an error using filters. Just ways around it by using json output plus a tool like jq to get the filtering done outside of juju
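rick_h_'s workaround above — dump the full status and filter it outside of juju — can be a jq one-liner, or a few lines of Python if jq isn't handy. A rough sketch, with the caveat that the 'applications'/'units'/'workload-status' key names are assumed from the juju 2.0 JSON layout and should be checked against your own `juju status --format=json` output:

```python
#!/usr/bin/env python3
# Sketch: filter `juju status` outside of juju, per the suggestion above.
# Key names ('applications', 'units', 'workload-status') assume the juju 2.0
# JSON layout; verify them against your own `juju status --format=json`.
import json
import subprocess

raw = subprocess.check_output(['juju', 'status', '--format=json'])
status = json.loads(raw.decode('utf-8'))

# Print every unit whose workload is in an error state, with its message.
for app_name, app in status.get('applications', {}).items():
    for unit_name, unit in (app.get('units') or {}).items():
        workload = unit.get('workload-status', {})
        if workload.get('current') == 'error':
            print(unit_name, workload.get('message', ''))
```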
[20:14] rick_h_: format didn't work - i'm looking for more detail since the nova-compute/0 machine is having troubles with open vswitch
[20:15] hml: grep the unit log file and see there? Things like status changes/etc should be in the log
[20:15] hml: so juju ssh nova-compute/0 and then view /var/log/juju/unit-xxxxxx.log
[20:16] rick_h_: got it
[20:16] where the xxxxxx is something like nova-compute-0
[20:17] rick_h_: my favorite: “"leadership-tracker" manifold worker returned unexpected error: leadership failure: lease manager stopped”
[20:19] hml: :/
[20:21] rick_h_: is there a bug for that one, or am i just lucky to keep hitting it?
=== xnox_ is now known as xnox
[20:28] hml: https://bugs.launchpad.net/juju/+bug/1616174 ?
[20:28] Bug #1616174: Juju agents cannot start: failed to start "uniter" manifold worker: dependency not available
[20:29] hml: has a potential thing to fix it. Sounds like we didn't get good repro steps though, if you have more details to add to the bug that'd be helpful
[20:31] rick_h_: i've seen that bug, i wasn't sure how to run those steps - i found another solution, but perhaps short term: http://www.astokes.org/juju/2/common-errors
[20:40] kwmonroe: the units in question are running 2.0rc1 or were created with
[20:40] optimism reinstated
[20:46] hey rick_h_, can i specify bundle tags? i'm pretty sure this result only comes up because 'bigtop' is in the name: https://jujucharms.com/q/bigtop?type=bundle
[20:46] i'm looking for tags similar to what you'd do in a charm's metadata.yaml
[20:46] but i don't see where i'd specify that for a bundle
=== kragniz1 is now known as kragniz
=== Guest28928 is now known as spammy
[21:36] cmars: I hooked it up -> https://github.com/cmars/juju-charm-mattermost/pull/2/files
[21:41] i give up https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1563271
[21:41] Bug #1563271: update-status hook errors when unable to connect
[21:50] i know i saw in the docs how to update a unit from local charm source, now i'm missing it
[21:50] any help?
[22:04] holocron: juju upgrade-charm --path /path/to/new-charm
[22:05] cory_fu: tvansteenburgh: is tests.yaml deployment_timeout in seconds or minutes? juju-deployer -t is seconds (https://github.com/juju-solutions/bundletester/blob/610801149ec214966b80e2766ca8760eb29a6f9e/bundletester/spec.py#L137) but the bundletester readme comment for this option says minutes.
[22:06] kwmonroe: seconds
[22:06] thx
[22:06] kwmonroe thanks
[22:07] kwmonroe: i don't see where it says minutes in the readme?
[22:08] tvansteenburgh: https://github.com/juju-solutions/bundletester/commit/c88015b890cfd17fa10e375e3e394e57758b9e7d
[22:09] tvansteenburgh: https://github.com/juju-solutions/bundletester/pull/62
[22:11] kwmonroe: thanks
[22:12] np