=== frankban|afk is now known as frankban
=== rmcall_ is now known as rmcall
=== jamespag` is now known as jamespage
=== StoneTable is now known as aisrael
=== jcsacket- is now known as jcsackett
=== cmars` is now known as cmars
[12:39] I see juju 2.0 rc2 is available and the updated charm store requires at least beta16. Xenial seems to currently carry beta15, so I'm unable to progress. Should I file an issue against ubuntu/xenial to request an SRU for juju to be updated?
[12:56] diddledan: i doubt they didn't think of that..
[12:57] diddledan: you can use the ppa for a newer juju version on xenial for now
[12:57] we're in the process of getting rc3 back to xenial
[12:57] cool
[12:58] I also have a question! Regarding Nagios and Juju. We added the Nagios charm and the NRPE charm. Added relations. But Nagios refuses to monitor our disks :(...
[12:58] Anyone who could shed some light on that? What could we be doing wrong?
[12:58] diddledan: see the release announcement:
[12:59] are we sure there is a package in the ppa? Candidate: 2.0~beta15-0ubuntu2.16.04.1
[12:59] that's from apt policy with the ppa enabled
[13:00] oh, /devel;
[13:00] sorry, I have the juju/stable ppa. silly billy me :-p
[13:00] :)
[13:01] Randlema1: no idea without going off and looking at the nagios charm, I'm afraid
[13:01] mgz: np :)
[13:12] mornin #juju o/
=== frankban is now known as frankban|afk
=== barry` is now known as barry
[13:24] mornin
[13:25] evening
[13:35] Hello folks!
[13:35] https://gist.github.com/pananormalny/35d2ae4f1651145ff0d8cfcd4a196ad5
[13:35] Could someone look at it and tell me why this reactive script is running in a loop?
[13:44] Spaulding: a pastebin or github would be helpful
[13:54] lazyPower: haha you joined too late ;)
[13:54] https://gist.github.com/pananormalny/35d2ae4f1651145ff0d8cfcd4a196ad5
=== medberry is now known as med_
[14:00] * lazyPower shakes a fist at connectivity issues
[14:06] hmm, maybe I need to change check_call to call?
[14:13] Spaulding: you should be able to move all that apt stuff to layer.yaml, I'd have thought
[14:13] Spaulding: yeah, if you use the apt layer, all that complexity goes away
[14:13] Spaulding: there are also a lot of other things to simplify
[14:14] Spaulding: also
[14:14] you don't set
[14:14] sme.installed
[14:14] so it will always run the install hook
[14:15] ok
[14:15] understood
[14:16] marcoceppi: it's my first charm
[14:16] so still learning how to do it properly..
[14:16] Spaulding: I'll give you a few updates
=== frankban|afk is now known as frankban
[14:16] Spaulding: to help get you on the right track
[14:16] \o/
=== scuttle|afk is now known as scuttlemonkey
[14:23] ok, i've read something more about layer-apt
[14:23] looks promising...
[14:27] Spaulding: yeah, I think you'll like the result
[14:28] * marcoceppi continues to update your gist
=== skayskayskay is now known as skay
=== skay is now known as Guest86296
[14:29] Spaulding: is GID important?
[14:30] Spaulding: or is it just so you can add a user to that GID
[14:30] Spaulding: as in, could you let the system autoassign, or does the software expect an explicit GID mapping?
[14:31] i mean... those numbers are not that important
[14:31] but users need to be assigned to specific groups
[14:31] right
[14:31] gotchya
[14:32] and gid >= 5000 so they'll not collide with other groups... rather future-proof hack...
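A minimal sketch of the fix being suggested above for Spaulding's reactive charm: let layer:apt own the package installs (declared in layer.yaml) and guard the install handler behind a sme.installed state so it stops re-running on every hook. The package name and the handler body here are illustrative assumptions, not code from the actual gist.

```python
# reactive/sme.py -- illustrative sketch only, not the charm from the gist above.
# layer.yaml would pull in the apt layer and declare the packages, e.g.:
#   includes: ['layer:basic', 'layer:apt']
#   options:
#     apt:
#       packages: ['apache2']   # assumed package; list whatever the charm needs
from charms.reactive import when, when_not, set_state
from charmhelpers.core import hookenv


@when('apt.installed.apache2')   # state set by layer:apt once the package is installed
@when_not('sme.installed')       # without this guard the handler runs on every hook
def install_sme():
    hookenv.status_set('maintenance', 'setting up sme')
    # ... create groups/users (gid/uid >= 5000), write config files, etc. ...
    set_state('sme.installed')   # mark it done so the handler doesn't loop
    hookenv.status_set('active', 'ready')
```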
=== Guest86296 is now known as skay
[14:33] users also got high uids >= 5000
[14:36] Spaulding: yeah
[14:36] Spaulding: just about done
[14:39] :)
[14:46] Spaulding: https://gist.github.com/marcoceppi/2eb1cf988f7f64eb535b290b1de29632
[14:46] Spaulding: there's a lot going on in there, I tried to leave comments where I could
[14:47] Spaulding: there's a lot to unpack, and I have to go for a bit, but lots of others here should be able to help, I'll try to check back in a bit
[14:48] marcoceppi: can you write my charms please?
[14:48] marcoceppi: thank you! :)
[14:49] Spaulding: no problem, by the looks of it, you'll probably find the apache layer useful as well
[14:58] marcoceppi: yeah, i guess
[14:59] hmm... it's hard to google some layers...
[15:01] marcoceppi: I can only see layers related to apache... but not strictly layer-apache..
[15:08] Spaulding: yeah, my bad. There's a search bar top right on http://interfaces.juju.solutions/ but it appears there's only apache-php and apache-wsgi
[15:08] no base apache layer
[15:10] exactly! but still I think I can use apache-php... even some parts of it
[15:11] cause I'll need to manage some files
[15:11] Spaulding: yeah, when I create sharable layers, I usually do everything in my layer, then I find the parts that I see as reusable and strip them out into their own layer
[15:12] we're long overdue for an apache layer, much like how we have an nginx layer
[15:14] so you're working on an apache layer?
[15:14] i mean - the ubuntu team...
[15:16] Spaulding: not at the moment
[15:16] Spaulding: if I come across a project that needs an apache layer I'll probably do one, but most of the stuff I charm up uses nginx
[15:17] * marcoceppi signs off for a bit
[15:18] Spaulding: oh, well, I think we can take the apache-php layer and instead make it just apache, and then make apache-php use the apache layer and php layer and just merge them there, but the php layer hasn't been published yet (I'm still getting that one ironed out)
[15:19] Unfortunately I can't use nginx
[15:19] not right now...
[15:19] not with this project (suexec, perl etc.)
[15:19] yeah, no worries
[15:19] I'm just commenting on why we don't really have a base apache layer, at least why I haven't created one
[15:19] but still, because of you I now see how juju and layers work
[15:20] cause juju docs - hmm... they don't have enough information
=== shawniverson is now known as spammy
[15:48] Spaulding - filing bugs against http://github.com/juju/docs/issues will help us target those areas that aren't clear and expand on the information there. Specifically, if you could call out missing concepts, verbiage, etc. that would have helped, that would be tops.
[16:27] lazyPower: sure, will do!
[16:36] thanks Spaulding :)
=== frankban is now known as frankban|afk
[19:04] rick_h_: what's the status on https://bugs.launchpad.net/juju-release-tools/+bug/1631038
[19:04] Bug #1631038: Need /etc/sysctl.d/10-juju.conf
[19:04] i think we're hitting this with our single-system deployment of openstack
[19:29] I have deployed an application and I find that 'juju status' shows the machine in pending state, though the MAAS UI says the machine is 'Deployed'
[19:30] cory_fu: i've been thinking about the bigtop hdfs smoke-test failing with < 3 units. without a dfs-peer relation, can i detect how many peers a datanode might have? if not, i'm thinking of running a dfsadmin report to get a count of live datanodes. thoughts?
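For reference, a rough sketch of the dfsadmin idea kwmonroe floats here: shell out to `hdfs dfsadmin -report` and count the live datanodes. It assumes the hdfs CLI is on the PATH and that the report contains a line like "Live datanodes (3):" (Hadoop 2.x wording); adjust the pattern if your version reports it differently.

```python
# Count live datanodes without a peer relation by parsing the dfsadmin report.
# Assumption: the report includes a line such as "Live datanodes (3):".
import re
import subprocess


def live_datanode_count():
    report = subprocess.check_output(
        ['hdfs', 'dfsadmin', '-report'], universal_newlines=True)
    match = re.search(r'Live datanodes\s*\((\d+)\)', report)
    return int(match.group(1)) if match else 0


if __name__ == '__main__':
    # e.g. skip the smoke test when fewer than 3 datanodes have registered
    print(live_datanode_count())
```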
[19:30] It has been in the pending state for 15 minutes now
[19:30] Is there a log file I can look at to debug and see what is happening?
[19:31] kwmonroe: I don't think there's a way to count the units w/o a peer relation, though it would be trivial to add one. But dfsadmin seems reasonable, too
[19:31] ack cory_fu, dfsadmin is trivialier for me to add ;)
[19:34] Any help is much appreciated
[19:36] Randlema1: You hitting something like this with Nagios and the NRPE charm? https://bugs.launchpad.net/charms/+source/nagios/+bug/1605733
[19:36] Bug #1605733: Nagios charm does not add default host checks to nagios
[19:39] I am seeing this status for 1 hr
[19:39] 1  started  192.168.1.252  4y3hkf  trusty  default
[19:39] 4  pending  192.168.1.29   4y3hkg  trusty  default
[19:40] which log file should I look at to see why the machine is not moving to the 'started' state?
[19:42] Siva: can you ssh to the pending unit? either "juju ssh 4" or "ssh ubuntu@192.168.1.29"?
[19:43] Siva: also, does "juju status 4 --format=yaml" tell you more about why the machine is pending?
[19:45] Nope. I am not able to ssh into it
[19:46] I see on the machine console that the curl call to the juju controller on port 17070 is failing
[19:46] I see on the machine console that the curl call to the juju bootstrap node on port 17070 is failing
[19:50] Here is the output
[19:50] http://pastebin.ubuntu.com/23309745/
[20:06] Siva, is there a way to get onto machine 4 from maas? i'm curious what your /var/log/cloud-init* logs look like
[20:19] stokachu: it's set up to be in GA to get the extra config so you get a couple more LXD ootb, but it's not a big swing
[20:19] stokachu: to get the big changes you need to log out/back in or reboot, and we can't do that ootb
[20:19] rick_h_: gotcha
[20:19] rick_h_: i'll document this on our side
[20:20] stokachu: yea, we now show the link to the scaling-lxd wiki page on bootstrap for lxd because of it
[20:20] rick_h_: do you have that link so i can keep the message the same?
[20:34] stokachu: https://github.com/lxc/lxd/blob/master/doc/production-setup.md
[20:35] thanks
[20:59] how's it going everyone?
[20:59] is there a way to specify what subnet I bootstrap to?
[21:00] bdx: sorry, a couple of folks that would know were on holiday last week. I'm sending an email right now to find out and will get back to you tomorrow
[21:00] bdx: they're in the EU and it's EOD atm
[21:00] but will be back in the morning
[21:01] bdx: this is on AMZ, correct?
[21:01] bdx: or something else?
[21:07] rick_h_: that's great, thanks
[21:07] rick_h_: yea, aws
[21:08] bdx: and this is not going to work with the vpc-id constraint?
[21:08] bdx: because there's > 1 subnet, or any other details I can pull out?
[21:09] rick_h_: exactly ... I have 50+ subnets, and growing
[21:10] bdx: k, will see.
[21:10] rick_h_: thx
[21:24] someone here was asking about lxd and zfs?
[21:24] katco: sorry, the canonical one
[21:24] rick_h_: oh... i cannot atm :( my server is down
[21:24] rick_h_: hd went out
[21:24] katco: ok, but can you join from the current client?
[21:25] rick_h_: i don't have any certs or anything
[21:25] rick_h_: i can try and get that set up..
[21:25] katco: hi, I was just wondering how juju calls lxd init
[21:25] if it requests zfs or not
[21:25] juju 2 rc3 specifically
[21:25] on xenial
[21:26] ahasenack: try running bootstrap with --debug; it should provide some information about the rest calls
[21:26] katco: this is the maas provider, where I did a deploy --to lxd:0
[21:26] it's that lxd
[21:26] ahasenack: ah
[21:27] katco: I'm seeing abysmal i/o performance inside that container, and I checked and saw that the host has the lxd containers backed by one big zfs image file
[21:27] I haven't seen this before, and I can't tell if it's new
[21:28] ahasenack: so you're wondering if it requests zfs by default?
[21:28] yes
[21:28] if not, there are other clues I can chase
[21:28] like, if zfsutils is installed, then lxd will pick zfs by default, I'm told
[21:32] ahasenack: it certainly looks like if series == "xenial" it's going to initialize a zfs pool
[21:32] hmmm
[21:32] ahasenack: trying to figure out where that gets used...
[21:32] katco: does it create the pool beforehand, file-backed?
[21:32] I got a 100G pool
[21:32] on the host
[21:33] ahasenack: yeah: https://github.com/juju/juju/blob/78273ef59ee77c0be55f761346917cfe63842dcd/container/lxd/initialisation_linux.go#L136
[21:33] ahasenack: ah, so it looks like it's telling lxd to use zfs from the get-go... juju doesn't do anything else after that
[21:34] ahasenack: all created containers will allocate to that pool backed by zfs
[21:34] 90%
[21:35] ahasenack: alarming, but at least it's sparse
[21:35] ajmitch: i don't know where that magic number came from
[21:36] katco: I think lxd caps it at 100G
[21:36] ajmitch: oops, sorry for the misping
[21:37] katco: ok, so if xenial, that happens. Else, it's just "lxd init"?
[21:39] ahasenack: i believe it will just use the presumably running lxd daemon. init in this case is just to initialize a storage pool
[21:40] got it, thanks
[21:40] ahasenack: hth
[21:42] bdx rick_h_: this really needs to be updated: https://jujucharms.com/docs/stable/network-spaces
[21:42] I guess you can only work with spaces if MAAS is the undercloud, but in juju 2 it doesn't seem that spaces are configurable directly
[21:45] Is there any way to use a local juju controller (lxd) to deploy bundles to a remote MAAS system? It seems a waste to spin up an entire server just to act as a controller.
[21:46] Sorry if that is a dumb question. Just looking at maas/juju for the first time and need to know if I need an extra piece of physical hardware for the juju controller.
[21:50] jgriffiths: i asked a similar question, but yeah, you've gotta have a whole dedicated machine in your MAAS cluster for the juju controller
[21:51] jgriffiths: i also find it silly, but i'm currently running MAAS all with KVM VMs, so I can get away with a tiny VM for the controller
[21:54] vmorris: Thanks! I've been looking around the internet for a couple of hours trying to find out how to do it before I started thinking that it wasn't even possible. It's a huge waste in a bare metal environment. So, it looks like I need a physical MAAS server, a physical controller, and all the nodes for OpenStack.
[21:54] And a grammar teacher.
[21:55] jgriffiths: yeah, if you're looking to run the openstack-base bundle, it's really 5 machines :P
[21:55] plus the maas controller, yep
[21:56] jgriffiths: my current configuration has the maas-controller and 4 maas machines all as KVM guests on the same physical host
[21:56] sorry, 5 maas machines
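A rough diagnostic in the spirit of what ahasenack is checking: list the ZFS pools on the host and flag any that are backed by a plain image file (the loop-backed pool juju asks LXD to create on xenial), since a file-backed pool is a plausible cause of poor container I/O. The detection heuristic and the /var/lib/lxd path are assumptions about a default setup, and zpool usually needs root privileges to run.

```python
# Flag ZFS pools backed by a plain file rather than a block device.
# Assumes zfsutils-linux is installed; a file-backed vdev shows up in
# 'zpool status' as an absolute path (e.g. /var/lib/lxd/zfs.img).
import subprocess


def file_backed_pools():
    names = subprocess.check_output(
        ['zpool', 'list', '-H', '-o', 'name'], universal_newlines=True).split()
    suspects = []
    for name in names:
        status = subprocess.check_output(
            ['zpool', 'status', name], universal_newlines=True)
        if '.img' in status or '/var/lib/lxd' in status:
            suspects.append(name)
    return suspects


if __name__ == '__main__':
    pools = file_backed_pools()
    print(pools if pools else 'no file-backed zfs pools found')
```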
[21:58] jgriffiths: it's stable, but complex.. I've been exploring the openstack-on-lxd approach for about a week, and it's not quite as stable in my experience
[21:58] i still like the pure lxd approach though
[22:01] One last stupid question then. Do I need a controller if I'm manually deploying the individual charms? And now that you mention it, is anybody making an openstack-base-style bundle without any containers at all (apt-get everything)?
[22:03] I'm still learning all this and have a lot of holes in my knowledge of charms.
[22:16] jgriffiths: I'm not aware of any way to deploy a charm without a controller
[22:16] jgriffiths: I'm also unaware of anyone at canonical doing anything to deploy openstack from a vendor perspective outside of juju
[22:16] but i am not an expert in the matter!
[22:18] Thank you very much vmorris!
[22:18] sure :)
[22:20] jgriffiths: you can deploy the openstack bundle to whatever
[22:21] containers are just easy, low-overhead servers
[22:21] spaok: not without a juju controller bootstrapped
[22:21] that was the first q
[22:21] vmorris: correct
[22:21] I was talking about the container part
[22:22] ah ok, but still, they're going to end up running services in containers on the machines
[22:22] right?
[22:22] not really, you can use KVM
[22:23] we don't run containers for our VMs yet
[22:23] ah yeah, that's right re: KVM
[22:24] do you have a link to that? I saw the bug report https://bugs.launchpad.net/juju/+bug/1547665
[22:24] Bug #1547665: juju 2.0 no longer supports KVM for local provider <2.0-count>
[22:24] I was referencing the "openstack-on-lxd approach" not being stable concept and thought vmorris was suggesting not using containers for the components.
[22:24] spaok: or are you just talking about juju 1?
[22:25] jgriffiths: it's just not stable for me, i've heard it works.. just hasn't been my experience yet
[22:25] I was referencing the "openstack-on-lxd approach" not being stable comment and thought vmorris was suggesting not using containers for the components.
[22:25] I'm saying KVM as the compute type for openstack
[22:25] as for the bundle part, I would target physical servers
[22:25] in MAAS or something
[22:25] Oh. So you're not running any services inside containers?
[22:26] spaok: alright, then we're talking about different things
[22:26] vmorris: ya, there's the compute VMs and the OpenStack services; the latter can be deployed to physical servers, LXD containers, or even KVM (maybe that bug you referenced)
[22:27] I was saying you can build on all physical servers with KVM-type compute nodes and have zero containers
[22:27] spaok: using the openstack-base bundle?
[22:28] ya, you just modify it slightly
[22:28] specify machines and make them the targets for the services
[22:28] spaok: yep, alright, good point there
[22:28] though I would say it's a waste of hardware
[22:29] and not how we are building our new production systems
[22:29] we us containers
[22:29] s/us/use/
[22:29] yep
[22:30] jgriffiths: I think containers for openstack services are pretty stable, the questionable one is LXD as the compute backend
[22:30] but that does work too
[22:30] which is the bundle you referred to
[22:31] That's what I thought. It seemed pretty stable. Thanks spaok
[22:31] the new HA stuff using DNS is pretty good too, we had a lot of issues with the hacluster charm way of doing it
[22:32] but our setup is fairly off the beaten path also
[22:32] spaok: do you have any published architectures or white papers?
[22:33] no, not really, our original design was based on https://github.com/wolsen/ods-tokyo-demo/blob/master/ods-tokyo.yaml
[22:33] that we added a bunch of overlay to, and some other stuff
[22:33] in the new "2.0" version of our stuff we stripped most of that out
[22:34] cool, thanks
[23:23] Fgffd
[23:23] ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHXecAJkgNgxXpFfoSV+RwB2JwSoRASTUboa6FaAYU8XkCCQlx1K+jlvjXJY5ItDzmJRfQ1Q+07X7RfPuE4Ditjy9g8jwpdAnA4IYTN5b9QYkdxwbZPY7Jrsw8eYbnWsBWTDo3CKjxZyeglUq/cue8w0Rjw7FKwcIa4PvLMZo8V/H+nD30Y0MRtY1p2NYFUfZvdyNiIeB65k2ONiPf66NVa7Ywm63oShLztmyTY2Oy3VY5BYDCFatv3/PjagWyICdPrvWH2SdTK4Zo8c8jVyr4cA3JsEG2S11s4sKFFNKXA+epqQclyrD2CpcfG8uxHCdirQGThY0d1iUa6nf9pIaz lin
[23:46] maybe I should start here instead. :)
[23:46] so I've deployed openstack via the charms, and I'm working on increasing the size of the cluster. Now, the readme says to scale out neutron you do "juju add-unit neutron-gateway", but I only need neutron-openvswitch on the additional nodes. But if I try to add-unit neutron-openvswitch it complains about it being a sub-charm or somesuch. Any ideas? In the initial deployment, neutron-gateway only ends up on 1 machine instead of all 4.