/srv/irclogs.ubuntu.com/2016/10/11/#juju.txt

=== frankban|afk is now known as frankban
=== rmcall_ is now known as rmcall
=== jamespag` is now known as jamespage
=== StoneTable is now known as aisrael
=== jcsacket- is now known as jcsackett
=== cmars` is now known as cmars
diddledanI see juju 2.0 rc2 is available and the updated charms store requires at least beta16. Xenial seems to currently carry beta15 so I'm unable to progress. Should I file an issue against ubuntu/xenial to request a SRU for juju to be updated?12:39
Randlema1diddledan: i doubt they didn't think of that..12:56
mgzdiddledan: you can use the ppa for a newer juju version on xenial for now12:57
mgzwe're in the process of getting rc3 back to xenial12:57
diddledancool12:57
Randlema1I also have a question! Regarding Nagios and JuJu. We added the Nagios charm and the NRPE charm. Added relations. But nagios refuses to monitor our disks :(...12:58
Randlema1Anyone who could shed a light on that? What could we be doing wrong.12:58
mgzdiddledan: see the release announcement: <https://lists.ubuntu.com/archives/juju/2016-October/007989.html>12:58
diddledanare we sure there is a package in the ppa? Candidate: 2.0~beta15-0ubuntu2.16.04.112:59
diddledanthat's from apt policy with the ppa enabled12:59
diddledanoh, /devel;13:00
diddledansorry, I have the juju/stable ppa . silly billy me :-p13:00
mgz:)13:00
mgzRandlema1: no idea without going off and looking at the nagios charm I'm afraid13:01
Randlema1mgz: np :)13:01
lazyPowermornin #juju o/13:12
=== frankban is now known as frankban|afk
=== barry` is now known as barry
vmorrismornin13:24
Randlema1evening13:25
SpauldingHello folks!13:35
Spauldinghttps://gist.github.com/pananormalny/35d2ae4f1651145ff0d8cfcd4a196ad513:35
SpauldingCould someone look at it and tell me why this reactive script is running in a loop?13:35
lazyPowerSpaulding a pastebin or github would be helpful13:44
SpauldinglazyPower: haha you joined too late ;)13:54
Spauldinghttps://gist.github.com/pananormalny/35d2ae4f1651145ff0d8cfcd4a196ad513:54
=== medberry is now known as med_
* lazyPower shakes a fist at connectivity issues14:00
Spauldinghmm, maybe I need to change check_call to call?14:06
magicaltroutSpaulding: you should be able to move all that apt stuff to layer.yaml i'd have thought14:13
marcoceppiSpaulding: yeah, if you use the apt layer, all that complexity goes away14:13
marcoceppiSpaulding: there's also a lot of other things to simplify14:13
magicaltroutSpaulding: also14:14
magicaltroutyou don't set14:14
magicaltroutsme.installed14:14
magicaltroutso it will always run the install hook14:14
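
A minimal sketch of the pattern magicaltrout and marcoceppi describe, assuming a layer.yaml that pulls in layer:apt and a guard state named sme.installed (the package names and handler below are illustrative, not taken from Spaulding's gist):

    # layer.yaml (shown here as a comment):
    #   includes: ['layer:basic', 'layer:apt']
    #   options:
    #     apt:
    #       packages: [perl, apache2]
    #
    # reactive/sme.py
    from charms.reactive import when, when_not, set_state
    from charmhelpers.core import hookenv

    @when('apt.installed.perl')      # the apt layer raises apt.installed.<package>
    @when_not('sme.installed')       # guard state so this handler doesn't loop
    def install_sme():
        hookenv.status_set('maintenance', 'configuring sme')
        # ... one-time setup (users, config files, etc.) ...
        set_state('sme.installed')   # without this the handler re-runs every hook
        hookenv.status_set('active', 'ready')
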
Spauldingok14:15
Spauldingunderstood14:15
Spauldingmarcoceppi: it's my first charm14:16
Spauldingso still learning how to do it properly..14:16
marcoceppiSpaulding: I'll give you a few updates14:16
=== frankban|afk is now known as frankban
marcoceppiSpaulding: to help get you on the right track14:16
Spaulding\o/14:16
=== scuttle|afk is now known as scuttlemonkey
Spauldingok, i've read something more about layer-apt14:23
Spauldinglooks promising...14:23
marcoceppiSpaulding: yeah, I think you'll like the result14:27
* marcoceppi continues to update your gist14:28
=== skayskayskay is now known as skay
=== skay is now known as Guest86296
marcoceppiSpaulding: is GID important?14:29
marcoceppiSpaulding: or is it just so you can add a user to that GID14:30
marcoceppiSpaulding: as in, could you let the system autoassign or does the software expect an explicit GID mapping14:30
Spauldingi mean... those numbers are not that important14:31
Spauldingbut users need to be assigned to specific groups14:31
marcoceppiright14:31
marcoceppigotchya14:31
Spauldingand gid >= 5000 so they won't collide with other groups... rather a future-proof hack...14:32
=== Guest86296 is now known as skay
Spauldingusers also get a high uid >= 500014:33
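
A hedged sketch of how a charm could create those accounts with explicit ids >= 5000 (group and user names here are made up):

    import subprocess

    GROUPS = {'smegroup': 5000}                # explicit gids >= 5000
    USERS = {'smeuser': (5000, 'smegroup')}    # uid and primary group

    def create_accounts():
        for group, gid in GROUPS.items():
            # groupadd returns non-zero if the group already exists; tolerate that
            subprocess.call(['groupadd', '--gid', str(gid), group])
        for user, (uid, group) in USERS.items():
            subprocess.call(['useradd', '--uid', str(uid), '--gid', group,
                             '--create-home', user])
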
marcoceppiSpaulding: yeah14:36
marcoceppiSpaulding: just about done14:36
Spaulding:)14:39
marcoceppiSpaulding: https://gist.github.com/marcoceppi/2eb1cf988f7f64eb535b290b1de2963214:46
marcoceppiSpaulding: there's a lot going on in there, I tried to leave comments where I could14:46
marcoceppiSpaulding: there's a lot to unpack, and I have to go for a bit, but lots of others here should be able to help, I'll try to check back in a bit14:47
magicaltroutmarcoceppi: can you write my charms please?14:48
Spauldingmarcoceppi: thank you! :)14:48
marcoceppiSpaulding: no problem, by the looks of it, you'll probably find the apache layer useful as well14:49
Spauldingmarcoceppi: yeah, i guess14:58
Spauldinghmm... it's hard to google some layers...14:59
Spauldingmarcoceppi: I can only see layers related to apache... but not strictly layer-apache..15:01
marcoceppiSpaulding: yeah, my bad. There's a search bar top right on http://interfaces.juju.solutions/ but it appears there's only apache-php and apache-wsgi15:08
marcoceppino base apache layer15:08
Spauldingexactly! but still I think I can use apache-php... even some parts of it15:10
Spauldingcause I'll need to manage some files15:11
marcoceppiSpaulding: yeah, when I create sharable layers, I usually do everything in my layer, then I find the parts that I see as reusable and strip them out into their own layer15:11
marcoceppiwe're long overdue for an apache layer, much like how we have an nginx layer15:12
Spauldingso you're working on an apache layer?15:14
Spauldingi mean - ubuntu team...15:14
marcoceppiSpaulding: not at the moment15:16
marcoceppiSpaulding: if I come across a project that needs an apache layer I'll probably do one, but most of the stuff I charm up uses nginx15:16
* marcoceppi signs off for a bit15:17
marcoceppiSpaulding: oh, well, I think we can take the apache-php layer and instead make it just apache and then make apache-php use the apache layer and php layer and just merge them there but the php layer hasn't been published yet (I'm still getting that one ironed out)15:18
SpauldingUnfortunately I can't use nginx15:19
Spauldingnot right now...15:19
Spauldingnot with this project (suexec, perl etc.)15:19
marcoceppiyeah, no worries15:19
marcoceppiI'm just commenting on why we don't really have a base apache layer, at least why I haven't created one15:19
Spauldingbut still, because of you now I see how juju and layers work15:19
Spauldingcause juju docs - hmm... they don't have enough information15:20
=== shawniverson is now known as spammy
lazyPowerSpaulding - filing bugs against http://github.com/juju/docs/issues  will help us target those areas that aren't clear and expand on the information there.  Specifically if you could call out missing concepts, verbiage, etc. that would have helped that would be tops.15:48
SpauldinglazyPower: sure, will do!16:27
lazyPowerthanks Spaulding :)16:36
=== frankban is now known as frankban|afk
stokachurick_h_: whats the status on https://bugs.launchpad.net/juju-release-tools/+bug/163103819:04
mupBug #1631038: Need /etc/sysctl.d/10-juju.conf <juju-release-tools:Triaged by torbaumann> <https://launchpad.net/bugs/1631038>19:04
stokachui think we're hitting this with our single system deployment of openstack19:04
SivaI have deployed an application and I find that the 'juju status' shows the machine in pending state, though the MAAS UI says the machine is 'Deployed'19:29
kwmonroecory_fu: i've been thinking about the bigtop hdfs smoke-test failing with < 3 units.  without a dfs-peer relation, can i detect how many peers a datanode might have?  if not, i'm thinking of running a dfsadmin report to get a count of live datanodes.  thoughts?19:30
SivaIt has been in pending state for 15 minutes now19:30
SivaIs there a log file I can look at to debug and see what is happening?19:30
cory_fukwmonroe: I don't think there's a way to count the units w/o a peer relation, though it would be trivial to add one.  But dfsadmin seems reasonable, too19:31
kwmonroeack cory_fu, dfsadmin is trivialier for me to add ;)19:31
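
A rough sketch of the dfsadmin approach (the report wording can differ between Hadoop releases, so the regex is an assumption):

    import re
    import subprocess

    def live_datanode_count():
        # 'hdfs dfsadmin -report' prints a cluster summary including live datanodes
        report = subprocess.check_output(['hdfs', 'dfsadmin', '-report'],
                                         universal_newlines=True)
        match = re.search(r'Live datanodes\s*\((\d+)\)', report)
        return int(match.group(1)) if match else 0

    # e.g. skip the smoke-test when fewer than 3 datanodes report in
    if live_datanode_count() < 3:
        print('skipping smoke-test: not enough live datanodes')
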
SivaAny help is much appreciated19:34
zeestratRandlema1: You hitting something like this with Nagios and the NRPE charm? https://bugs.launchpad.net/charms/+source/nagios/+bug/160573319:36
mupBug #1605733: Nagios charm does not add default host checks to nagios <family> <nagios> <nrpe> <unknown> <nagios (Juju Charms Collection):New> <https://launchpad.net/bugs/1605733>19:36
SivaI am seeing this status for 1 hr19:39
Siva1         started  192.168.1.252  4y3hkf                trusty  default19:39
Siva4         pending  192.168.1.29   4y3hkg                trusty  default19:39
Sivawhich log file should I look at to see why the machine is not moving to 'started' state?19:40
kwmonroeSiva: can you ssh to the pending unit?  either "juju ssh 4" or "ssh ubuntu@192.168.1.29"?19:42
kwmonroeSiva: also, does "juju status 4 --format=yaml" tell you more about why the machine is pending?19:43
SivaNope. I am not able to ssh into it19:45
SivaI see on teh machine console that curl call to juju-controller to connect on port 17070 is failing19:46
SivaI see on the machine console that curl call to juju-bootstrap node to connect on port 17070 is failing19:46
SivaHere is the output19:50
Sivahttp://pastebin.ubuntu.com/23309745/19:50
kwmonroeSiva, is there a way to get onto machine 4 from maas?  i'm curious what your /var/log/cloud-init* logs look like20:06
rick_h_stokachu: it's setup to be in GA to get the extra config so you get a couple more LXD ootb, but it's not a big swing20:19
rick_h_stokachu: to get the big changes you need a logout/back in or reboot and we can't do it ootb20:19
stokachurick_h_: gotcha20:19
stokachurick_h_: ill document this on our side20:19
rick_h_stokachu: yea, we now show the link to scaling lxd wiki page on bootstrap for lxd because of it20:20
stokachurick_h_: you have that link so i can keep the message the same?20:20
rick_h_stokachu: https://github.com/lxc/lxd/blob/master/doc/production-setup.md20:34
stokachuthanks20:35
bdxhows it going everyone?20:59
bdxis there a way to specify what subnet I bootstrap to?20:59
rick_h_bdx: sorry, a couple of folks that would know were on holiday last week. I'm sending an email right now to find out and will get back to you tomorrow21:00
rick_h_bdx: they're in the EU and at EOD atm21:00
rick_h_but will be back in the morning21:00
rick_h_bdx: this is on AMZ correct?21:01
rick_h_bdx: or something else?21:01
bdxrick_h_: thats great, thanks21:07
bdxrick_h_: yea, aws21:07
rick_h_bdx: and this is not going to work with the vpc-id constraint?21:08
rick_h_bdx: because there's > 1 subnet or any other details I can pull out?21:08
bdxrick_h_: exactly ... I have 50+ subnets and growing21:09
rick_h_bdx: k, will see.21:10
bdxrick_h_: thx21:10
katcosomeone here was asking about lxd and zfs?21:24
rick_h_katco: sorry, the canonical one21:24
katcorick_h_: oh... i cannot atm :( my server is down21:24
katcorick_h_: hd went out21:24
rick_h_katco: ok, but can you join from the current client?21:24
katcorick_h_: i don't have any certs or anything21:25
katcorick_h_: i can try and get that set up..21:25
ahasenackkatco: hi, I was just wondering how juju calls lxd init21:25
ahasenackif it requests zfs or not21:25
ahasenackjuju 2 rc3 specifically21:25
ahasenackon xenial21:25
katcoahasenack: try running bootstrap with --debug; it should provide some information about the rest calls21:26
ahasenackkatco: this is the maas provider, where I did a deploy --to lxd:021:26
ahasenackit's that lxd21:26
katcoahasenack: ah21:26
ahasenackkatco: I'm seeing abysmal i/o performance inside that container, and I checked and saw that the host has the lxd containers backed by one big zfs image file21:27
ahasenackI haven't seen this before, and I can't tell if it's new21:27
katcoahasenack: so you're wondering if it requests zfs by default?21:28
ahasenackyes21:28
ahasenackif not, there are other clues I can chase21:28
ahasenacklike, if zfsutils is installed, then lxd will pick zfs by default, I'm told21:28
katcoahasenack: it certainly looks like if series == "xenial" it's going to initialize a zfs pool21:32
ahasenackhmmm21:32
katcoahasenack: trying to figure out where that gets used...21:32
ahasenackkatco: does it create the pool beforehand, file-backed?21:32
ahasenackI got a 100G pool21:32
ahasenackin the host21:32
katcoahasenack: yeah: https://github.com/juju/juju/blob/78273ef59ee77c0be55f761346917cfe63842dcd/container/lxd/initialisation_linux.go#L13621:33
katcoahasenack: ah, so it looks like it's telling lxd to use zfs from the get-go... juju doesn't do anything else after that21:33
katcoahasenack: all created containers will allocate to that pool backed by zfs21:34
ahasenack90%21:34
katcoahasenack: alarming, but at least it's sparse21:35
katcoajmitch: i don't know where that magic number came from21:35
ahasenackkatco: I think lxd caps it at 100G21:36
katcoajmitch: oops sorry for misping21:36
ahasenackkatco: ok, so if xenial, that happens. Else, it's just "lxd init"?21:37
katcoahasenack: i believe it will just use the presumably running lxd daemon. init in this case is just to initialize a storage pool21:39
ahasenackgot it, thanks21:40
katcoahasenack: hth21:40
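
For reference, the initialisation code katco links appears to boil down to something like the following; the exact flag set is an assumption based on LXD 2.x's lxd init options, not copied from the juju source:

    import subprocess

    def configure_zfs_pool(size_gb=100, pool_name='lxd'):
        # on xenial, ask lxd to create a sparse, file-backed zfs loop device
        subprocess.check_call([
            'lxd', 'init', '--auto',
            '--storage-backend', 'zfs',
            '--storage-pool', pool_name,
            '--storage-create-loop', str(size_gb),
        ])

    # to see what an existing host ended up with:
    #   zpool list
    #   lxc config get storage.zfs_pool_name
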
vmorrisbdx rick_h_: this really needs to be updated https://jujucharms.com/docs/stable/network-spaces21:42
vmorrisI guess you can only work with spaces if MAAS is the undercloud, but in juju 2 it doesn't seem that spaces are configurable directly21:42
jgriffithsIs there any way to use a local juju controller (lxd) to deploy bundles to a remote MAAS system? It seems a waste to spin up an entire server just to act as a controller.21:45
jgriffithsSorry if that is a dumb question. Just looking at maas/juju for the first time and need to know if I need an extra piece of physical hardware for the juju controller.21:46
vmorrisjgriffiths: i asked a similar question but yeah, you've gotta have a whole dedicated machine in your MAAS cluster for the juju controller21:50
vmorrisjgriffiths: i also find it silly, but currently running MAAS all with KVM VMs, so I can get away with a tiny VM for the controller21:51
jgriffithsvmorris: Thanks! I've been looking around the internet for a couple hours trying to find out how to do it before I started thinking that it wasn't even possible. It's a huge waste in a bare metal environment. So, it looks like I need a physical MAAS server, a physical controller, and all the nodes for Openstack.21:54
jgriffithsAnd a grammar teacher.21:54
vmorrisjgriffiths: yeah if you're looking to run the openstack-base bundle, it's really 5 machines :P21:55
vmorrisplus the maas controller, yep21:55
vmorrisjgriffiths: my current configuration has maas-controller and 4 maas machines all as KVM guests on the same physical host21:56
vmorrissorry, 5 maas machines21:56
vmorrisjgriffiths: it's stable, but complex.. I've been exploring the openstack-on-lxd approach for about a week, and it's not quite as stable in my experience21:58
vmorrisi still like the pure lxd approach though21:58
jgriffithsOne last stupid question then. Do I need a controller if I'm manually deploying the individual charms? And now that you mention it, is anybody making an openstack-base style bundle without any containers at all (apt-get everything)?22:01
jgriffithsI'm still learning all this and have a lot of holes in my knowledge of charms.22:03
vmorrisjgriffiths: I'm not aware of any way to deploy a charm without a controller22:16
vmorrisjgriffiths: I'm also unaware of anyone at canonical doing anything to deploy openstack from a vendor perspective outside of juju22:16
vmorrisbut i am not an expert in the matter!22:16
jgriffithsThank you very much vmorris!22:18
vmorrissure :)22:18
spaokjgriffiths: you can deploy the openstack bundle to whatever22:20
spaokcontainers are just easy low overhead servers22:21
vmorrisspaok: not without a juju controller bootstrapped22:21
vmorristhat was the first q22:21
spaokvmorris: correct22:21
spaokI was talking about the container part22:21
vmorrisah ok, but still they're going to end up running services in containers on the machines22:22
vmorrisright?22:22
spaoknot really, you can use KVM22:22
spaokwe don't run containers for our vm's yet22:23
vmorrisah yeah, that's right re: KVM22:23
vmorrisdo you have a link to that? I saw the bug report https://bugs.launchpad.net/juju/+bug/154766522:24
mupBug #1547665: juju 2.0 no longer supports KVM for local provider <2.0-count> <juju:Triaged> <https://launchpad.net/bugs/1547665>22:24
jgriffithsI was referencing the "openstack-on-lxd approach" not being stable concept and thought vmorris was suggesting not using containers for the components.22:24
vmorrisspaok: or are you just talking about juju 1?22:24
vmorrisjgriffiths: it's just not stable for me, i've heard it works.. just hasn't been my experience yet22:25
jgriffithsI was referencing the "openstack-on-lxd approach" not being stable comment and thought vmorris was suggesting not using containers for the components.22:25
spaokI'm saying KVM as the compute type for openstack22:25
spaokas for the bundle part, I would target physical servers22:25
spaokin MAAS or something22:25
jgriffithsOh. So you're not running any services inside containers?22:25
vmorrisspaok: alright, then we're talking about different things22:25
spaokvmorris: ya, there's the compute VMs and the OpenStack services; the latter can be deployed to physical servers, LXD containers, or even KVM (maybe that bug you referenced)22:26
spaokI was saying you can build on all physical servers with KVM type compute nodes and have zero containers22:27
vmorrisspaok: using the openstack-base bundle?22:27
spaokya, you just modify it slightly22:28
spaokspecify machines and make them the targets for the services22:28
vmorrisspaok: yep alright, good point there22:28
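
A hedged sketch of the kind of edit spaok describes, using the standard bundle machines/to placement keys (service names are from openstack-base; the machine constraints and file paths are hypothetical):

    import yaml

    with open('bundle.yaml') as f:               # the openstack-base bundle
        bundle = yaml.safe_load(f)

    # declare the physical machines MAAS should hand to juju
    bundle['machines'] = {
        '0': {'series': 'xenial', 'constraints': 'tags=compute'},
        '1': {'series': 'xenial', 'constraints': 'tags=storage'},
    }

    # target services at whole machines instead of lxd containers
    bundle['services']['nova-compute']['to'] = ['0']
    bundle['services']['ceph-osd']['to'] = ['1']

    with open('bundle-bare-metal.yaml', 'w') as f:
        yaml.safe_dump(bundle, f, default_flow_style=False)
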
spaokthough I would say it's a waste of hardware22:28
spaokand not how we are building our new production systems22:29
spaokwe us containers22:29
spaoks/us/use/22:29
vmorrisyep22:29
spaokjgriffiths: I think containers for openstack services is pretty stable, the questionable one is LXD as the compute backend22:30
spaokbut that does work too22:30
spaokwhich is the bundle you referred to22:30
jgriffithsThat's what I thought. It seemed pretty stable. Thanks spaok22:31
spaokthe new HA stuff using DNS is pretty good too, we had a lot of issues with the hacluster charm way of doing it22:31
spaokbut our setup is fairly off the beaten path also22:32
vmorrisspaok: do you have any published architectures or white papers?22:32
spaokno not really, our original design was based on https://github.com/wolsen/ods-tokyo-demo/blob/master/ods-tokyo.yaml22:33
spaokthat we added a bunch of overlay to and some other stuff22:33
spaokthe new "2.0" version of our stuff we stripped most of that out22:33
vmorriscool, thanks22:34
smgollermaybe I should start here instead. :)23:46
smgollerso I've deployed openstack via the charms, and I'm working on increasing the size of the cluster. Now, the readme says to scale out neutron to do "juju add-unit neutron-gateway", but I only need neutron-openvswitch on the additional nodes. But if I try to add-unit neutron-openvswitch it complains about it being a sub-charm or somesuch. Any ideas? In the initial deployment, neutron-gateway only ends up on 1 machine instead of all 4.23:46
