=== frankban|afk is now known as frankban | ||
=== rmcall_ is now known as rmcall | ||
=== jamespag` is now known as jamespage | ||
=== StoneTable is now known as aisrael | ||
=== jcsacket- is now known as jcsackett | ||
=== cmars` is now known as cmars | ||
diddledan | I see juju 2.0 rc2 is available and the updated charms store requires at least beta16. Xenial seems to currently carry beta15 so I'm unable to progress. Should I file an issue against ubuntu/xenial to request a SRU for juju to be updated? | 12:39 |
---|---|---|
Randlema1 | diddledan: i doubt they didn't think of that.. | 12:56 |
mgz | diddledan: you can use the ppa for a newer juju version on xenial for now | 12:57 |
mgz | we're in the process of getting rc3 back to xenial | 12:57 |
diddledan | cool | 12:57 |
Randlema1 | I also have a question! Regarding Nagios and JuJu. We added the Nagios charm and the NRPE charm. Added relations. But nagios refuses to monitor our disks :(... | 12:58 |
Randlema1 | Anyone who could shed a light on that? What could we be doing wrong. | 12:58 |
mgz | diddledan: see the release announcement: <https://lists.ubuntu.com/archives/juju/2016-October/007989.html> | 12:58 |
diddledan | are we sure there is a package in the ppa? Candidate: 2.0~beta15-0ubuntu2.16.04.1 | 12:59 |
diddledan | that's from apt policy with the ppa enabled | 12:59 |
diddledan | oh, /devel; | 13:00 |
diddledan | sorry, I have the juju/stable ppa . silly billy me :-p | 13:00 |
mgz | :) | 13:00 |
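What mgz points at here amounts to switching from the juju/stable PPA to the one carrying the release candidates. A rough sketch, assuming ppa:juju/devel is the right source for the 2.0 RCs (the release announcement linked above is authoritative):

```
# Pull a newer juju from a PPA instead of the xenial archive.
# ppa:juju/devel is assumed here; confirm against the announcement.
sudo add-apt-repository -y ppa:juju/devel
sudo apt update
apt policy juju      # candidate should now be newer than 2.0~beta15
sudo apt install juju
```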
mgz | Randlema1: no idea without going off and looking at the nagios charm I'm afraid | 13:01 |
Randlema1 | mgz: np :) | 13:01 |
lazyPower | mornin #juju o/ | 13:12 |
=== frankban is now known as frankban|afk | ||
=== barry` is now known as barry | ||
vmorris | mornin | 13:24 |
Randlema1 | evening | 13:25 |
Spaulding | Hello folks! | 13:35 |
Spaulding | https://gist.github.com/pananormalny/35d2ae4f1651145ff0d8cfcd4a196ad5 | 13:35 |
Spaulding | Could someone look at it and tell me why this reactive script is running in a loop? | 13:35 |
lazyPower | Spaulding a pastebin or github would be helpful | 13:44 |
Spaulding | lazyPower: haha you joined too late ;) | 13:54 |
Spaulding | https://gist.github.com/pananormalny/35d2ae4f1651145ff0d8cfcd4a196ad5 | 13:54 |
=== medberry is now known as med_ | ||
* lazyPower shakes a fist at connectivity issues | 14:00 | |
Spaulding | hmm, maybe I need to change check_call to call? | 14:06 |
magicaltrout | Spaulding: you should be able to move all that apt stuff to layer.yaml i'd have thought | 14:13 |
marcoceppi | Spaulding: yeah, if you use the apt layer, all that complexity goes away | 14:13 |
marcoceppi | Spaulding: there's also a lot of other things to simplify | 14:13 |
magicaltrout | Spaulding: also | 14:14 |
magicaltrout | you don't set | 14:14 |
magicaltrout | sme.installed | 14:14 |
magicaltrout | so it will always run the install hook | 14:14 |
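A minimal sketch of what magicaltrout is describing, using the 2016-era charms.reactive API: the handler is guarded by @when_not('sme.installed'), so unless it sets that state when it finishes, reactive keeps re-dispatching it on every hook, which is the loop Spaulding is seeing. The handler body and status messages here are illustrative, not the actual charm code.

```
from charms.reactive import when_not, set_state
from charmhelpers.core import hookenv


@when_not('sme.installed')
def install_sme():
    hookenv.status_set('maintenance', 'installing sme')
    # ... package installation and setup steps go here ...
    set_state('sme.installed')   # without this, the handler re-runs forever
    hookenv.status_set('active', 'ready')
```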
Spaulding | ok | 14:15 |
Spaulding | understood | 14:15 |
Spaulding | marcoceppi: it's my first charm | 14:16 |
Spaulding | so still learning how to do it properly.. | 14:16 |
marcoceppi | Spaulding: I'll give you a few updates | 14:16 |
=== frankban|afk is now known as frankban | ||
marcoceppi | Spaulding: to help get you on the right track | 14:16 |
Spaulding | \o/ | 14:16 |
=== scuttle|afk is now known as scuttlemonkey | ||
Spaulding | ok, i've read something more about layer-apt | 14:23 |
Spaulding | looks promising... | 14:23 |
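For reference, "moving the apt stuff to layer.yaml" with the apt layer looks roughly like the fragment below; the package names are placeholders, and (as the layer was documented at the time) it raises apt.installed.&lt;package&gt; states that handlers can react to.

```
# layer.yaml
includes:
  - layer:basic
  - layer:apt
options:
  apt:
    packages:
      - libfoo-dev
      - some-runtime
```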
marcoceppi | Spaulding: yeah, I think you'll like the result | 14:27 |
* marcoceppi continues to update your gist | 14:28 | |
=== skayskayskay is now known as skay | ||
=== skay is now known as Guest86296 | ||
marcoceppi | Spaulding: is GID important? | 14:29 |
marcoceppi | Spaulding: or is it just so you can add a user to that GID | 14:30 |
marcoceppi | Spaulding: as in, could you let the system autoassign or does the software expect an explicit GID mapping | 14:30 |
Spaulding | i mean... those numbers are not that important | 14:31 |
Spaulding | but users need to be assigned to specific groups | 14:31 |
marcoceppi | right | 14:31 |
marcoceppi | gotchya | 14:31 |
Spaulding | and gid >= 5000 so they won't collide with other groups... rather a future-proof hack... | 14:32 |
=== Guest86296 is now known as skay | ||
Spaulding | users also got high uids (>= 5000) | 14:33 |
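A sketch of the user and group handling being described: fixed GIDs at or above 5000 and matching high UIDs, created idempotently from the charm. The group and user names here are placeholders.

```
import grp
import pwd
import subprocess


def ensure_group(name, gid):
    # Create the group with an explicit GID if it does not exist yet.
    try:
        grp.getgrnam(name)
    except KeyError:
        subprocess.check_call(['groupadd', '--gid', str(gid), name])


def ensure_user(name, uid, group):
    # Create the user with a high UID in the given primary group.
    try:
        pwd.getpwnam(name)
    except KeyError:
        subprocess.check_call(['useradd', '--uid', str(uid),
                               '--gid', group, '--create-home', name])


ensure_group('sme-users', 5000)
ensure_user('sme-admin', 5000, 'sme-users')
```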
marcoceppi | Spaulding: yeah | 14:36 |
marcoceppi | Spaulding: just about done | 14:36 |
Spaulding | :) | 14:39 |
marcoceppi | Spaulding: https://gist.github.com/marcoceppi/2eb1cf988f7f64eb535b290b1de29632 | 14:46 |
marcoceppi | Spaulding: there's a lot going on in there, I tried to leave comments where I could | 14:46 |
marcoceppi | Spaulding: there's a lot to unpack, and I have to go for a bit, but lots of others here should be able to help, I'll try to check back in a bit | 14:47 |
magicaltrout | marcoceppi: can you write my charms please? | 14:48 |
Spaulding | marcoceppi: thank you! :) | 14:48 |
marcoceppi | Spaulding: no problem, by the looks of it, you'll probably find the apache layer useful as well | 14:49 |
Spaulding | marcoceppi: yeah, i guess | 14:58 |
Spaulding | hmm... it's hard to google some layers... | 14:59 |
Spaulding | marcoceppi: I can only see layers related to apache... but not strictly layer-apache.. | 15:01 |
marcoceppi | Spaulding: yeah, my bad. There's a search bar top right on http://interfaces.juju.solutions/ but it appears there's only apache-php and apache-wsgi | 15:08 |
marcoceppi | no base apache layer | 15:08 |
Spaulding | exactly! but still I think I can use apache-php... even some parts of it | 15:10 |
Spaulding | cause I'll need to manage some files | 15:11 |
marcoceppi | Spaulding: yeah, when I create sharable layers, I usually do everything in my layer, then I find the parts that I see as reusable and strip them out into their own layer | 15:11 |
marcoceppi | we're long overdue for an apache layer, much like how we have an nginx layer | 15:12 |
Spaulding | so you're working on apache layer? | 15:14 |
Spaulding | i mean - ubuntu team... | 15:14 |
marcoceppi | Spaulding: not at the moment | 15:16 |
marcoceppi | Spaulding: if I come across a project that needs an apache layer I'll probably do one, but most of the stuff I charm up uses nginx | 15:16 |
* marcoceppi signs off for a bit | 15:17 | |
marcoceppi | Spaulding: oh, well, I think we can take the apache-php layer and instead make it just apache and then make apache-php use the apache layer and php layer and just merge them there but the php layer hasn't been published yet (I'm still getting that one ironed out) | 15:18 |
Spaulding | Unfortunately I can't use nginx | 15:19 |
Spaulding | not right now... | 15:19 |
Spaulding | not with this project (suexec, perl etc.) | 15:19 |
marcoceppi | yeah, no worries | 15:19 |
marcoceppi | I'm just commenting on why we don't really have a base apache layer, at least why I haven't created one | 15:19 |
Spaulding | but still, because of you I now see how juju and layers work | 15:19 |
Spaulding | cause juju docs - hmm... they don't have enough information | 15:20 |
=== shawniverson is now known as spammy | ||
lazyPower | Spaulding - filing bugs against http://github.com/juju/docs/issues will help us target those areas that aren't clear and expand on the information there. Specifically if you could call out missing concepts, verbiage, etc. that would have helped that would be tops. | 15:48 |
Spaulding | lazyPower: sure, will do! | 16:27 |
lazyPower | thanks Spaulding :) | 16:36 |
=== frankban is now known as frankban|afk | ||
stokachu | rick_h_: whats the status on https://bugs.launchpad.net/juju-release-tools/+bug/1631038 | 19:04 |
mup | Bug #1631038: Need /etc/sysctl.d/10-juju.conf <juju-release-tools:Triaged by torbaumann> <https://launchpad.net/bugs/1631038> | 19:04 |
stokachu | i think we're hitting this with our single system deployment of openstack | 19:04 |
Siva | I have deployed an application and I find that the 'juju status' shows the machine in pending state, though the MAAS UI says the machine is 'Deployed' | 19:29 |
kwmonroe | cory_fu: i've been thinking about the bigtop hdfs smoke-test failing with < 3 units. without a dfs-peer relation, can i detect how many peers a datanode might have? if not, i'm thinking of running a dfsadmin report to get a count of live datanodes. thoughts? | 19:30 |
Siva | It has been in pending state for 15 minutes now | 19:30 |
Siva | Is there a log file I can look at to debug and see what is happening? | 19:30 |
cory_fu | kwmonroe: I don't think there's a way to count the units w/o a peer relation, though it would be trivial to add one. But dfsadmin seems reasonable, too | 19:31 |
kwmonroe | ack cory_fu, dfsadmin is trivialier for me to add ;) | 19:31 |
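A sketch of the dfsadmin approach kwmonroe is leaning toward: run `hdfs dfsadmin -report` and parse out the live datanode count. The exact report wording varies between Hadoop releases, so both common forms are checked, and the su invocation assumes the usual hdfs service user.

```
import re
import subprocess


def live_datanode_count():
    report = subprocess.check_output(
        ['su', 'hdfs', '-c', 'hdfs dfsadmin -report']).decode('utf-8')
    # Newer Hadoop 2.x prints "Live datanodes (N):",
    # older releases print "Datanodes available: N (...)".
    for pattern in (r'Live datanodes \((\d+)\)',
                    r'Datanodes available:\s*(\d+)'):
        match = re.search(pattern, report)
        if match:
            return int(match.group(1))
    return 0
```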
Siva | Any help is much appreciated | 19:34 |
zeestrat | Randlema1: You hitting something like this with Nagios and the NRPE charm? https://bugs.launchpad.net/charms/+source/nagios/+bug/1605733 | 19:36 |
mup | Bug #1605733: Nagios charm does not add default host checks to nagios <family> <nagios> <nrpe> <unknown> <nagios (Juju Charms Collection):New> <https://launchpad.net/bugs/1605733> | 19:36 |
Siva | I am seeing this status for 1 hr | 19:39 |
Siva | 1 started 192.168.1.252 4y3hkf trusty default 4 pending 192.168.1.29 4y3hkg trusty default | 19:39 |
Siva | which log file should I look at to see why the machine is not moving to 'started' state? | 19:40 |
kwmonroe | Siva: can you ssh to the pending unit? either "juju ssh 4" or "ssh ubuntu@192.168.1.29"? | 19:42 |
kwmonroe | Siva: also, does "juju status 4 --format=yaml" tell you more about why the machine is pending? | 19:43 |
Siva | Nope. I am not able to ssh into it | 19:45 |
Siva | I see on the machine console that curl call to juju-controller to connect on port 17070 is failing | 19:46 |
Siva | I see on the machine console that curl call to juju-bootstrap node to connect on port 17070 is failing | 19:46 |
Siva | Here is the output | 19:50 |
Siva | http://pastebin.ubuntu.com/23309745/ | 19:50 |
kwmonroe | Siva, is there a way to get onto machine 4 from maas? i'm curious what your /var/log/cloud-init* logs look like | 20:06 |
rick_h_ | stokachu: it's set up to be in GA to get the extra config so you get a couple more LXD ootb, but it's not a big swing | 20:19 |
rick_h_ | stokachu: to get the big changes you need a logout/back in or reboot and we can't do it ootb | 20:19 |
stokachu | rick_h_: gotcha | 20:19 |
stokachu | rick_h_: ill document this on our side | 20:19 |
rick_h_ | stokachu: yea, we now show the link to scaling lxd wiki page on bootstrap for lxd because of it | 20:20 |
stokachu | rick_h_: you have that link so i can keep the message the same? | 20:20 |
rick_h_ | stokachu: https://github.com/lxc/lxd/blob/master/doc/production-setup.md | 20:34 |
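For context, bug #1631038 is about shipping a sysctl fragment with the limits that production-setup guide recommends for hosts running many LXD containers. The values below are the ones the guide listed around this time and may have changed since, so treat them as a sketch rather than the shipped file; they take effect after `sudo sysctl --system` or a reboot.

```
# /etc/sysctl.d/10-juju.conf (illustrative)
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
```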
stokachu | thanks | 20:35 |
bdx | hows it going everyone? | 20:59 |
bdx | is there a way to specify what subnet I bootstrap to? | 20:59 |
rick_h_ | bdx: sorry, a couple of folks that would know were on holiday last week. I'm sending an email right now to find out and will get back to you tomorrow | 21:00 |
rick_h_ | bdx: they're in the EU and at EOD atm | 21:00 |
rick_h_ | but will be back in the morning | 21:00 |
rick_h_ | bdx: this is on AMZ correct? | 21:01 |
rick_h_ | bdx: or something else? | 21:01 |
bdx | rick_h_: thats great, thanks | 21:07 |
bdx | rick_h_: yea, aws | 21:07 |
rick_h_ | bdx: and this is not going to work with the vpc-id constraint? | 21:08 |
rick_h_ | bdx: because there's > 1 subnet or any other details I can pull out? | 21:08 |
bdx | rick_h_: exactly ... I have 50+ subnets and growing | 21:09 |
rick_h_ | bdx: k, will see. | 21:10 |
bdx | rick_h_: thx | 21:10 |
katco | someone here was asking about lxd and zfs? | 21:24 |
rick_h_ | katco: sorry, the canonical one | 21:24 |
katco | rick_h_: oh... i cannot atm :( my server is down | 21:24 |
katco | rick_h_: hd went out | 21:24 |
rick_h_ | katco: ok, but can you join from the current client? | 21:24 |
katco | rick_h_: i don't have any certs or anything | 21:25 |
katco | rick_h_: i can try and get that set up.. | 21:25 |
ahasenack | katco: hi, I was just wondering how juju calls lxd init | 21:25 |
ahasenack | if it requests zfs or not | 21:25 |
ahasenack | juju 2 rc3 specifically | 21:25 |
ahasenack | on xenial | 21:25 |
katco | ahasenack: try running bootstrap with --debug; it should provide some information about the rest calls | 21:26 |
ahasenack | katco: this is the maas provider, where I did a deploy --to lxd:0 | 21:26 |
ahasenack | it's that lxd | 21:26 |
katco | ahasenack: ah | 21:26 |
ahasenack | katco: I'm seeing abysmal i/o performance inside that container, and I checked and saw that the host has the lxd containers backed by one big zfs image file | 21:27 |
ahasenack | I haven't seen this before, and I can't tell if it's new | 21:27 |
katco | ahasenack: so you're wondering if it requests zfs by default? | 21:28 |
ahasenack | yes | 21:28 |
ahasenack | if not, there are other clues I can chase | 21:28 |
ahasenack | like, if zfsutils is installed, then lxd will pick zfs by default, I'm told | 21:28 |
katco | ahasenack: it certainly looks like if series == "xenial" it's going to initialize a zfs pool | 21:32 |
ahasenack | hmmm | 21:32 |
katco | ahasenack: trying to figure out where that gets used... | 21:32 |
ahasenack | katco: does it create the pool beforehand, file-backed? | 21:32 |
ahasenack | I got a 100G pool | 21:32 |
ahasenack | in the host | 21:32 |
katco | ahasenack: yeah: https://github.com/juju/juju/blob/78273ef59ee77c0be55f761346917cfe63842dcd/container/lxd/initialisation_linux.go#L136 | 21:33 |
katco | ahasenack: ah, so it looks like it's telling lxd to use zfs from the get-go... juju doesn't do anything else after that | 21:33 |
katco | ahasenack: all created containers will allocate to that pool backed by zfs | 21:34 |
ahasenack | 90% | 21:34 |
katco | ahasenack: alarming, but at least it's sparse | 21:35 |
katco | ajmitch: i don't know where that magic number came from | 21:35 |
ahasenack | katco: I think lxd caps it at 100G | 21:36 |
katco | ajmitch: oops sorry for misping | 21:36 |
ahasenack | katco: ok, so if xenial, that happens. Else, it's just "lxd init"? | 21:37 |
katco | ahasenack: i believe it will just use the presumably running lxd daemon. init in this case is just to initialize a storage pool | 21:39 |
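A few host-side checks that would confirm what ahasenack is seeing, i.e. that the containers sit on a loop-file-backed ZFS pool. The lxc server key name is the LXD 2.x-era one and the pool name varies, so these are assumptions to verify rather than a definitive recipe:

```
lxc config get storage.zfs_pool_name   # which ZFS pool LXD is using, if any
zpool status                           # a plain file path as the vdev means loop-backed
zpool list                             # pool size/usage (e.g. the ~100G image)
```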
ahasenack | got it, thanks | 21:40 |
katco | ahasenack: hth | 21:40 |
vmorris | bdx rick_h_: this really needs to be updated https://jujucharms.com/docs/stable/network-spaces | 21:42 |
vmorris | I guess you can only work with spaces if MAAS is the undercloud, but in juju 2 it doesn't seem that spaces are configurable directly | 21:42 |
jgriffiths | Is there any way to use a local juju controller (lxd) to deploy bundles to a remote MAAS system? It seems a waste to spin up an entire server just to act as a controller. | 21:45 |
jgriffiths | Sorry if that is a dumb question. Just looking at maas/juju for the first time and need to know if I need an extra piece of physical hardware for the juju controller. | 21:46 |
vmorris | jgriffiths: i asked a similar question but yeah, you've gotta have a whole dedicated machine in your MAAS cluster for the juju controller | 21:50 |
vmorris | jgriffiths: i also find it silly, but currently running MAAS all with KVM VMs, so I can get away with a tiny VM for the controller | 21:51 |
jgriffiths | vmorris: Thanks! I've been looking around the internet for a couple hours trying to find out how to do it before I started thinking that it wasn't even possible. It's a huge waste in a bare metal environment. So, it looks like I need a physical MAAS server, a physical controller, and all the nodes for Openstack. | 21:54 |
jgriffiths | And a grammar teacher. | 21:54 |
vmorris | jgriffiths: yeah if you're looking to run the openstack-base bundle, it's really 5 machines :P | 21:55 |
vmorris | plus the maas controller, yep | 21:55 |
vmorris | jgriffiths: my current configuration has maas-controller and 4 maas machines all as KVM guests on the same physical host | 21:56 |
vmorris | sorry, 5 maas machines | 21:56 |
vmorris | jgriffiths: it's stable, but complex.. I've been exploring the openstack-on-lxd approach for about a week, and it's not quite as stable in my experience | 21:58 |
vmorris | i still like the pure lxd approach though | 21:58 |
jgriffiths | One last stupid question then. Do I need a controller if I'm manually deploying the individual charms? And now that you mention it, is anybody making an openstack-base style bundle without any containers at all (apt-get everything)? | 22:01 |
jgriffiths | I'm still learning all this and have a lot of holes in my knowledge of charms. | 22:03 |
vmorris | jgriffiths: I'm not aware of any way to deploy a charm without a controller | 22:16 |
vmorris | jgriffiths: I'm also unaware of anyone at canonical doing anything to deploy openstack from a vendor perspective outside of juju | 22:16 |
vmorris | but i am not an expert in the matter! | 22:16 |
jgriffiths | Thank you very much vmorris! | 22:18 |
vmorris | sure :_) | 22:18 |
spaok | jgriffiths: you can deploy the openstack bundle to whatever | 22:20 |
spaok | containers are just easy low overhead servers | 22:21 |
vmorris | spaok: not without a juju controller bootstrapped | 22:21 |
vmorris | that was the first q | 22:21 |
spaok | vmorris: correct | 22:21 |
spaok | I was talking about the container part | 22:21 |
vmorris | ah ok, but still they're going to end up running services in containers on the machines | 22:22 |
vmorris | right? | 22:22 |
spaok | not really, you can use KVM | 22:22 |
spaok | we don't run containers for our vm's yet | 22:23 |
vmorris | ah yeah, that's right re: KVM | 22:23 |
vmorris | do you have a link to that? I saw the bug report https://bugs.launchpad.net/juju/+bug/1547665 | 22:24 |
mup | Bug #1547665: juju 2.0 no longer supports KVM for local provider <2.0-count> <juju:Triaged> <https://launchpad.net/bugs/1547665> | 22:24 |
jgriffiths | I was referencing the "openstack-on-lxd approach" not being stable concept and thought vmorris was suggesting not using containers for the components. | 22:24 |
vmorris | spaok: or are you just talking about juju 1? | 22:24 |
vmorris | jgriffiths: it's just not stable for me, i've heard it works.. just hasn't been my experience yet | 22:25 |
jgriffiths | I was referencing the "openstack-on-lxd approach" not being stable comment and thought vmorris was suggesting not using containers for the components. | 22:25 |
spaok | I'm saying KVM as the compute type for openstack | 22:25 |
spaok | as for the bundle part, I would target physical servers | 22:25 |
spaok | in MAAS or something | 22:25 |
jgriffiths | Oh. So you're not running any services inside containers? | 22:25 |
vmorris | spaok: alright, then we're talking about different things | 22:25 |
spaok | vmorris: ya, there's the compute VMs and the OpenStack services; the latter can be deployed to physical servers, LXD containers, or even KVM (maybe that bug you referenced) | 22:26 |
spaok | I was saying you can build on all physical servers with KVM type compute nodes and have zero containers | 22:27 |
vmorris | spaok: using the openstack-base bundle? | 22:27 |
spaok | ya, you just modify it slightly | 22:28 |
spaok | specify machines and make them the targets for the services | 22:28 |
vmorris | spaok: yep alright, good point there | 22:28 |
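What spaok describes, declaring machines in the bundle and targeting the services straight at them, looks roughly like the fragment below. Charm names, machine count, and options are illustrative, not the real openstack-base contents:

```
machines:
  "0": {series: xenial}
  "1": {series: xenial}
services:          # "applications:" in later bundle formats
  nova-compute:
    charm: cs:xenial/nova-compute
    num_units: 2
    to: ["0", "1"]
  mysql:
    charm: cs:xenial/percona-cluster
    num_units: 1
    to: ["0"]
```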
spaok | though I would say it's a waste of hardware | 22:28 |
spaok | and not how we are building our new production systems | 22:29 |
spaok | we us containers | 22:29 |
spaok | s/us/use/ | 22:29 |
vmorris | yep | 22:29 |
spaok | jgriffiths: I think containers for openstack services is pretty stable, the questionable one is LXD as the compute backend | 22:30 |
spaok | but that does work too | 22:30 |
spaok | which is the bundle you referred to | 22:30 |
jgriffiths | That's what I thought. It seemed pretty stable. Thanks spaok | 22:31 |
spaok | the new HA stuff using DNS is pretty good too, we had a lot of issues with the hacluster charm way of doing it | 22:31 |
spaok | but our setup is fairly off the beaten path also | 22:32 |
vmorris | spaok: do you have any published architectures or white papers? | 22:32 |
spaok | no not really, our original design was based on https://github.com/wolsen/ods-tokyo-demo/blob/master/ods-tokyo.yaml | 22:33 |
spaok | that we added a bunch of overlay to and some other stuff | 22:33 |
spaok | the new "2.0" version of our stuff we stripped most of that out | 22:33 |
vmorris | cool, thanks | 22:34 |
Juju3 | Fgffd | 23:23 |
Juju3 | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHXecAJkgNgxXpFfoSV+RwB2JwSoRASTUboa6FaAYU8XkCCQlx1K+jlvjXJY5ItDzmJRfQ1Q+07X7RfPuE4Ditjy9g8jwpdAnA4IYTN5b9QYkdxwbZPY7Jrsw8eYbnWsBWTDo3CKjxZyeglUq/cue8w0Rjw7FKwcIa4PvLMZo8V/H+nD30Y0MRtY1p2NYFUfZvdyNiIeB65k2ONiPf66NVa7Ywm63oShLztmyTY2Oy3VY5BYDCFatv3/PjagWyICdPrvWH2SdTK4Zo8c8jVyr4cA3JsEG2S11s4sKFFNKXA+epqQclyrD2CpcfG8uxHCdirQGThY0d1iUa6nf9pIaz lin | 23:23 |
smgoller | maybe I should start here instead. :) | 23:46 |
smgoller | so I've deployed openstack via the charm, and I'm working on increasing the size of the cluster. Now, the readme says to scale out neutron to do "juju add-unit neutron-gateway", but I only need neutron-openvswitch on the additional nodes. but if I try to add-unit neutron-openvswitch it complains about it being a sub-charm or somesuch. Any ideas? In the initial deployment, neutron-gateway only ends up on 1 machine instead of all 4. | 23:46 |