[01:13] <blahdeblah> So hallyn was telling me on Friday about using KVM with the local provider; is it also possible to use this with the manual provider, i.e. to control a remote KVM host?
[01:14] <blahdeblah> Or alternatively, change the container type on a manual host after the environment has been bootstrapped?
[01:23] <spaok> if you put maas in a container and use the maas provider, and set up ssh keys for the maas user, you can specify the qemu url of a remote machine for the power control; it just needs ssh
[01:24] <spaok> juju add-machine would spin up a KVM instance on that server
[01:31] <blahdeblah> spaok: Yeah - I've already done that; but I'd like to avoid having to add KVM instances to the MAAS controller
[06:58] <kjackal_> Good morning Juju world!
[08:01] <SDBStefano> hi  kjackal
[08:02] <kjackal> hi SDBStefano, what's up?
[08:02] <SDBStefano> how could I create a charm for Xenial instead of trusty ?
[08:02] <kjackal> in the metadata.yaml you set the proper series, let me find an example
[08:03] <kjackal> SDBStefano: https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/metadata.yaml#L10
[08:05] <kjackal> SDBStefano: although when there is a single series, the charm build command decides to put it under the trusty build path (but that might be some misconfiguration I have on my side)
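(For reference, a minimal metadata.yaml along the lines of the example linked above; the name is the application from this conversation, while the summary and description are just placeholders:)

    name: ude
    summary: Example charm built for xenial only
    description: |
      Placeholder description.
    series:
      - xenial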
[08:06] <SDBStefano> so, I have added to the yaml file:  series:  - xenial
[08:07] <SDBStefano> and I did the build
[08:07] <SDBStefano> are you saying that the stuff under the trusty directory is now for Xenial ?
[08:09] <kjackal> SDBStefano: when you do a charm build first line should say where the output directory is
[08:10] <kjackal> SDBStefano: go ahead and open the <build-output-dir>/metadata.yaml . It should say that the series is xenial, right?
[08:11] <SDBStefano> ok, it creates a new directory named 'builds'; there the metadata.yaml contains "series": ["xenial"]
[08:12] <SDBStefano> so, now I should deploy using 'juju deploy $JUJU_REPOSITORY/builds/use  --series xenial'
[08:12] <kjackal> SDBStefano: awesome! then this charm is xenial. Yes, you can deploy it now!
[08:14] <SDBStefano> I have to remove the previous app, but it has an error : - ude                       error        1  ude      local         6  ubuntu
[08:14] <SDBStefano> so the 'juju remove-application ude' is not removing the app
[08:15] <SDBStefano> is it possible to force the removal ?
[08:17] <kjackal> you can remove the machine where the unit is with 'juju remove-machine --force'
[08:17] <kjackal> SDBStefano: ^
[08:18] <SDBStefano> yes, it worked, I'm deploying, thanks for helping
[08:19] <kjackal> you can also deploy the same application with a different name like so: juju deploy $JUJU_REPOSITORY/builds/use myappname --series xenial
[08:19] <kjackal> SDBStefano: ^
[10:24] <neiljerram> jamespage, good morning!
[10:25] <jamespage> morning neiljerram
[10:25] <jamespage> how things?
[10:25] <neiljerram> Quite well, thanks!
[10:25] <neiljerram> You suggested that I ping you here in your review of https://review.openstack.org/#/c/382563/
[10:25]  * magicaltrout knows from experience, anyone who greats another user specifically whilst saying good morning in reality has an issue or favour to ask..... "quite well" or not ;)
[10:26] <jamespage> neiljerram, yes!
[10:26] <magicaltrout> s/greats/greets
[10:26] <neiljerram> So doing that.  I think your proposal is basically right, so just looking for pointers on how to start reworking the code into the correct form...
[10:27] <neiljerram> magicaltrout, I just thought that 'good morning' was a little more friendly than 'ping'!
[10:27] <jamespage> neiljerram, so since the original calico integration was written, we've done quite a bit of work to minimize the amount of vendor specific code that is required in the core principal charms
[10:27] <jamespage> neiljerram, you already have a subordinate for nova-compute
[10:27] <jamespage> neiljerram, that's been complemented with the same approach for neutron-api
[10:28] <neiljerram> jamespage, yes, agreed, and makes sense.
[10:28] <jamespage> neiljerram, so that all of the SDN specific bits can reside in an SDN specific charm
[10:28] <jamespage> neiljerram, right and the nice bit of this is that gnuoy's just completed some work to make writing those a whole lot easier
[10:28] <neiljerram> jamespage, Ah, nice.
[10:28] <jamespage> neiljerram, we've done quite a bit of work on making reactive and layers work for openstack charming this cycle
[10:28] <jamespage> neiljerram, so your charm can be quite minimal
[10:29] <jamespage> neiljerram, https://github.com/openstack/charm-neutron-api-odl has been refactored as an example for reference
[10:29] <jamespage> neiljerram, but we also have a template for charm create
[10:29] <jamespage> gnuoy`, ^^ is that live yet?
[10:30] <jamespage> neiljerram, I'd love to get a neutron-api-calico charm up and running, so we can deprecate the neutron-api bits for ocata release and remove them next cycle
[10:31] <gnuoy`> jamespage, the template ? That is very nearly ready. I just need to run the guide using the template to check they are both in sync and work
[10:31] <neiljerram> jamespage, Just as a thought, might it even make sense to have a single 'neutron-calico' charm that provides both the compute and the server function?  I assume it can detect at runtime which charm it is subordinate to?  If it's subordinate to nova-compute, it would provide the compute function; if it's subordinate to neutron-api, it would provide the server side function.
[10:31] <jamespage> neiljerram, that's absolutely fine
[10:31] <jamespage> +1
[10:31] <jamespage> neiljerram, neutron-calico already exists right?
[10:32] <neiljerram> jamespage, Thanks.  So, is there an example of another SDN subordinate charm that already uses gnuoy's new facilities?
[10:32] <neiljerram> jamespage, Yes, neutron-calico already exists (for the compute function).
[10:32] <jamespage> neiljerram, https://github.com/openstack/charm-neutron-api-odl does
[10:32] <jamespage> neiljerram, so, re the approach of neutron-calico as a 'does both' type charm -
[10:32] <jamespage> neiljerram, right now it's not possible to upgrade from a non-reactive charm to a reactive charm
[10:33] <jamespage> neiljerram, neutron-calico is an older style python charm I think
[10:33] <gnuoy`> neiljerram, I'm still smoothing off the rough corners but https://github.com/gnuoy/charm-guide/blob/master/doc/source/new-sdn-charm.rst may help
[10:33] <neiljerram> jamespage, TBH we've never really tested for upgrading yet at all.
[10:33] <jamespage> neiljerram, how many live deployments are you aware of using calico deployed via juju?
[10:33] <gnuoy`> haha good point
[10:34] <jamespage> neiljerram, in which case I'd take the hit and move to the new layers+reactive approach now
[10:34] <neiljerram> jamespage, Just two: OIL, and a Canonical customer that I'm not sure I can name here.
[10:34] <jamespage> neiljerram, ok OIL is manageable
[10:34] <jamespage> neiljerram, I'll poke on the other one :-)
[10:35] <jamespage> gnuoy`, what do you think to the single subordinate doing both roles approach discussed above?
[10:35] <neiljerram> jamespage, I think you're right about neutron-calico being an older style charm.  So perhaps it would be a simpler first step to make a separate neutron-api-calico, in the most up-to-date style (reactive)
[10:35] <gnuoy`> jamespage, I'm fine with that
[10:36] <jamespage> neiljerram, ack
[10:36] <jamespage> gnuoy`, that charm-guide update is for the hypervisor integration - do we have the equiv for API?
[10:37] <gnuoy`> jamespage, yep, https://review.openstack.org/#/c/387238/
[10:38] <jamespage> gnuoy, sorry I meant the neutron-api subordinate charm version
[10:39] <gnuoy> jamespage, no, not atm.
[10:39] <jamespage> gnuoy, ok that's what neiljerram will be after
[10:39] <gnuoy> ack
[10:39] <jamespage> neiljerram, you might need to give us a week or two to pull that bit into shape
[10:40] <gnuoy> jamespage, then https://github.com/openstack/charm-neutron-api-od is the best bet
[10:40] <gnuoy> * https://github.com/openstack/charm-neutron-api-odl
[10:40] <neiljerram> jamespage, But I could start by looking at https://github.com/openstack/charm-neutron-api-odl for inspiration, and ask any questions here?
[10:40] <jamespage> neiljerram, ^^ yeah, that example is our current doc for now - but we'll be working on that
[10:40] <jamespage> neiljerram, yeah that's fine - but we've moved openstack charm discussion over to #openstack-charms
[10:40] <jamespage> but either is still fine
[10:40] <neiljerram> jamespage, Ah OK, I'll go there now...
[10:41] <jamespage> neiljerram, ta
[10:52] <SDBStefano> Hi kjackal, I deployed a local charm, but it's stuck at :
[10:52] <SDBStefano> UNIT    WORKLOAD  AGENT       MACHINE  PUBLIC-ADDRESS  PORTS  MESSAGE
[10:52] <SDBStefano> ude/11  waiting   allocating  20       10.15.177.80           waiting for machine
[10:52] <SDBStefano> MACHINE  STATE    DNS           INS-ID          SERIES  AZ
[10:52] <SDBStefano> 20       pending  10.15.177.80  juju-19dc1e-20  xenial
[10:53] <SDBStefano> ok, it has just changed into :
[10:53] <SDBStefano> MACHINE  STATE    DNS           INS-ID          SERIES  AZ
[10:53] <SDBStefano> 20       started  10.15.177.80  juju-19dc1e-20  xenial
[10:53] <SDBStefano> so it seems very slow
[12:26] <icey> bdx, do I remember you trying to get Juju working with Digital Ocean?
[13:24] <Spaulding> Hmm...
[13:24] <Spaulding> Why is my "juju status" telling me:
[13:24] <Spaulding> Unit    Workload     Agent  Machine  Public address  Ports  Message
[13:24] <Spaulding> sme/5*  maintenance  idle   5        10.190.134.241         Enabling Apache2 modules
[13:25] <Spaulding> But as far as I can see, all of my tasks have been completed.
[13:25] <Spaulding> So the last one should set the state to "active"
[13:25] <Spaulding> How can I track it?
[13:32] <kjackal> Spaulding: is there any info in juju debug-log?
[13:33] <Spaulding> kjackal:
[13:33] <Spaulding> unit-sme-5: 14:31:32 INFO unit.sme/5.juju-log Invoking reactive handler: reactive/sme.py:95:enable_apache_mods
[13:33] <Spaulding> Hm... it looks like reactive ran this task again
[13:34] <Spaulding> that's why my status changed...
[13:40] <lazyPower> Spaulding - that's the rub. You cannot guarantee ordering of the methods in reactive unless it's a tightly controlled flow through states. If you want something to run only once, it would be a good idea to decorate it with @when_not and then subsequently set the state it's decorating against. Do note that you'll have to handle removal of that state if you ever want to re-execute that body of code again.
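(A minimal sketch of that pattern with charms.reactive; the state name 'sme.apache.modules-enabled' and the a2enmod call are illustrative, not the actual code in Spaulding's charm:)

    from subprocess import check_call
    from charms.reactive import when_not, set_state, remove_state

    @when_not('sme.apache.modules-enabled')
    def enable_apache_mods():
        # runs once; the state set below stops the dispatcher re-running it
        check_call(['a2enmod', 'ssl'])
        set_state('sme.apache.modules-enabled')

    # to deliberately re-run the handler later (e.g. on upgrade), clear the state:
    # remove_state('sme.apache.modules-enabled')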
[13:46] <Spaulding> lazyPower: yeah, i already noticed that
[13:47] <Spaulding> and I'm using when & when_not
[13:47] <Spaulding> it's like puppet
[13:50] <Spaulding> lazyPower: I found it!
[13:50] <Spaulding> @when_not('http_mods_enabled')
[13:50] <Spaulding> it should be sme.http ...
[13:50] <lazyPower> Spaulding  - Sounds like you're on the right track :)
[13:51] <Spaulding> because of you guys! :)
[14:01] <magicaltrout> s/greats/greets
[14:17] <lazyPower> o/ magicaltrout
[14:51] <fbarilla> The 'juju machines' command does not list any machines, but trying to add one leads to the following error message: ERROR machine is already provisioned
[14:52] <rick_h_> fbarilla: ah, is the machine you're trying to add in use in another model?
[14:55] <fbarilla> I have two models, 'controller' and 'default'. Neither of them lists the machine I want to add
[14:56] <fbarilla> In the 'controller' model I have the LXD container where juju has been bootstrapped
[15:02] <Spaulding> lazyPower: i broke it! :(
[15:02] <Spaulding> https://gist.github.com/pananormalny/765622f3d2c332bd9dece6f35b9ff267
[15:02] <Spaulding> maybe someone can spot an issue?
[15:03] <Spaulding> it's running in a loop - again... :/
[15:03] <Spaulding> the main idea was to run those tasks one by one...
[15:04] <Spaulding> from the top to the beginning... in that order
[15:04] <Spaulding> to the bottom**
[15:07] <lazyPower> Spaulding - I left a few comments on the gist; nothing stands out other than the state of sme.ready being commented out, and instead being set on line 331
[15:08] <lazyPower> Spaulding I don't see a remove_state however, and that's the other tell-tale, as removing states causes the reactive dispatcher to re-test the state queue to determine what needs to be run, which makes it possible to enter an infinite loop if you're not careful with how you're decorating the methods.
[15:08] <lazyPower> but that doesn't appear to be the case, so that last message is more FYI than anything else.
[15:09] <Spaulding> lazyPower: about jinja2, I'll probably use it
[15:09] <Spaulding> but basically I would like to have a "prototype" ready before OpenStack Barcelona
[15:09] <Spaulding> so I'm trying to do this ASAP
[15:10] <Spaulding> After that I'll have more time to do it properly... now it's just proof-of-concept
[15:11] <lazyPower> right, just a suggestion
[15:11] <lazyPower> it's not wrong to put heredocs in there, it's just not best practice.
[15:11] <Spaulding> I know
[15:11] <Spaulding> It's a dirty way - but in this case - it's working
[15:12] <lazyPower> Spaulding - get me an update on that gist with the output of charms.reactive get_states
[15:12] <lazyPower> I imagine what's happened is this isn't a fresh deploy, and you've modified the state progression and run an upgrade-charm and now it's misbehaving - is that consistent with what's happened?
[15:13] <Spaulding> basically - I'm deploying it from scratch every time
[15:15] <lazyPower> Spaulding - `juju run --unit sme/0 "charms.reactive get_states" `
[15:15] <lazyPower> assuming sme is the name of your charm and its unit 0
[15:15] <Spaulding> lazyPower: will do
[15:16] <Spaulding> lazyPower: is there any other way to remove an application if it fails at the install hook?
[15:16] <lazyPower> juju remove-machine # --force
[15:16] <Spaulding> cause right now I'm destroying the model
[15:16] <lazyPower> that'll strip the underlying machine unit from the charm and the charm should finish removal on its own
[15:16] <Spaulding> i tried with force with 2.0rc3... couldn't get that working...
[15:16] <Spaulding> maybe it's fixed(?) now...
[15:18] <lazyPower> http://paste.ubuntu.com/23339079/
[15:18] <lazyPower> seems like its functioning fine in 2.0.0
[15:19] <lazyPower> notice that it leaves/orphans the application/charm for a short while before reaping the charm.
[16:14] <cory_fu> petevg: I added a bunch of comments in reply to and based on your review on https://github.com/juju-solutions/matrix/pull/2
[16:14] <cory_fu> bcsaller: Our feedback awaits you.  :)
[16:15] <bcsaller> cory_fu, petevg: thank you both
[16:15] <petevg> np
[16:16] <petevg> cory_fu: thx. Reading your comments ...
[16:19] <aisrael> cory_fu: Do you know what might be causing this? http://pastebin.ubuntu.com/23339455/
[16:19] <aisrael> From running charm build
[16:22] <cory_fu> aisrael: Not sure, but I'd guess that the log message contains utf8 encoded data and should maybe be .decode('utf8')ed before being logged?
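(A rough sketch of that suggestion - decode bytes before logging; the function and logger names here are hypothetical, not charm-tools' actual utils.py code:)

    import logging
    log = logging.getLogger('charm-build')

    def log_output(output):
        # subprocess output may arrive as bytes; decode it before logging so
        # non-ASCII content in the yaml files doesn't raise UnicodeDecodeError
        if isinstance(output, bytes):
            output = output.decode('utf8', errors='replace')
        log.debug(output)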
[16:22] <cory_fu> aisrael: Can you drop a breakpoint into /usr/lib/python2.7/dist-packages/charmtools/utils.py and see what the output var holds?
[16:25] <aisrael> cory_fu: Not yet. I'm doing a charm school/training and they hit it. I'll dig in deeper, though. Thanks!
[16:26] <cory_fu> aisrael: I'd look for unicode characters in their yaml files, then.  metadata.yaml and layer.yaml specifically.  Otherwise, I'm not really sure
[16:27] <aisrael> Weird. We haven't touched those at all.
[16:35] <aisrael> cory_fu: looks like it may be related to an actions.yaml that's UTF-8-encoded
[16:35] <cory_fu> aisrael: Strange.  Well, we should definitely handle that better
[16:36] <lazyPower> is there any particular reason we're limiting that to ascii? (just curious)
[16:36] <aisrael> cory_fu: definitely. Once I confirm, I'll file a bug
[16:37] <cory_fu> Thanks
[16:45] <aisrael> It definitely looks like a locale/encoding issue. I had them send me the charm and I built it locally with no problem
[16:45] <cory_fu> Hrm
[17:08] <cory_fu> Is anyone having an issue with 2.0 and lxd where the agents never report as started?  Just started for me with the GA release
[17:10] <cory_fu> Seems to be an issue with the ssh keys
[17:11] <bdx> cory_fu: I've been having successful deploys so far
[17:13] <bdx> cory_fu, cmars: should we also set status to 'blocked', or 'error' here -> http://paste.ubuntu.com/23339654/
[17:14] <cory_fu> bdx: I'd say "blocked", yeah
[17:15] <bdx> cory_fu, cmars: or is setting the supported series in metadata enough?
[17:15] <bdx> probably both for good measure?
[17:15] <cory_fu> bdx, cmars: Setting the supported series in the metadata could (probably would) be overwritten by the charm layer.
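(A hedged sketch of what setting 'blocked' from a reactive layer could look like; the state name, handler, and supported-series list are assumptions, not the code in the paste:)

    from charms.reactive import when_not, set_state
    from charmhelpers.core import hookenv, host

    @when_not('series.checked')
    def check_series():
        series = host.lsb_release()['DISTRIB_CODENAME']
        if series not in ('xenial',):
            # surface the problem in juju status instead of erroring
            hookenv.status_set('blocked', 'unsupported series: %s' % series)
            return
        set_state('series.checked')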
[17:15] <cmars> bdx, cory_fu please open an issue. iirc you can force the series with juju deploy --series
[17:16] <cmars> cory_fu, i've noticed something with that.. if you have multiple layers that both specify the same series in metadata, the series gets output twice, and the CS rejects that
[17:16] <cmars> cory_fu, i think I opened a bug..
[17:17] <cmars> (but i forget where.. so many projects)
[17:17] <cmars> bdx, interested to get your thoughts on https://github.com/cmars/layer-lets-encrypt/issues/1 as well
[17:18] <cmars> i think we might be able to make this a little more reusable without introducing too much complexity
[17:18] <cory_fu> cmars: https://github.com/juju/charm-tools/issues/257  I do think the series list needs to be de-duped
[17:18] <cmars> cory_fu, thanks :)
[17:45] <cory_fu> Odd.  It seems to only be trusty lxd instances that get stuck in pending.  Guess I'll try torching my trusty image
[18:02]  * magicaltrout hands cory_fu the matches
[18:03] <cory_fu> magicaltrout: Thanks, but it didn't help
[18:26] <magicaltrout> awww
[18:35] <cmars> cory_fu, xenial host and trusty container?
[18:35] <cory_fu> cmars: Yes
[18:35] <cory_fu> cmars: It looks like the trusty image isn't getting the lxdbr0 interface for some reason, but I can't find anything obvious in any of the logs I could think of to check
[18:36] <cmars> cory_fu, one thing it could be, is a known systemd issue
[18:36] <cory_fu> Oh?
[18:36] <cmars> cory_fu: 22-09-2016 15:23:18 < stgraber!~stgraber@ubuntu/member/stgraber: cmars: if you mask (systemctl mask) the systemd units related to binfmt (systemctl -a | grep binfmt) and reboot, then the problem is gone for good. This is related to systemd's use of automount and not to binfmt-misc (which just ships a bunch of binfmt hooks)
[18:37] <cmars> cory_fu, a telltale symptom is that the trusty container gets stuck in mountall
[18:37] <cory_fu> cmars: Huh.  Worth a shot.  FWIW, this only started with the GA.  It was fine on the last RC
[18:37] <cory_fu> cmars: How do I tell where it's stuck?
[18:38] <cmars> cory_fu, a ps -ef would show a mountall process, and /var/log/upstart/mountall.log shows some binfmt errors
[18:38] <cmars> cory_fu, a quickfix is this (pardon my language): https://github.com/cmars/tools/blob/master/bin/lxc-unfuck
[18:38] <cmars> but masking the systemd units is a better solution
[18:39] <cmars> cory_fu, unless this is a completely different issue, in which case, forget everything I've said :)
[18:39] <cory_fu> cmars: Yep, that seems to be exactly the issue!
[18:40] <bdx> cory_fu: http://paste.ubuntu.com/23340056/
[18:41] <bdx> cory_fu: can't seem to replicate :-(
[18:42] <cory_fu> bdx: That looks like what I'm seeing.  The trusty machine never goes out of pending
[18:42] <cory_fu> cmars: I'm not really familiar with systemd mask.  Do you have the exact commands handy, or can you point me to some docs?
[18:43] <bdx> cory_fu: it started .... http://paste.ubuntu.com/23340072/
[18:43] <cory_fu> bdx: Oh, well, it never does for me.  But it sounds like cmars has the solution for me
[18:43] <bdx> strange ... nice
[18:46] <cmars> cory_fu, this *should* do it: sudo systemctl mask $(systemctl -a | awk '/binfmt/{print $2}')
[18:46] <cmars> cory_fu, this might break binfmt on your host; if you run .NET exe binaries directly, for example, that might stop working
[18:47] <cory_fu> cmars: I don't think I do, but good to know.  No way to have both work, I take it?
[18:47] <cmars> cory_fu, ideally, someone would fix this issue in systemd or binfmt or wherever it needs to be done
[18:48] <cory_fu> cmars: I don't think that awk is right.  It just gives me "loaded" four times
[18:48] <cmars> cory_fu, what do you get from: systemctl -a | grep binfmt ?
[18:48] <cory_fu> I assume it should be $1 instead
[18:49] <cory_fu> cmars: http://pastebin.ubuntu.com/23340090/
[18:49] <cmars> cory_fu, on my machine, $1 is a big white dot for some reason. could be my terminal type?
[18:49] <cmars> cory_fu, yep, $1 for you
[18:49] <cory_fu> Could be.  I just have leading whitespace.
[18:49] <cory_fu> cmars: In case this causes issues, what would be the command to undo this?
[18:50] <cmars> cory_fu, systemctl unmask the units that were masked
[18:50] <cory_fu> Ok, thanks
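(Putting the workaround together - a sketch based on the commands above, using $1 as corrected; the units binfmt matches can differ per host, so check the grep output before masking:)

    # see which binfmt-related units exist
    systemctl -a | grep binfmt
    # mask them so trusty containers stop hanging in mountall, then reboot
    sudo systemctl mask $(systemctl -a | awk '/binfmt/{print $1}')
    # to undo later:
    sudo systemctl unmask $(systemctl -a | awk '/binfmt/{print $1}')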
[19:06] <troontje> hi all :)
[19:06] <troontje> I followed this instruction : https://jujucharms.com/docs/stable/getting-started
[19:07] <troontje> it's pretty clear, except what exactly does that 20GB mean?
[19:08] <marcoceppi> troontje: during the lxd init step?
[19:08] <troontje> exactly
[19:08] <troontje> when setting the loop device
[19:08] <marcoceppi> troontje: the size of the loopback device to use for storage on LXD machines
[19:08] <marcoceppi> when LXD boots machines, they'll all be allocated a slice of that storage
[19:09] <troontje> marcoceppi: ah ok, so that 20GB will be shared among all the machines together?
[19:09] <marcoceppi> troontje: exactly
[19:10] <troontje> marcoceppi: I was trying to install openstack-base with that 20 GB; although it does not fail, it just does not continue the process
[19:10] <troontje> I first thought that value was per machine
[19:11] <marcoceppi> troontje: yeah, it's for all machines, and each machine boots with like 8GB of storage from that 20
[19:11] <troontje> haha, well I can't blame it that it sort of failed
[19:17] <troontje> marcoceppi: ok, so I set everything back and the canvas is clean again
[19:17] <troontje> How can I increase that value for the storage?
[19:18] <troontje> destroying it and making it new is no problem btw
[19:19] <cory_fu> bcsaller, petevg: Resolved merge conflicts and added TODO comment re: the model connection.  Going to merge now
[19:20] <bdx> cmars: what happens when you don't have an 'A' record precreated -> http://paste.ubuntu.com/23340240/
[19:21] <bdx> cmars: or created pointing at another ip :/
[19:21] <cmars> bdx, pretty sure in that case that the standalone method will fail
[19:22] <bdx> yea
[19:22] <bdx> we should call that out in layer le readme
[19:22] <cmars> bdx, ack, good call
[19:23] <cory_fu> bcsaller, petevg: Merged
[19:28] <petevg> bcsaller, cory_fu: Just pushed a small fix to master: glitch now grabs context.juju_model rather than context.model (good name change; just needed to update glitch's assumptions).
[19:31] <troontje> how can I increase that ZFS disk for Juju?
[19:57] <bdx> cmars: it would be worth noting that layer le cannot be used on lxd/container deploys
[19:57] <cmars> bdx, true, true. wish there was a way to expose containers through the host machine..
[19:58] <kwmonroe> cory_fu: can you have a hyphen in auto_accessors?  https://github.com/juju-solutions/interface-dfs/blob/master/requires.py#L25
[19:58] <cory_fu> kwmonroe: Hyphens are translated to underscores
[19:59] <kwmonroe> cool, thx cory_fu
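(A small sketch of how that works in an old-style charms.reactive interface layer; the class, scope, and field names here are illustrative, not the actual interface-dfs code:)

    from charms.reactive import RelationBase, scopes

    class DFSRequires(RelationBase):
        scope = scopes.GLOBAL
        # hyphenated keys are exposed as underscore methods,
        # e.g. self.private_address() reads the 'private-address' field
        auto_accessors = ['private-address', 'port']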
[20:01] <cory_fu> cmars: That systemctl mask fix worked perfectly.  Thanks!
[20:01] <jrwren> bdx, cmars, you could use it with some tweaking of the juju-default lxd profile, but it's not likely to do what you want.
[20:01] <cmars> cory_fu, sure thing
[20:02] <cmars> jrwren, ah, that's true. i do some lxd bridging at home, but that doesn't always work on public clouds. ipv4 addresses are in short supply
[20:05] <bdx> cmars, jrwren: I'm thinking layer:letsencrypt might be best consumed by a public facing endpoint
[20:06] <bdx> the public endpoint (nginx/haproxy), could also be a reverse proxy
[20:06] <bdx> cmars, jrwren: so it could be used as an ssl termination + reverse proxy
[20:07] <cmars> bdx, a frontend that negotiates certs for its backends, and then does a passthrough to them (haproxy's tcp mode)... interesting
[20:07] <bdx> cmars: exactly
[20:09] <bdx> ^ is a skewed model bc it takes for granted that a user would be deploying lxd containers on the aws provider :/
[20:12] <bdx> but not in all cases
[20:12] <bdx> just the one I want
[20:12] <cmars> bdx, i think i'd like to keep layer:lets-encrypt lightweight and usable in standalone web apps like mattermost.. but also usable in such a frontend charm as you describe
[20:13] <bdx> entirely
[20:13] <cmars> bdx, with the layer:nginx part removed, i think we'll have something reusable to this end
[20:13] <bdx> right
[20:15] <arosales> bdx: did you see the fixes for lxd/local
[20:15] <arosales> it was indeed a matter of tuning kernel settings
[20:15] <arosales> bdx: ping if you are still hitting the 8 lxd limit
[20:15] <cmars> bdx, it'd be really cool if the frontend could operate as a DNS server too. then it could register subdomains AND obtain certs
[20:16] <bdx> arosales: no I haven't .... just the warning to reference the production lxd docs when bootstrapping .... is this what you are talking about?
[20:17] <bdx> cmars: thats a great idea ... just what I've been thinking too
[20:24] <bdx> arosales, rick_h_: I've seen and heard bits and pieces about the rbd lxd backend, and also that getting rbd backend compat with nova-lxd isn't on the current roadmap. Can you comment on the status/progress of lxd rbd backend work to any extent?
[20:40] <arosales> bdx: yes, you have to tune settings on the host to spawn more than 8 lxd
[20:41] <arosales> bdx: re nova-lxd rockstar may have more info
[20:41] <arosales> bdx: but for general bootstrapping with lxd make sure to tune your host settings per the bootstrap info
[20:42] <arosales> if you want > 8
[20:42] <arosales> :-)
[20:47] <jhobbs> is there a way to rename a model?
[20:48] <rick_h_> jhobbs: no, not currently
[20:48] <jhobbs> rick_h_: ok, thanks
[20:49]  * magicaltrout went to production with a model called Jeff recently.....
[20:50] <arosales> magicaltrout: even for testing I would have thought of a more whimsical model name than Jeff
[20:51] <magicaltrout> ah well, when you're trying to get LXD containers running inside Mesos, my ability to come up with names fails me
[21:04] <arosales> mesos is taxing :-)
[21:10] <icey> arosales: I deployed the entirety of openstack (16 units?) onto LXD at the charmer's summit without tuning lxd at all...
[21:15] <bdx> icey: you were using ubuntu desktop, which has different /etc/sysctl defaults
[21:16] <icey> touche bdx :)
[21:16] <bdx> :)
[21:16] <arosales> icey: I have as well. It depends on your host settings. We do know on stock ubuntu server that the host settings need to be adjusted
[22:44] <anastasiamac> lazyPower: ping