/srv/irclogs.ubuntu.com/2016/10/17/#juju.txt

blahdeblahSo hallyn was telling me on Friday about using KVM with the local provider; is it also possible to use this with the manual provider, i.e. to control a remote KVM host?01:13
blahdeblahOr alternatively, change the container type on a manual host after the environment has been bootstrapped?01:14
spaokif you put maas in a container and use the maas provider, setup ssh keys for the maas user, you can specify the qemu url to a remote machine for the power control, just need ssh01:23
spaokjuju add-machine would spin up a KVM instance on that server01:24
blahdeblahspaok: Yeah - I've already done that; but I'd like to avoid having to add KVM instances to the MAAS controller01:31
=== rmcall_ is now known as rmcall
kjackal_Good morning Juju world!06:58
SDBStefanohi  kjackal08:01
kjackalhi SDBStefano, what's up?08:02
SDBStefanohow could I create a charm for Xenial instead of trusty?08:02
kjackalin the metadata.yaml you set the proper series, let me find an example08:02
kjackalSDBStefano: https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/metadata.yaml#L1008:03
kjackalSDBStefano: although the charm build command when there is a single series decides to put it under the trusty build path (but that might be some misconfiguration I have on my side)08:05
SDBStefanoso, I have added into the yaml file  - series:   - xenial08:06
SDBStefanoand I did the build08:07
SDBStefanoare you saying that the stuff under the trusty directory is now for Xenial ?08:07
kjackalSDBStefano: when you do a charm build first line should say where the output directory is08:09
kjackalSDBStefano: go ahead and open the <build-output-dir>/metadata.yaml . It should say that the series is xenial, right?08:10
SDBStefanook, it creates a new directory named 'builds'; there the metadata.yaml contains "series": ["xenial"]08:11
SDBStefanoso, now I should deploy using 'juju deploy $JUJU_REPOSITORY/builds/use  --series xenial'08:12
kjackalSDBStefano: awesome! then this charm is xenial. Yes, you can deploy it now!08:12
SDBStefanoI have to remove the previous app, but it has an error : - ude                       error        1  ude      local         6  ubuntu08:14
SDBStefanoso the 'juju remove-application ude' is not removing the app08:14
SDBStefanois it possible to force the removal ?08:15
kjackalyou can remove-machine where the unit is with --force08:17
kjackalSDBStefano: ^08:17
SDBStefanoyes, it worked, I'm deploying, thanks for helping08:18
kjackalyou can also deploy the same application with a different name like so: juju deploy $JUJU_REPOSITORY/builds/use myappname --series xenial08:19
kjackalSDBStefano: ^08:19
=== tinwood_ is now known as tinwood
neiljerramjamespage, good morning!10:24
jamespagemorning neiljerram10:25
jamespagehow things?10:25
neiljerramQuite well, thanks!10:25
neiljerramYou suggested that I ping you here in your review of https://review.openstack.org/#/c/382563/10:25
* magicaltrout knows from experience, anyone who greats another user specifically whilst saying good morning in reality has an issue or favour to ask..... "quite well" or not ;)10:25
jamespageneiljerram, yes!10:26
magicaltrouts/greats/greets10:26
neiljerramSo doing that.  I think your proposal is basically right, so just looking for pointers on how to start reworking the code into the correct form...10:26
neiljerrammagicaltrout, I just thought that 'good morning' was a little more friendly than 'ping'!10:27
jamespageneiljerram, so since the original calico integration was written, we've done quite a bit of work to minimize the amount of vendor specific code that is required in the core principle charms10:27
jamespageneiljerram, you already have a subordinate for nova-compute10:27
jamespageneiljerram, that's been complemented with the same approach for neutron-api10:27
neiljerramjamespage, yes, agreed, and makes sense.10:28
jamespageneiljerram, so that all of the SDN specific bits can reside in an SDN specific charm10:28
jamespageneiljerram, right and the nice bit of this is that gnuoy's just completed some work to make writing those a whole lot easier10:28
neiljerramjamespage, Ah, nice.10:28
jamespageneiljerram, we've done quite a bit of work on making reactive and layers work for openstack charming this cycle10:28
jamespageneiljerram, so your charm can be quite minimal10:28
jamespageneiljerram, https://github.com/openstack/charm-neutron-api-odl has been refactored as an example for reference10:29
jamespageneiljerram, but we also have a template for charm create10:29
=== caribou_ is now known as caribou
jamespagegnuoy`, ^^ is that live yet?10:29
jamespageneiljerram, I'd love to get a neutron-api-calico charm up and running, so we can deprecate the neutron-api bits for ocata release and remove them next cycle10:30
gnuoy`jamespage, the template ? That is very nearly ready. I just need to run the guide using the template to check they are both in sync and work10:31
neiljerramjamespage, Just as a thought, might it even make sense to have a single 'neutron-calico' charm that provides both the compute and the server function?  I assume it can detect at runtime which charm it is subordinate to?  If it's subordinate to nova-compute, it would provide the compute function; if it's subordinate to neutron-api, it would provide the server side function.10:31
jamespageneiljerram, that's absolutely fine10:31
jamespage+110:31
jamespageneiljerram, neutron-calico already exists right?10:31
neiljerramjamespage, Thanks.  So, is there an example of another SDN subordinate charm that already uses gnuoy's new facilities?10:32
neiljerramjamespage, Yes, neutron-calico already exists (for the compute function).10:32
jamespageneiljerram, https://github.com/openstack/charm-neutron-api-odl does10:32
jamespageneiljerram, so approach re neutron-calico as a 'does both' type charm10:32
jamespageneiljerram, right now its not possible to upgrade from a non-reactive charm to a reactive charm10:32
jamespageneiljerram, neutron-calico is an older style python charm I think10:33
gnuoy`neiljerram, I'm still smoothing of the rough corners but https://github.com/gnuoy/charm-guide/blob/master/doc/source/new-sdn-charm.rst may help10:33
neiljerramjamespage, TBH we've never really tested for upgrading yet at all.10:33
jamespageneiljerram, how many live deployments are you aware of using calico deployed via juju?10:33
gnuoy`haha good point10:33
jamespageneiljerram, in which case I'd take the hit and move to the new layers+reactive approach now10:34
neiljerramjamespage, Just two: OIL, and a Canonical customer that I'm not sure I can name here.10:34
jamespageneiljerram, ok OIL is manageable10:34
jamespageneiljerram, I'll poke on the other one :-)10:34
jamespagegnuoy`, what do you think to the single subordinate doing both roles approach discussed above?10:35
neiljerramjamespage, I think you're right about neutron-calico being an older style charm.  So perhaps it would be a simpler first step to make a separate neutron-api-calico, in the most up-to-date style (reactive)10:35
gnuoy`jamespage, I'm fine with that10:35
jamespageneiljerram, ack10:36
jamespagegnuoy`, that charm-guide update is for the hypervisor integration - do we have the equiv for API?10:36
gnuoy`jamespage, yep, https://review.openstack.org/#/c/387238/10:37
=== gnuoy` is now known as gnuoy
jamespagegnuoy, sorry I meant the neutron-api subordinate charm version10:38
gnuoyjamespage, no, not atm.10:39
jamespagegnuoy, ok that's what neiljerram will be after10:39
gnuoyack10:39
jamespageneiljerram, you might need to give us a week or two to pull that bit into shape10:39
gnuoyjamespage, then https://github.com/openstack/charm-neutron-api-od is the best bet10:40
gnuoy* https://github.com/openstack/charm-neutron-api-odl10:40
neiljerramjamespage, But I could start by looking at https://github.com/openstack/charm-neutron-api-odl for inspiration, and ask any questions here?10:40
jamespageneiljerram, ^^ yeah that example is our current doc - but we'll be working on that10:40
jamespageneiljerram, yeah that's fine - but we've moved openstack charm discussion over to #openstack-charms10:40
jamespagebut either is still fine10:40
neiljerramjamespage, Ah OK, I'll go there now...10:40
jamespageneiljerram, ta10:41
SDBStefanoHi kjackal, I deployed a local charm, but it's stuck at:10:52
SDBStefanoUNIT    WORKLOAD  AGENT       MACHINE  PUBLIC-ADDRESS  PORTS  MESSAGE10:52
SDBStefanoude/11  waiting   allocating  20       10.15.177.80           waiting for machine10:52
SDBStefanoMACHINE  STATE    DNS           INS-ID          SERIES  AZ10:52
SDBStefano20       pending  10.15.177.80  juju-19dc1e-20  xenial10:52
SDBStefanook, it has just changed into :10:53
SDBStefanoMACHINE  STATE    DNS           INS-ID          SERIES  AZ10:53
SDBStefano20       started  10.15.177.80  juju-19dc1e-20  xenial10:53
SDBStefanoso it seems very slow10:53
=== saibarspeis is now known as saibarAuei
iceybdx, do I remember you trying to get Juju working with Digital Ocean?12:26
SpauldingHmm...13:24
SpauldingWhy is my "juju status" telling me:13:24
SpauldingUnit    Workload     Agent  Machine  Public address  Ports  Message13:24
Spauldingsme/5*  maintenance  idle   5        10.190.134.241         Enabling Apache2 modules13:24
SpauldingBut as far as I can see, all of my tasks have been completed.13:25
SpauldingSo the last one should set the state to "active"13:25
SpauldingHow can I track this?13:25
kjackalSpaulding: is there any info in juju debug-log13:32
Spauldingkjackal:13:33
Spauldingunit-sme-5: 14:31:32 INFO unit.sme/5.juju-log Invoking reactive handler: reactive/sme.py:95:enable_apache_mods13:33
SpauldingHm... it looks like reactive ran this task again13:33
Spauldingthat's why my status changed...13:34
lazyPowerSpaulding - that's the rub. you cannot guarantee ordering of the methods in reactive unless it's a tightly controlled flow through states. if you want something to be run only once, it would be a good idea to decorate it with a @when_not, and then subsequently set the state it's decorating against. do note that you'll have to handle removal of that state if you ever want to re-execute that body of code again.13:40
SpauldinglazyPower: yeah, i already noticed that13:46
Spauldingand I'm using when & when_not13:47
Spauldingit's like puppet13:47
SpauldinglazyPower: I found it!13:50
Spaulding@when_not('http_mods_enabled')13:50
Spauldingit should be sme.http ...13:50
lazyPowerSpaulding  - Sounds like you're on the right track :)13:50
Spauldingbecause of you guys! :)13:51
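lazyPower's run-once advice can be illustrated with a toy dispatcher. This is a hypothetical sketch of how charms.reactive gates handlers on states, not the real library:

```python
# Toy model of reactive state dispatch. Hypothetical sketch only;
# the real implementation lives in charms.reactive.
states = set()
runs = []

def set_state(name):
    states.add(name)

def when_not(state):
    """Gate a handler so it only runs while `state` is unset."""
    def wrap(fn):
        def maybe_run():
            if state not in states:
                fn()
        return maybe_run
    return wrap

@when_not('sme.http_mods_enabled')
def enable_apache_mods():
    runs.append('enable_apache_mods')
    # Set the gating state so the dispatcher skips this handler next time.
    set_state('sme.http_mods_enabled')

enable_apache_mods()  # first dispatch: handler body runs
enable_apache_mods()  # second dispatch: skipped, state is set
```

Removing the state later (as lazyPower notes) would make the handler eligible to run again on the next dispatch.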
=== coreycb` is now known as coreycb
magicaltrouts/greats/greets14:01
lazyPowero/ magicaltrout14:17
fbarillaThe 'juju machines' command does not list any machines, but trying to add one leads to the following error message: ERROR machine is already provisioned14:51
rick_h_fbarilla: ah, is the machine you're trying to add in use in another model?14:52
fbarillaI have two models, 'controller' and 'default'. Neither of them lists the machine I want to add14:55
fbarillaIn the 'controller' model I've the LXD container where juju has been bootstrapped14:56
SpauldinglazyPower: i broke it! :(15:02
Spauldinghttps://gist.github.com/pananormalny/765622f3d2c332bd9dece6f35b9ff26715:02
Spauldingmaybe someone can spot an issue?15:02
Spauldingit's running in a loop - again... :/15:03
Spauldingthe main idea was to run those tasks one by one...15:03
Spauldingfrom the top to the beginning... in that order15:04
Spauldingto the bottom**15:04
lazyPowerSpaulding - i left a few comments on the gist, nothing stands out other than the state of sme.ready being commented out, and instead being set on 33115:07
lazyPowerSpaulding i don't see a remove_state however, and that's the other tell-tale, as removing states causes the reactive dispatcher to re-test the state queue to determine what needs to be run, which makes it possible to enter an inf. loop if you're not careful with how you're decorating the methods.15:08
lazyPowerbut that doesn't appear to be the case, so that last message is more FYI than anything else.15:08
SpauldinglazyPower: about jinja2 I'll probably use it15:09
Spauldingbut basically i would like to have any "prototype" of juju - before OpenStack Barcelona15:09
Spauldingso I'm trying to do this ASAP15:09
SpauldingAfter that I'll have more time to do it properly... now it's just proof-of-concept15:10
lazyPowerright, just a suggestion15:11
lazyPowerit's not wrong to put heredocs in there, but it's not best practice.15:11
SpauldingI know15:11
SpauldingIt's a dirty way - but in this case - it's working15:11
lazyPowerSpaulding - get me an update on that gist with the output of charms.reactive get_states15:12
lazyPoweri imagine whats happened is this isn't a fresh deploy, and you've modified state progression and run an upgrade-charm and now its misbehaving - is that consistent with whats happened?15:12
Spauldingbasically - i'm trying every time to deploy it from scratch15:13
lazyPowerSpaulding - `juju run --unit sme/0 "charms.reactive get_states" `15:15
lazyPowerassuming sme is the name of your charm and its unit 015:15
SpauldinglazyPower: will do15:15
SpauldinglazyPower: is there any other way to remove application if it fails at install hook?15:16
lazyPowerjuju remove-machine # --force15:16
Spauldingcause right now I'm destroying the model15:16
lazyPowerthat'll strip the underlying machine unit from the charm and the charm should finish removal on its own15:16
Spauldingi tried with force with 2.0rc3... couldn't get that working...15:16
Spauldingmaybe it's fixed(?) now...15:16
lazyPowerhttp://paste.ubuntu.com/23339079/15:18
lazyPowerseems like its functioning fine in 2.0.015:18
lazyPowernotice that it leaves/orphans the application/charm for a short while before reaping the charm.15:19
=== saibarAuei is now known as saibarspeis
=== saibarspeis is now known as saibarAuei
=== saibarAuei is now known as saibarspeis
cory_fupetevg: I added a bunch of comments in reply to and based on your review on https://github.com/juju-solutions/matrix/pull/216:14
cory_fubcsaller: Our feedback awaits you.  :)16:14
bcsallercory_fu, petevg: thank you both16:15
petevgnp16:15
petevgcory_fu: thx. Reading your comments ...16:16
aisraelcory_fu: Do you know what might be causing this? http://pastebin.ubuntu.com/23339455/16:19
aisraelFrom running charm build16:19
cory_fuaisrael: Not sure, but I'd guess that the log message contains utf8 encoded data and should maybe be .decode('utf8')ed before being logged?16:22
cory_fuaisrael: Can you drop a breakpoint into /usr/lib/python2.7/dist-packages/charmtools/utils.py and see what the output var holds?16:22
aisraelcory_fu: Not yet. I'm doing a charm school/training and they hit it. I'll dig in deeper, though. Thanks!16:25
=== dames is now known as thedac
cory_fuaisrael: I'd look for unicode characters in their yaml files, then.  metadata.yaml and layer.yaml specifically.  Otherwise, I'm not really sure16:26
aisraelWeird. We haven't touched those at all.16:27
aisraelcory_fu: looks like it may be related to an actions.yaml that's UTF-8-encoded16:35
cory_fuaisrael: Strange.  Well, we should definitely handle that better16:35
lazyPoweris there any particular reason we're limiting that to ascii? (just curious)16:36
aisraelcory_fu: definitely. Once I confirm, I'll file a bug16:36
cory_fuThanks16:37
aisraelIt definitely looks like a locale/encoding issue. I had them send me the charm and I built it locally with no problem16:45
cory_fuHrm16:45
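cory_fu's guess above is that subprocess output containing UTF-8 bytes is interpolated into a log message without being decoded. A minimal sketch of the failure mode and fix (file content and names are hypothetical; the real failure was inside charmtools under Python 2):

```python
# Hypothetical bytes as they might come back from a subprocess run by
# `charm build`, e.g. a UTF-8-encoded actions.yaml echoed in a log line.
raw = "actions:\n  café-action: {}".encode("utf-8")

# Under Python 2, interpolating non-ASCII bytes into a unicode log
# message triggers UnicodeDecodeError, because the implicit conversion
# assumes ASCII. Decoding explicitly, as cory_fu suggests, avoids
# depending on the locale's default codec:
text = raw.decode("utf-8")
```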
cory_fuIs anyone having an issue with 2.0 and lxd where the agents never report as started?  Just started for me with the GA release17:08
cory_fuSeems to be an issue with the ssh keys17:10
bdxcory_fu: I've been having successful deploys so far17:11
bdxcory_fu, cmars: should we also set status to 'blocked', or 'error' here -> http://paste.ubuntu.com/23339654/17:13
cory_fubdx: I'd say "blocked", yeah17:14
bdxcory_fu, cmars: or is setting the supported series in metadata enough?17:15
bdxprobably both for good measure?17:15
cory_fubdx, cmars: Setting the supported series in the metadata could (probably would) be overwritten by the charm layer.17:15
cmarsbdx, cory_fu please open an issue. iirc think you can force series with juju deploy --series17:15
cmarscory_fu, i've noticed something with that.. if you have multiple layers that both specify the same series in metadata, the series gets output twice. and the CS rejects that17:16
cmarscory_fu, i think I opened a bug..17:16
cmars(but i forget where.. so many projects)17:17
cmarsbdx, interested to get your thoughts on https://github.com/cmars/layer-lets-encrypt/issues/1 as well17:17
cmarsi think we might be able to make this a little more reusable without introducing too much complexity17:18
cory_fucmars: https://github.com/juju/charm-tools/issues/257  I do think the series list needs to be de-duped17:18
cmarscory_fu, thanks :)17:18
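De-duplicating the merged series list while keeping the declared order could look like this. A sketch only; charm-tools' actual fix for the issue above may differ:

```python
def dedupe_series(series):
    """Drop repeated series entries while preserving first-seen order."""
    seen = set()
    out = []
    for s in series:
        if s not in seen:
            seen.add(s)
            out.append(s)
    return out

# Two layers both declaring xenial produce a duplicate after the merge:
merged = dedupe_series(["xenial", "trusty", "xenial"])
```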
cory_fuOdd.  It seems to only be trusty lxd instances that get stuck in pending.  Guess I'll try torching my trusty image17:45
* magicaltrout hands cory_fu the matches18:02
cory_fumagicaltrout: Thanks, but it didn't help18:03
magicaltroutawww18:26
cmarscory_fu, xenial host and trusty container?18:35
cory_fucmars: Yes18:35
cory_fucmars: It looks like the trusty image isn't getting the lxdbr0 interface for some reason, but I can't find anything obvious in any of the logs I could think of to check18:35
cmarscory_fu, one thing it could be, is a known systemd issue18:36
cory_fuOh?18:36
cmarscory_fu: 22-09-2016 15:23:18 < stgraber!~stgraber@ubuntu/member/stgraber: cmars: if you mask (systemctl mask) the systemd units related to binfmt (systemctl -a | grep binfmt) and reboot, then the problem is gone for good. This is related to systemd's use of automount and not to binfmt-misc (which just ships a bunch of binfmt hooks)18:36
cmarscory_fu, a telltale symptom is that the trusty container gets stuck in mountall18:37
cory_fucmars: Huh.  Worth a shot.  FWIW, this only started with the GA.  It was fine on the last RC18:37
cory_fucmars: How do I tell where it's stuck?18:37
cmarscory_fu, a ps -ef would show a mountall process, and /var/log/upstart/mountall.log shows some binfmt errors18:38
cmarscory_fu, a quickfix is this (pardon my language): https://github.com/cmars/tools/blob/master/bin/lxc-unfuck18:38
cmarsbut masking the systemd units is a better solution18:38
cmarscory_fu, unless this is a completely different issue, in which case, forget everything I've said :)18:39
cory_fucmars: Yep, that seems to be exactly the issue!18:39
bdxcory_fu: http://paste.ubuntu.com/23340056/18:40
bdxcory_fu: can't seem to replicate :-(18:41
cory_fubdx: That looks like what I'm seeing.  The trusty machine never goes out of pending18:42
cory_fucmars: I'm not really familiar with systemd mask.  Do you have the exact commands handy, or can you point me to some docs?18:42
bdxcory_fu: it started .... http://paste.ubuntu.com/23340072/18:43
cory_fubdx: Oh, well, it never does for me.  But it sounds like cmars has the solution for me18:43
bdxstrange ... nice18:43
cmarscory_fu, this *should* do it: sudo systemctl mask $(systemctl -a | awk '/binfmt/{print $2}')18:46
cmarscory_fu, this might break binfmt on your host, if you run .NET exe binaries directly, for example, that might stop working18:46
cory_fucmars: I don't think I do, but good to know.  No way to have both work, I take it?18:47
cmarscory_fu, ideally, someone would fix this issue in systemd or binfmt or wherever it needs to be done18:47
cory_fucmars: I don't think that awk is right.  It just gives me "loaded" four times18:48
cmarscory_fu, what do you get from: systemctl -a | grep binfmt ?18:48
cory_fuI assume it should be $1 instead18:48
cory_fucmars: http://pastebin.ubuntu.com/23340090/18:49
cmarscory_fu, on my machine, $1 is a big white dot for some reason. could be my terminal type?18:49
cmarscory_fu, yep, $1 for you18:49
cory_fuCould be.  I just have leading whitespace.18:49
cory_fucmars: In case this causes issues, what would be the command to undo this?18:49
cmarscory_fu, systemctl unmask the units that were masked18:50
cory_fuOk, thanks18:50
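The $1-versus-$2 confusion comes from systemctl prefixing a status-dot column on some terminals, which shifts awk's field numbers. A Python sketch that picks the unit name by content rather than by column position (the sample output here is hypothetical):

```python
# Hypothetical sample of `systemctl -a` output. On some terminals a
# status dot occupies the first column, which is why awk's $1 vs $2
# differed between the two machines above.
sample = """\
* proc-sys-fs-binfmt_misc.automount loaded inactive dead    Arbitrary Executable
  proc-sys-fs-binfmt_misc.mount     loaded active   mounted Arbitrary Executable
  ssh.service                       loaded active   running OpenBSD Secure Shell
"""

# Select tokens by content instead of column position:
units = [tok
         for line in sample.splitlines()
         for tok in line.split()
         if "binfmt" in tok]
```

These names could then be passed to `systemctl mask` (and later `systemctl unmask` to undo it).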
=== alexisb is now known as alexisb-afk
=== med_` is now known as medberry
troontjehi all :)19:06
troontjeI followed this instruction : https://jujucharms.com/docs/stable/getting-started19:06
troontjeits pretty clear, except what exaclty does that 20GB mean?19:07
marcoceppitroontje: during the lxd init step?19:08
troontjeexaclty19:08
troontjewhen setting the loop device19:08
marcoceppitroontje: the size of the loop back device to use for storage on LXD machines19:08
marcoceppiwhen LXD boots machines, they'll all be allocated a slice of that storage19:08
troontjemarcoceppi: ah ok, so that 20GB will be shared among all the machines together?19:09
marcoceppitroontje: exactly19:09
troontjemarcoceppi: I was trying to install openstack base with that 20 GB; although it does not fail, it just does not continue the process19:10
troontjeI first thought that value was per machine19:10
marcoceppitroontje: yeah, it's for all machines, and each machine boots with like 8GB of storage from that 2019:11
troontjehaha, well I cant blame it that it sort of failed19:11
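The arithmetic behind the stall, using the numbers as stated in the conversation (a 20GB loop-back pool, roughly 8GB per LXD machine):

```python
# Numbers from the conversation above; actual per-machine allocation
# may vary by LXD configuration.
pool_gb = 20
per_machine_gb = 8
max_machines = pool_gb // per_machine_gb  # only a couple of machines fit
```

With openstack-base needing many more machines than that, the deployment simply waits for storage that never arrives rather than failing outright.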
=== alexisb-afk is now known as alexisb
troontjemarcoceppi: ok, so I set everything back and the canvas is clean again19:17
troontjeHow can I increase that value for the storage19:17
troontjedestroying it and making it new is no problem btw19:18
cory_fubcsaller, petevg: Resolved merge conflicts and added TODO comment re: the model connection.  Going to merge now19:19
bdxcmars: what happens when you don't have an 'A' record precreated -> http://paste.ubuntu.com/23340240/19:20
bdxcmars: or created pointing at another ip :/19:21
cmarsbdx, pretty sure in that case that the standalone method will fail19:21
bdxyea19:22
bdxwe should call that out in layer le readme19:22
cmarsbdx, ack, good call19:22
cory_fubcsaller, petevg: Merged19:23
petevgbcsaller, cory_fu: Just pushed a small fix to master: glitch now grabs context.juju_model rather than context.model (good name change; just needed to update glitch's assumptions).19:28
troontjehow can I increase that zfs disk for JUJU?19:31
bdxcmars: it would be worth noting that layer le cannot be used on lxd/container deploys19:57
cmarsbdx, true, true. wish there was a way to expose containers through the host machine..19:57
kwmonroecory_fu: can you have a hyphen in auto_accessors?  https://github.com/juju-solutions/interface-dfs/blob/master/requires.py#L2519:58
cory_fukwmonroe: Hyphens are translated to underscores19:58
kwmonroecool, thx cory_fu19:59
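The translation cory_fu describes can be sketched in one line; the real mechanism lives in charms.reactive's auto_accessors, and this is only illustrative:

```python
def accessor_name(field):
    """Translate a hyphenated relation field name into a Python-safe
    accessor name, as charms.reactive does for auto_accessors."""
    return field.replace("-", "_")
```

So a field like `private-address` would be reachable as `private_address` on the relation object.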
=== med_ is now known as Guest62846
cory_fucmars: That systemctl mask fix worked perfectly.  Thanks!20:01
jrwrenbdx, cmars, you could use it with some tweaking of juju-default lxd profile, but its not likely to do what you want.20:01
cmarscory_fu, sure thing20:01
cmarsjrwren, ah, that's true. i do some lxd bridging at home, but that doesn't always work on public clouds. ipv4 addresses are in short supply20:02
bdxcmars, jrwren: I'm thinking layer:letsencrypt might be best consumed by a public facing endpoint20:05
bdxthe public endpoint (nginx/haproxy), could also be a reverse proxy20:06
bdxcmars, jrwren: so it could be used as an ssl termination + reverse proxy20:06
cmarsbdx, a frontend that negotiates certs for its backends, and then does a passthrough to them (haproxy's tcp mode)... interesting20:07
bdxcmars: exactly20:07
bdx^ is a skewed model bc it takes for granted that a user would be deploying lxd containers on the aws provider :/20:09
bdxbut not in all cases20:12
bdxjust the one I want20:12
cmarsbdx, i think i'd like to keep layer:lets-encrypt lightweight and usable in standalone web apps like mattermost.. but also usable in such a frontend charm as you describe20:12
bdxentirely20:13
cmarsbdx, with the layer:nginx part removed, i think we'll have something reusable to this end20:13
bdxright20:13
arosalesbdx: did you see the fixes for lxd/local20:15
arosalesit was indeed to tune kernel settings20:15
arosalesbdx: ping if you are still hitting the 8 lxd limit20:15
cmarsbdx, it'd be really cool if the frontend could operate as a DNS server too. then it could register subdomains AND obtain certs20:15
bdxarosales: no I haven't .... just the warning to reference the production lxd docs when bootstrapping .... is this what you are talking about?20:16
bdxcmars: thats a great idea ... just what I've been thinking too20:17
bdxarosales, rick_h_: I've seen and heard bits and pieces about the rbd lxd backend, and also that getting rbd backend compat with nova-lxd isn't on the current roadmap. Can you comment on the status/progress of lxd rbd backend work to any extent?20:24
arosalesbdx: yes, you have to tune settings on the host to spawn more than 8 lxd20:40
arosalesbdx: re nova-lxd rockstar may have more info20:41
arosalesbdx: but for general bootstrapping with lxd make sure to tune your host settings per the bootstrap info20:41
arosalesif you want > 820:42
arosales:-)20:42
jhobbsis there a way to rename a model?20:47
rick_h_jhobbs: no, not currently20:48
jhobbsrick_h_: ok, thanks20:48
* magicaltrout went to production with a model called Jeff recently.....20:49
arosalesmagicaltrout: even for testing I would have thought of a more whimsical model name than Jeff20:50
magicaltroutah well, when you're trying to get LXD containers running inside Mesos my ability to come up with names fails me20:51
arosalesmesos is taxing :-)21:04
iceyarosales: I deployed the entirety of openstack (16 units?) onto LXD at the charmer's summit without tuning lxd at all...21:10
bdxicey: you were using ubuntu desktop which has different default /etc/sysctl21:15
iceytouche bdx :)21:16
bdx:)21:16
arosalesicey: I have as well. It depends on your host settings. We do know on stock ubuntu server that the host settings need to be adjusted21:16
anastasiamaclazyPower: ping22:44

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!