Nick | Message | Time |
---|---|---|
Hue | heyy | 06:01 |
Hue | i want to be an ubuntu user, teach me how to set it up! | 06:02 |
Odd_Bloke | marcoceppi: I'm catching up on email from Friday; I have submitted a MP for the charm-helpers changes in https://code.launchpad.net/~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/handle_mounted_ephemeral_disk/+merge/261356 | 09:14 |
Odd_Bloke | marcoceppi: You can find that MP at https://code.launchpad.net/~daniel-thewatkins/charm-helpers/lp1370053/+merge/260864 | 09:14 |
Odd_Bloke | marcoceppi: But I wasn't seeing any movement on that, so I was carrying the patch locally until it landed. | 09:15 |
marcoceppi | Odd_Bloke: awesome, thanks, I'll take a look today! | 09:28 |
Odd_Bloke | marcoceppi: Thanks! | 09:28 |
Odd_Bloke | marcoceppi: (Up early, or in a different TZ?) | 09:28 |
marcoceppi | Odd_Bloke: up early, I've got a flight to catch | 09:28 |
Odd_Bloke | Early flights. D: | 09:28 |
bloodearnest | Does anyone know how to configure storage/block-storage-broker to work with local provider? | 10:25 |
bloodearnest | I can set provider to be local on storage, block-storage-broker just doesn't like deploying on local at all afaics | 10:26 |
lazyPower | bloodearnest: it's only confirmed working on AWS and OpenStack | 10:26 |
bloodearnest | lazyPower, right, I don't want to actually use it - I just want to have my services/relations work unchanged on local | 10:27 |
bloodearnest | sounds like I need to conditionally add broker + relation if I detect we're using openstack | 10:28 |
lazyPower | that or file a bug so BSB can determine if it's running locally and no-op | 10:28 |
lazyPower | with the storage support juju has grown, i wonder how much shelf life BSB will retain. | 10:29 |
bloodearnest | lazyPower, indeed, but it will be a good while before we can use 1.24 in prod, and I need it *now*, so... :( | 10:30 |
lazyPower | ah | 10:30 |
lazyPower | fair counter point | 10:30 |
bloodearnest | hence why I am not that motivated to fix the charm, too | 10:30 |
bloodearnest | as it's on life support | 10:30 |
bloodearnest | lazyPower, so, about these docker juju images | 10:31 |
lazyPower | that, i know something about ;) | 10:31 |
lazyPower | what's up? | 10:31 |
bloodearnest | I think this might be useful for the devs on our team, who have had bad experiences trying to get mojo/juju setup to run reliably and fast on local provider | 10:32 |
lazyPower | charmbox does work with the local provider, but it requires a bit of a jig to get it to work | 10:32 |
lazyPower | you have to bootstrap the local provider, then fire up the docker image. it's a bit of a strategic process, and can sometimes yield odd behavior | 10:32 |
lazyPower | a lot of that should go away if we ever get a LXD based local provider. | 10:33 |
bloodearnest | lazyPower, so does it do nested lxc's? Or deploy to local provider on the host? | 10:33 |
lazyPower | you're in an isolated sandbox for dependencies, and leveraging juju-client effectively. The local provider exists on the host | 10:33 |
lazyPower | it's not as native an experience as the vagrant image provides, but it's faster and lighter-weight | 10:34 |
bloodearnest | lazyPower, thumper said he was working on lxc provider as friday project, dunno if he's made progress | 10:34 |
bloodearnest | lazyPower, so you still need juju on the host? | 10:34 |
lazyPower | to leverage local provider, yes | 10:34 |
bloodearnest | right | 10:34 |
lazyPower | the AppArmor/cgroup shenanigans in docker are wonky, to say the least. | 10:34 |
bloodearnest | indeed | 10:34 |
lazyPower | i have yet to find the right brew to get a local provider running in the docker image | 10:34 |
lazyPower | cory_fu is the one that actually pioneered that front and found success | 10:35 |
lazyPower | bloodearnest: the instructions for running local provider w/ the docker image are outlined in the charmbox readme | 10:35 |
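
The bootstrap-first sequence lazyPower describes would look roughly like this; the image name and volume mount below are assumptions, so defer to the charmbox README for the authoritative steps:

```bash
# Sketch only -- the image name and volume mount are assumptions;
# the charmbox README has the real instructions.
juju bootstrap -e local            # bootstrap the local provider on the host first
docker run --rm -it --net=host \
    -v "$HOME/.juju":/home/ubuntu/.juju \
    jujusolutions/charmbox         # then drive juju from inside the sandbox
```
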
bloodearnest | my attempts to find a usage that works with both dev and prod have been blocked by that issue. LXC's AppArmor profiles are much simpler | 10:35 |
bloodearnest | lazyPower, thanks, I will try it out | 10:35 |
lazyPower | well, we're using it in Jenkins | 10:35 |
lazyPower | any of the juju-ci results you see have been run through these images. That was our primary testing ground for the images before pushing them out into the wild, getting them stable enough to run our CI env | 10:36 |
bloodearnest | lazyPower, to deploy production services? | 10:36 |
lazyPower | http://juju-ci.vapour.ws/view/Juju%20Ecosystem/job/charm-bundle-test-aws/181/console | 10:37 |
lazyPower | for example | 10:37 |
lazyPower | as well as my Drone setup that's achieving the same results: http://drone.dasroot.net/github.com/chuckbutler/docker-charm/drone-juju-integration/4ce159d936f4a42ac910aa3ec7f4d498d209dcdb | 10:38 |
bloodearnest | right | 10:38 |
bloodearnest | so, I'm talking about using docker to deploy app payloads in a charm | 10:39 |
lazyPower | That's completely doable too, what's the application stack you're trying to deploy? | 10:40 |
bloodearnest | many, but let's pick ubuntu sso, a django app | 10:41 |
lazyPower | bloodearnest: actually, this may be of some interest to you. We built a docker/juju based ad-hoc PAAS for dockercon. | 10:41 |
lazyPower | in the interest of saving time, i wrote a single compose charm that clones a git repo, and runs docker-compose pull && docker-compose up | 10:42 |
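
A minimal sketch of what such a compose charm's start hook could look like; the repo URL and install path here are hypothetical:

```bash
#!/bin/bash
# hooks/start -- sketch of such a compose charm; repo URL and path are hypothetical
set -e
REPO="https://example.com/compose-app.git"
DIR="/opt/compose-app"
[ -d "$DIR" ] || git clone "$REPO" "$DIR"   # clone once, reuse on re-runs
cd "$DIR"
docker-compose pull      # refresh the images referenced by docker-compose.yml
docker-compose up -d     # bring the stack up detached
```
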
bloodearnest | the thing is, I want to dev on the code base locally, using the charm deployed on local provider as the dev env | 10:42 |
lazyPower | ah, that's going to be tricky | 10:42 |
bloodearnest | but docker build doesn't work in an lxc | 10:42 |
lazyPower | docker in lxc is notoriously painful | 10:42 |
bloodearnest | right | 10:42 |
lazyPower | I have a MAAS box sitting behind me i use for that | 10:42 |
lazyPower | or i shell out the clams for cloud time | 10:42 |
* bloodearnest thinks lxd is likely gonna work better for us than docker | 10:43 |
lazyPower | thats entirely possible | 10:45 |
lazyPower | if only the rest of the community felt that way, we wouldn't be investing as much effort in bridging the gap :) | 10:45 |
g3naro | what's the difference between lxd and docker? | 11:13 |
g3naro | or got a good link to an article on this | 11:14 |
lazyPower | g3naro: http://www.zdnet.com/article/ubuntu-lxd-not-a-docker-replacement-a-docker-enhancement/ | 11:19 |
lazyPower | g3naro: to put it in my own words - LX[D|C] is focused on full OS containers, a very flexible solution that still gives you the full surroundings of your OS, like an init system and multiple processes in the container. It's a lighter-weight alternative to KVM without hardware-layer isolation. Docker is intended to be immutable process containers, where you deliver a single application thread per container - such as strictly a web server, or middleware, or a worker process - while LXC can handle the full stack in a single container. There are some key differences such as the backend technology: docker moved to libcontainer in 2014, while LXC is still based on the LXC/cgroups code being cranked out by stgraber's team. | 11:21 |
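
The contrast shows up directly in the two CLIs; a hedged illustration (the image names and aliases are only examples):

```bash
# Docker: one process per container, built from an image
docker run -d --name web nginx      # the container *is* the nginx process

# LXD: a full OS container with its own init and many processes
lxc launch ubuntu:trusty web        # boots a complete Ubuntu userspace
lxc exec web -- ps aux              # init, cron, sshd... a whole system inside
```
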
g3naro | ahhh | 11:24 |
g3naro | ok, yeah i have been using lxc and it seems like a better solution than running a kvm machine | 11:24 |
g3naro | i guess you could just build a cluster of boxes with MAAS and then lxc containers, vs openstack+kvm? | 11:24 |
lazyPower | we actually have some openstack deployments that leverage LXC for density on a small number of machines | 11:25 |
lazyPower | it co-locates services using LXC isolation to condense the requirements for a devstack down to ~2 machines. | 11:25 |
g3naro | but what would you need to have lxc on top of openstack for, then? | 11:25 |
lazyPower | basically run everything on one machine, then fire up nova-compute on a secondary machine dedicated to providing the vm images. | 11:25 |
g3naro | interesting | 11:25 |
lazyPower | There's a nova-lxd driver charm, which will allow you to consume LXD as your hypervisor. | 11:25 |
g3naro | ahh | 11:26 |
g3naro | so you're juju'ing it on there anyways | 11:26 |
lazyPower | we're all over that stack with containers :) | 11:26 |
lazyPower | hattip @ jamespage and company for exploring that | 11:26 |
g3naro | interesting concepts | 11:26 |
g3naro | so lxd is the hypervisor | 12:16 |
g3naro | https://linuxcontainers.org/lxd/introduction/ | 12:17 |
coreycb | gnuoy, jamespage: hello, can I get a review of this from one of you? https://code.launchpad.net/~corey.bryant/charm-helpers/install-warning/+merge/264340 | 12:17 |
jamey-uk | I'm trying to deploy my Rails apps using the Rails charm but it fails when it comes to building the json Gem: https://gist.github.com/anonymous/8271efd25a30732e12c4. This application has been deployed locally and to production Ubuntu servers with no issue. Does anyone know what could be causing this? | 13:55 |
coreycb | niedbalski, would you be able to review this by any chance? https://code.launchpad.net/~corey.bryant/charm-helpers/install-warning/+merge/264340 | 14:55 |
beisner | hi gnuoy, coreycb - this lil race is becoming more noticeable. it's always been a bit racy, but it's pretty consistent with a few of the charms. input on getting away from an arbitrary wait on this one? bug 1474030 | 15:07 |
mup | Bug #1474030: amulet _get_proc_start_time has a race which causes service restart checks to fail <amulet> <openstack> <uosci> <Charm Helpers:New> <neutron-api (Juju Charms Collection):New> <neutron-gateway (Juju Charms Collection):New> <openstack-dashboard (Juju Charms Collection):New> <https://launchpad.net/bugs/1474030> | 15:07 |
coreycb | beisner, basically it just expects the pid to change since the service is restarted, so maybe the code could get the pid ahead of time, then make the config change, then watch the pid until it changes | 15:09 |
beisner | coreycb, yeah i think that would simplify things too. check pid before. change something. watch with a timeout, to see if pid changes. | 15:10 |
coreycb | beisner, sounds good | 15:13 |
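
The actual amulet helper lives in Python, but the proposed flow in shell terms would be roughly this (the service name, config key, and timeout below are all illustrative):

```bash
# Illustrative only: service name, config key, and timeout are made up.
old_pid=$(pgrep -o -f cinder-api)     # oldest matching pid before the change
juju set cinder verbose=true          # config change that should trigger a restart
deadline=$((SECONDS + 60))
until [ "$(pgrep -o -f cinder-api)" != "$old_pid" ]; do
    if [ "$SECONDS" -ge "$deadline" ]; then
        echo "service did not restart within 60s" >&2
        exit 1
    fi
    sleep 2
done
```
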
beisner | coreycb, gnuoy - on a different race topic :-/ the mojo-os approach of using juju run on all units to determine if hooks and relation data have settled ... appears to no longer be reliable. | 15:14 |
gnuoy | beisner, I think one of the charms has a fix | 15:14 |
beisner | coreycb, it's baaaack - even with a double juju run check. unexpected relation data in cinder cinder-ceph storage-backend - key 'broker_rsp' does not exist | 15:14 |
beisner | gnuoy, coreycb - ^ juju-deployer says a-ok, ready. the juju run x 2 against all units says a-ok. yet a bit of relation data isn't always present. if i run it manually, then wait who-knows-how-long, that relation data eventually arrives. cannot for the life of me figure out how to know when. | 15:15 |
beisner | gnuoy, re: pid race, do you know which? i see a few variants on the pid check in c-h. | 15:16 |
gnuoy | beisner, sorry, otp | 15:27 |
beisner | np gnuoy | 15:27 |
beisner | gnuoy, coreycb - i'm dealing with 2 separate races. 2 bugs to track: | 15:27 |
beisner | bug 1474036 | 15:27 |
mup | Bug #1474036: amulet openstack tests have race - some tests start before relations/hooks have settled <amulet> <openstack> <uosci> <Charm Helpers:New> <cinder-ceph (Juju Charms Collection):New> <https://launchpad.net/bugs/1474036> | 15:27 |
beisner | bug 1474030 | 15:28 |
mup | Bug #1474030: amulet _get_proc_start_time has a race which causes service restart checks to fail <amulet> <openstack> <uosci> <Charm Helpers:New> <neutron-api (Juju Charms Collection):New> <neutron-gateway (Juju Charms Collection):New> <openstack-dashboard (Juju Charms Collection):New> <https://launchpad.net/bugs/1474030> | 15:28 |
=== scuttle|afk is now known as scuttlemonkey | ||
=== ming is now known as Guest86022 | ||
=== ericsnow is now known as ericsnow_afk | ||
=== lukasa is now known as lukasa_away | ||
=== liam_ is now known as Guest62504 | ||
mbruzek | marcoceppi: I need to run a grep in a set -e bash script that might fail, but I need the result of the grep (0 or 1). I forget how to do that without exiting the script. Can you enlighten me? | 17:43 |
=== lukasa_away is now known as lukasa | ||
lazyPower | mbruzek: set +e | 17:43 |
lazyPower | then check $? | 17:44 |
mbruzek | lazyPower: Yeah I guess I can do that, but this is a charm script so the best practice is to use set -e | 17:44 |
lazyPower | mbruzek: temporarily disable error checking then re-enable | 17:44 |
lazyPower | thats acceptable in a charm | 17:44 |
thedac | grep $SEARCH || true also works IIRC | 17:44 |
mbruzek | lazyPower: I know I can change it just for that command. .. | 17:44 |
mbruzek | thanks to you both! | 17:45 |
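
Both suggestions, sketched (the pattern and file below are illustrative); note that a bare `|| true` discards the status, so capture it explicitly if you need the 0-or-1 result:

```bash
#!/bin/bash
set -e  # charm best practice: abort on unexpected errors

# Variant 1: capture the status without tripping errexit.
# The `|| rc=$?` arm makes the whole list succeed while saving the code.
rc=0
grep -q "$PATTERN" /etc/example.conf || rc=$?
echo "grep exited with $rc"   # 0 = match, 1 = no match, 2 = read error

# Variant 2 (lazyPower's suggestion): toggle errexit around the call.
set +e
grep -q "$PATTERN" /etc/example.conf
rc=$?
set -e
```
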
=== ericsnow_afk is now known as ericsnow | ||
pmatulis | does a configuration change to environments.yaml always require a bootstrap, and thus the current env needs to be destroyed first? | 21:51 |
thumper | pmatulis: changing something in environments.yaml does not impact any running environments | 22:02 |
thumper | pmatulis: if you want to change a setting on a running environment, use 'juju set-env' | 22:02 |
pmatulis | thumper: so i need to do everything run-time (juju set ...) right? | 22:02 |
thumper | pmatulis: bootstrap uses the values in environments.yaml, but if you have a running environment that you are trying to change, then yes, set-env | 22:03 |
thumper | set is for service config | 22:03 |
pmatulis | thumper: so easy to lose track of configuration changes i suppose? | 22:04 |
pmatulis | ok re 'set-env vs set' | 22:04 |
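
The distinction, summarized as commands (the option names and values here are only examples):

```bash
# environments.yaml is only read at bootstrap time:
juju bootstrap -e myenv

# settings on a *running* environment are changed with set-env:
juju set-env logging-config="<root>=DEBUG"

# `juju set` is for service (charm) configuration, not the environment:
juju set mysql max-connections=512
```
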
pmatulis | thumper: any idea -> http://paste.ubuntu.com/11874729/ | 22:16 |
thumper | pmatulis: yeah, some environment attributes are immutable after an environment has started | 22:46 |
pmatulis | thumper: ok, time to restart. thanks | 22:48 |