[15:28] Yay, upgrading to the latest LXC broke my workflow. :)
[15:31] doh
[15:38] No worries. I needed to migrate to LXD anyway
[15:38] but, y'know, not today. :)
[16:19] wait, there's lxd now?
[16:19] * greg-g missed something
[16:21] greg-g: Yeah, that's for user-land lxc containers
[16:22] ahhhh
[16:31] I can't go back to old-school lxc anymore
[16:31] I didn't even know people still used old lxc
[16:31] heh
[16:31] I still miss the bind to my home dir in lxd
[16:31] that was the killer feature that I can't get past
[16:31] yeah, some of us still have "legacy"
[16:32] yeah, it'd be nice to do a convenience thing like how vagrant does it
[16:32] though it was fun to launch 40 lxd containers with 4 models of realtime-syslog-analytics at once yesterday
[16:32] pass a --developer or something
[16:32] Strangely, I added the stable ppa and upgraded and things are back to normal
[16:32] rick_h_: you can still bind homedir with lxd.
[16:32] cmaloney: the thing is old lxc was so confusing to me that I preferred to not use it
[16:32] jrwren: ? I couldn't find a way to do that?
[16:32] it's like, doing simple things required reading the man page
[16:33] now it's all super simple and the commands make sense
[16:33] jcastro: If I could get lxd to run without bitching then I'd switch to it
[16:33] 16.04?
[16:33] jrwren: there was stuff about how mounting the home dir wasn't allowed as part of preventing security issues/etc
[16:33] 14.04
[16:33] oh, well you can just do it whenever you upgrade?
[16:33] rick_h_: yes, it requires a privileged container, which is what lxc was using before. I'll link you to a script.
[16:33] rick_h_: I've done it. Lots of yellow team is doing it.
[16:33] because tbh, if you're using lxd you also should use zfs, it changed my life
[16:34] jcastro: Right, and I'll make that change later. :)
[16:34] +1 lxd on ZFS is a must.
[16:34] like, real measurable minutes per deployment that adds up to real efficiency gains
[16:34] even if it is a ZFS that is on a loopback file. It's still excellent.
[16:34] and that's just for one person
[16:34] jcastro: jrwren: do you use zfs on its own device?
[16:34] yea, I just use that loopback atm
[16:34] if your team is using it you're literally saving money
[16:34] rick_h_: I have machines on both.
[16:34] rick_h_: yeah, one HDD, one SSD for caching
[16:35] rick_h_: ZFS on LVM thinpool too
[16:35] jcastro: We're just now working on Ansible deployments, so baby steps
[16:35] loopback is a nice workaround but it prefers dedicated disks
[16:35] * cmaloney braces for the OMGWTFBBQ
[16:35] because I might as well take advantage of snapshotting etc. for other things
[16:36] what I do on all new installs now is install, then I make a new zfs pool with a dedicated disk, I call it "home"
[16:36] which then gets automounted as /home
[16:36] then recreate my user dir, chown it, blammo, zfs for home directory
[16:36] tell lxd and docker to use the zfs backends, done.
[16:37] less than 2 seconds for each new instance of an OS
[16:37] rick_h_: https://github.com/bac/yellow-tools/blob/master/lxd-launch
[16:37] jrwren: ty
[16:37] rick_h_: the tricks are -c security.privileged=true and lxc config device add $name home disk source=$HOME path=/home/$user
[16:37] jrwren: gotcha
[16:38] See, this is the ubuntu-us-mi channel I love: make an off-handed comment and get multiple ways on how to make things better. :)
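
A minimal sketch of the home-dir bind jrwren describes at 16:37, assuming a container named "dev" and an "ubuntu" user inside the guest (his lxd-launch script linked above is the fuller version):

    # privileged container so host/guest uids line up -- the security
    # trade-off rick_h_ mentions at 16:33
    lxc launch ubuntu:16.04 dev -c security.privileged=true
    # bind the host $HOME into the guest as a disk device
    lxc config device add dev home disk source=$HOME path=/home/ubuntu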
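And a rough sketch of jcastro's dedicated-disk flow from 16:36; the device name (/dev/sdb) and the LXD 2.x init flags are assumptions, not from the chat:

    # pool named "home" auto-mounts at /home (use -f/-m if /home isn't empty)
    sudo zpool create home /dev/sdb
    sudo mkdir /home/$USER && sudo chown $USER: /home/$USER
    # point LXD at the ZFS backend (LXD 2.x-era flags)
    sudo lxd init --auto --storage-backend zfs --storage-pool home
    # docker: set "storage-driver": "zfs" in /etc/docker/daemon.json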
[16:38] lol
[16:38] without a doubt lxd with zfs is probably one of the top 5 things this decade that has literally changed my professional life
[16:38] brb - lunch
[16:38] it's up there with "SSDs"
[16:38] and "3 monitors"
[16:39] ha!
[16:39] now if only charms weren't such a pain, eh jcastro? /zing
[16:39] * jrwren ducks
[16:39] yeah but that's your fault jay
[16:39] lol.
[16:40] I'll never need KVM or virtualbox ever again
[16:40] there's just no escape, <2 seconds to an instance is just too brutally awesome.
[16:41] and I don't mean like, 2 seconds, then wait to do stuff
[16:41] I mean 2 seconds counting going into the new instance
[16:41] or less.
[16:41] "But wait, most of that is typing the exec command to go into the container".
[16:41] yes.
[16:41] surely on that new BEAST of a server you got it's less than 500ms.
[16:42] I feel like it is well under 2s on my ancient home server.
[16:42] it's a beast, but it's old
[16:42] so it's like, 2010 era
[16:42] oh!
[16:42] but it's about 3 seconds
[16:42] how are you timing? I want to check mine.
[16:42] on a modern machine, with an NVMe SSD? sheeeeeeeeet.
[16:43] time lxc launch ubuntu:16.04
[16:43] watch mine take minutes because it refreshes the image.
[16:43] hahaha... yup... retrieving image.
[16:43] yeah, it just means you haven't used that image in 10+ days
[16:43] yup.
[16:44] also, I totally forgot some bash shell things that are useful
[16:45] lxc delete juju-whatever-{1..10} will kill 10 orphaned containers
[16:45] oh, you'll need a --force on that one
[16:46] even fetching the image it launched in 40s. Second launch took 4.47s
[16:47] I guess I'm not as impatient as I thought.
[16:47] yeah, for one-offs 2 secs vs. 5 is no big deal
[16:47] it's when you're like "hey coworker wants you to test this 15 node monster" when it really pays off
[16:49] ya know what bugs me??? juju uses destroy. lxc uses delete. snap uses remove. ALL TO DO CONCEPTUALLY THE SAME THING!!!
[16:49] * jrwren rages
[16:49] I know
[16:50] I find myself `juju list`ing a lot
[16:50] which comes from `lxc list`
[16:50] oh yeah.
[16:50] to be fair, snaps ootb learned from juju's evolution though
[16:51] `snap login` is exactly the same as juju's login thing
[16:51] they both 2FA the same too
[16:52] yup, lots of good stuff.
[17:29] Hey, as soon as LXC does Windows I'll ditch Virtualbox. :)
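
For reference, the timing and cleanup one-liners from the discussion above, with an illustrative container name; the first launch includes the image download, so time the second one:

    # first run fetches/refreshes the image; repeat for the real number
    time lxc launch ubuntu:16.04 speedtest
    lxc delete --force speedtest
    # bash brace expansion removes a batch of leftovers in one go
    # ({1..10} expands to -1 through -10; --force kills running ones)
    lxc delete --force juju-whatever-{1..10}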