Checkmate | Hey, I want to make a backup of a folder | 00:49 |
---|---|---|
Checkmate | I want to compress a whole folder down to a small size; I don't know how to do it with 7zip | 00:49 |
dpb1 | 7z a filename.7z directory/path | 01:29 |
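Fleshed out slightly (a sketch; the archive and folder names are placeholders), with maximum compression for a small archive:

```shell
7z a -mx=9 -mmt=on backup.7z /path/to/folder   # -mx=9 = max compression, -mmt=on = multithreaded
7z t backup.7z                                  # verify the archive
7z l backup.7z                                  # list its contents
```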
eraserpencil | Hey guys | 04:23 |
eraserpencil | Could someone explain to me why an Nginx reverse proxy in front of an Apache web server is more popular than vice versa? | 04:24 |
halvors | Is there a recommended web UI for LXD? | 04:43 |
nacc | halvors: not afaik | 04:44 |
halvors | Also, when I create an LXD container in Ubuntu, I have to choose an image. Does the whole OS get run inside, like a VM? | 04:55 |
nacc | halvors: no, there's no kernel | 04:57 |
nacc | halvors: you may want to read the LXD documentation and/or the difference between containers and VMs | 04:58 |
halvors | nacc: Thanks, did that. But I wonder how I can run Ubuntu 16.04 inside an LXD container on the Ubuntu 18.04 daily build. | 05:02 |
halvors | Do all the libraries etc exist on both the host and on the container? | 05:03 |
halvors | I get that kernel is only on host. | 05:03 |
nacc | halvors: you are running a 16.04 userspace, basically | 05:06 |
nacc | (in the container) | 05:06 |
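A minimal sketch of that setup, assuming LXD is installed and `lxd init` has been run ("xenial-test" is a placeholder name):

```shell
lxc launch ubuntu:16.04 xenial-test        # start a 16.04 userspace container
lxc exec xenial-test -- lsb_release -rs    # reports the container's release (16.04)
lxc exec xenial-test -- uname -r           # same kernel as the 18.04 host
```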
chamar | playing with LXD too... fun so far. | 05:16 |
halvors | nacc: Is there a way to export only the diff I've made to a container? So basically just the files I've changed? | 05:23 |
chamar | I think there's a snapshot feature in LXD | 05:23 |
jdr | is LXD a bare metal dilly? | 05:27 |
jdr | or like vagrant? | 05:27 |
nacc | jdr: LXD is a container hypervisor | 05:28 |
halvors | chamar: That can be used to export only the diff? | 05:31 |
chamar | halvors, My understanding is that it takes a "snapshot" (an image at that point in time) which you can revert back to. | 05:31 |
halvors | chamar: Yeah, but what i was interested in was to get the diff from the initial image, to easily export my configuration. | 05:32 |
chamar | halvors, gotcha.. no idea if such feature exists.. still having a first look at it too | 05:33 |
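The snapshot feature chamar mentions works roughly like this (a sketch; "mycontainer" is a placeholder). Note that a snapshot is a whole-container rollback point rather than a file-level diff, but publishing a snapshot as an image is one way to export a configured container:

```shell
lxc snapshot mycontainer snap0                     # take a snapshot
lxc restore mycontainer snap0                      # roll back to it
lxc publish mycontainer/snap0 --alias my-config    # turn the snapshot into an image
lxc image export my-config .                       # export the image as a tarball
```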
halvors | :) | 05:33 |
chamar | Reason being, my lab runs out of mem with standard VM :/ | 05:33 |
halvors | I see. | 05:36 |
jdr | mind......blown | 05:36 |
jdr | just watched a youtube vid on it | 05:36 |
chamar | and what blown your mind? | 05:37 |
jdr | they made creating the VMs look really simple | 05:37 |
jdr | I am used to hardware-based VMs | 05:37 |
jdr | not software | 05:37 |
jdr | what is shared with the root container? | 05:38 |
chamar | resources, for sure | 05:39 |
chamar | you can limit your LXD container's resource usage, but you don't have to assign it a fixed amount of memory, for example | 05:39 |
jdr | I would want to set a limit of how much the vm's could use....not so much on a per vm, but as a pool | 05:41 |
chamar | I quote: We don’t support resource limits pooling where a limit would be shared by a group of containers, there is simply no good way to implement something like that with the existing kernel APIs. | 05:42 |
chamar | Link: https://stgraber.org/2016/03/26/lxd-2-0-resource-control-412/ | 05:43 |
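The per-container limits that post describes look like this (a sketch; the container name and values are examples):

```shell
lxc config set mycontainer limits.memory 256MB   # cap the container's memory
lxc config set mycontainer limits.cpu 2          # cap the number of CPUs
lxc config show mycontainer                      # inspect the applied limits
```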
halvors | How can i upgrade a container from ubuntu 14.04 to 16.04. Is there an elegant way to do that? | 05:43 |
chamar | Never did it, but do-release-upgrade would probably work | 05:44 |
chamar | (same as a standard VM / bare metal install) | 05:44 |
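A cautious sketch of that upgrade path, using LXD's snapshot feature as a rollback point ("mycontainer" is a placeholder):

```shell
lxc snapshot mycontainer pre-upgrade            # rollback point before upgrading
lxc exec mycontainer -- apt update              # bring the container up to date first
lxc exec mycontainer -- apt upgrade -y
lxc exec mycontainer -- do-release-upgrade      # interactive release upgrade
# if it goes wrong: lxc restore mycontainer pre-upgrade
```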
halvors | chamar: Yeah, but what about the metadata then? | 05:54 |
chamar | metadata? | 06:04 |
chamar | halvors, https://github.com/lxc/lxd/issues/3874 | 06:09 |
halvors | Anyone know why "sudo snap install conjure-up --classic" is not working? I cannot run the command afterwards. | 06:10 |
chamar | try to logout / login maybe? | 06:11 |
chamar | (had mixed results with conjure-up so didn't have a deep look into it) | 06:11 |
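A common cause of "command not found" right after installing a snap is that /snap/bin is not yet on PATH in the current shell; logging out and back in (as suggested) picks it up, or it can be added by hand (a sketch):

```shell
# Add /snap/bin to PATH for this shell if it is missing:
case ":$PATH:" in
  *:/snap/bin:*) ;;                      # already present
  *) export PATH="$PATH:/snap/bin" ;;    # add it
esac

# Then check what the snap actually installed:
ls /snap/bin/ 2>/dev/null || true
```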
halvors | chamar: Thanks. | 06:13 |
chamar | np | 06:14 |
chamar | btw, I just upgraded an LXD container with `do-release-upgrade` and it seems to work fine. | 06:21 |
=== devil is now known as Guest38645 | ||
=== Guest38645 is now known as devil__ | ||
=== devil__ is now known as devilz | ||
=== devilz is now known as devil__ | ||
=== devil__ is now known as devil_ | ||
halvors | chamar: But what about the metadata, does it update automagically? | 06:31 |
=== devil_ is now known as devilz | ||
chamar | halvors, What do you mean by metadata? | 06:31 |
halvors | chamar: do `lxc info <containername>` | 06:31 |
halvors | or config | 06:31 |
halvors | don't remember. | 06:32 |
halvors | But it says what version of Ubuntu it is. | 06:32 |
chamar | let me see | 06:32 |
chamar | lxc info doesn't give anything related to the image / version | 06:33 |
halvors | You may be right, I cannot see the version. | 06:33 |
halvors | Yeah. | 06:33 |
halvors | Thanks, so basically just like any other vm then :) | 06:33 |
chamar | not sure if it keeps track of it .. but I get what you mean by metadata now ;) | 06:34 |
chamar | hum. I think it will only show the "BASE IMAGE", which is the initial image.. | 06:36 |
chamar | I'm out. good night all. | 06:37 |
halvors | good night :) | 06:47 |
tekk | are Ubuntu devs aware that when unattended-upgrades is turned on, /boot can fill up pretty quickly and you get into a terrible apt cycle that can't be resolved without manual intervention? | 11:55 |
ikonia | /boot can become full if you don't size it appropriately | 12:16 |
ikonia | it's up to you to either a.) size your file system in line with your needs or b.) put housekeeping in place | 12:16 |
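One form of housekeeping for a small /boot (a sketch; on 16.04 and later, `apt autoremove` will remove old automatically installed kernels while keeping the running one):

```shell
df -h /boot                            # check how full /boot is
dpkg -l 'linux-image-*' | grep '^ii'   # list installed kernel packages
sudo apt autoremove --purge            # remove kernels no longer needed
```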
tekk | i'm aware | 12:22 |
tekk | but | 12:22 |
tekk | I'm assuming unattended-upgrades is popular with people who want no fuss and go with the default partitioning scheme etc | 12:22 |
tekk | in which case they'll be upside down | 12:22 |
andol | Was a while since I ran the server iso installer, but isn't the recommended choice (if you want no fuss) to just go with one big partition? | 12:29 |
ikonia | I'm not aware of the default partition table having a separate /boot | 12:33 |
tomreyn | ikonia: i think it does when you choose automatic partitioning with lvm, or with lvm and dmcrypt-luks | 13:07 |
ikonia | it has to if you chose crypt | 13:13 |
ikonia | or it can't boot | 13:13 |
ikonia | but....if you chose crypt you should have a basic enough understanding to be able to manage your box in the event of automated upgrades | 13:14 |
phormulate | hey all, using xenial, and it has this tendency to overwrite my /etc/network/interfaces... I have no network manager installed and it is driving me nuts trying to find the application modifying it, any ideas? | 13:25 |
TJ- | phormulate: when does the file get written to, and what gets written into it? | 13:26 |
phormulate | at boot, just a standard dhcp of interfaces/alias | 13:27 |
TJ- | phormulate: are you sure it's not the other way around, as in it's returning to a default file because any changes you made weren't permanently written to the underlying device? | 13:29 |
TJ- | phormulate: is it Bare Metal or a Virtual Machine ? | 13:30 |
phormulate | vps, rolled it using debootstrap | 13:30 |
phormulate | rw root | 13:30 |
phormulate | I'm not used to ubuntu's general conventions, but hell, I needed an easier route to lxd than debian offered at the time | 13:31 |
TJ- | what kind of VPS? KVM with full disk boot process (boots raw disk image containing a boot loader) ? | 13:31 |
ikonia | nothing will touch the /etc/network/interfaces file | 13:31 |
TJ- | phormulate: LXD? so this is a container not a VM then | 13:32 |
phormulate | the ubuntu is on the vps "metal"; mentioning lxd isn't helpful, let's forget I said lxd | 13:33 |
ikonia | it does matter though | 13:33 |
TJ- | phormulate: It matters very much; is this an LXD container ? | 13:33 |
phormulate | xenial running on vps as host to a few lxd containers... lxd does nothing to alter my /etc/network/interfaces | 13:34 |
phormulate | yes, kvm | 13:34 |
ikonia | nothing "should" touch that file, however with a container, with an isolated file system being fed from the host's services, it does matter | 13:35 |
phormulate | ubuntu is not running within a container, it is running under kvm | 13:35 |
ikonia | so you're running a VM guest, thats running containers under it | 13:35 |
phormulate | yes | 13:35 |
phormulate | it is very odd, because it doesn't always get wiped on reboot | 13:36 |
TJ- | phormulate: check it's persistent; look at the "ephemeral:" value with "lxc config show <name>" | 13:37 |
phormulate | tj, lxc/d has no part of modifying my ubuntu setup | 13:38 |
tomreyn | ikonia: responding to your statement that you cannot have FDE without /boot: well you could do this https://wiki.archlinux.org/index.php/Dm-crypt/Encrypting_an_entire_system#Encrypted_boot_partition_.28GRUB.29 (and then /boot can just reside on / if you don't need LVM) | 13:38 |
tomreyn | (you'd still need a plain ESP / biosboot) | 13:39 |
ikonia | tomreyn: that's not supported on ubuntu's installer though | 13:39 |
TJ- | You can have /boot/ as LUKS+LVM, or LVM+LUKS | 13:39 |
tomreyn | right ikonia | 13:40 |
phormulate | another wonderful gem I ran in to, systemd "Error on shutdown: Failed deactivating swap" | 13:40 |
rh10 | guys, what's the best way to install php 7.1 to 16.04 LTS? | 16:03 |
ikonia | 7.1 isn't in the repos is it ? | 16:06 |
ikonia | I thought it was 7.0 | 16:07 |
TJ- | rh10: I'd suggest creating a 17.10 container (using LXD) where you can easily install it | 16:08 |
rh10 | ikonia, yep, there is 7.0 in the repo | 16:08 |
rh10 | ikonia, there is no 7.1 in official repo | 16:08 |
rh10 | TJ-, got it. can i work with it as a localhost? i mean - files will be in my local system | 16:10 |
rh10 | ? | 16:10 |
TJ- | rh10: LXD is treated like an /almost/ virtual machine (but shares a kernel with the host), so you could install a web server and edit a site /inside/ the container and connect to its HTTP server on port 80 - the container will have an IP address | 16:13 |
rh10 | TJ-, got it, thanks! | 16:14 |
TJ- | rh10: so you can do "lxc launch ubuntu:17.10 mycontainername" | 16:14 |
TJ- | rh10: then "lxc start mycontainername" then to get a shell inside it "lxc exec mycontainername /bin/bash" | 16:15 |
TJ- | rh10: at which point you use all the regular package management commands, e.g. "apt install php7.1 apache2 ..." | 16:16 |
rh10 | TJ-, quite cool! thx! | 16:16 |
TJ- | rh10: and if you want to you can map a host file-system directory into the container to make editing the files on the host transparent to there being a container | 16:16 |
TJ- | rh10: this next step is not quite correct for sharing but gives you a clue what to research: 'lxc config device add mycontainername sharedtmp disk path=/path/to/share/in/guest source=/path/to/share/from/host' | 16:19 |
TJ- | rh10: there's some permissions issues to deal with for the above share command to work correctly (with unprivileged containers) | 16:19 |
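One common approach to the permissions issue TJ- mentions (a sketch; the container name and uid/gid 1000 are assumptions) is to map your host user into the unprivileged container with `raw.idmap`:

```shell
# Map host uid/gid 1000 to the same ids inside the container.
# This needs "root:1000:1" in /etc/subuid and /etc/subgid on the host.
lxc config set mycontainername raw.idmap "both 1000 1000"
lxc restart mycontainername
```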
rh10 | TJ-, awesome! | 16:19 |
rh10 | TJ-, another question here. what's the best way to deploy code from such a container to an external webserver on the internet? | 16:21 |
rh10 | how to handle it correctly? | 16:21 |
TJ- | rh10: well, if you're sharing a host directory which you're mapping into the container web server's document root, then you'd just copy the host's directory hierarchy to the other server | 16:22 |
TJ- | rh10: e.g. if you're mapping $HOME/public_html to container's /var/www/ then you'd just rsync/zip $HOME/public_html | 16:23 |
rh10 | TJ-, got it. but can I use git in the container? or how can I add a git repo to that scheme? | 16:24 |
TJ- | rh10: or if using git for version control, you can set up your external server as a git remote and use 'git push external' | 16:24 |
rh10 | TJ-, thanks a lot for support! | 16:24 |
TJ- | rh10: in my example $HOME/public_html would be your git base dir | 16:24 |
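A minimal sketch of that git workflow (the server address, repo name, and paths are placeholders; the post-receive hook is one common way to check pushed code out into the web root):

```shell
# On the external server: create a bare repo and a post-receive hook.
ssh user@example.com 'git init --bare ~/site.git'
ssh user@example.com 'cat > ~/site.git/hooks/post-receive <<"EOF"
#!/bin/sh
GIT_WORK_TREE=/var/www/html git checkout -f master
EOF
chmod +x ~/site.git/hooks/post-receive'

# Locally, from the git base dir (e.g. $HOME/public_html):
git remote add external user@example.com:site.git
git push external master
```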
=== Bilge- is now known as Bilge | ||
phibs | got an issue where i'm creating an ubuntu 18 initrd with debirf and when it does unxz | cpio -i, cpio is NOT extracting /sbin/init even though it is in the archive. Anyone seen anything like this ? | 23:14 |
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!