[00:58] bah, still stuck on this macvlan thing. I've read all the links I could find, including stgraber's comments, but still no joy; unclear what I'm doing wrong: https://github.com/lxc/lxd/issues/1343
[00:58] https://lists.linuxcontainers.org/pipermail/lxc-users/2010-July/000557.html
[00:58] that in theory seems pretty clear...
[00:58] basically add a macvlan interface on the host and have the containers use that as parent with a macvlan bridge type
[00:59] makes perfect sense, but then no joy
[00:59] only thing I know that looks fishy is that lxc network show doesn't show mvlan0
[00:59] even though my container is using it and it works (can get an IP, ping, etc)
[01:02] drab: pastebin your /etc/network/interfaces and lxd configs and maybe someone can spot it?
[01:12] drab: what are you trying to do? use macvlan for your containers and still allow them to talk to the host?
[01:16] http://dpaste.com/0G594NT
[01:16] stgraber: yes
[01:16] which you commented on in that github issue and on the ML
[01:16] and I think I'm doing what you said, but I can't get it to work
[01:16] the only thing that seems suspicious is the fact that lxc network list does not show mvlan0
[01:16] only eth0
[01:17] and if I try to do lxc network create mvlan0 I get an error that it already exists
[01:17] drab: your setup looks wrong
[01:17] you should have:
[01:17] ok, great
[01:17] - eth0: unconfigured (which seems fine)
[01:17] - mvlan0: macvlan on top of eth0, configured with your host's IP (which seems fine)
[01:18] - containers: macvlan on top of eth0 (that's wrong in your setup)
[01:18] so set parent to eth0 for your containers and that "should" fix the problem
[01:18] ah
[01:18] macvlan devices can never talk to their parent, they can only talk to the outside and to their siblings
[01:19] right, which is why I set the parent to mvlan0; I was hoping to be able to ping eth0 of the host
[01:19] but that didn't work
[01:19] oh, I missed that, your eth0 is also wrong indeed
[01:19] since the eth0 of the host is what is in DNS and what nodes will try to reach
[01:19] your eth0 should be completely unconfigured. What used to be on eth0 should be moved to mvlan0. And then all containers should set parent=eth0
[01:20] oh, so that's basically the same thing you'd do with a bridge then
[01:20] yep
[01:20] I thought the point of macvlan was that I didn't need to touch eth0, but I guess that's true only if I don't care about host/guest ping
[01:21] well, point taken: easier to set up, fewer things to muck with
[01:21] right, you don't need to touch eth0 so long as you don't need to talk to it
[01:21] which is the case for a lot of people
[01:21] is there still a point in macvlan then? just the supposed better perf from container to container?
[01:21] assuming you wanna talk to the host
[01:21] that is
[01:26] there should be a bit more hardware-based optimization for macvlan than for standard bridges, but bridge+veth is also a much more tested setup, so unless you need to squeeze every bit of performance, I'd recommend bridging
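A minimal sketch of the layout stgraber describes, in the ifupdown style of that era; the addresses and the container name are placeholders, not values from the pasted configs:

    # /etc/network/interfaces -- eth0 itself carries no address
    auto eth0
    iface eth0 inet manual

    # the host's former eth0 IP moves to a macvlan device (bridge mode) on top of eth0
    auto mvlan0
    iface mvlan0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1
        pre-up ip link add link eth0 name mvlan0 type macvlan mode bridge
        post-down ip link del mvlan0

    # each container gets its own macvlan with parent=eth0, making it a
    # sibling of mvlan0 -- siblings can talk to each other, so the host
    # stays reachable ("c1" is a hypothetical container name)
    lxc config device add c1 eth0 nic nictype=macvlan parent=eth0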
[18:15] Can someone tell me how it is possible to have a disk that is 54% full and still complain there is no free space?
[18:17] what's the output of df -h, and the exact message of the disk-full error?
[18:32] your inode space could be full
[18:33] spat: What FS are you using?
[18:33] ext4
[18:33] cncr04s: how can I check that?
[18:34] df -i ?
[18:34] yep
[18:34] df -i
[18:34] Filesystem  Inodes  IUsed   IFree  IUse% Mounted on
[18:34] devtmpfs    505187    378  504809    1% /dev
[18:34] tmpfs       505610      1  505609    1% /dev/shm
[18:34] tmpfs       505610    499  505111    1% /run
[18:34] tmpfs       505610      3  505607    1% /run/lock
[18:34] tmpfs       505610     16  505594    1% /sys/fs/cgroup
[18:34] tmpfs       505615      4  505611    1% /run/user/5701
[18:35] sorry, should have pastebinned that
[18:35] you only appear to have tmpfs and no actual /
[18:35] /dev/root   655360 655172     188  100% /
[18:35] ah, yep that's all I needed
[18:35] you're at 100%
[18:36] didn't print because of the /
[18:36] yep, noticed that as well
[18:36] how can I see where they are used?
[18:36] you're using 655,000 files somewhere
[18:39] du -hc /
[18:49] Almost always it's /var/log or /var/spool
[19:33] on my 16.04 LTS server openvpn doesn't start correctly on boot. its service is enabled with my config and it starts great when I do systemctl start after boot has finished. here are the logs: http://pastebin.com/tDHKnUS7 top is after boot, bottom is after running "systemctl start openvpn@server-tcp"
[19:53] /join #openvpn
[20:02] did you systemctl enable openvpn, and systemctl enable openvpn@server-tcp.service? assuming your openvpn config is /etc/openvpn/server-tcp.ovpn or /etc/openvpn/server-tcp.conf
[20:03] patdk-lp: yes
[20:03] which is how I get the first part of the log I pasted
[21:00] stgraber: can you comment on the nested container issue and zfs? am I understanding correctly that the combination won't work?
[21:00] https://github.com/lxc/lxd/blob/master/doc/storage-backends.md
[21:01] that page suggests that nesting is not supported with the ZFS backend
[21:01] and I need directory storage or btrfs
[21:01] but maybe I'm missing something
[23:02] what could a "soft lockup - CPU stuck" error indicate?
[23:03] I'm getting kernel panics with this error
[23:05] hardware error
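A footnote to the inode exchange above: du -hc totals bytes rather than inode counts, so counting entries directly is quicker for finding inode hogs. A sketch, assuming GNU coreutils (du --inodes needs version 8.22 or newer); the directory list is illustrative:

    # count inodes per subtree, staying on the root filesystem (-x)
    du --inodes -x / 2>/dev/null | sort -n | tail -n 20

    # portable fallback: count entries under the usual suspects
    for d in /var/log /var/spool /var/lib /usr /tmp; do
        printf '%8d %s\n' "$(find "$d" -xdev 2>/dev/null | wc -l)" "$d"
    done | sort -n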