[00:58] <drab> bah, still stuck on this macvlan thing. I've read all the links I could find, including stgraber's comments, but still no joy, unclear what I'm doing wrong: https://github.com/lxc/lxd/issues/1343
[00:58] <drab> https://lists.linuxcontainers.org/pipermail/lxc-users/2010-July/000557.html
[00:58] <drab> that in theory seems pretty clear...
[00:58] <drab> basically add a macvlan interface on the host and have the containers use that as parent with a macvlan bridge type
[00:59] <drab> makes perfect sense, but then no joy
[00:59] <drab> only thing I know that looks fishy is that lxc network show doesn't show mvlan0
[00:59] <drab> even tho my container is using it and it works (can get an ip, ping etc)
[01:02] <sarnold> drab: pastebin your /etc/network/interfaces and lxd configs and maybe someone can spot it?
[01:12] <stgraber> drab: what are you trying to do? use macvlan for your containers and still allow them to talk to the host?
[01:16] <drab> http://dpaste.com/0G594NT
[01:16] <drab> stgraber: yes
[01:16] <drab> which you commented on in that github issue and on the ML
[01:16] <drab> and I think I'm doing what you said, but I can't get it to work
[01:16] <drab> the only thing that seems suspicious is the fact that lxc network list does not show mvlan0
[01:16] <drab> only eth0
[01:17] <drab> and if I try to do lxc network create mvlan0 I get an error that it already exists
[01:17] <stgraber> drab: your setup looks wrong
[01:17] <stgraber> you should have:
[01:17] <drab> ok, great
[01:17] <stgraber>  - eth0: unconfigured (which seems fine)
[01:17] <stgraber>  - mvlan0: macvlan on top of eth0, configured with your host's IP (which seems fine)
[01:18] <stgraber>  - containers: macvlan on top of eth0 (that's wrong in your setup)
[01:18] <stgraber> so set parent to eth0 for your containers and that "should" fix the problem
[01:18] <drab> ah
[01:18] <stgraber> macvlan devices can never talk to their parent, they can only talk to the outside and to their siblings
[01:19] <drab> right, which is why I set the parent to mvlan0, I was hoping to be able to ping eth0 of the host
[01:19] <drab> but that didn't work
[01:19] <stgraber> oh, I missed that, your eth0 is also wrong indeed
[01:19] <drab> since the eth0 of the host is what is in dns and nodes will try to reach
[01:19] <stgraber> your eth0 should be completely unconfigured. What used to be on eth0 should be moved to mvlan0. And then all containers should set parent=eth0
[01:20] <drab> oh, so that's basically the same thing you'd do with a bridge then
[01:20] <stgraber> yep
[01:20] <drab> I thought the point of mvlan was that I didn't need to touch eth0, but guess that's true only if I don't care about host/guest ping
[01:21] <drab> well, the point: easier to set up, fewer things to muck with
[01:21] <stgraber> right, you don't need to touch eth0 so long as you don't need to talk to it
[01:21] <stgraber> which is the case for a lot of people
[01:21] <drab> is there still a point in macvlan then? just the supposed better perf from container to container?
[01:21] <drab> assuming you wanna talk to the host
[01:21] <drab> that is
[01:26] <stgraber> there should be a bit more hardware-based optimization for macvlan than for standard bridges, but bridge+veth is also a much more tested setup, so unless you need to squeeze out every bit of performance, I'd recommend bridging
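stgraber's recipe above, as a sketch: eth0 left unconfigured, the host's IP moved onto a macvlan device, and containers attaching with parent=eth0. Interface names, the addresses, and the container name "c1" are placeholders, and the host side assumes an ifupdown-style /etc/network/interfaces (as in drab's paste):

```shell
# Sketch of the layout described above. Addresses and the container
# name "c1" are placeholders, not taken from the log.

# /etc/network/interfaces:
#   # eth0 carries no IP of its own
#   auto eth0
#   iface eth0 inet manual
#
#   # the host's former eth0 address moves to a macvlan on top of eth0
#   auto mvlan0
#   iface mvlan0 inet static
#       pre-up ip link add mvlan0 link eth0 type macvlan mode bridge
#       address 192.0.2.10/24
#       gateway 192.0.2.1
#       post-down ip link del mvlan0

# Containers attach with parent=eth0 (NOT mvlan0): macvlan devices can
# never talk to their parent, only to the outside and to siblings on
# the same parent, so host mvlan0 and container NICs must be siblings.
lxc config device add c1 eth0 nic nictype=macvlan parent=eth0
```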
[18:15] <spat> Can someone tell me how it is possible to have a disk that is 54% full and complain there is no free space?
[18:17] <alkisg> what's the output of df -h, and the exact message of disk full?
[18:32] <cncr04s> your inode space could be full
[18:33] <tsimonq2> spat: What FS are you using?
[18:33] <spat> ext4
[18:33] <spat> cncr04s: how can I check that?
[18:34] <cncr04s> df -i ?
[18:34] <cncr04s> yep
[18:34] <spat>  df -i
[18:34] <spat> Filesystem     Inodes  IUsed  IFree IUse% Mounted on
[18:34] <spat> devtmpfs       505187    378 504809    1% /dev
[18:34] <spat> tmpfs          505610      1 505609    1% /dev/shm
[18:34] <spat> tmpfs          505610    499 505111    1% /run
[18:34] <spat> tmpfs          505610      3 505607    1% /run/lock
[18:34] <spat> tmpfs          505610     16 505594    1% /sys/fs/cgroup
[18:34] <spat> tmpfs          505615      4 505611    1% /run/user/5701
[18:35] <spat> sorry should have pastebinned that
[18:35] <cncr04s> you only appear to have tmpfs and no actual /
[18:35] <spat>  /dev/root      655360 655172    188  100% /
[18:35] <cncr04s> ah, yep thats all i needed
[18:35] <cncr04s> you're at 100%
[18:36] <spat> didn't print because of the /
[18:36] <spat> yep noticed that as well
[18:36] <spat> how can I see where they are used?
[18:36] <cncr04s> you're using 655,000 files somewhere
[18:39] <cncr04s> du -hc /
[18:49] <genii> Almost always it's /var/log or /var/spool
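Note that `du -hc` as suggested above reports bytes, not inodes; for hunting inode exhaustion, GNU du has an `--inodes` flag (coreutils >= 8.22). A sketch, demonstrated on a scratch directory rather than a real `/`:

```shell
# Many tiny files exhaust inodes while barely using disk space.
# Demonstrate on a throwaway directory (paths are placeholders):
demo=$(mktemp -d)
for i in $(seq 1 1000); do : > "$demo/file$i"; done

# du --inodes counts inodes instead of bytes; -s summarizes.
du --inodes -s "$demo"    # 1000 files + the directory itself = 1001 inodes

rm -rf "$demo"

# On the real system, something like this points at the culprit
# (commonly /var/log, /var/spool, or a session/mail directory):
#   sudo du --inodes -x / 2>/dev/null | sort -n | tail -20
```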
[19:33] <tpw_rules> on my 16.04LTS server openvpn doesn't start correctly on boot. its service is enabled with my config and it starts great when i do systemctl start after boot has finished. here are the logs: http://pastebin.com/tDHKnUS7 top is after boot, bottom is after running "systemctl start openvpn@server-tcp"
[19:53] <tpw_rules>  /join #openvpn
[20:02] <patdk-lp> did you, systemctl enable openvpn, and systemctl enable openvpn@server-tcp.service, assuming your openvpn config is at /etc/openvpn/server-tcp.ovpn or /etc/openvpn/server-tcp.conf
[20:03] <tpw_rules> patdk-lp: yes
[20:03] <tpw_rules> which is how i get the first part of the log i pasted
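A guess rather than a confirmed diagnosis from the pasted log: openvpn@ instances that fail only at boot but start fine afterwards are often starting before the network is fully up. The usual workaround is a drop-in ordering the instance after network-online.target:

```shell
# Hypothetical fix sketch: order the instance after the network is up.
sudo systemctl edit openvpn@server-tcp
# then in the editor add:
#   [Unit]
#   Wants=network-online.target
#   After=network-online.target

# network-online.target only actually waits if a wait-online service is
# enabled; which one applies depends on your network stack, e.g.:
#   sudo systemctl enable systemd-networkd-wait-online.service
```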
[21:00] <drab> stgraber: can you comment on the nested container issue and zfs? am I understanding correctly that the combination won't work?
[21:00] <drab> https://github.com/lxc/lxd/blob/master/doc/storage-backends.md
[21:01] <drab> that page suggests that nesting is not supported with ZFS backend
[21:01] <drab> and I need directory storage or btrfs
[21:01] <drab> but maybe I'm missing something
[23:02] <DK2> what could a "soft lockup - CPU stuck" error indicate?
[23:03] <DK2> I'm getting kernel panics with this error
[23:05] <ikonia> hardware error