[03:35] <DammitJim> is heartbeat still pretty popular to set up a highly available load balancer like nginx or haproxy?
[04:01] <JMichaelX> just upgraded server from trusty to xenial tonight, and mpd seems to no longer be working... at least it is not outputting sound. has anyone else here experienced this?
[04:46] <ball> Does Ubuntu Server support running from a software RAID mirror?
[04:46] <ball> (will it let me create one for the installation?)
[05:12] <jayjo> if I'm trying to get a service to start and run on reboot (a systemd unit file), and I've put the file in /etc/init.d/ - what are the remaining "steps"?
[06:11] <sarnold> jayjo: I think you need to make multiuser.target or something "want" the file too
[06:12] <sarnold> jayjo: search for multi-user.target on https://wiki.ubuntu.com/SystemdForUpstartUsers for the short version
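For context on sarnold's pointer: native systemd units go under /etc/systemd/system rather than /etc/init.d/, and the "want" is expressed by the [Install] section. A minimal sketch (the service name and command are made up for illustration):

```ini
# /etc/systemd/system/myservice.service  (hypothetical name and binary)
[Unit]
Description=Example service
After=network.target

[Service]
ExecStart=/usr/local/bin/myservice
Restart=on-failure

[Install]
# this is the "want" sarnold mentions: `systemctl enable` symlinks the
# unit into multi-user.target.wants/ so it starts at boot
WantedBy=multi-user.target
```

After placing the file, `sudo systemctl daemon-reload && sudo systemctl enable --now myservice` enables and starts it.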
[08:12] <lordievader> Good morning.
[11:01] <sileht> </1
[14:28] <leeyaa> hello
[14:28] <leeyaa> I have one Ubuntu Dapper server that I really need to upgrade to a non-EOL release. (I need to keep it the way it is as I can't convert it or migrate it). is it still possible somehow to upgrade from dapper, to hardy, to precise and so on ?
[14:28] <leeyaa> I remember last year I did it for another server, but it is no longer working this way
[15:04] <DammitJim> I am so confused... for Ubuntu 16, do I need to be using systemd, upstart or else?
[16:15] <ogra_> DammitJim, systemd
[16:17] <DammitJim> thanks ogra_
[16:17] <DammitJim> man, what a mess... one upgrades one thing and all of a sudden, the project is huge!
[16:18] <DammitJim> it's my mess, though... not Ubuntu's
[18:00] <coreycb> jamespage, beisner: newton-proposed is ready to promote to newton-updates when you get a chance please
[18:01] <beisner> coreycb, jamespage - have we run the default.yaml against newton-proposed?
[18:02] <coreycb> beisner, no, just next.yaml
[18:03] <beisner> coreycb, we need to run against the stable bundle for -updates moves
[18:04] <coreycb> beisner, sure i can do that.  fwiw with release last week the charms should be basically the same right now.
[18:04] <beisner> coreycb, true enough.  but i think we need to see stable bundle tests for stable package updates as a matter of course.
[18:08] <coreycb> beisner, yep agreed
[18:08] <coreycb> beisner, i'll holler back
[18:59] <coreycb> zul, can you look into sponsoring Frode's patches in the mitaka stable release of horizon? bug 1666827
[19:16] <zul> coreycb: yep
[19:17] <coreycb> zul, thanks, i updated the bug a bit.  one of the patches is included in the latest stable release.
[19:27] <drab> eeer, why is lxd depending on dnsmasq?
[19:27] <drab> anybody around here managing the packaging of lxd?
[19:27] <jgrimm> stgraber, ^^
[19:27] <sarnold> wild guess, so there's something around on the bridge to do dhcp
[19:28] <stgraber> yup, lxd uses dnsmasq for dhcp and dns on its networks
[19:28] <drab> yeah, that's the intention, and maybe I'm missing something
[19:28] <nacc> drab: yes, in the default configuration, dnsmasq is spawned
[19:28] <drab> but I already have dnsmasq running elsewhere and want that to provide ips for containers
[19:28] <nacc> drab: then you need to configure lxd for that (aiui)
[19:29] <drab> yeah, I will, the thing is, I wanted to get rid of dnsmasq on that host, don't like keeping stuff around I don't need
[19:29] <drab> but I can't because lxd depends on it
[19:29] <drab> guess I can just /etc/default disable it
[19:29] <drab> and prevent it from starting
[19:29] <stgraber> LXD depends on dnsmasq-base not dnsmasq
[19:30] <stgraber> dnsmasq-base doesn't have an init script so won't start the system service if you configure LXD to use another bridge
[19:31] <drab> oooh, I had missed that, thank you very much for clarifying, appreciate the help
[19:34] <sarnold> drab: btw there's an 'equivs' package that can fake up packages for the cases when you absolutely don't want a dependency. All the usual warnings about 'you get to keep both pieces' apply of course :)
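For reference, the usual equivs workflow sarnold is alluding to looks roughly like this (the package name is just an example, and as stgraber notes this won't help with LXD specifically, since LXD checks its dependencies itself at startup):

```shell
# Build a do-nothing .deb that claims to be dnsmasq, so apt considers
# the dependency satisfied. "You get to keep both pieces" applies.
sudo apt-get install equivs
equivs-control dnsmasq-dummy.ctl   # writes a template control file
# edit dnsmasq-dummy.ctl: set "Package: dnsmasq" and a suitably high Version
equivs-build dnsmasq-dummy.ctl     # produces dnsmasq_*_all.deb
sudo dpkg -i dnsmasq_*_all.deb
```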
[19:35] <stgraber> sarnold: yeah, not going to work so well with LXD as the daemon does dependency checking on startup and will just fail to start :)
[19:36] <sarnold> stgraber: hehee :D
[19:36] <sarnold> apparently you've met people like me before..
[19:36] <stgraber> yeah, we don't like surprises :)
[19:37] <jgrimm> sarnold, neat; i didn't know about that. thanks
[19:39] <drab> sarnold: thanks, will keep that in mind
[20:02] <drab> I'm getting pretty confused here and #linuxcontainer doesn't seem to be up...
[20:02] <drab> I've installed lxd
[20:02] <drab> various guides ref /etc/default/lxd-bridge, which no longer exists
[20:03] <drab> there's instead a /etc/default/lxd-bridge.upgraded , which I guess is ok. that file references a bridge called lxdbr0
[20:03] <drab> notice the d
[20:03] <drab> however after installing from pkgs lx*D*
[20:03] <sarnold> drab: hopefully helpful https://insights.ubuntu.com/2016/04/07/lxd-networking-lxdbr0-explained/
[20:03] <drab> I've ended up with a bridge called lx*c*br0
[20:04] <drab> so that seems inconsistent or maybe I'm missing something
[20:04] <nacc> drab: i think that's because the new package doesn't setup the bridge there anymore?
[20:04] <drab> the bridge is up
[20:04] <nacc> drab: the .upgraded is a debconf/dpkg thing
[20:04] <nacc> drab: lxcbr0 is for lxc1
[20:04] <nacc> drab: what ubuntu are you on?
[20:04] <drab> by default after installing the pkgs I end up with a bridge, altho that's not associated with any interface as far as brctl is concerned
[20:05] <drab> xenial
[20:05] <drab> I installed lxd from the stable ppa, running 2.10
[20:05] <drab> maybe I had left over lxc's stuff from the default install?
[20:06] <drab> which would also explain why it seems lxd provides a single lxc binary but I still have an awful lot of lxc-something bins around
[20:07] <nacc> lxc- is for lxc1
[20:07] <drab> k
[20:07] <drab> so yeah, guess I have leftovers to clean up
[20:08] <drab> or not, lxd depends on lxc1
[20:08] <nacc> it depends on liblxc1 afaik
[20:08] <nacc> and lxd-client (which provides `lxc`)
[20:09] <drab> oh you're right, I was apt-get remov'ing too much and catching liblxc1
[20:10] <drab> oh, that lxc bridge is gone
[20:17] <drab> ok now it's a lot clearer, thank you, I couldn't tell what belonged to what anymore
[20:17] <drab> guess I should bootstrap from mini
[20:21] <drab> ok, one more question, I went through the init which configured the bridge and that's great
[20:21] <drab> I assumed, I guess incorrectly, that those values would be saved in /etc/default/lxd-bridge.upgraded
[20:24] <nacc> drab: /etc/default/lxd-bridge.upgraded is just a backup of what was in /etc/default/lxd-bridge on the update
[20:24] <nacc> see /usr/lib/lxd/upgrade-bridge
[20:24] <drab> ok, somehow I don't have /etc/default/lxd-bridge
[20:24] <nacc> drab: it uses 'lxc network' now
[20:24] <nacc> drab: right, you shouldn't with 2.10
[20:25] <drab> ah, ok
[20:25] <drab> so where's all the config stuff stored? I couldn't find it in /var/lib/lxd
[20:25] <drab> and there doesn't seem to be any /etc/lxd/
[20:25] <drab> I'm assuming lxd init wrote that stuff somewhere
[20:28] <nacc> drab: i think it's the lxd database now, but i'm not sure
[20:28] <drab> k
[20:28] <drab> thanks
[20:28] <nacc>  /var/lib/lxd/lxd.db
[20:28] <nacc> stgraber: --^ ?
[20:28] <drab> yeah, sqlite, looks like it
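If you want to peek at that database yourself, something like this should work (the path matches the LXD 2.x layout discussed above; the exact schema varies by LXD version, so treat the table names as illustrative):

```shell
# Inspect LXD's SQLite configuration database (LXD 2.x path)
sudo sqlite3 /var/lib/lxd/lxd.db '.tables'
# server-level settings such as the storage backend configuration
sudo sqlite3 /var/lib/lxd/lxd.db 'SELECT key, value FROM config;'
```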
[20:39] <skylite> will my disk I/O be faster if I use e.g. 6 disks in raid0? How much faster would it be?
[20:54] <nacc> skylite: using raid doesn't change the speed at which your disks read or write. I think you want to rephrase your question to be more specific. Also "faster" is sort of vague. Do you mean read IOPS? write IOPS? throughput? latency? etc.
[21:02] <skylite> nacc: im trying to run 10 vm's in a dell server for my students but the vms are too slow.  I think the bottleneck is disk speed
[21:03] <skylite> since the whole thing is running under 2 hard disks and it's not in raid
[21:03] <skylite> I think if I put more disks and put them in raid it would be faster. not super fast but not painfully slow
[21:06] <nacc> skylite: reads will get faster with raid0 (depends on the benchmark as to how much) but i believe writes will get slower
[21:07] <nacc> skylite: aiui, if your concern is speed, then raid is not exactly the solution -- get better disks :)
[21:08] <nacc> skylite: but RAID0 implies risking your data as well, on single disk faliure
[21:08] <nacc> *failure
[21:08] <nacc> skylite: err, taking back that write comment, it should speedup writes too
[21:09] <nacc> skylite: how are you determining the IO speed is the bottleneck?
[21:13] <skylite> nacc: I just know :D just imagine 10 vm running on the same disk
[21:13] <skylite> I got 20 gigs of ram and 2x Intel(R) Xeon(R) CPU E5410 @ 2.33GHz
[21:15] <drab> raid0 will improve reads and writes, but yeah, you're playing russian roulette with your data
[21:15] <drab> if you cannot get better disks there's a couple of choices
[21:15] <beisner> coreycb, belated ack & thanks :)
[21:16] <drab> one way would be to create a ramdisk, if you have enough ram, and load the OS to ram adding some partition on the disk for persistance
[21:16] <drab> I've tried this in the past and ime it's really convoluted, but that's fundamentally what they do with squashfs for liveISOs, so it's doable
[21:17] <skylite> aah no I need the ram for the vms its barely enough
[21:17] <nacc> skylite: also, your IO controller may start to factor in at the scale you want
[21:17] <skylite> but data loss is absolutely no problem
[21:17] <drab> the other option would still require buying new hw, but cheaper maybe, by using a cache disk
[21:17] <nacc> skylite: as in, yes, you'll get data striping, but you're not guaranteed to get the striping you want, afaik
[21:17] <drab> this is what I do for my nas with zfs and it works pretty well
[21:17] <drab> but you can also use dm-cache
[21:17] <nacc> skylite: so it's still possible (albeit perhaps unlikely) to get VMs on the same stripe, so still IO bound on that disk
[21:17] <nacc> *VM data
[21:18]  * nacc hasn't setup a RAID in a while, so maybe talking out his you know what
[21:19] <drab> what you might do if you know your number of VMs beforehand is to partition the disk, one partition per VM, and stripe those and then assign the md to the VM
[21:19] <drab> not very flexible but should solve the problem nacc is mentioning, which is possible altho not necessarily an issue
[21:20] <skylite> I'm thinking about using dell raid so I can save cpu time from the host and the VMs
[21:20] <drab> since it doesn't really matter where the VMs are, what really bites you on spinning drive is seek time
[21:20] <nacc> drab: good point, seek time is not helped by RAID0 (afaict)
[21:21] <drab> nope, that's just hw, nothing to do there
[21:21] <nacc> skylite: dell raid being BIOS raid or a dedicated controller?
[21:21] <skylite> bios raid I think
[21:21] <nacc> skylite: generally, fake RAID is not worth it, and you either should just use swraid via mdadm or a dedicated controller
[21:21] <drab> so that's fake raid
[21:21] <nacc> i don't know the specifics for that controller
[21:22] <skylite> usually i would also use swraid but I think it would save cpu time if I used hwraid
[21:22] <skylite> its a dell pe2900 btw
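If skylite goes the mdadm route nacc suggests, the stripe setup itself is short. A sketch, with example device names (this destroys any data on the listed disks, and as discussed above one disk failing loses the whole array):

```shell
# Create a 6-disk RAID0 stripe with mdadm (software RAID)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=6 /dev/sd[b-g]
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /srv/vm-images      # wherever the VM disks will live
# persist the array definition across reboots
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
```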
[21:23] <drab> is your time worth money to the school? because they are going to spend less by financing a $40 SSD than paying you for all the hrs to try to make this faster when it really has little chance to
[21:24] <drab> I mean it's just to learn stuff, an ubuntu VM won't need to be more than 15GBs even with lots of goodies on it
[21:24] <skylite> drab: the server is mine I just offered it to the school so we can play with it
[21:24] <drab> 10 VMs is 150GB, plus host, a 250GB SSD will do
[21:24] <drab> and that's maybe $80
[21:25] <skylite> hm yea
[21:25] <drab> got it, up to you, I've learned the hard way to spend less on aspirins than hw :P
[21:26] <drab> altho these days I'm primarily volunteering for NGOs and it's all about $0 budget
[21:26] <skylite> but I would try with the raid0+old 1TB HDDs first
[21:26] <drab> sounds good
[21:26] <drab> if data loss isn't a problem you may even toy with the sync timings
[21:27] <drab> so that data is flushed to disk less often, that might give you a boost
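The "sync timings" drab mentions are the kernel writeback knobs. If losing the most recent writes is acceptable, settings along these lines in /etc/sysctl.conf let dirty pages accumulate longer before being flushed (the values are illustrative, not recommendations):

```
# /etc/sysctl.conf -- let more dirty data accumulate, flush less often
vm.dirty_ratio = 40                   # % of RAM dirty before writers block
vm.dirty_background_ratio = 20        # % of RAM before background flush starts
vm.dirty_expire_centisecs = 6000      # dirty data may sit for up to 60s
vm.dirty_writeback_centisecs = 1500   # flusher thread wakes every 15s
```

Apply with `sudo sysctl -p`.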
[21:51] <axisys> server gets no network but when boot using a live CD network works
[21:51] <axisys> so may be something wrong with network driver?
[21:52] <axisys> is there a way to fix network driver on OS while on live CD?
[21:52] <drab> what hw is it? quite unusual for drivers to be missing
[21:53] <drab> you can get a shell and check tho
[21:53] <axisys> drab: I am on live CD and have a shell
[21:54] <drab> ok, what does lspci say? or lshw -C Network if it's installed
[21:54] <drab> can you see the card?
[21:55] <axisys> drab: http://dpaste.com/3W0QZEA
[21:55] <drab> ok that broadcom card, so the system sees it
[21:55] <drab> what's the output of ifconfig -a ?
[21:56] <axisys> http://dpaste.com/0N5M7GN (sanitized)
[21:57] <drab> especially the intels nic should work out of the box with the e1000 module
[21:57] <drab> no reason not to, and they are listed as compatible in that module
[21:57] <drab> ok, so they are all there
[21:58] <drab> and enp0s10 even has an ip
[21:58] <drab> why are you saying you have no network?
[21:58] <axisys> hardware works.. I am on the network
[21:58] <axisys> drab: when boot from OS
[21:58] <axisys> drab: when boot from OS, it does not get network
[21:58] <drab> oh, I see. likely not a driver problem, probably an /etc/network/interfaces problem
[21:59] <drab> wrong nic set to auto, and the others aren't brought up or something
[21:59] <drab> if you boot from the OS and run ifconfig -a do you see  diff output?
[21:59] <drab> I'd boot from OS and repeat those two commands
[21:59] <drab> if you see the interfaces as in this case you have no driver problem
[21:59] <axisys> ok.. let me do that..
[22:00] <drab> and just a config problem, likely like I said /etc/network/interfaces pointing to the wrong one
[22:00] <axisys> what is the command to eject the cd and reboot?
[22:00] <axisys> eject; reboot ?
[22:00] <axisys> it might suck the cd back in
[22:00] <axisys> I can go to the lab and take the cd out otherwise
[22:01] <genii> eject -T
[22:01] <axisys> http://dpaste.com/0S5NN5G
[22:01] <axisys> did not work
[22:02] <genii> But yes, if the tray is out when it restarts, usually it sucks it back in during bootup
[22:02] <axisys> I will just do it from the lab..
[22:02] <axisys> give me a sec..
[22:02] <drab> https://github.com/lxc/lxd/blob/master/doc/storage-backends.md
[22:03] <drab> this page says "Restore from older snapshots (not latest)" on ZFS is no
[22:03] <axisys> Please remove the installation medium, then press ENTER:
[22:04] <drab> am I understanding that right that I cannot restart from any arbitrary snapshots on zfs?
[22:04] <drab> less concerned about no nesting, but no arbitrary snaps is kind of a prob
[22:21] <axisys> drab: mac address changed on me after last kernel update
[22:21] <axisys> needed to change it to eth3 and it is working now
[22:21] <axisys> I wish I could call it eth0
[22:25] <drab> you can, just create /etc/udev/rules.d/70-my-net-names.rules
[22:25] <drab> and put SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:xx:xx:x...", NAME="whatever_you_wanna_call_it" in it
[22:25] <drab> and it'll match the mac to the name and call it like that
[22:26] <drab> you probably have already a bunch of lines in it, which is why it's picking eth3
[22:26] <drab> so in theory you could also just remove the old/other mappings and it'll pick up eth0 next reboot
[22:26] <drab> axisys: ^^^
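Spelled out, the rules file drab describes would look like this (the MAC address is a placeholder for the card's real one):

```
# /etc/udev/rules.d/70-my-net-names.rules
# match the NIC by MAC address and name it eth0
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="eth0"
```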
[22:34] <axisys> drab: oh you mean by removing that file? gotcha
[22:34] <axisys> drab: thanks a lot!
[22:38] <drab> axisys: not removing the file, editing
[22:38] <axisys> drab: gotcha
[22:39] <drab> you will likely have entries in it that aren't being used or something, I can't tell, prolly because you have 3 interfaces
[22:39] <drab> so likely eth0 and 1 or whatever have been assigned to those
[22:39] <drab> or you had other cards in it and maps were left over
[23:03] <ruben23> guys any help i have installed ubuntu server 12.04.5 LTS 64 bit but when i do this command ------> apt-get update && apt-get upgrade && apt-get install linux-headers-server <----------------- getting this error --> http://pastebin.com/rTCVncYb
[23:03] <ruben23> any idea guys
[23:03] <nacc> ruben23: the errors are pretty clear
[23:04] <nacc> ruben23: it can't find the repository you've configured
[23:06] <ruben23> nacc:: how do i resolve this somehow, or a workaround please
[23:06] <ruben23> please help
[23:06] <nacc> ruben23: why did you configure your system to use that repository?
[23:06] <nacc> ruben23: you can presumably use the default repositories instead, if you want
[23:07] <ruben23>  nacc: how to use the default repo..?
[23:07] <ruben23> i want to use the default somehow
[23:11] <ruben23> nacc:: please help, any idea
[23:19] <sarnold> ruben23: deb http://archive.ubuntu.com/ubuntu/ precise main universe  \n deb http://archive.ubuntu.com/ubuntu/ precise-security main universe \n deb http://archive.ubuntu.com/ubuntu/ precise-updates main universe
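Written out as an actual /etc/apt/sources.list, sarnold's suggestion is:

```
# /etc/apt/sources.list -- default Ubuntu archive for precise (12.04)
deb http://archive.ubuntu.com/ubuntu/ precise main universe
deb http://archive.ubuntu.com/ubuntu/ precise-security main universe
deb http://archive.ubuntu.com/ubuntu/ precise-updates main universe
```

followed by `sudo apt-get update` to refresh the package lists.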
[23:31] <drab> urm, ok, I finally got to start the first container, and its fs is not created in the zfs pool.. mighty confused
[23:32] <drab> I did lxd init and picked zfs and it says it has it
[23:32] <drab> lxc storage list shows the correct thing
[23:32] <drab> (tank0/lxd)
[23:32] <drab> and the default profile shows "default" as "pool", which is what the zfs name is
[23:33] <drab> so everything matches
[23:33] <drab> but when starting an new instance there's nothing in tank0/lxd and files appear in /var/lib/lxd/containers
[23:33] <drab> am I missing something?
[23:34] <sarnold> check both the global template and the configuration for that specific container
[23:35] <drab> urm, I ran mount just on a hunch and...
[23:35] <drab> tank0/lxd/containers/x1  899G  752M  899G   1% /var/lib/lxd/storage-pools/default/containers/x1
[23:36] <drab> tank0/lxd/containers/x1 doesn't even exist...
[23:36] <drab> there's nothing under tank0/lxd, it's an empty dir
[23:37] <drab> oh, urm
[23:38] <drab> I guess I thought I wasn't giving a path when I did lxd init, but it sounds like I should have...
[23:38] <drab> the storage profile source is tank0/lxd
[23:38] <drab> could it be that should have been /tank0/lxd ?
[23:38] <drab> like mount point
[23:38] <sarnold> good question. I'd expect if it knew you were configuring zfs to use a dataset path rather than a directory path
[23:40] <drab> yeah, that was my guess, but as output of mount tank0/lxd makes no sense, ie a rel path
[23:41] <drab> urm maybe there's something else I don't get about zfs
[23:42] <drab> I have these too in mount tank0 on /tank0 type zfs (rw,relatime,xattr,noacl)
[23:42] <drab> so tank0 I guess is valid
[23:42] <drab> tank0/lxd on /tank0/lxd type zfs (rw,relatime,xattr,noacl)
[23:42] <drab> or maybe that's screwed too
[23:42] <stgraber> drab: zfs list -t all
[23:43] <stgraber> drab: LXD will create filesystems under the dataset you told it about, but it will ALWAYS mount them under /var/lib/lxd/storage-pools/NAME/...
[23:43] <stgraber> drab: so if you see lxd/* entries in "zfs list -t all", then LXD is using your zpool just fine, it's just not using your zpool's default mountpoint for its filesystems
[23:43] <drab> that actually looks ok (output of zfs list): http://dpaste.com/05SNTDW
[23:44] <stgraber> (and in fact, only mounts just the bits it needs, keeping the rest unmounted to avoid stressing the kernel needlessly)
[23:45] <drab> thank you for explaining, very useful