[02:57] <roasted> hello friends
[02:58] <roasted> Curious if any mdadm experts can chime in. I'm looking to take my RAID1 (2x3TB WD Reds) and migrate it to a RAID6 with 2 additional 3TB Reds I just purchased. Am I understanding correctly that best case is to create a RAID 6 (with two failed drives) with the new drives, let them sync, then rsync data from RAID1 to RAID6 degraded array, then after add the RAID1 drives to the RAID6 pool?
[03:00] <sarnold> roasted: that's the approach I'd take with zfs, I presume mdadm is similar enough in this case
[03:00] <roasted> sounds good. I am rsyncing all the data to my external. Just waiting on it to finish up and give this a go.
[03:01] <roasted> sarnold, do you know if any formatting is required with the 2 original drives currently in the RAID1? Or would you think I can just drop them from RAID1, add to RAID6, and let the software figure it out?
[03:02] <sarnold> roasted: sorry, I don't know
[03:02] <roasted> sarnold, I'll have crazy up to date backups, so maybe I'll just, ya know, try it and see what happens. ;)
[03:03] <sarnold> roasted: woo! :)
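The plan roasted describes can be sketched as the following command sequence. This is a hedged sketch, not a tested recipe: the device names (/dev/sd[b-e]1, /dev/md0, /dev/md1), mount points, and filesystem are all assumptions to be substituted, and the commands are destructive, so verify against backups first.

```shell
# 1. Create a degraded RAID6 on the two NEW drives, with two slots marked missing
mdadm --create /dev/md1 --level=6 --raid-devices=4 \
    /dev/sdd1 /dev/sde1 missing missing

# 2. Make a filesystem on the new array and copy the data across
mkfs.ext4 /dev/md1
mount /dev/md1 /mnt/newarray
rsync -aHAX /mnt/oldarray/ /mnt/newarray/

# 3. Stop the old RAID1 and hand its members to the RAID6.
#    mdadm writes fresh superblocks on --add, so no separate formatting
#    of the old drives should be needed; zeroing just makes that explicit.
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sdb1 /dev/sdc1
mdadm --add /dev/md1 /dev/sdb1 /dev/sdc1

# 4. Watch the rebuild
cat /proc/mdstat
```

Note that the degraded 4-disk RAID6 in step 1 has no redundancy until step 3 completes, which is why the external rsync backup matters.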
[04:18] <mahdi> hi all
[04:18] <mahdi> i configured my motherboard to use RAID 1 with two hard drives and installed ubuntu 14.04 server, and at boot i get these errors

[04:18] <mahdi> mdadm create group disk not found
[04:19] <mahdi> mdadm create user root not found
[04:20] <mahdi> the system doesn't boot and these messages print in a loop, and when i reset the system with ctrl+alt+delete the ubuntu boot entry is lost
[07:47] <Gyakomo> Good day, my MAAS DHCP server doesn't respond to requests for PXE Boot. What can I do?
[07:49] <Gyakomo192856> Hope to avoid disconnections again, I'm Gyakomo
[09:00] <halvors> Hi. I'm wondering, will phpmyadmin be updated to support PHP 7.0?
[09:00] <halvors> I'm talking about the upcoming LTS release, 16.04
[09:09] <ikonia> doubtful
[09:09] <ikonia> as ubuntu doesn't make phpmyadmin, so if it works with 7 or not is nothing to do with ubuntu
[09:10] <ikonia> and as ubuntu is not shipping php7 in "main", it's only in an additional repo, I don't think the focus will be on shipping a version of phpmyadmin that works with 7
[09:10] <lordievader> halvors: Are there efforts upstream towards that goal?
[09:13] <jelly> dependencies of 4.5.3.1 packaged in debian show it's compatible with php7
[09:13] <jelly> Depends: libapache2-mod-php5 | libapache2-mod-php5filter | php5-cgi | php5-fpm | php5 | libapache2-mod-php7.0 | php7.0-cgi | php7.0-fpm | php7.0 [etc]
[09:14] <lordievader> Then I suppose that it will be updated somewhere in the future to support php7 in Ubuntu too.
[09:15] <jelly> what's the codename for 16.04?
[09:15] <lordievader> Xenial
[09:16] <jelly> halvors: I guess looking at packages.ubuntu.com/phpmyadmin would answer your question, then
[09:16] <jelly> (click on http://packages.ubuntu.com/xenial/phpmyadmin)
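Besides the packages.ubuntu.com page jelly links, the same dependency check can be done locally once a xenial source line is configured. The package name is real; the exact dependency list will vary with the archive snapshot, so treat the output as indicative:

```shell
# Show which PHP runtimes the packaged phpmyadmin accepts as dependencies
apt-cache depends phpmyadmin | grep -i php
```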
[10:45] <synbit> Hello, I'm trying to do something very specific with proftpd on Ubuntu 12.04. Is this the right place to ask for help? I'd be grateful if you could point me to the right direction.
[10:47] <rbasak> synbit: you're welcome to ask here, or perhaps some proftpd-specific place
[10:49] <synbit> rbasak: thanks. I searched for a proftpd-specific channel with no luck, so I thought I'd ask here instead.
[10:51] <synbit> So here it goes: I have set up proftpd with virtual users. I'm trying to use ExecOnCommand of the mod_exec module (http://www.proftpd.org/docs/contrib/mod_exec.html).
[10:53] <synbit> In order to test this I have two accounts: adminUser and testUser. These users have a different UID, but testUser has the same GID as adminUser.
[10:55] <synbit> Upon successful upload of a file (that is STOR command, http://www.castaglia.org/proftpd/doc/contrib/ProFTPD-FTP-commands.html) I execute a script which moves the uploaded file from testUser's home directory to adminUser's home directory.
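A proftpd.conf fragment for the setup synbit describes might look like the following. This is hypothetical: the script path comes from the discussion, but the surrounding directives are a sketch from the mod_exec documentation, and `%f` (the stored file's name) is one of mod_exec's substitution cookies:

```
<IfModule mod_exec.c>
  ExecEngine on
  ExecLog /var/log/proftpd/exec.log
  ExecOnCommand STOR /srv/sftp/scripts/sftpTest.sh %f
</IfModule>
```

Enabling ExecLog is worth doing in any case, since it records why an ExecOnCommand invocation failed.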
[10:56] <synbit> The error I'm getting is "STOR ExecOnCommand '/srv/sftp/scripts/sftpTest.sh' failed: Operation not permitted"
[10:57] <ikonia> you just need to debug that script
[10:57] <ikonia> operation not permitted could be a bug with something being called in the script, or as simple as the script not having execute permissions
[10:58] <synbit> Trying to replace the mv command with something like "echo 'bla' > /tmp/test.txt" works fine. Seems like a permissions issue to me. Any ideas how to do this with proftpd?
[10:58] <ikonia> no idea about proftpd - it's an ftp daemon, not aware of how to use it as a file system manager
[10:58] <synbit> ikonia: thanks for your input. The script is executable by anyone
[10:59] <synbit> I'd prefer to use the built-in functionality of the ExecOnCommand that proftpd provides rather than implement my own "file-mover" with a cronjob or something...
[11:00] <synbit> If anyone has done that I'd appreciate their input!
[11:00] <rbasak> Try running the script manually as you expect it would be called, but as the ftp user. Also check for AppArmor denials.
[11:01] <rbasak> Make sure the filesystem isn't mounted noexec
[11:01] <rbasak> Somehow I suspect that your script isn't running at all.
[11:03] <synbit> rbasak: hmmm... I'll try running it as this virtual user (If I can switch to that). That's why I suspected as well, but when I replaced the mv with the echo to some file in /tmp it worked. Which means it is executed.
[11:03] <synbit> I'll try switching to this user first though... Thanks for your suggestion
[11:22] <tomreyn> maybe it's the users' shell?
[11:22] <tomreyn> for user in testUser adminUser; do echo "Shell of $user: $(getent passwd $user|awk -F: '{print $NF}')"; done
[11:23] <tomreyn> synbit: more likely, though, based on what you discussed so far: the target directory is not writable for your testUser
[11:28] <synbit> tomreyn: I've explicitly added testUser to the same group as adminUser and have also given appropriate group permissions on the folder the script is attempting to move files to
[11:29] <synbit> I tried switching to testUser, but as this user is virtual (defined within proftpd's config) I get "Unknown id" back, as I would expect.
[11:31] <synbit> Also proftpd doesn't have an explicit apparmor profile (the service is not listed in apparmor_status command's output).
[11:33] <synbit> I am not very familiar with apparmor, but what I'm thinking now is checking the default profile for apparmor (I suspect this is where proftpd will belong since there is no specific profile?)
[11:47] <synbit> right, so I disabled apparmor with the teardown option according to the wiki, I checked there are no profiles loaded, tried again and still got the same error...
[11:50] <synbit> Somehow I feel the problem can be solved within proftpd's config... Running chmod 777 on the adminUser's home (as a test) did not help either.
[11:55] <synbit> tomreyn: Sorry, I missed your comment regarding the users' shell. The problem is that these users don't exist in the passwd file. Tried to pass a different file as an argument to getent and I got "Unknown database" error.
[11:56] <synbit> Looking into proftpd's specific passwd file, I can tell you both users have /bin/bash defined as their shell if that helps.
[12:43] <tomreyn> synbit: if it's virtual users they'll be restricted to *at most* the possibilities the system user running the server process has.
[12:44] <tomreyn> also there can be a chroot configuration (in proftpd's configuration files) in place by default restricting things further.
[12:45] <tomreyn> also in openssh's configuration in case you're using that for sftp / scp transfers + authentication
[13:01] <synbit> tomreyn: You are right (http://www.proftpd.org/docs/contrib/mod_exec.html#ExecOnCommand).
[13:03] <synbit> I'm not sure what's wrong though with the script being executable by anyone and the involved directories and files rwx by at least the group that both virtual users belong to.
[13:04] <tomreyn> its location by chance
[13:05] <synbit> do you mean the location of the script?
[13:05] <synbit> because that's fine as well.
[13:05] <tomreyn> yes, if it's outside of the area these users have access to permission denied is what you'd get to see
[13:08] <synbit> yeah, you are right. I'm not sure what role the location of the script plays in this, but I suspect it's fine.
[13:09] <synbit> Otherwise when I replaced the script's contents with "echo 'bla' > /tmp/file.txt" the script executed successfully and the file /tmp/file.txt was created as well
[13:09] <zin_> Hi! my mdadm raid won't auto-mount after a power failure. is it possible to fix this?
[13:10] <rbasak> proftpd isn't using seccomp or something like that , is it?
[13:11] <synbit> rbasak: honestly I have no idea what seccomp is... Let me have a look and I'll post back
[13:19] <synbit> zin_: Not sure I'm the best person to give advice on this, but check if there is an entry in /etc/fstab for your raid
[13:20] <zin_> synbit: yes it is in fstab
[13:20] <zin_> synbit: /dev/md0        /home   ext2    defaults        0       0
[13:20] <tomreyn> RAIDs don't mount; they are assembled and activated, providing access to file systems (and other structures) stored on them
[13:21] <tomreyn> well check the file system stored on /dev/md0
[13:22] <tomreyn> and review the status of your RAID using cat /proc/mdstat
[13:26] <zin_> md0 : active raid1 sda1[2] sdc1[3]       1953382336 blocks super 1.2 [2/2] [UU]
[13:26] <zin_> and the md0's filesystem is the same as in the fstab
[13:26] <zin_> and raid state is clean
[13:31] <zin_> if i restart the pc it works. but after a power failure it won't
[13:32] <zin_> it is still under /dev/md0
[13:32] <zin_> and still clean
[13:33] <zin_> but i have to write manually 'mount /dev/md0 /home'
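Since the array assembles fine but the fstab mount is missed at boot, one hedged guess is that assembly happens too late, after fstab is processed. Recording the array in mdadm.conf and rebuilding the initramfs makes assembly happen early; this is standard advice, not a confirmed diagnosis of zin_'s box:

```shell
# Record the array so the initramfs assembles it before fstab is processed
mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
```

It may also be worth changing the fstab entry's last field from 0 to 2 so the ext2 filesystem gets fsck'd after an unclean shutdown instead of failing the mount.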
[13:35] <synbit> rbasak: I did apt-cache search seccomp and it came back with libseccomp-dev, libseccomp0 and libseccomp1
[13:36] <synbit> none of these packages are installed on the system. Am I right to assume that's a module? I did lsmod and couldn't find anything related either
[13:38] <synbit> Searching the web I found that vsftpd is using it as of version 3.0.0, but I couldn't find any references to proftpd. If you know a definitive way of checking whether proftpd is using seccomp please let me know.
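One definitive check on Linux: the kernel reports each process's seccomp mode in /proc/<pid>/status (0 = disabled, 1 = strict, 2 = filter). The example below inspects the current shell via `$$` as a stand-in; substitute proftpd's PID, e.g. `$(pidof proftpd)`:

```shell
# Prints e.g. "Seccomp: 0" if the process is not seccomp-confined
grep '^Seccomp:' /proc/$$/status
```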
[13:41] <pitastrudl> how could i monitor&log the network connection stability of a virtual server?
[13:42] <pitastrudl> to see when it lost connectivity and for how long and maybe some additional info if possible
[13:43] <hateball> pitastrudl: there are simple things like smokeping, or you can use a real monitoring solution like icinga, zabbix etc
[13:45] <pitastrudl> thanks hateball, ill look into that
[14:03] <ren0v0> Hi, when using ssh, how do i tell ubuntu to use all keys under ~/.ssh/  ?
[14:03] <ren0v0> currently it only looks for id_rsa
[14:04] <lordievader> You write an ssh config?
[14:10] <ren0v0> lordievader, nope
[14:10] <ren0v0> lordievader, i'm pretty sure normally if i add a key to that folder its just "tried" by default
[14:10] <ren0v0> maybe i'm tripping :)
[14:12] <ren0v0> lordievader, say i'm doing this for a git repository, and i have 4 repos all with different key access, what then? do i just set the host in ssh config to the repo url or something? And this is definitely the normal thing to do, because i've never done it before
[14:13] <lordievader> SSH configs are lovely. Also you don't want to try all your keys at some random host.
[14:13] <lordievader> ren0v0: https://www.reddit.com/r/netsec/comments/3frnxb/my_ssh_server_knows_who_you_are_seriously_try_ssh/
[14:15] <ren0v0> lordievader, hmm, maybe its a .gitconfig option and i'm stupid
[14:15] <ren0v0> just can't remember ever writing any of this to a config, and it just "working", you're right about trying all keys, but i'll only have 5 and a few repos
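For the multiple-repos case ren0v0 describes, a per-host ~/.ssh/config is the usual answer. The aliases and key filenames below are made up for illustration:

```
Host repo1
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_repo1
    IdentitiesOnly yes

Host repo2
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_repo2
    IdentitiesOnly yes
```

Then `git clone repo1:owner/project.git` picks the right key, and `IdentitiesOnly yes` stops ssh from offering every loaded key to the server, which addresses lordievader's point about key enumeration.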
[14:24] <soahccc> I have a problem with fail2ban not unbanning IPs. The status of the jail shows 1 banned IP, but iptables has hundreds of rules by that point. The log shows "iptables .. returned 100" errors and it seems not to retry.
[15:01] <lordievader> soahccc: It could be a save/restore thing that got iptables and fail2ban out of sync.
[15:02] <soahccc> lordievader: hmm maybe it had a problem with >19k violators? :D
[15:02] <lordievader> Ouch, that is a lot.
[15:02] <soahccc> I just removed the entries manually for now
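Rather than editing iptables by hand, fail2ban-client can drop bans itself, which keeps fail2ban's state and the firewall in sync. The jail name "ssh" and the IP below are placeholders; list the jails first:

```shell
sudo fail2ban-client status            # list configured jails
sudo fail2ban-client status ssh        # show banned IPs for one jail
sudo fail2ban-client set ssh unbanip 192.0.2.1
```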
[15:21] <synbit> Thanks for your suggestions regarding the proftpd issue. I found this http://www.proftpd.org/docs/faq/linked/faq-ch5.html#AEN478
[15:23] <synbit> number 5 is a little suspicious... I've not found a built-in solution for this. If I do find one, I'll let you know. If not I'll have to do it with a cronjob unfortunately.
[16:13] <sdeziel> Hello, I debootstrapped a Xenial VM and installed acpid. Unlike previous versions, acpid(.service) isn't running by default so my VM didn't shutdown when signaled by virsh.
[16:13] <sdeziel> I've read that ACPI events should be handled by logind but this didn't work in my case because logind requires dbus to start and it wasn't installed (only a recommends on systemd).
[16:13] <sdeziel> So to make a long story short, what's the recommended way to deal with ACPI on headless servers/VMs?
[16:22] <rbasak> I thought it was still acpid. AFAIK, that still works on our Xenial cloud images, or is that broken?
[16:22] <rbasak> I'd start by comparing against what an official cloud image does.
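Based on sdeziel's own diagnosis, two candidate fixes suggest themselves; both are sketches, not a confirmed Xenial recommendation:

```shell
# Option 1: install dbus so systemd-logind can handle ACPI power events
sudo apt-get install dbus

# Option 2: fall back to acpid explicitly
sudo systemctl enable --now acpid
```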
[16:23] <rbasak> OOI, why are you using debootstrap directly?
[16:23] <rbasak> Incidentally, debootstrap on Xenial is broken I discovered. I filed a bug in Debian yesterday.
[16:23] <sdeziel> rbasak: OK, I'll checkout cloud image
[16:24] <sdeziel> I used debootstrap out of habit but I guess it's time to revisit that choice
[16:25] <rbasak> Pre-prepared images FTW, IMHO. Saves a ton of time.
[16:25] <rbasak> debootstrap isn't going away though. We use it to make our pre-prepared images :)
[16:26] <nacc> rbasak: ping
[16:26] <rbasak> nacc: o/
[16:28] <sdeziel> rbasak: thanks
[16:30] <nacc> rbasak: hey, I've not seen anything yet as to ondrej's proposal; but the few folks that have responded to the LP are clearly those that want to use PHP7. There seem to be functional means to have PHP5 and PHP7 co-installed (even if the packages won't allow it from our repository, as they conflict). But then there's the open support question. Is this the point where I should take it to jgrimm and kirkland?
[16:32] <rbasak> nacc: I'd certainly like to hear their views now, yes. Shall we arrange a meeting?
[16:33] <nacc> rbasak: yeah, i pinged kirkland separately, but calendar indicates he might be at a sprint? I can schedule a hangout though ... would today be ok? I don't want to keep you around later than necessary
[16:34] <jgrimm> nacc, rbasak:  i sent email response on what my thoughts are at the moment.  yes, i'm open to a hangout.
[16:34] <jgrimm> nacc, is there any new news, new thoughts since we last chatted?
[16:35] <nacc> jgrimm: only what i've updated the lp bug with
[16:35] <jgrimm> nacc, link?
[16:36] <nacc> jgrimm: https://bugs.launchpad.net/ubuntu/+source/php5/+bug/1522422
[16:38] <jgrimm> thanks!
[18:57] <BrianBlaze420> hello beautifuls
[18:58] <BrianBlaze420> I am using ufw and have set a limit to my ssh connections and it works lovely but I am wondering if it's possible to change the limit... I know it has defaults but can I change them?
[19:10] <jdstrand> BrianBlaze420: it is not currently a configurable option. you can workaround that by adjusting /lib/ufw/user*.rules if you wanted
[19:35] <BrianBlaze420> awesome that looks good jdstrand I am guessing the only thing special about it is you have to configure it that way and not with a ufw command
[19:53] <jdstrand> BrianBlaze420: well, you can use the ufw command then modify it. if you'd prefer something less hacky, add the rules to /etc/ufw/before*rules
[19:54] <BrianBlaze420> well I already did it the first way and it worked so I am good
[19:54] <BrianBlaze420> thanks a lot jdstrand
[19:54] <jdstrand> np
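For context on what BrianBlaze420 is tuning: ufw's `limit` rule is built on the iptables `recent` match, with defaults of 6 new connections per 30 seconds. A hand-tuned copy placed in /etc/ufw/before.rules, as jdstrand suggests, might look roughly like this (chain names follow ufw's before.rules conventions; the exact generated rules may differ by ufw version):

```
-A ufw-before-input -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --set
-A ufw-before-input -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --update --seconds 30 --hitcount 6 -j DROP
```

Raising `--seconds` or lowering `--hitcount` makes the limit stricter; run `ufw reload` afterwards.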
[20:10] <carlwgeorge> kirkland: Can you post your slides (or video) of your scale talk soon-ish?
[20:15] <kirkland> carlwgeorge: these?  http://people.canonical.com/~kirkland/SCALE%2014x-%20adapt%20install%20anything.pdf
[20:15] <kirkland> ;-)
[20:16] <carlwgeorge> yessir, thank you much
[20:55] <thebwt> boom
[20:56] <jgrimm> kirkland, heh.. your slides on 'adapt' are timely.. i was just telling nacc earlier today about the prototyping you'd done earlier in the year on that
[20:56] <dasjoe> kirkland: very interesting! I have no idea how containers work, what about packages which build kernel modules via DKMS?
[20:57] <rbasak> dasjoe: those would go on the host I'd expect.
[20:58] <jancoow> Hi. When will the openssl patch be available?
[21:01] <dasjoe> rbasak: I thought so, right
[21:02] <rbasak> I think with the right configuration you _could_ modprobe from inside a container, but I don't see why that would ever be needed.
[22:49] <trippeh_> how much space does a ubuntu mirror use nowadays? packages/sources only, not isos.
[22:50] <nacc> rbasak: and sort of violates the principle behind the isolation (as the modprobe in the container could theoretically affect other containers)
[22:50] <trippeh_> 680GB in 2013, looks like
[22:50] <teward> trippeh_: you could probably ask #ubuntu-mirrors and get a better answer, but I heard close to a terabyte nowadays
[22:51] <trippeh_> ok
[22:51] <jrwren> also depends on platforms & versions.
[22:51] <sarnold> trippeh_: I did some quick calculations a few weeks ago and figured I'd need ~terabyte, but I'm hoping I'm not too far wrong :)
[22:51] <trippeh_> all the things! :)
[22:52] <sarnold> trippeh_: what, you got an s390 in that shed? :)
[22:52] <trippeh_> nah, I killed all my weirdo archs a decade ago
[22:54] <trippeh_> used to run auth dns on m68k ;)
[22:54] <jrwren> at a previous job we were only interested in x86_64, and server at that, so we used reprepro to mirror and we stripped some larger desktop packages out. IIRC it was well under 30GB for trusty x86_64 with some filters applied
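The filtered-mirror approach jrwren describes can also be done with debmirror; reprepro and apt-mirror are alternatives. The flags below are standard debmirror options, but the paths, host, and release list are examples only:

```shell
debmirror /srv/mirror/ubuntu \
  --host=archive.ubuntu.com --root=ubuntu --method=rsync \
  --dist=trusty,trusty-updates,trusty-security \
  --section=main,universe --arch=amd64 \
  --nosource --progress
```

Dropping sources (`--nosource`) and restricting to one architecture is what gets the footprint from the ~1TB full-mirror figure down toward the tens of gigabytes genii and jrwren mention.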
[23:02] <trippeh_> I'll use ~1TB as the guideline for a full package/source mirror
[23:02] <trippeh_> a little more, a little less, who cares ;)
[23:03] <genii> Last I mirrored the repos it was around 35G
[23:03] <genii> ( not including restricted and partner)
[23:05] <trippeh_> hey look, a couple of leftover 1TB SSDs. what a coincidence ;)
[23:06] <sarnold> oo
[23:10] <teward> trippeh_: i'd suggest maybe 2TB as a safe bet, but it depends on how many releases and such you're pulling in
[23:10] <teward> genii: that's the ISOs, I thought?  I heard different from -mirrors