[00:01] sarnold: Hah, yeah I had pondered whether this would all just be handled by the intel microcode, but as far as I understand the Intel ME doesn't necessarily run via that; hell, it has its own x86 processor! Running Minix, apparently!
[00:02] powersj: thar, email sent.
[00:02] keithzg: hehe yeah, I thought that was pretty cool :) I wish I knew _what_ those microcode updates fixed. :(
[00:05] cyphermox: what list did you send it to?
[00:06] ubuntu-server
[02:56] I just installed pfsense in a vm in proxmox and my 200 mbps internet connection is now testing at around 120 mbps. my upload speed was also affected. any idea of what is causing this?
[02:57] did you set up passthrough for your NIC to the VM?
[02:59] sry I'm just starting... how exactly do I do that? :)
[03:00] sweet, they've got a nice little wiki page about it! :) https://pve.proxmox.com/wiki/Pci_passthrough
[03:01] thank you! I'll do some reading.
[04:39] I have an ubuntu live cd that I would like to install to a host but I can't run an installer. If it's possible to just copy root to the disk from the live cd, what else needs to be modified?
[06:41] good morning
[07:14] Good morning
[07:44] hiho lordievader
[07:45] hope you have a good morning as well
[07:45] Jup, doing good here :)
[09:53] Is it possible to either forward or BCC all emails sent by MAILER-DAEMON to a specific address?
[09:54] So for instance if a user is sending emails to a user on a server that doesn't exist, they'll get a MAILER-DAEMON email back. I'd like for that server to also send the same message BCC or forwarded to a specific address.
[09:55] Message being the MAILER-DAEMON message
[09:59] necrophcodr: https://www.howtoforge.com/configure-custom-postfix-bounce-messages
=== jelly-home is now known as jelly
[09:59] necrophcodr: TL;DR it seems you can edit the template being used
[09:59] necrophcodr: try adding a BCC there
[09:59] cpaelzer, that's actually a pretty good idea, i'll give that a shot
[09:59] assume this is passed to something sendmail compatible
[09:59] so it might catch and follow the bcc statement
[09:59] necrophcodr: good luck
[10:00] thanks!
[10:32] ahasenack, cpaelzer: looks like the git importer had hung.
[10:32] I've also found the fix for the applied branches not working and have an MP up for it.
[10:32] https://code.launchpad.net/~racb/usd-importer/+git/usd-importer/+merge/334131
[10:32] I'd like to avoid restarting the importer pending the landing of that fix
[10:33] Then we won't end up with broken applied imports
[10:33] rbasak: sounds reasonable
[10:33] It's fairly simple
[10:33] (the MP)
[10:33] already looking
[10:33] Thanks!
[10:33] rbasak: it is quite a while since I imported from git instead of from snap - is there anywhere a "watch out for this" doc or something?
[10:34] well I can do things in a container
[10:34] the snap won't interfere there
[10:34] That's what I've been doing.
[10:34] In a container.
[10:34] An artful container specifically.
[10:35] to have the same deps as intended for the snap I guess?
[10:35] I can give you the list of deps
[10:35] Yeah one moment
[10:35] cpaelzer: http://paste.ubuntu.com/26026686/ is my container setup
[10:36] Includes all deps I have needed so far.
[10:36] In addition you'll need "git config --global user.name" and email.
[10:36] And on first run the importer will prompt you for your LP username. But it won't require Launchpad auth unless you try to push.
[10:36] That should be all.
[10:37] just need to clear a few other things, then will try to review asap
[10:37] Thanks!
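A rough sketch of the NIC passthrough suggested above, condensed from the general procedure the Proxmox wiki page describes. The PCI address 01:00.0 and VM ID 100 are placeholders, intel_iommu=on assumes an Intel CPU (use amd_iommu=on on AMD), and VT-d/AMD-Vi must also be enabled in the BIOS; treat the wiki page as the authoritative reference.

    # 1. Enable the IOMMU on the host's kernel command line, then regenerate
    #    the GRUB config and reboot the Proxmox host.
    sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="quiet"/GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"/' /etc/default/grub
    update-grub

    # 2. Find the NIC's PCI address.
    lspci | grep -i ethernet

    # 3. Hand the physical NIC to the VM (here VM 100) instead of a virtual NIC.
    echo "hostpci0: 01:00.0" >> /etc/pve/qemu-server/100.conf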
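The [04:39] question about installing from a live CD without running the installer went unanswered above. The following is one common approach, not an official procedure: copy the live root to the target disk, then fix fstab, the initramfs, and the bootloader. /dev/sdX, /dev/sdX1 and /mnt/target are placeholders, and any live-session leftovers (casper/live-boot config) may also need cleaning up.

    mkfs.ext4 /dev/sdX1
    mkdir -p /mnt/target
    mount /dev/sdX1 /mnt/target

    # Copy the running root, leaving virtual filesystems and mount points empty.
    rsync -aAX --exclude="/proc/*" --exclude="/sys/*" --exclude="/dev/*" \
          --exclude="/run/*" --exclude="/tmp/*" --exclude="/mnt/*" \
          --exclude="/media/*" --exclude="/cdrom/*" / /mnt/target

    # Besides the copy itself, the system needs: an /etc/fstab pointing at the
    # new disk (use the UUID), a bootloader, and a regenerated initramfs.
    blkid /dev/sdX1
    for d in proc sys dev; do mount --rbind /$d /mnt/target/$d; done
    chroot /mnt/target /bin/bash -c "grub-install /dev/sdX && update-grub && update-initramfs -u"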
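The link above covers editing the Postfix bounce template, and whether a Bcc added there is honoured is speculative, as the conversation itself notes. A different, built-in Postfix mechanism that gets close to the stated goal is notify_classes: adding the bounce class makes Postfix send a copy of each bounce notice (the headers of the undeliverable mail) to bounce_notice_recipient. The address below is a placeholder.

    # Keep the default classes and add "bounce", then point the notices at a
    # dedicated mailbox instead of the default postmaster.
    postconf -e 'notify_classes = resource, software, bounce'
    postconf -e 'bounce_notice_recipient = bounces@example.com'
    postfix reload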
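A minimal sketch of the container setup described above, assuming LXD as the container tool and the ubuntu:artful image; the actual dependency list lived in the paste linked above and is not reproduced here. Only the git identity and Launchpad-username notes come from the conversation.

    # Launch an artful container for running the importer outside the snap.
    lxc launch ubuntu:artful usd-importer-dev
    lxc exec usd-importer-dev -- bash

    # Inside the container: the importer needs a git identity; on first run it
    # prompts for your Launchpad username (no LP auth needed until you push).
    git config --global user.name "Your Name"
    git config --global user.email "you@example.com"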
[10:37] slashd: I looked slightly deeper at PCP for the question you added yesterday
[10:37] cpaelzer: FYI, I'm not blocked.
[10:38] This is just to restart the importer.
[10:38] which blocks a lot of us atm :-)
[10:38] slashd: my personal TL;DR is this: run away
[10:38] slashd: modd wise I'm short of filing an archive removal, I just don't have enough hard facts (and hate doesn't count)
[10:39] s/modd/mood/
[10:39] * cpaelzer does a mood reset before checking rbasak's MP
[10:39] ommmm
[10:57] rbasak: I replied on the MP from a code review POV
[10:57] I start testing now, but if you think you want to follow my suggestion let me know and I'll stop testing until you've implemented it
[10:58] Ah thanks. I wasn't aware of that wrapper. I'll look now.
[11:00] cpaelzer: I think you're right. There are things covered there that I didn't cover, such as debian.patches. I should refactor and alter that method as needed.
[11:08] ok, ping me again when I should fetch to check again, rbasak
[11:08] ack
[11:28] cpaelzer, about dpdk, how come it's not built for armhf? it seems like it's supported there, no?
[11:32] cpaelzer, also you don't use $ update-maintainer? =)
[11:33] cpaelzer: can I run a revised plan by you first before implementation and test please?
[11:33] I'd like to change quilt_env to take a treeish instead of a commit hash.
[11:33] Instead of extracting to a temporary worktree, I'll examine the tree object directly using the existing follow_symlinks_to_blob function.
[11:34] That should raise a KeyError (I'll check) if not found, or a blob object otherwise.
[11:34] So I can use that for the os.path.exists tests for the rest of quilt_env, and then it won't need a commit any more.
[11:35] Then in import_patches_applied_tree, after dropping my previous change, I'll wrap quilt calls in repo.quilt_env.
[11:35] The required treeish will be the previous treeish from the previous loop iteration.
[11:35] For the first loop iteration, I'll have to generate a treeish in the case that the .pc handling above didn't do it.
[11:37] repo.dir_to_tree exists.
[11:37] I think this code predates it.
[11:37] So I may switch to using that instead.
[11:37] EOD
[11:37] How does that sound?
[11:38] Separately, I'm thinking about restarting the importer now, before landing this.
[11:38] As it'll take a while to do it right.
[11:39] The applied branches will continue to be broken for a while.
[11:39] But that has already regressed, and we will have to reimport the world anyway.
[12:07] xnox: I'm the deb maintainer together with bluca so - yes I might have forgotten an update-maintainer but it is only formally incorrect
[12:08] xnox: also it is mostly in sync now - working on the next in debian to be syncable again
[12:08] xnox: and finally - the only real question/issue I think - armhf I have no-one to test/work on it at all
[12:08] xnox: for arm64 I had linaro folks with me and tested myself on cavium systems
[12:09] but armhf I have neither peers to run it nor HW to test
[12:09] xnox: and from every other arch the lesson learned was that it fails initially
[12:09] right, ack.
[12:09] xnox: does that make sense or would you want me to enable it untested?
[12:10] xnox: on which release did I miss the maintainer, so that I can fix it when (if) I upload for it next time?
[12:10] cpaelzer, no, i was mostly poking it to see if s390x is supported or not =) and noticed that there is armhf and x32, and got curious why they were not explicitly enabled vs explicitly disabled.
[12:10] artful & bionic, but fixed now.
[12:10] in bionic that is
[12:10] thanks
[12:11] patch submitted to debian too
[12:12] xnox: thanks - I'll convert the debian bug to a fix in our repo then
[12:14] hmm
[12:14] we already did that
[12:14] oh I see, the one test misses the arch qualifier
[12:14] well, it was not in debian sid / ubuntu bionic; for the last test case; was in for the other test cases.
[12:18] xnox: I have put it onto the gerrit, it will be in any 17.11.x later on then
[12:18] rbasak: now I'm back with you
[12:18] rbasak: sorry, thursday is my alternating short/long lunch break
[12:20] rbasak: the suggested approach seems sound to me
[12:20] rbasak: and I ack on restarting the importer
[12:20] rbasak: applied branches are the less important things, so I think we are ok for now
[12:20] given we know we reimport the world at some point soon
[12:34] cpaelzer: OK. Thanks!
[13:13] jamespage: might I ask you on your OVS plans for bionic - especially in regard to bug 1733325?
[13:13] bug 1733325 in openvswitch (Ubuntu) "Update in Bionic to match DPDK 17.11" [Undecided,New] https://launchpad.net/bugs/1733325
[13:15] Hi guys, I have an issue, I get 530 login auth failed, where do I see the error log for pure-ftpd?
[13:29] arunpyasi: http://manpages.ubuntu.com/manpages/trusty/man8/pure-ftpd.8.html
[13:29] arunpyasi: by default it seems to go to the journal I'd think
[13:30] arunpyasi: but there are plenty of options to increase verbosity and set an explicit log file
[13:37] cpaelzer, I get this error: http://dpaste.com/1WV31PG does this mean it doesn't recognize the username of ftp?
[13:40] arunpyasi: sorry I don't know, but throwing that error into a search engine gives plenty of hints
[13:40] arunpyasi: I'd assume if you follow the first few you will find a resolution
[13:40] cpaelzer, yeah, doing so.
[13:53] too bad it is a calculation error, dpdk just got me 291.43Pb/s in one case
[13:54] hi guys
[13:54] how to remove a route like this? 172.16.2.0 * 255.255.255.0 U 0 0 0 ens3
[13:55] rizonz: list it with ip route
[13:56] rizonz: and you can delete it more or less with "ip route del
[13:56] cpaelzer: https://pastebin.com/ERK9eqkv
[13:56] cpaelzer: yeah I know but those wildcard GW's are never nice
[14:07] cpaelzer, is there any channel where I would get support for pure-ftpd?
[14:07] it seems like a weird issue
[14:08] it was working just before I created a new user :P lol
=== oerheks_ is now known as oerheks
[14:12] it is a community thing - around here you are good, but it usually relies on somebody having experience with a particular program
[14:12] maybe they have their own channel somewhere
[14:14] cpaelzer, didn't get it :P
[14:14] cpaelzer, there is no channel for it I guess :P
[14:14] not according to their webpage
[14:14] So, I will need you guys' help.
[14:15] which means you'd need someone around here that uses it more often
[14:15] arunpyasi: people are generally willing to help, but in this case if you happen to create clean steps to reproduce the issue that might help
[14:15] arunpyasi: you'd need that for a bug report to ubuntu as well - given the case as I see it, that is most likely the first question
[14:16] arunpyasi: especially since my search check before brought up zillions of links about broken config causing such an issue
[14:16] cpaelzer, yes there are but they didn't work.
[14:16] arunpyasi: so get a container and try to simplify to the smallest number of steps to recreate the issue
[14:16] cpaelzer, I think I need to switch to vsftpd or proftpd
[14:16] if that's an option - sure
[14:17] cpaelzer, which one would you suggest? :P
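Following up on the pure-ftpd logging question at [13:29]: pure-ftpd logs via syslog by default, so on a systemd system the journal is the first place to look. The unit name pure-ftpd is an assumption based on the Ubuntu package's service name.

    # Recent log entries for the service, and the same messages via rsyslog if
    # classic syslog files are enabled.
    journalctl -u pure-ftpd -e
    grep pure-ftpd /var/log/syslog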
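The "ip route del" hint at [13:56], written out for the route shown at [13:54] (172.16.2.0 with netmask 255.255.255.0, i.e. /24, on ens3):

    # List the kernel routing table, then delete the offending entry.
    ip route show
    ip route del 172.16.2.0/24 dev ens3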
[14:17] I am always confused with vsftpd or proftpd
[14:17] vsftpd
[14:17] based on having used one but not the other
[14:18] cpaelzer, Ok thanks for your suggestion
[14:20] I have a file named --help, no idea how I got it created, and now it won't remove :P I get --help for every command I enter :P lol haha
[14:47] rm ./--help ?
=== jelly-home is now known as jelly
[18:02] anyone got a recent tutorial on how to setup a stupid fast LEMP stack?
[18:06] recent for LTS https://www.unixmen.com/how-to-install-lemp-stack-on-ubuntu-16-x/
[19:20] has anyone had any success changing the fans on a proliant se316m1 g6 (I believe it's the same as the dl160 g6)? I've seen some fan mods that involve some electronic skills but I was looking for a plug and play solution.
[20:15] hi guys, I'm checking the existence of the landscape-client package (-client, -common: 2 binary packages) in the server installer iso
[20:15] I thought it had been removed because it's py2
[20:15] yet if I mount the artful server iso, I can find it in the image: http://pastebin.ubuntu.com/26030067/
[20:16] even python2.7 is in there
[20:16] looks like I'm mistaken in my assumption, but maybe someone here remembers the story in more detail?
[20:17] hm, maybe it was just about cloud images? https://bugs.launchpad.net/ubuntu/+source/landscape-client/+bug/1427275
[20:17] Launchpad bug 1427275 in cloud-utils (Ubuntu Vivid) "clean cloud images of python2" [High,Fix released]
[23:35] hey guys, can anyone help, i'm doing scp -r /big/file user@server.com:/bla and at some point I get No such device or address, what's going on?
[23:42] Why -r with a file?
[23:51] genii, it's a directory sorry
[23:51] and it's a big one
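On the --help file at [14:20]: the rm ./--help suggestion from [14:47] works because the leading ./ stops rm from parsing the name as an option; the standard "--" end-of-options marker achieves the same thing.

    # Either form removes a file whose name starts with dashes.
    rm -- --help
    rm ./--help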
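For the LEMP question at [18:02], the bare-bones package side on Ubuntu 16.04 is roughly the following; package names assume the 16.04 archive (php7.0), and the linked tutorial covers the nginx server block and PHP-FPM wiring in detail.

    # nginx + MariaDB + PHP-FPM, the minimum for a LEMP stack on 16.04.
    apt update
    apt install -y nginx mariadb-server php7.0-fpm php7.0-mysql
    mysql_secure_installation

    # nginx still needs a server block that passes *.php requests to the
    # php7.0-fpm socket (/run/php/php7.0-fpm.sock) via fastcgi_pass.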