[06:56] <goose_> Hi, so I've got a pretty difficult task to accomplish. I am a beginner at best with servers, but I manage the website for a non-profit, and our friend hasn't paid his hosting and we have an important article coming out tomorrow. I have my own hosting on GreenGeeks, or I could host it on my computer. I have all the auth info from GoDaddy to switch it over, I just don't know how; I'm totally new to Ubuntu, only a couple of weeks on 16.04
[06:57] <goose_> I need to point the DNS to my hosting and download then re-upload the site, right? It's WordPress
[06:57] <goose_> sorry if this is the wrong forum, it's just that a server is at the centre of all of this
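A hedged sketch of the move goose_ describes, assuming SSH access to the old host; every hostname, path, user, and database name below is a placeholder:

```
# on the old host: archive the WordPress files and dump the database
tar czf site.tar.gz /var/www/html
mysqldump -u dbuser -p wordpress_db > db.sql

# copy both to the new host, unpack, and restore
scp site.tar.gz db.sql user@new-host:
tar xzf site.tar.gz -C /var/www/
mysql -u dbuser -p wordpress_db < db.sql

# finally, change the domain's A record (at GoDaddy) to the new host's IP;
# if the site URL changes, also update siteurl/home in the wp_options table
```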
[08:16] <hallyn> nacc: i'm on the road till the weekend, sorry.
[08:16] <hallyn> please keep me in the loop on the open-iscsi one.  my understanding was it dabbles in hardware and is therefore inappropriate in containers
[12:02] <whirlmind> Any inputs on this thread : https://ubuntuforums.org/private.php?do=showpm&pmid=2552188
[12:04] <whirlmind> I mean this thread : https://ubuntuforums.org/showthread.php?t=2355754
[13:09] <kyle__> Any preseed gurus here?  I've been fighting with it for a few days, trying to get my preseed recipe to do an EFI install properly
[13:10] <cpaelzer> whirlmind: your approach is fine; as long as you take a backup first, you're good to go
[13:11] <cpaelzer> whirlmind: shrinking always comes with a risk, but mostly works
[13:11] <cpaelzer> and the "mostly" is where the backup kicks in
[13:11] <cpaelzer> whirlmind: btw, in general a backup on the same disk that you run on is rarely useful (reading your intention for why you shrink)
[13:12] <kyle__> I can get the installer to run through, but it apparently never catches the EFI install part, because on the next boot, it just PXE-boots into the installer and runs through the install again.
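Two things commonly cause exactly this loop: the installer was PXE-booted in BIOS mode (so the EFI bootloader is never installed), or the recipe never creates an EFI System Partition. A sketch of the partman side — the directive syntax follows the Debian installer preseed docs, but the values are illustrative, not a tested recipe:

```
# preseed fragment: make sure a gpt label and an ESP get created
d-i partman-auto/disk string /dev/sda
d-i partman-auto/expert_recipe string \
    efi-boot-root :: \
        538 538 1075 free \
            $iflabel{ gpt } $reusemethod{ } \
            method{ efi } format{ } \
        . \
        1000 10000 -1 ext4 \
            method{ format } format{ } \
            use_filesystem{ } filesystem{ ext4 } \
            mountpoint{ / } \
        .
```

Even with a correct install, if PXE stays first in the UEFI boot order the machine will netboot into the installer again; reordering with efibootmgr (or in firmware setup) after the first install is worth checking.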
[13:43] <whirlmind> cpaelzer: Thank you for your inputs. As to the backup being on the same disk, you are right. I will be taking a copy on to an external device. I usually leave a copy of the backup on another partition on the same disk as well, just for ease of access.
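A minimal sketch of the external-device backup step cpaelzer recommends, assuming the device is mounted at /mnt/external and /dev/sdXN is the partition to be shrunk (both placeholders):

```
# file-level copy, preserving ACLs/xattrs/hardlinks
rsync -aAXH --info=progress2 /data/ /mnt/external/data-backup/
# or a raw image of the partition before shrinking it
dd if=/dev/sdXN of=/mnt/external/sdXN.img bs=4M status=progress
```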
[14:46] <nacc> hallyn: ack, will do!
[15:23] <adrian_1908> I didn't get an answer in ##php, so maybe someone here can help me. If I have a VPS with just one core, what's a good number of child processes for PHP-FPM to use? Is the default (5) better than, say, 1 or 2?
[15:26] <nacc> adrian_1908: i assume there's a certain amount of tweaking, but i wouldn't change the default unless you know you should
[15:26] <nacc> adrian_1908: which process manager are you using?
[15:27] <adrian_1908> nacc: just the vanilla php7.0-fpm that Ubuntu offers by default. If you mean something else, let me know.
[15:27] <nacc> adrian_1908: fpm itself has a process manager
[15:28] <nacc> adrian_1908: i don't have one in front of me, let me look at the default
[15:28] <adrian_1908> nacc: do you mean the static/dynamic/ondemand models?
[15:28] <nacc> adrian_1908: yeah
[15:29] <adrian_1908> nacc: not sure yet, but i'm thinking static with 1 or 2 children. I imagine the default expects more hardware and load. I don't have any load issues to worry about, but I'm interested in an economical setup nonetheless.
[15:31] <adrian_1908> It's not always as simple as 1 core = 1 thread or 1 process though, so i thought maybe someone has experience with this :)
[15:32] <nacc> adrian_1908: seems like dynamic is the default (if i'm reading it right)
[15:32] <nacc> adrian_1908: and core is probably the wrong term to use here
[15:33] <nacc> adrian_1908: do you mean you have 1 logical cpu in /proc/cpuinfo?
[15:33] <adrian_1908> nacc: i meant in the sense that my VPS is assigned only one "vcore".
[15:34] <nacc> adrian_1908: ok, from my perspective what matters is what linux sees as the # of cpus
[15:34] <nacc> adrian_1908: you can hyper-optimize as to what you are running on physically/virtually
[15:35] <nacc> but linux doesn't know any of that
[15:36] <adrian_1908> fair enough, i'm not well versed in that. my VPS reports `cpu cores : 1`.
[15:36] <nacc> adrian_1908: what does /proc/cpuinfo say?
[15:37] <adrian_1908> nacc: just that, an Intel Haswell with 1 cpu core.
[15:37] <nacc> adrian_1908: oh i see
[15:37] <nacc> adrian_1908: sorry, i thought you meant the VPS web view or something
[15:38] <nacc> adrian_1908: even a single-cpu system can handle multiple processes running (well, it definitionally is doing that)
[15:38] <nacc> adrian_1908: there's not a reason to have more if you know you aren't processing tons of requests
[15:38] <nacc> adrian_1908: 1 seems like a bad idea, just because if it is busy then you'll notice, i think?
[15:41] <adrian_1908> yeah, if there's any kind of delay or bottleneck, 1 might just be the wrong choice. Maybe 2 then. I just wasn't sure if there's some complex load balancing going on in the back, that makes e.g. 5 faster on a single cpu (even on low load). Hence my asking.
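For what it's worth, the usual sizing heuristic for pm.max_children is memory-based rather than CPU-based. A minimal sketch, where both figures are assumptions to replace with your own measurements (e.g. per-worker RSS from `ps -C php-fpm7.0 -o rss=`):

```shell
# hypothetical figures for a small VPS -- substitute your own measurements
ram_for_php_mb=256   # memory budget for PHP-FPM
per_child_mb=64      # resident size of one php-fpm worker
max_children=$(( ram_for_php_mb / per_child_mb ))
echo "$max_children"   # prints 4 with these example numbers
```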
[15:41] <nacc> adrian_1908: that i don't know about  :)
[15:42] <adrian_1908> nacc: ok, but thanks! :)
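For reference, the settings under discussion live in the FPM pool file (/etc/php/7.0/fpm/pool.d/www.conf on Ubuntu 16.04). The values below are illustrative for a small single-vCPU box, not a recommendation — measure per-child memory first:

```
; dynamic is the packaged default model
pm = dynamic
pm.max_children = 4       ; hard cap -- derive from RAM / per-child RSS
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 2

; or the static model adrian_1908 was considering:
; pm = static
; pm.max_children = 2
```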
[16:06] <jonah> hi can anyone please help with an error I have on iptables: http://pastebin.com/QmNXv8p9
[16:06] <jonah> much appreciated for any help
[16:11] <lordievader> jonah: Does the DOCKER chain exist?
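A typical cause of this class of error is a saved ruleset that jumps to a DOCKER chain which no longer exists (e.g. Docker removed after iptables-persistent saved its rules) — whether that matches jonah's pastebin is an assumption. A sketch of checking and recreating it, run as root:

```
# check whether the chain exists in the filter and nat tables
iptables -n -L DOCKER
iptables -t nat -n -L DOCKER
# if "No chain/target/match by that name", recreate the chains ...
iptables -N DOCKER
iptables -t nat -N DOCKER
# ... or strip the DOCKER references from the saved rules file
# (e.g. /etc/iptables/rules.v4) before restoring it
```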
[17:40] <jamespage> mwhahaha: hey - the pike UCA pockets are now populated - it's just in sync with Ocata ATM; will get bumped to Pike for milestone 1
[17:45] <mwhahaha> jamespage: sounds good, thanks
[17:45] <jamespage> yw
[19:05] <mwhahaha> jamespage, coreycb: are you guys aware that your fwaas tempest tests are still out of date for ocata? http://logs.openstack.org/57/446657/1/check/gate-puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/49fe74b/console.html#_2017-03-16_18_22_06_516241
[19:05]  * mwhahaha is checking our tempest exclusions from m2/m3 packaging
[19:14] <coreycb> mwhahaha, i see what's going on there
[19:14] <coreycb> mwhahaha, it's not so much out of date (well, it is), but upstream hasn't released a point release with the fix yet
[19:15] <coreycb> mwhahaha, i'll cherry pick the patch and upload it
[19:16] <mwhahaha> coreycb: ok, we've just been excluding it since it broke
[19:49] <coreycb> mwhahaha, i've uploaded that so it'll work its way back to the cloud archive for ocata.  you can track it in bug 1667736.
[19:49] <mwhahaha> coreycb: thanks
[23:06] <drab> ok, million dollar question of the day...
[23:06] <drab> udev net rule seems to conflict with bridging
[23:06] <drab> probably because the rule mentions the mac address and the bridge is supposed to get the same mac
[23:06] <drab> so it gets confused
[23:07] <drab> any idea how to untangle this mess?
[23:07] <drab> I'm on xenial, using unpredictable names, aka eth0... (yes, I've disabled predictable names)
[23:08] <drab> I further set up a rule in /etc/udev/rules.d/70... network to assign the name "lan" to eth0 (based on mac address)
[23:08] <drab> in /etc/network/interfaces I then have the usual 5 lines to set up "lan" to inet manual and the bridge to add that as a port/device and up it
[23:09] <drab> the setup works fine if I take out all the udev rules and use "eth0" as the name
[23:13] <drab> https://github.com/systemd/systemd/issues/3374
[23:13] <drab> seems related
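For context, a sketch of the configuration drab describes (the MAC and names are placeholders). Since the bridge inherits the port's MAC, a rule that matches on MAC alone can also fire for br0; the classic persistent-net rules avoided that by additionally matching the kernel name and a low-level driver. An untested sketch under those assumptions, not a confirmed fix for the linked issue:

```
# /etc/udev/rules.d/70-lan.rules -- placeholder MAC; KERNEL=="eth*" and
# DRIVERS=="?*" keep this rule from also firing for br0, which inherits
# the same MAC but is named br0 and is a virtual device
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{type}=="1", \
  ATTR{address}=="aa:bb:cc:dd:ee:ff", KERNEL=="eth*", NAME="lan"

# /etc/network/interfaces -- the usual 5 bridge lines drab mentions
auto lan
iface lan inet manual

auto br0
iface br0 inet dhcp
    bridge_ports lan
```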
[23:22] <drab> altho nobody wants a million dollars uh? :)
[23:52] <tsimonq2> drab: I want a million bucks, can I get it without answering the question? XD
[23:56] <OerHeks> wild guess .. predictable interface naming, not eth0 but enp0s3 or similar like that .. we share the million, tsimonq2
[23:59] <OerHeks> oh, i see the disabled predictable names now... no million.