[00:10] <ZeroWalker> where does nginx store the default page so i can edit it etc?
[00:34] <ZeroWalker> okay found it;d
[05:48] <wxl> artful dot one image is fixed
[06:16] <cpaelzer> good morning
[07:01] <masber> good afternoon, can I set up 2 ips on the same nic or do I need to set up a sub interface for that?
[07:01] <masber> the reason I ask is because I have this interfaces file content which doesn't work (second IP is not populating) https://bpaste.net/show/203d0158f4d9
[07:02] <hateball> with the old naming you would add an iface ens192.1 and set IP there
[07:02] <hateball> but I am not sure what the syntax is now with the new naming
[07:03] <cpaelzer> this should do "sudo ip addr add <IP/net> dev <device>"
[07:03] <hateball> sorry ens192:1
[07:03] <cpaelzer> not sure on config files though
[07:05] <hateball> masber: I guess it should work if you just copy the section and put ens192:0 on the second device where you want the second ip
[07:05] <hateball> so you have ens192 section with .56, and ens192:0 section with .57
[07:05] <masber> yes, that is what I am trying to do now
[07:05] <masber> copy the whole section?
[07:06] <masber> even gateway and dns names?
[07:06] <masber> still doesn't work
[07:07] <masber> hateball, this the file content https://bpaste.net/show/e5f607959166
[07:08] <hateball> masber: yea, that's what I'd have done
[07:08] <hateball> but I guess things might require different config now
[07:08] <masber> then I just need to run sudo ifconfig ens192 down && sudo ifconfig ens192 up?
[07:09] <hateball> well you are creating a new device, so you need to bring up ens192:0 also
[07:10] <masber> SIOCSIFFLAGS: Cannot assign requested address
[07:10] <masber> but the IP is free no body is using that IP
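For reference, a sketch of the alias-interface layout hateball describes, using placeholder TEST-NET addresses rather than masber's real ones; gateway and dns-* lines typically stay on the primary stanza only. The runtime equivalent is cpaelzer's `sudo ip addr add <IP/net> dev ens192`.

```
auto ens192
iface ens192 inet static
    address 192.0.2.56
    netmask 255.255.255.0
    gateway 192.0.2.1

# alias carrying the second address; no gateway/dns repeated here
auto ens192:0
iface ens192:0 inet static
    address 192.0.2.57
    netmask 255.255.255.0
```

Bring the alias up with `sudo ifup ens192:0`; "Cannot assign requested address" can point at an address/netmask mismatch with the interface's subnet, so that is worth double-checking.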
[07:10] <lordievader> Good morning
[09:38] <rbasak> jamespage, coreycb: FYI, I'm sprinting with Otto and Lars in BlueFin this week. If you have any questions for the others, this week is a good time :)
[09:38] <rbasak> Eg. otto maintains galera-3 in Debian, and I think you have a percona-galera-3 in Ubuntu?
[09:38] <jamespage> rbasak: ack
[15:00] <tobasco> jamespage: could you please give your input on the following https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg114532.html
[15:01] <tobasco> the ci fails on that from time to time, which is the primary reason that ubuntu is non-voting for puppet; it does not happen on centos and we are kind of stuck
[15:01] <tobasco> one of the two last things to fix causing failures in the ci pipelines
[15:16] <jamespage> tobasco: hmm - looking at the error I would say the timeout is coming back from keystone (via keystoneauth1) ?
[15:19] <jamespage> might be wrong - maybe the request is made via keystoneauth1 to neutron.
[15:21] <jamespage> tobasco: I'm not convinced the request is actually making it from nova->neutron
[15:22] <tobasco> jamespage: i've been staring at it for a long time and from what I can see the request actually gets to neutron-server, it's processed but it never responds to nova
[15:23] <tobasco> if you match the timestamps and search for the thing I said in that email, you can see neutron-server says 200 for that
[15:23] <jamespage> tobasco: hmm - are you sure? I was trying to match up the req-<UUID> but could not see it in the neutron-server log
[15:23] <tobasco> hm, I never compared req ids but went by URI, matching the timestamp when it was sent
[15:24] <tobasco> weird thing is there are no restarts, all other tests pass and this doesn't happen every CI run, just sometimes
[15:25] <tobasco> we never found a solution for it, i've been thinking apparmor but I have no insight into that
[15:25] <tobasco> neutron-server isn't run in apache either so no issues with wsgi apache -> wsgi apache or smth either
[15:25] <jamespage> tobasco: if apparmor blocked something you would see a DENIED in the syslog or kern.log
[15:26] <tobasco> okok hm, i'm stuck at that; my next step is basically running the integration testing manually until it occurs and checking, but not sure how that would help me either
[15:37] <lucidguy> Reading through these patch notes and it seems all patches for ubuntu1604 are not available yet?  Regarding Spectre/Meltdown?
[15:43] <lucidguy> Am I the only one scrambling to get this patched and confused?
[15:44] <dnegreira> it's in the testing phase
[15:45] <dnegreira> tomorrow it will be released as a regular update afaik
[15:45] <dnegreira> https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown
[16:34] <catphish> i'm looking to use lvm with sanlock, i notice ubuntu has packages for lvm, and for sanlock, but i can't find lvmlockd, do i need to recompile lvm, or is there simply a package i'm missing
[17:05] <nacc> catphish: hrm it does seem a bit odd, given manpage references: http://manpages.ubuntu.com/manpages/xenial/man8/vgchange.8.html
[17:07] <nacc> powersj: ping
[17:07] <powersj> nacc: sup
[17:08] <nacc> powersj: were you and stgraber able to figure out what's going on with the git-ubuntu jenkins ci?
[17:08] <powersj> nacc: sorry, tbh I forgot to chase that down with the ec2 merge last week, let me add it to my list and I can probably look later today or tomorrow
[17:08] <powersj> that work?
[17:08] <nacc> powersj: that's great, thanks
[17:11] <nacc> powersj: do you want a bug filed?
[17:11] <powersj> nacc:  yeah that would be great!
[17:12] <nacc> powersj: in lp:usd-importer or the github project or ... ?
[17:13] <powersj> usd-importer
[17:14] <nacc> powersj: ack, will do it in a moment
[17:18] <catphish> nacc: thanks for looking, the manuals and example configs suggest it can be enabled, but i don't think it's compiled, i wonder if this is something i could talk to the package maintainer about, might be a matter of stability
[17:20] <DammitJim> is my understanding correct that there will be a patch for ubuntu released tomorrow for meltdown?
[17:20] <nacc> catphish: i'd file a bug, yeah
[17:21] <catphish> thanks
[17:21] <catphish> i'm interested in using it next year, so would be great to find out if it's on course to be in the builds used in 18.04
[17:22] <catphish> i'd really rather avoid my own hacky builds
[17:23] <rbasak> DammitJim: see https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown
[17:23] <catphish> it's my intention to run kvm across several hosts and store the VM disks on a single large SAN LUN to allow VM migration, lvmlockd seems like the best solution by far
[17:34] <DammitJim> thanks
[17:34] <DammitJim> rbasak,
[18:11] <nacc> xnox: when you get a chance, could you look at LP: #1740892 ? And correct my systemd ordering comments, if I'm wrong
[18:19] <powersj> wxl: do you remember what the date was on the artful ISO that had install failures? I kicked the automation on artful and the one dated 2018-01-08 passed
[18:22] <wxl> powersj: 20180105.1 if i remember correctly
[18:35] <nacc> xnox: just put another update up
[18:36] <nacc> dpb1: --^ I think that should give slashd & co. enough of a hint of what to fix. I think it's working (broken) as expected right now, and I don't see how upstream is making the assertion that it's not
[18:36]  * dpb1 reads
[18:37] <nacc> dpb1: the last update is verbose, but I think is the most useful
[18:37] <nacc> jamespage: --^ i'm sure you'll get the LP notification, but also FYI
[18:41] <dpb1> slashd: looks like that could be a simple fix, if the 'PartOf' directive is correct.  could use an xnox or another systemd expert to chime in though
[18:43] <nacc> dpb1: i think PartOf is too strong, the Wants= in the last update is all that's needed (but it introduces a corosync -> pacemaker dependency). My reading of the manpage is that if corosync is installed and pacemaker is not, it should not impact corosync's status
[18:43] <dpb1> nacc: what happens in that case?
[18:44] <slashd> dpb1, yep will do some testing today. Hopefully we can get a fix soon, and then wait for the build farm to become operational.
[18:44] <dpb1> uninstall pacemaker, does corosync still start/stop correctly?
[18:44] <dpb1> ok
[18:44] <dpb1> right
[18:44] <dpb1> not like we can do much more til tomorrow (hopefully)
[18:44] <nacc> dpb1: yeah, i'm doing that now, it looks like it does
[18:45] <nacc> dpb1: basically unsatisfiable Wants= (since there is no pacemaker.service) have no impact on commands affecting the configured service (i.e. corosync in this case)
[18:45] <dpb1> makes sense
[18:45] <dpb1> good
[18:45] <nacc> i don't know how much we care about that case for this bug, but in general it should be fine (i'd still like xnox to look at it)
[18:46] <nacc> powersj: LP: #1741949 filed and assigned to you
[18:46] <powersj> thx
[18:46] <nacc> powersj: i gave one example, but it's hitting every CI run, afaict
[18:46] <nacc> powersj: possibly switching to using the snap will 'fix it' :)
[18:46] <powersj> heh I'll try locally then
[18:46] <nacc> powersj: ack, although we should debug and figure out why a snapd path is being used at all if we're not using the lxd snap in the VM
[18:52] <slashd> thanks nacc
[18:53] <nacc> slashd: yw, let me know if you need anything else
[18:53] <krambiorix> Hi, i have this cronjob that runs at night... It takes the performance of the server totally down... Am i doing something wrong? https://pastebin.com/1PjYKx7W
[18:55] <sdeziel> krambiorix: I'd recommend piping mysqldump into gzip and saving the unneeded IOs. I don't know if zip can be used in a pipeline
[18:56] <sarnold> you're using pipes incorrectly
[18:57] <krambiorix> sarnold, how come?
[18:57] <sarnold> I think sdeziel's point of using gzip instead of zip is probably a good start -- once you re-write it around that to not use local file storage you'll be on the right path
[18:58] <sarnold> krambiorix: when you use > foo to write stdout to a file, then use | to pipe stdout into another program, you've written an error..
[18:58] <nacc> krambiorix: what is your intent in that (very long) line?
[18:58] <sarnold> krambiorix: you *probably* intended to use && to run the next command only if the previous command succeeded
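sarnold's point about mixing `>` and `|` can be demonstrated with ordinary commands (the file path here is illustrative):

```shell
#!/bin/sh
# '>' claims stdout for the file, so the command after '|' reads an
# empty stream -- this is the bug in the original cron line:
printf 'hello\n' > /tmp/demo.out | cat        # cat prints nothing

# Chain dependent commands with '&&' instead:
printf 'hello\n' > /tmp/demo.out && cat /tmp/demo.out   # prints "hello"

# Or skip the temp file entirely and stream, as sdeziel suggests:
printf 'hello\n' | gzip -c | gzip -dc                   # prints "hello"
```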
[18:59] <sdeziel> mysqldump | gzip -9 | ssh root@10.10.10.3 'cat > /home/backups/sqlbackup$(date +\%Y\%m\%d\%H\%M\%S).sql.gz'
[18:59] <sdeziel> should avoid writing anything locally
[18:59] <krambiorix> sdeziel, wow is that a simplified version?
[19:00] <krambiorix> yes i needed &&
[19:01] <sarnold> yeah, sdeziel's suggestion is spot-on
[19:01] <sdeziel> krambiorix: it's a suggested version to be tested :) I recommend not passing the mysql password in your cron job and use ~/.my.cnf instead.
[19:01] <krambiorix> sdeziel, but shouldn't | be && then?
[19:01] <sarnold> it runs the risk of not actually saving something useful if the connection is terminated or destination storage is full or whatever
[19:02] <krambiorix> sarnold, but that's not a problem
[19:02] <sdeziel> krambiorix: the all "|" version streams the dump over to the remote machine, hence sarnold's point
[19:04] <krambiorix> sdeziel, ok thx i'll test it
[19:04] <sdeziel> krambiorix: for the password thing, I think the simplest way is to use "mysqldump --defaults-extra-file=/etc/mysql/debian.cnf"
[19:05] <krambiorix> sdeziel, but where's the nice thingy?
[19:05] <sdeziel> krambiorix: you can add it yourself but I don't think you'll benefit much from it because most of the work will be done by mysqld
[19:06] <krambiorix> sdeziel, ok thanks, i'll let you knwow!
[19:06] <sdeziel> well, gzip might be a bit taxing so yeah, nicing it would be a good idea
[19:07] <krambiorix> sdeziel, should i put nice in front?
[19:07] <sdeziel> krambiorix: if you put that pipeline in a script, you could add this line before calling mysqldump: renice +19 -p "$$" > /dev/null
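sdeziel's renice line in script form; the pipeline itself is kept as a comment because the host, credentials and paths are placeholders from the channel, not a working backup target:

```shell
#!/bin/sh
# Lower this script's own priority; every child process it spawns
# (mysqldump, gzip, ssh) inherits the niceness.
renice -n 19 -p "$$" > /dev/null

# Hypothetical backup pipeline, per the discussion above:
#   mysqldump --defaults-extra-file=/etc/mysql/debian.cnf --all-databases \
#     | gzip -9 \
#     | ssh root@10.10.10.3 "cat > /home/backups/sqlbackup$(date +%Y%m%d%H%M%S).sql.gz"

echo "running at niceness $(ps -o ni= -p "$$" | tr -d ' ')"
```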
[19:07] <nacc> rbasak: ahasenack: cpaelzer: would be good if you could look at https://code.launchpad.net/~nacc/usd-importer/+git/usd-importer/+merge/333499 and the corresponding bug and help me think about the implications :)
[19:09] <krambiorix> sdeziel, does that do something to the running processes?
[19:10] <sdeziel> krambiorix: it renices the "main" script so all the child will inherit the lower priority
[19:10] <krambiorix> sdeziel, okay that's nice, i'll put it in a script
[19:11] <sarnold> gzip's only going to max out one core at the absolute worst though
[19:11] <sarnold> and if it does give you trouble maybe switching to lz4 or similar would work out
[19:19] <krambiorix> ok, it seems to have worked!! THanks guys!
[19:19] <sarnold> excellent ;) have fun
[19:23] <krambiorix> sarnold, sdeziel i want to add the document backup line also in the script: https://pastebin.com/HuFjJKwT   -> how could i make this shorter?
[19:23] <nacc> krambiorix: that still looks ... wrong to me?
[19:24] <sarnold> krambiorix: with a similar transformation, tar cf - /home/myfiles | gzip | ssh root@10.10.10.3 'cat > ...'
[19:24] <nacc> krambiorix: the whole point of pipes is that you wouldn't have the same file referred to multiple times
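sarnold's streaming tar form can be dry-run locally before pointing it at ssh; the scratch directory below is made up for the demo, and the ssh stage is the assumption carried over from the channel:

```shell
#!/bin/sh
set -e
mkdir -p /tmp/gr_src
printf 'data\n' > /tmp/gr_src/file.txt

# Archive and compress as a stream -- no temporary file on disk --
# then decompress and list the result back to prove the round trip:
tar -cf - -C /tmp gr_src | gzip -c | gzip -dc | tar -tf -

# For the real backup the final stage would instead be something like:
#   tar -cf - /home/myfiles | gzip | ssh root@10.10.10.3 'cat > /home/backups/files.tar.gz'
```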
[19:26] <krambiorix> ow ok
[19:51] <xnox> nacc, PartOf=/ConsistsOf= is not as strong as BindsTo=/BoundBy=
[19:51] <xnox> Configures dependencies similar to Requires=, but limited to stopping and restarting of units. When systemd stops or restarts the units listed here, the action is propagated to this unit. Note that this is a one-way dependency — changes to this unit do not affect the listed units.
[19:51] <xnox> When PartOf=b.service is used on a.service, this dependency will show as ConsistsOf=a.service in property listing of b.service. ConsistsOf= dependency cannot be specified directly.
[19:52] <xnox> Requires is all about "starting" and "stopping"
[19:52] <xnox> PartOf is kind of like, "oh and by the way, restart that too, if you can" a wants-like stanza, for "stopping/restarting"
[19:52] <xnox> cause wants is like "oh and by the way, start that too, if you can"
[19:53] <xnox> BindsTo= would be too strong, and would actually require two things to be always present.
[19:54] <xnox> nacc, re-reading the bug again, not sure how above is relevant.
[19:54] <xnox> will add to my to do to read/do
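A sketch of the non-fatal dependency nacc argues for above; the drop-in path and contents are hypothetical and untested:

```ini
# /etc/systemd/system/corosync.service.d/pacemaker.conf (hypothetical drop-in)
[Unit]
# Wants= is non-fatal: if pacemaker.service is absent, corosync starts
# unaffected; if present, pacemaker is pulled in alongside it. PartOf=
# on the pacemaker side would additionally propagate stop/restart, per
# xnox's explanation.
Wants=pacemaker.service
```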
[20:13] <krambiorix> sarnold, i tried the documentbackup thing but i had to stop it because my server space was full
[20:13] <krambiorix> now it's still full, where can i find that temp file?
[20:13] <sarnold> krambiorix: "that temp file"?
[20:14] <krambiorix> sarnold, well, i assume it creates a temp file because i can't find the zipped file in the path /home/myfiles
[20:14] <krambiorix> because the zipping was stopped
[20:15] <sarnold> krambiorix: if you switched to tar and gzip as I suggested, there is no temporary file -- the output of one tool is fed to the input of the other tool in small blocks and shipped off by the OS without ever hitting the disk
[20:15] <krambiorix> sarnold, the zipping is more than 10GB and i can't find any file but my server is full
[20:15] <sarnold> "my server" -- source or destination?
[20:16] <krambiorix> sarnold, source
[20:17] <sarnold> krambiorix: pastebin df -h output .. maybe there's something easy ..
[20:17] <krambiorix> sarnold, https://pastebin.com/v9SVFYu3
[20:18] <cpaelzer> nacc: commented on the MP, let me know if I shot into the wrong direction
[20:18] <cpaelzer> nacc: I thought I'd give it a review of the race you mention, intentionally without syncing with you - it either gives a new POV that might be interesting - or - is total crap
[20:19] <cpaelzer> nacc: in case of the latter ignore it, or if you want, reply on my example to better outline the actual problem with the cache race that you see
[20:19] <sarnold> krambiorix: okay, try using du -x / | sort -n    to find the largest directories
[20:28] <krambiorix> sarnold, nothing special
[20:29] <sarnold> krambiorix: hrm, the second thing you pasted showed making a zip file of /home/myfiles/ and storing the zip file in /home/myfiles/ ..
[20:30] <sarnold> krambiorix: how many times did you run that tool? :)
[20:33] <nacc> xnox: I thought requires is all about starting and stopping, but at least on xenial, it seems to be only about stopping :)
[20:33] <nacc> xnox: but thank you
[20:34] <nacc> cpaelzer: thanks
[20:34] <krambiorix> sarnold, one time
[20:34] <krambiorix> sarnold, i see that /root/.cache is rather big
[20:35] <sarnold> krambiorix: hrm, the usual theory with ~/.cache/ directories is that they can be removed at any time
[20:35] <nacc> xnox: i think what we want to ensure is that whenever corosync starts/stops/restarts that, if pacemaker is also present, it is also started/stopped/restarted
[20:35] <sarnold> krambiorix: but it might be worth looking around in there to find out _what_ writes there. that's strange.
[20:37] <krambiorix> sarnold, i don't see new files
[20:37] <krambiorix> sarnold, my server storage was at 80% and is now at 99%
[20:38] <krambiorix> sarnold, forget it, i took a copy of my files folder before testing the script
[20:38] <krambiorix> i'm so sorry :)
[21:02] <mburke2> nacc: hello!
[21:03] <nacc> mburke2: hiya!
[21:03] <nacc> dpb1: --^ mburke2 is having trouble getting in touch with canonical for support
[21:03] <lucidguy> Why is it taking so long for Ubuntu to provide official patches for Meltdown&Spectre?
[21:04] <rbasak> lucidguy: see https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown
[21:04] <rbasak> lucidguy: all further discussion in #ubuntu-hardened please.
[21:06] <lucidguy> rbasak: I'm aware of that page, that's how I know the official updates are not available??
[21:07] <rbasak> lucidguy: your question has an answer on that page there. If you want to talk about this further, using the #ubuntu-hardened channel please, as that's where security discussions take place. Questions asking for security updates are off topic on this channel. And most of the relevant people aren't even in this channel.
[21:08] <rbasak> *use* the #ubuntu-hardened channel please. Sorry for the typo.
[21:10] <lucidguy> So, what topics are allowed?  Any topic that doesn't have its own channel I guess.  Kinda Silly
[21:10] <sdeziel> lucidguy: the idea is to reach the relevant people
[21:11] <rbasak> Generally we try to stick to not more than one channel for any given topic. Then the conversation doesn't get scattered everywhere, and people who are interested don't miss most of the conversation.
[21:11] <rbasak> https://wiki.ubuntu.com/IRC/Guidelines
[21:11] <rbasak> "Ask your question in the channel that is most relevant to your query. Don't post in multiple Ubuntu channels or in channels with unrelated topics."
[21:23] <mburke2> dpb1: I subscribed to Ubuntu Advantage on AWS today and am trying to get an ESM token. I don't see any way to get one from AWS, Launchpad, or other sites (like auth.livepatch.canonical.com provides for live patch). I tried filling in the support form on support.canonical.com and haven't received any confirmation that it was received
[21:23] <mburke2> and none of the canonical phone numbers are picking up
[21:38] <slashd> mburke2, ping me in private message please.
[21:39] <nacc> slashd: thanks!
[22:40] <nacc> rbasak: i'd really like to get, if at all possible, at least your current testing spike stuff landed -- so that i can rebase on to it for other lp-beta stuff
[22:40] <nacc> rbasak: let's sync on that tmrw, if you have the time
[22:57] <mburke2> nacc / slashd, I seem to be all set up now. thank you very much for your help, I really appreciate it!
[23:01] <slashd> mburke2, my pleasure
[23:03] <dpb1> glad to hear