[01:19] <law> hey all, I'm trying to netboot an ubuntu-server iso over ipxe/uefi, but it hates me.  Is the ubuntu-server image UEFI-compatible in Xenial?
[04:01] <mason> law: yes
[05:24] <masber> good afternoon, for some reason my ubuntu server does not want to boot
[05:24] <masber> it says "gave up waiting for root device"
[05:25] <masber> and goes to (initramfs)
[05:29] <masber> https://pasteboard.co/H6bAYZY.png
[06:29] <cpaelzer> good morning
[06:31] <cpaelzer> masber: in the early env there is no lsblk yet
[06:31] <cpaelzer> masber: try looking for /dev/<dependingonyourdevice>
[06:31] <cpaelzer> masber: also dmesg might be interesting to see if anything failed while initializing the disk or controllers
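The early-boot checks cpaelzer suggests can be sketched as follows; this is a rough outline, the device-node names are assumptions (adjust for your hardware), and the busybox tools in an initramfs may lack some flags:

```shell
# At the (initramfs) prompt there is no lsblk, so list likely root
# device nodes directly (sd*/vd*/nvme* are guesses, not a complete set)
ls /dev/sd* /dev/vd* /dev/nvme* 2>/dev/null || true
# and scan the kernel log for disk/controller initialization failures
dmesg 2>/dev/null | grep -iE 'ata|scsi|nvme|fail|error' | tail -n 20 || true
```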
[07:34] <lordievader> Good morning
[09:15] <jamespage> cpaelzer: hmm well all seems good with a fresh snapshot
[09:15] <jamespage> \o/
[09:16] <jamespage> https://bileto.ubuntu.com/#/ticket/3124
[09:19] <cpaelzer> oO
[09:19] <cpaelzer> nice
[13:15] <jamespage> coreycb: I'm going to push staging->proposed now
[13:16] <coreycb> jamespage: +1
[13:16] <coreycb> jamespage: i'm re-running nova dep8 tests. they passed for me locally.
[13:16] <jamespage> coreycb: ok
[14:45] <rbasak> cpaelzer: I wonder if you might know if there's an easy way to tell how much memory of a process cannot be evicted from the resident set? Eg. the heap, which would need to go to swap, vs. memory mapped libraries which can just be evicted.
[14:45] <rbasak> I ask because the amount on a vanilla server install seems to have gone up, so I'd like to measure it.
[14:50] <TJ-> rbasak: /proc/self/map ~ /proc/$PID/map  ?
[14:50] <rbasak> TJ-: I'm aware. I'm wondering if there's a tool that'll give me a number, instead of having to calculate it myself.
[14:51] <rbasak> (and it's maps, not map)
[14:51] <TJ-> rbasak: gotya, I saw some perf tooling a while ago but can't recall where or what exactly
[14:53] <TJ-> ahhh, "mem_usage"
[14:57] <TJ-> My notes say I sourced it from https://elinux.org/images/d/d3/Mem_usage
[14:59] <cpaelzer> rbasak: look at the tool smem
[15:00] <cpaelzer> it has no good huge page support yet, but other than that is great
[15:02] <rbasak> cpaelzer: thanks! I'm trying it, but I think it's still counting memory that could be swapped out though?
[15:02] <rbasak> Uh
[15:03] <cpaelzer> it counts it "as it is"
[15:03] <cpaelzer> so it can be mapped, swapped or both
[15:03] <cpaelzer> if it is swapped, but not discarded
[15:03] <rbasak> What I want is the total memory per process that _has_ to be swapped out when under pressure.
[15:03] <rbasak> Rather than how it is now.
[15:03] <cpaelzer> _has_ -> could ?
[15:04] <rbasak> Not exactly
[15:04] <rbasak> Say I have no swap
[15:04] <cpaelzer> define "has to be" please
[15:04] <rbasak> I'm under memory pressure.
[15:04] <cpaelzer> ok
[15:04] <rbasak> What are the processes that I'm not using but are resident that are getting in the way?
[15:05] <cpaelzer> the ones with the biggest USS in this output
[15:05] <rbasak> In that situation, the kernel could evict everything that has a backing file.
[15:05] <cpaelzer> smem -tk -c "pid user command swap vss uss pss rss"
[15:05] <cpaelzer> there is another cmdline for the mapping view
[15:05] <rbasak> USS will include what is in memory but could be evicted, no?
[15:05] <rbasak> The figure will be inflated.
[15:06] <cpaelzer> hmm - I see, you want to know the minimal set that has to stay - is it that?
[15:06] <rbasak> Right
[15:06] <rbasak> Without putting the system under pressure, since that feels like it could introduce noise in my results depending on how I do it, what else is happening, etc.
[15:07] <cpaelzer> rbasak: but they are "just" userspace
[15:07] <rbasak> I don't follow
[15:07] <cpaelzer> rbasak: essentially all but maybe the mlocked pages "could" be swapped - in your non-swap case that can't be done
[15:07] <cpaelzer> so you look for "shrinkable" memory
[15:07] <cpaelzer> that the processes will not hold on to
[15:08] <cpaelzer> the smem output above is what the program has allocated on its own
[15:08] <cpaelzer> it is NOT what it has e.g. in the page cache (discardable) due to reading a file
[15:08] <rbasak> I thought USS would include, say, an mmapped read-only file of which some is resident, when that's the only process that has it mapped.
[15:09] <cpaelzer> you mean because it can be discarded and brought back
[15:09] <cpaelzer> hmm
[15:09] <rbasak> Right
[15:09] <cpaelzer> yeah you are right
[15:09] <cpaelzer> it is overaccounting for your check
[15:10] <cpaelzer> rbasak: but
[15:10] <cpaelzer> rbasak: see the second command above
[15:10] <cpaelzer> to get closer to what you want you should be able to remove all that is mapped on files there
[15:11] <cpaelzer> so your mmapped ro file would show up there and have a USS matching its file
[15:12] <cpaelzer> rbasak: I don't know how to check (in this simplified view) what is dirty/cow and can therefore not be discarded
[15:14] <cpaelzer> rbasak: do you need that for a single process?
[15:14] <rbasak> I'd like a per-process amount
[15:14] <cpaelzer> rbasak: so process X - how much of its open file #2 is in memory?
[15:14] <cpaelzer> so a system wide overview still
[15:14] <rbasak> I'd like a per-process _total_ amount
[15:14] <rbasak> Right.
[15:14] <cpaelzer> hmm, no then I don't know a tool yet
[15:14] <rbasak> I want to identify the "bad" processes, and how bad they are.
[15:14] <cpaelzer> proc smaps parsing
[15:15] <cpaelzer> maybe a small extension to smem
[15:15] <cpaelzer> it is python after all, so you might be close
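A rough first cut at the smaps parsing cpaelzer mentions, assuming anonymous pages are the ones that must stay resident (or go to swap) under pressure, while clean file-backed pages can simply be evicted:

```shell
# Sum the Anonymous: fields in /proc/PID/smaps -- a sketch, not exact:
# it ignores mlocked pages, shared anon, and dirty file-backed mappings.
pid=${1:-$$}
kb=$(awk '/^Anonymous:/ {kb += $2} END {print kb+0}' "/proc/$pid/smaps")
echo "$kb kB anonymous in pid $pid"
```

Looping this over all of /proc/[0-9]*/smaps would give the per-process totals rbasak asks for.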
[15:15] <rbasak> OK, thanks
[15:15] <cpaelzer> I once had a hugepage extension
[15:15] <cpaelzer> not too bad (but a bit outdated)
[15:15]  * rbasak wonders if it's worth it
[15:15] <cpaelzer> fyi the reverse I asked for is "man mincore"
[15:15] <rbasak> There are two candidates I've less subjectively identified
[15:15] <rbasak> less objectively
[15:16] <rbasak> iscsid and snapd
[15:18] <simulant_> hi can anyone please help me out. I have two machines on my local network that can't ping each other. they can ping other devices, other devices can ping them - but they can't ping each other!
[15:18] <simulant_> it's driving me bonkers if anyone can please help
[15:20] <rbasak> simulant_: I would narrow down the problem by seeing what is leaving and what is arriving where, using tcpdump. Sorry I don't have the time to go through the details with you.
[16:39] <nacc> cpaelzer: was the llast comment in LP: #1733572 a private ping?
[16:42] <nacc> i have the 'fix' for that issue in particular in the MP (which you can review). I have a feeling we are being bit by an openssl issue, but not sure yet.
[16:43] <cpaelzer> nacc: It was making clear that you are already on it
[16:43] <cpaelzer> and for the review last week it was WIP - you say it is open for review now?
[16:46] <nacc> cpaelzer: no, it's still not
[16:46] <nacc> i haven't figured out the segv yet
[16:46] <cpaelzer> ok waiting for your ping still then
[16:47] <cpaelzer> no rush from me, just needed to check if it's still in the state I knew
[16:47] <nacc> *but* it passes on Debian
[16:47] <nacc> and historically this has been due to some ssl issue, when the same segfault is seen
[16:48] <nacc> they are also only testing 7.0 and 7.1 :)
[18:08] <HackeMate> hello, is there a deployment generator for ubuntu server? such like opengnsys or the old rembo
[18:08] <HackeMate> or any image-restore via network
[18:11] <sarnold> cloud-init, maas, fai-server are all popular and serve slightly different purposes
[18:27] <HackeMate> thank you sarnold
[19:42] <coreycb> jamespage: networking-bagpipe/bgpvpn were a pita for b3 as they both reverse-depend on each other and won't pass tests without b3 of each other
[19:42] <coreycb> jamespage: and i see they're ftbfs in proposed now
[19:48] <coreycb> jamespage: if we can upload 8.0.0~b3-0ubuntu1 of both packages to queens-proposed, they have tests disabled. then we can promote the current version to queens-proposed.
[21:14] <axisys> I need to call a script at network change.. running it on my laptop which is sometimes on wifi at home and sometimes on ethernet at work.. where is the right place to put the script, so it gets called at network change? I am on 16.04 lts
[21:20] <mason> axisys: Use /etc/network/interfaces and have "up" directives. You can have multiple location-based configs for each interface.
[21:21] <axisys> mason: for laptop that file is mostly empty
[21:21] <mason> axisys: It can be as empty or as full as you make it.
[21:21] <mason> It's how I do it for my laptop. Works well.
[21:21] <axisys> mason: yep, use it for server..
[21:22] <mason> If you're looking for a way to do this with Network-Manager, then 1) don't do that, and 2) there's no way to do that.
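A minimal sketch of the interfaces(5) approach mason describes; `eth0` and the hook script path are assumptions, not anything from the discussion:

```
# /etc/network/interfaces (hypothetical) -- run a hook on up/down
auto eth0
iface eth0 inet dhcp
    up   /usr/local/bin/on-net-change up
    down /usr/local/bin/on-net-change down
```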
[21:26] <sarnold> axisys: investigate ip monitor or rtmon
[21:27] <axisys> sarnold: thanks a lot!
[21:27] <sarnold> axisys: don't thank me just yet. :) you might not like these solutions much, hehe
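One way to act on sarnold's `ip monitor` suggestion, as a sketch; the event matching is naive and `on-net-change` is a hypothetical script:

```shell
# Print a marker when a link event looks like an up/down transition;
# in real use you would exec your hook script instead of echo.
handle_event() {
    case "$1" in
        *"state UP"*|*"state DOWN"*) echo changed ;;
    esac
}
# Live wiring (requires iproute2):
#   ip monitor link | while read -r line; do handle_event "$line"; done
```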
[21:31] <powersj> nacc: I think this run will get us self-test: https://jenkins.ubuntu.com/server/job/git-ubuntu-ci-redux/4/console
[21:32] <nacc> powersj: will watch
[21:33] <nacc> powersj: i'm also wondering if we can possibly leverage https://forum.snapcraft.io/t/snapcraft-clean-doesnt-clean-with-snapcraft-container-builds-set/2291
[21:33] <nacc> powersj: (well, the popey-mentioned commands in that bug report)
[21:34] <nacc> to reuse the containers for building in CI
[21:34] <nacc> that would speed it up significantly, since our deps don't generally change (only our code itself)
[21:34] <powersj> that would be really nice
[21:35] <nacc> powersj: https://insights.ubuntu.com/2017/11/22/announcing-snapcraft-2-35/
[21:35] <nacc> i will see how that works if the underlying build target keeps changing
[21:38] <mason> sarnold: rtmon looks interesting, but the dinosaur in me wants to manually control which network I'm on
[21:39] <sarnold> mason: heh, yeah, I never really "got the hang" of network manager ..
[21:40] <mason> sarnold: I remember a long while back asking how to tie scripts to network-up events, and the NM guys told me it wasn't possible. So I kept using ifupdown.
[21:40] <mason> I still don't think it's possible, despite it being an obvious thing to want to do.
[21:40] <sarnold> mason: it seems like such a basic thing :(
[21:41] <mason> Hrm, this says NM just drives ifupdown: https://askubuntu.com/questions/258580/how-to-run-a-script-depending-on-internet-connection
[21:42] <mason> But I don't think that's true nowadays.
[21:51] <nacc> powersj: ok, yeah, it seems that will do what we want
[21:51] <nacc> powersj: i should test it a bit locally
[21:51] <powersj> nacc: any issues if we are running multiple CI jobs at a time?
[21:51] <nacc> powersj: ah probably :)
[21:51] <nacc> let me ask
[22:01] <nacc> powersj: see #snappy?
[22:01] <powersj> I'm not in that channel
[22:02] <nacc> powersj: ah ok
[22:02] <nacc> basically, the container used is determined by the directory it's called from (and the snap's name)
[22:03] <nacc> i'm trying to think of the matrix of decisions we want to make here
[22:05] <powersj> nacc: doh failed: 'bash -l -c git-ubuntu -h'
[22:07] <powersj> not sure why that failed, I thought it exits 0. Anyway I'll rerun without the call to -h and just skip to self-test
[22:08] <nacc> powersj: hrm, that exited 1 here .. not sure why either
[22:11] <nacc> xnox: is there a way with gpg2 to find out where private keys are stored (filename)? it used to be you could pass --secret-keyring, but that's ignored now and keys are put in private-keys-v1.d/ ... asking because php-horde-crypt calls gpg with --secret-keyring still and assumes it is honored
[22:17] <xnox> nacc, that will not work, no. one is supposed to use gnupg-agent; non-agent access to private keys is no longer supported.
[22:18] <nacc> xnox: ok, so the test as-written just doesn't make sense
[22:18] <xnox> nacc, you can and should use GNUPGHOME, then you can control which toplevel dir will be used.
[22:18] <nacc> xnox: yep, they do
[22:18] <nacc> i guess i could just look for the one key created in there :)
[22:19] <nacc> xnox: thanks
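xnox's GNUPGHOME suggestion, sketched; the temporary directory stands in for whatever per-test home the php-horde-crypt test would actually use:

```shell
# Point gpg2 at an isolated home instead of the removed --secret-keyring;
# private key material then lives under $GNUPGHOME/private-keys-v1.d/
GNUPGHOME=$(mktemp -d)
export GNUPGHOME
chmod 700 "$GNUPGHOME"
gpg --quiet --list-secret-keys >/dev/null 2>&1 || true  # gpg may be absent here
echo "$GNUPGHOME"
```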
[22:41] <powersj> nacc: https://paste.ubuntu.com/26527006/
[22:43] <nacc> powersj: sigh ok
[22:43] <nacc> powersj: will pivot back to that in a bit
[22:44] <powersj> ok, you can basically re-run that last build to re-test
[22:44] <powersj> I added the failure stop too, so it won't run the integration tests next time
[23:42] <nacc> powersj: https://paste.ubuntu.com/26527216/
[23:42] <nacc> powersj: you're building on xenial, right?
[23:43] <powersj> nacc: correct
[23:43] <nacc> powersj: with updates enabled?
[23:43] <nacc> i'm not sure why your versioning is different, which also seems a bit odd
[23:43] <nacc> powersj: do you have the log from the build?
[23:44] <powersj> all I have is the jenkins log https://jenkins.ubuntu.com/server/job/git-ubuntu-ci-redux/4/console
[23:44] <powersj> woops 5 was the failure of self-test: https://jenkins.ubuntu.com/server/job/git-ubuntu-ci-redux/5/consoleText
[23:45] <nacc> powersj: ack, reading