[02:10] <Rob__> so err, is dhcp with nfsboot broken on 14.04?
[02:12] <Rob__> perhaps net modules aren't being loaded before dhcp...
[02:15] <Patrickdk> dunno
[02:15] <Patrickdk> iscsi root works
[02:18] <Rob__> Patrickdk, it works if i manually specify an ip
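For readers hitting the same thing: an NFS-root boot configured over DHCP normally hinges on the kernel command line, which looks roughly like this (the server address and export path are placeholders, not from the discussion):

```
ip=dhcp root=/dev/nfs nfsroot=192.168.0.10:/srv/nfsroot,v3 rw
```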
[04:35] <jshsmn> I was wondering when the VEMOM patched qemu/kvm will be available in the icehouse cloud archive
[04:40] <sarnold> jshsmn: I think the 2.0.0+dfsg-2ubuntu1.11~cloud0 packages in the cloud archive are the updated packages
[04:41] <sarnold> .. at least I see them here, http://ubuntu-cloud.archive.canonical.com/ubuntu/pool/main/q/qemu/ and this says 2.0.0+dfsg-2ubuntu1.11 is the fixed version: http://www.ubuntu.com/usn/usn-2608-1/
[04:41] <sarnold> I'm not sure why the ~cloud0 rebuild though
[04:44] <sarnold> jamespage,Odd_Blok1, why the ~cloud0 rebuilds in the cloud archives? :)
[04:48] <jshsmn> I'm not seeing that package in the  http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/icehouse main
[04:48] <xavpaice_> I don't see it, maybe the mirrors aren't synced yet?
[04:48] <xavpaice_> ha, +1
[04:52] <jshsmn> I see it shows up in proposed as well
[04:53] <sarnold> I see... strange.
[04:54] <sarnold> jamespage,Odd_Blok1, why hasn't the qemu update migrated to precise-updates/icehouse main yet?
[04:54] <sarnold> bedtime for me, sorry I don't have answers for you jshsmn.
[04:54] <jshsmn> sarnold: Thanks for your time :)
[06:16] <sbeattie> jshsmn: according to http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/icehouse_versions.html it should be there, now.
[06:34] <jshsmn> sbeattie: thanks, I see it now
[09:24] <Vexena> Anyone knows/uses a good android app that monitors your ubuntu server and gives you notifications if something goes wrong?
[13:20] <carandraug> hi! I'm creating a virtual machine using Ubuntu's vmbuilder. It started 1 hour and half ago. I have no idea how long it usually takes, but I thought it would be faster. I found people online saying that it hangs for them, and one reply saying that it really takes time. But ps tells me that it's sleeping (S+)
[13:39] <smoser> strikov, http://paste.ubuntu.com/11130592/
[13:40] <smoser> that will run seemingly indefinitely on an instance of utopic
[13:40] <smoser> but fails pretty much immediately on vivid
[13:59] <strikov> smoser: define 'pretty much immediately' please
[14:00] <strikov> smoser: 1.5k commands is too early?
[14:02] <smoser> oh.
[14:02] <smoser> really.
[14:02] <smoser> it was failing < 100
[14:03] <smoser> nice.
[14:03] <smoser> there.
[14:03] <smoser> i just caught it on that vivid instance run 73
[14:03] <smoser> run byobu there.
[14:05] <smoser> strikov, ^
[14:05] <strikov> smoser: heh
[14:05] <strikov> [90418.02] 2639: pt1
[14:05] <strikov> BLKRRPART: Device or resource busy
[14:05] <strikov> failed[1]: ptwrite/blockdev: partition 1
[14:05] <smoser> on utopic ?
[14:05] <strikov> vivid
[14:06] <smoser> ah. yeah.
[14:06] <smoser> make sure we're not running it at same time :)
[14:06] <strikov> so basically we need to check for blockdev's return code
[14:06] <smoser> that'd be kind of unfair
[14:06] <strikov> ah, i see
[14:06] <smoser> well, no...
[14:06] <smoser> blockdev shouldn't return failure because nothing should have it busy
[14:07] <smoser> ie, we could have checked that return code, certainly. and you're right, that we did not in curtin
[14:07] <smoser> but it shouldnt fail. if it does its a race.
[14:07] <smoser> note, that very obnoxiously, there is no way to tell sfdisk < 2.26 to *not* call blockdev
[14:07] <smoser> that is really why i dropped using it and in that script use dd
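The dd approach smoser refers to is presumably something like this sketch (the `wipe_pt` name is invented; the paste itself isn't reproduced here). Unlike sfdisk < 2.26, plain dd never issues BLKRRPART on its own:

```shell
# Sketch: clobber the partition table with dd instead of sfdisk, which
# (before util-linux 2.26) always issued BLKRRPART itself.  Writing the
# first MiB covers an MBR and a primary GPT header.  This destroys
# data: scratch devices only.
wipe_pt() {
    local disk="$1"
    dd if=/dev/zero of="$disk" bs=1M count=1 conv=fsync 2>/dev/null
}
```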
[14:08] <strikov> smoser: my point was mostly about blockdev --rereadpt
[14:08] <smoser> and in sfdisk 2.26 and later, it doesn't call that at all.
[14:08] <smoser> right... but we need to do that. and there should *not* be anything making that "busy"
[14:08] <strikov> it may silently return error (busy) but we ignore it and run settle
[14:08] <smoser> the fact that something is busy is udev handle on it (blockdev possibly) having not finished.
[14:09] <strikov> smoser: let's try partprobe <dev> instead of blockdev
[14:09] <strikov> just for checking
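strikov's earlier suggestion to check blockdev's return code could be sketched as a small retry wrapper (the `retry` helper and its arguments are invented, not curtin code):

```shell
# Generic retry helper: run a command up to $1 times, pausing between
# attempts.  Intended for the transient EBUSY that BLKRRPART can hit
# while a udev-triggered probe still holds the disk.
retry() {
    local tries="$1" i=0
    shift
    while ! "$@"; do
        i=$((i + 1))
        [ "$i" -ge "$tries" ] && return 1
        sleep 1
    done
    return 0
}

# usage sketch (assumes $dev holds the whole-disk node, e.g. /dev/vdb):
#   retry 5 blockdev --rereadpt "$dev"
#   udevadm settle
```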
[14:10] <smoser> apw, rbasak lets put all conversation here. ^
[14:10] <rbasak> smoser: just been looking at the udev source. I'm pretty sure it's racy. Not sure if what we're trying to do is supported behaviour.
[14:10] <rbasak> I believe there's a race in that "udevadm settle" can return before udevd sees an event that the kernel has queued.
[14:11] <smoser> rbasak, well, apw implied that the kernel does not return from blockdev until it has added the event to the queue
[14:11] <rbasak> smoser: so event_queue_update() in udevd.c is the thing that informs "udevadm settle" via a sentinel file AFAICS.
[14:11] <apw> smoser, that BLKRRPART just tells you you couldn't change the partition table because one of the partitions was busy
[14:11] <smoser> right
[14:11] <smoser> but it should not be busy.
[14:11] <rbasak> smoser: it is only called a while after epoll_wait() returns.
[14:11] <apw> that doesn't tell you that the partition change worked and you waited and moved on before it was done
[14:12] <smoser> the only reason it is busy is udev re-acting to previous things.
[14:13] <smoser> apw, maybe i'm just assuming something wrong. but what i'm saying is 'udevadm settle' should have completely finished with those events.
[14:13] <rbasak> An added complication is that "udevadm settle" does "ping" the udevd control socket first, but I don't think this necessarily eliminates the race.
[14:13] <smoser> and thus nothing should be using that device
[14:13] <apw> right, but you can only rely on udev settle to settle any events, that doesn't mean nothing was spawned in the background
[14:13] <rbasak> smoser: I think your busy thing is a red herring. Probably caused by a udev rule.
[14:14] <rbasak> It might need to be fixed too, but there's still a separate race.
[14:14] <smoser> apw, right. if that is the case, that udev just takes the events and forks something, then i would have to wait until all those things were finished.
[14:14] <apw> if for example we sent a dbus message to something else and it opened things
[14:14] <smoser> but... how could i know that.
[14:14] <smoser> its quite possible you're right.
[14:14] <smoser> but how would i know "i'm all done now".
[14:14] <apw> smoser, i would be wondering if LVM is installed in the image
[14:14] <smoser> thats what i want.
[14:15] <apw> and i am not sure you have any certain way of doing that
[14:15] <rbasak> smoser: I think you need much finer control of what udev does during your partition creation time. Effectively the only udev hooks should be your own ones under your own direct control.
[14:15] <rbasak> smoser: you don't really want other general distro stuff doing anything when you create the partition. Maybe even suspend udevd during that time or something.
[14:15] <smoser> rbasak, well i don't think its a red herring.
[14:15] <smoser> its the problem.
[14:15] <rbasak> It's only part of the problem. I still think there's a separate race despite that.
[14:16] <rbasak> Inside udevd.
[14:16] <apw> rbasak, what makes you think that
[14:16] <smoser> if i went on and tried to 'mkfs.ext4' at that point (after partitioning), then mkfs could find it busy also.
[14:16] <rbasak> apw: I think "udevadm settle" can racily be blind to an event queued to udevd immediately before it is called.
[14:17] <rbasak> apw: as an implementation detail in the communication between udevd and "udevadm settle".
[14:17] <rbasak> I'm not absolutely certain though. There's an extra thing in the implementation that complicates things a little.
[14:17] <smoser> so... if udev gets an event, and then forks a bunch of stuff into background, then 'udevadm settle' is pretty worthless.
[14:18] <smoser> apw, so... lvm is not in either image.
[14:18] <apw> so the question is whether that is the reason
[14:18] <smoser> but bcache is in vivid
[14:18] <smoser> which could be doing that.
[14:20] <smoser> well, i just purged bcache-tools from the image and recreated in vivid
[14:20] <smoser> so its not specifically that
[14:22] <rbasak> smoser: write "ps axf" to a file just before you call partition2? Then when you get the failure you should have a process listing showing what had the partition device open.
[14:22] <rbasak> smoser: also maybe lsof.
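A sketch of the ps/lsof capture rbasak and apw suggest, written to run just before the racy step (the function name, device, and log path are assumptions):

```shell
# Hypothetical debug hook for the race: snapshot the process tree and
# any holders of the device just before the write that can hit EBUSY.
debug_snapshot() {
    local dev="$1" log="${2:-/tmp/pt-debug.log}"
    {
        echo "=== $(date '+%s.%N') $dev ==="
        ps axf
        # lsof exits nonzero when nothing holds the device; that's fine
        command -v lsof >/dev/null && lsof "$dev" 2>/dev/null
    } >> "$log" || true
}
```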
[14:22] <Munt> Hello there folks, I’m running a pptp vpn on my home server in order to hide my home ip from the big bad internet.    is there a more “standard” way of doing this ?   my way seems a bit convoluted
[14:22] <apw> yeah i was going to say an lsof would be most useful
[14:22] <rbasak> As long as the race stays for the time that takes.
[14:23] <apw> yep as long as that
[14:23] <apw> smoser, rbasak, it occurs to me that in the new world order things interact with udev in a different way
[14:24] <apw> they can consume the incoming events directly, by monitoring the udev queue directory
[14:25] <apw> i am pretty sure that systemd does this, and would start jobs based on those events
[14:25] <rbasak> apw: does the udev queue directory actually contain content?
[14:25] <rbasak> apw: AFAICT /run/udev/queue is just a file that is empty when stuff is in the queue, and not there when there's nothing in the queue.
[14:25] <rbasak> Or are you talking about something else?
[14:25] <apw> rbasak, i am not quite sure how it works, but many things consume the event stream directly via libudev
[14:26] <apw> rbasak, i am talking about the events that udev emits
[14:26] <apw> not the ones it is consuming
[14:26] <rbasak> I see
[14:26] <apw> but things consume those and run things, and those would not be something udevadm settle would know about
[14:26] <smoser> well, adding 'ps' and 'lsof' to 'fail' doesn't help much. can't catch anything interesting in it.
[14:26] <rbasak> Yeah anything that basically does what "udevadm monitor" does can essentially do that.
[14:27] <apw> and i am thinking of systemd in particular which has a number of device specific jobs
[14:27] <rbasak> smoser: try logging ps and lsof *before* the failure.
[14:27] <smoser> apw, can i get finer granularity on uptime anywhere?
[14:27] <rbasak> After the failure will be too late.
[14:27] <rbasak> Well, could be too late.
[14:27] <rbasak> If you log before and the race goes away, then we know we've been too slow.
[14:28] <smoser> rbasak, right. its too late, but running it before is going to be useless and slow everything down.
[14:28] <apw> sys-devices-pci0000:00-0000:00:1f.2-ata1-host0-target0:0:0-0:0:0:0-block-sda-sda1.device                            loaded active plugged   M4-CT256M4SSD2 EFI\x20System\x20Partition
[14:28] <apw> and for example on my box that systemd unit was created on my system, so clearly it is listening
[14:28] <smoser> ok. so lets say i dont care *why* this exists
[14:29] <smoser> what could i do that would not be racy.
[14:29] <smoser> what can i block until
[14:29] <rbasak> pkill -STOP udevd
[14:29] <rbasak> Do your thing
[14:29] <rbasak> pkill -CONT udevd
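rbasak's STOP/CONT idea, as a hedged sketch (all names invented). Note the objection smoser raises next: while udevd is frozen, the /dev/<dev>1 node won't be created:

```shell
# Freeze udevd so no rules fire while the partition table is rewritten,
# then resume it.  The device and the partitioning command are
# caller-supplied; this is a sketch, not curtin code.
repartition_quiesced() {
    local dev="$1" rc
    shift
    pkill -STOP udevd 2>/dev/null || true
    "$@" "$dev" && rc=0 || rc=$?
    pkill -CONT udevd 2>/dev/null || true
    return "$rc"
}
```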
[14:29] <apw> oh ugg
[14:29] <smoser> but i need it
[14:29] <smoser> because it is going to create /dev/<dev>1
[14:30] <smoser> udev is what creates that.
[14:30] <rbasak> I know it's ugly.
[14:30] <smoser> and i really dont want to get to managing my own events. or replacing udev.
[14:30] <rbasak> But you don't technically need that. Create a device node somewhere else. You know what it'll be.
[14:30] <apw> that sounds dangerous
[14:30] <smoser> well, even then i still have to know.
[14:31] <smoser> and yeah... its not really safe either if udev is telling other things they should possibly mess with the block device
[14:31] <smoser> as it will still be busy
[14:31] <smoser> the path in /dev/ is not what is busy...
[14:31] <rbasak> Or, what if you inotify watch for the device node to arrive instead?
[14:31] <smoser> still might be busy.
[14:31] <apw> the device is there ok now, it is busy
[14:31] <apw> can't you just retry if it is busy
[14:32] <rbasak> It's only busy in smoser's test
[14:32] <apw> what is it in the real scenario
[14:32] <rbasak> In reality it isn't busy when writing the partition table, right?
[14:32] <rbasak> Blat partition table, ask kernel to reload, wait for partition device node to appear, write to partition device node.
[14:33] <smoser> well, the thing we caught was that after partitioning, when going to 'mkfs' the partition did not exist
[14:33] <rbasak> In a way, it's cleaner and better abstracted away from udev to just wait for the device node to appear.
[14:33] <smoser> rbasak, i don't follow. i don't think your suggestion is safe from udev.
[14:33] <smoser> because 'write to device node' is "mkfs.ext4"
[14:34] <smoser> and that checks "is this busy"
[14:34] <smoser> and even if it didn't check, and just did it... the fact that it is busy means something is (possibly) doing something to that.
[14:34] <rbasak> No it doesn't. This is Unix - no locks.
[14:34] <rbasak> If the partition device node exists, you can write to it.
[14:34] <rbasak> As long as it matches the partition table you're safe. And I don't think that's an issue, is it?
[14:34] <smoser> but i'm not guaranteed something else isn't using it.
[14:35] <rbasak> You aren't getting EBUSY on the kernel re-read partition table ioctl in practice.
[14:35] <rbasak> If something else is using it it probably doesn't matter anyway. I seriously doubt anything would be *writing* to it.
[14:35] <smoser> i doubt that too.
[14:35] <smoser> but whatever is causing me to get EBUSY could also be causing mkfs to get that.
[14:35] <smoser> right?
[14:36] <apw> it is almost certainly something trying to identify that device
[14:36] <smoser> its probably blockdev
[14:36] <smoser> i suspect
[14:36] <smoser> and udev doing /dev/by-id/
[14:36] <rbasak> smoser: but you're not getting EBUSY in reality. I don't think you risk that unless you change the partition table twice in quick succession, which you're not doing.
[14:37] <rbasak> smoser: the race you have in your test is that you change it twice in quick succession, and the first change causes stuff to read the partitions which causes the second change attempt to fail.
[14:37] <rbasak> smoser: that's not happening in reality, is it?
[14:37] <smoser> well, in reality, what happened was /dev/<disk><ptnum> did not exist at all
[14:37] <smoser> when i went to mkfs to it
[14:37] <smoser> which is odd, because the code actually checked "does it exist".
[14:38] <rbasak> Yes, and for that case, just waiting for the device node to appear should be fine.
[14:38] <smoser> which i think means that the code checked and found it, udev continued on, and then removed and created it.
[14:38] <strikov> rbasak: in reality udev settle returned before all the hooks finished
[14:38] <rbasak> > which is odd, because the code actually checked "does it exist".
[14:38] <rbasak> It did?
[14:38] <smoser> and after the remove mkfs happened
[14:38] <smoser> yes. the code has that. [ -b <partition> ]
[14:40] <smoser> rbasak, code in trusty is at:
[14:40] <smoser>  http://bazaar.launchpad.net/~curtin-dev/curtin/trunk/view/201/helpers/common#L250
[14:40] <smoser> i'm not sure if we were in pt_mbr or pt_uefi
[14:41] <strikov> smoser: assert_partitions checks for /dev/vda1, right?
[14:41] <smoser> strikov, well, yeah. but we didn't have assert_partitions until just yesterday. its not in trusty
[14:41] <smoser> but the  code in trusty does do [ -b ${target}1 ]
[14:41] <smoser> http://bazaar.launchpad.net/~curtin-dev/curtin/trunk/view/201/helpers/common#L191
[14:42] <smoser> so all of that happened, and then after that happened, we tried to 'mkfs' and the device did not exist
[14:42] <rbasak> But that doesn't call mkfs.ext4?
[14:43] <rbasak> And pt_mbr doesn't seem to do the check at all?
[14:43] <rbasak> I'm looking at Wily BTW. Is Trusty materially different here?
[14:44] <rbasak> Oh I'm sorry
[14:44] <rbasak> pt_mbr does check for 1
[14:45] <rbasak> wipefs "--offset=$(($start*512))" "$target"
[14:45] <rbasak> Why not "${target}1" with no offset there?
[14:47] <rbasak> smoser: what if /dev/sda1 existed previously?
[14:47] <rbasak> smoser: say it did, then you repartition, then you ask kernel for reload.
[14:47] <smoser> rbasak, mkfs happens later in other code.
[14:47] <rbasak> smoser: that doesn't happen yet, then previous /dev/sda1 still exists.
[14:47] <rbasak> smoser: your test passes.
[14:48] <rbasak> smoser: then udev sees the old /dev/sda1 going away, so deletes it.
[14:48] <rbasak> smoser: then you try to mkfs.
[14:48] <smoser> rbasak, wipefs blocks
[14:48] <rbasak> smoser: then udev sees the new /dev/sda1 arriving, and creates it.
[14:48] <smoser> it does the same thing. wipes filesystem . blockdev . rereadpt
[14:48] <smoser> err.. wipedev does
[14:49] <smoser> but you're right. it could be that.
[14:49] <rbasak> I don't think that matters. Are you checking that the old /dev/sda1 has vanished?
[14:49] <smoser> wipefs  calls  the BLKRRPART ioctl when it has erased a partition-table
[14:49] <rbasak> I'm fairly sure that udevadm settle is racy
[14:49] <smoser> well, clearly it is racy
[14:50] <smoser> that is proven at this point :)
[14:50] <rbasak> So at this stage I think the best solution is to inotify on /dev for what you want, rather than udevadm.
[14:50] <rbasak> If you're vanishing sda1, make sure it has vanished before you continue.
[14:51] <rbasak> But better, I think you could adjust things to make sure that the ioctl gets called once and only once.
[14:51] <rbasak> Hmm. Though then you have a race since you don't know if sda1 is the old one or the new one.
[14:52] <rbasak> So maybe disappear sda1, wait for /dev/sda1 to not exist, then partition as you want, then wait for /dev/sda1 to exist.
[14:52] <rbasak> Given that udevadm settle is racy, best to rely on its actual result with inotify I think.
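The disappear-then-reappear sequence rbasak describes, as a polling sketch (inotifywait from inotify-tools would avoid the poll; all names and timeouts here are guesses):

```shell
# Wait for a path to appear or vanish instead of trusting udevadm
# settle.  Timeout is in tenths of a second.
wait_for_node() {
    local node="$1" state="${2:-present}" timeout_ds="${3:-100}" i=0
    while [ "$i" -lt "$timeout_ds" ]; do
        case "$state" in
            present) [ -e "$node" ] && return 0 ;;
            absent)  [ -e "$node" ] || return 0 ;;
        esac
        sleep 0.1
        i=$((i + 1))
    done
    return 1
}

# the sequence from the discussion, roughly:
#   wait_for_node /dev/sda1 absent   # old node must be gone first
#   partition_disk /dev/sda          # hypothetical partitioning step
#   wait_for_node /dev/sda1 present  # then the new node must arrive
```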
[14:52] <smoser> freaking annoying
[14:53] <rbasak> BLKRRPART -> EBUSY is a separate problem.
[14:53] <smoser> its not really.
[14:53] <smoser> i don't think
[14:53] <smoser> its just a result of the race
[14:53] <rbasak> A different race.
[15:20] <caliculk> Hello, I recently installed some auditing programs on my Ubuntu Server installation. Afterwards I ran apt-get auto-purge and it responded with the following line: "Removing symbolic link vmlinuz.old you may need to re-run your boot loader[grub]", so I ran boot-repair just in case. However, at the moment, the boot-repair software has been stuck/running at "Unhide boot menu. This may require several minutes", however it has been roughly 20
[15:20] <caliculk> minutes and it still hasn't progressed. I was wondering if anyone had any suggestions on how to fix this or make sure nothing happens when I restart the server (as I am currently in Sweden and the server is in the US).
[15:21] <caliculk> I could ask someone to boot from a LiveCD, but I am not sure if there is any available where the server is.
[15:50] <med_> heh, jinx (email jinx) jamespage
[15:58] <jamespage> med_, lol
[15:58] <jamespage> med_, endeavouring to get that juno update out as well
[15:58] <jamespage> but might be next week now
[15:58] <med_> wins.
[15:58] <med_> thanks!
[15:59] <med_> next week seems... kind of busy
[15:59] <jamespage> mdeslaur, oh - not even the jinx I meant
[16:00] <toothe> Hi! Is there a *current* guide to installing Roundcube on Ubuntu? I keep getting a "unable to connect to database" error.
[16:26] <PGNd> On an Ubuntu Trusty server install, the /etc/resolv.conf contains both IPv4 and IPv6 nameserver declarations.  Along with a warning to NOT edit the file directly, as it's maintained by resolvconf.
[16:26] <PGNd> I need to change the IPv6 assignment, but there's no mention of any IPv6 nameserver in /etc/resolvconf/resolv.conf.d/*; I'm not clear where that originates.  In ubuntu-server land, where's the right place to make that change?
[16:38] <YamakasY1> ok, 5GB for / is not that much anymore these days
[16:38] <YamakasY1> I thought it was enough
[16:45] <patdk-wk> PGNd, same as always, in /etc/network/interfaces
[16:46] <PGNd> patdk-wk: Are /etc/resolvconf/resolv.conf.d/* ignored?
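For anyone with PGNd's question: under resolvconf, per-interface DNS servers (IPv6 included) are normally declared on the interface stanza in /etc/network/interfaces, which resolvconf then merges into /etc/resolv.conf. The addresses below are placeholders:

```
iface eth0 inet6 static
    address 2001:db8::10/64
    gateway 2001:db8::1
    dns-nameservers 2001:db8::53
```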
[17:21] <Overand> I'm trying to figure out details on *how* ruby/rails/passenger/gems/etc were configured on my Ubuntu Server box.  (So, anyone using ruby and/or rails and/or passenger on ubuntu server (in my case 12.04?))  i don't know what was installed manually vs. the package manager etc
[17:43] <caliculk> Hello, I recently installed some auditing programs on my Ubuntu Server installation. Afterwards I ran apt-get auto-purge and it responded with the following line: "Removing symbolic link vmlinuz.old you may need to re-run your boot loader[grub]", so I ran boot-repair just in case. However, at the moment, the boot-repair software has been stuck/running at "Unhide boot menu. This may require several minutes", however it has been roughly 60
[17:43] <caliculk> minutes and it still hasn't progressed. I was wondering if anyone had any suggestions on how to fix this or make sure nothing happens when I restart the server (as I am currently in Sweden and the server is in the US).
[17:44] <caliculk> I just want to know some reasons that unhide boot menu might stop working, or why it might stall.
[17:45] <caliculk> Here is a ubuntu-pastebin for boot-repair: http://paste.ubuntu.com/11133824/
[18:13] <YamakasY1> damn my server is messed up, it boots but the partition table is acting weird, I cannot do an apt-get upgrade as it says there is no space left on device (which there is)
[18:17] <YamakasY1> but all services run great
[18:30] <pmatulis> YamakasY1: so you filled your disk
[18:30] <YamakasY1> pmatulis: nope
[18:34] <YamakasY1> pmatulis: any other options
[18:40] <pmatulis> YamakasY1: you said it's full then you said it's not. what's going on?
[18:41] <YamakasY1> pmatulis: fixed
[18:41] <YamakasY1> http://techpain.blogspot.nl/2011/07/df-error-df-cannot-read-table-of.html
[18:59] <Overand> Holy crap zsh is smart.  It won't let me tab complete "git add (filename)" if the filename - for example - hasn't been changed since the last commit.  wow.
[18:59] <Overand> (Well, the git module for zsh anyway)
[19:04] <jrwren> i always use git add -up
[19:49] <robertj> sooo...i caught my rabbit, don't know what to do with it now
[19:49] <pmatulis> know thy shell...
[19:50] <robertj> got my netbooting from dhcp working so i can hotplug in 20 or 30 machines and get them all booted up
[19:50] <robertj> but it occurs to me...them all sharing /etc and /var/lock and friends probably aint such a great idear
[19:51] <robertj> so lots of /var can go into tmpfs so that's not a biggy
[19:51] <robertj> but /etc probably ought be a bit fancier...
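For the /var side of robertj's plan, per-client tmpfs mounts in fstab are the usual approach on a shared NFS root (the mount points and sizes below are guesses, not from the discussion):

```
# volatile per-client state on a shared NFS root
tmpfs  /var/lock  tmpfs  size=16m,mode=1777   0 0
tmpfs  /var/run   tmpfs  size=16m,mode=0755   0 0
tmpfs  /tmp       tmpfs  size=256m,mode=1777  0 0
```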
[21:30] <pythonista> Hi, I am having an issue with updating the mysql package. I accidently restarted while the update manager was running an update and am now having an issue getting mysql to start. Here is a full description of the problem with a print out of the error message: http://askubuntu.com/questions/623797/error-updating-mysql-package
[23:58] <IronDev> Hi I am new to openstack and I want to know where do I find openstack on ubuntu server