[11:06] <NikTh> Hello, is there a way to work around this problem? " EE: Unresolved module dependencies in base package ! "
[11:06] <NikTh> Fail Log: https://launchpadlibrarian.net/205724333/buildlog_ubuntu-trusty-amd64.linux_4.0.1-999.20150506_BUILDING.txt.gz
[11:11] <apw> NikTh, that means you added or moved something into linux-image from extra and it needs something else from the -extra package moved over
[11:11] <apw> if you do not care about the split at all in your use case, then you can just turn it off
[11:11] <apw> if you do care you need to find out what it is whining about and pull it over too
[11:11] <apw> if you don't think you are doing such things then we need to read the log
[11:18] <NikTh> apw: Thanks for the answer. Can you look at the log I already posted, the last lines? I have searched the web for this error but I couldn't find anything useful.
[11:18] <apw> NikTh, what did you do to the kernel, what are you changing
[11:19] <apw> are you trying to build a mainline kernel source there ?
[11:20] <NikTh> I added only one patch and another minor change. Nothing special I guess: https://launchpad.net/~nick-athens30/+archive/ubuntu/trusty4/+packages
[11:20] <apw> NikTh, but against which tree
[11:22] <NikTh> apw: Here is the full config if that helps https://launchpadlibrarian.net/205718414/linux_4.0.1-999.20150505_4.0.1-999.20150506.diff.gz . Which tree? I'm not sure I understand, but if I understand correctly, linux-stable.
[11:23] <apw> if you are building a non-ubuntu base (which it sounds like you are) you need to suppress the split with do_extras_package=false
[11:24] <NikTh> I have found an easy way (I think) to ubuntize the upstream kernel. clone: linux-stable, remote: ubuntu-trusty, check out the debian and debian.master folders...
[11:25] <NikTh> creating a new flavour (I don't want to change the original ones), changing the configuration, applying the patches, compiling, etc. Locally I have no such problems; the kernel builds normally. On Launchpad it fails.
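[editor's note] NikTh's "ubuntize the upstream kernel" workflow can be sketched as shell commands. This is a sketch under assumptions: the repository URLs, the tag, and the branch name are guesses, since he doesn't give them in the log.

```shell
# Procedure sketch (assumed URLs/branches; adjust to your mirrors).
git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
cd linux-stable
git checkout v4.0.1
# Pull in the Ubuntu packaging from the distro tree...
git remote add ubuntu-trusty git://kernel.ubuntu.com/ubuntu/ubuntu-trusty.git
git fetch ubuntu-trusty
# ...and take only the debian/ and debian.master/ directories from it.
git checkout ubuntu-trusty/master -- debian debian.master
```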
[11:26] <NikTh> do_extras_package=false inside amd64.mk, for instance?
[11:28] <NikTh> Ok. I have found the option. I will try that way. apw  Thanks :) 
[11:40] <apw> yes some things are subtly different when building on a buildd than building locally
[11:40] <apw> i assume it's building the original flavours which break, not the new ones
[11:41] <apw> as the inclusion list which enables the split is flavour specific
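[editor's note] The switch apw mentions is a packaging variable; a minimal sketch of where it could go, assuming trusty's debian.master layout (exact file placement may vary by tree):

```
# debian.master/rules.d/amd64.mk (sketch)
# Disabling the image/extra split keeps every module in linux-image
# instead of moving some to linux-image-extra, which sidesteps the
# "Unresolved module dependencies in base package" check entirely.
do_extras_package = false
```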
[14:40] <pkern> I'm confused about 3.13.0-52.86. Shouldn't it be released by now? precise got 52.85 (i.e. known broken wrt audit) yesterday instead of 52.86.
[14:48] <henrix> pkern: looking... give me a sec
[14:51] <henrix> pkern: so, the 3.13.0-52.86 is currently on the kernel PPA, and should hit -proposed soon.
[14:52] <henrix> pkern: there have been a couple of respins of that (and other...) kernels recently, due to regressions, security fixes...
[14:53] <pkern> henrix: Yeah, I'm fetching it from the kernel PPA right now.
[16:52] <lamont> 3.13.0-49-generic running in a vm under 3.13.0-51-generic, with its disk being a vm on a swraid6 pv.. under what circumstances would it decide that it was getting all kinds of disk read errors on /dev/sda?
[17:04] <smb> Guess one vm means lv...
[17:08] <lamont> disk is an lv on a vg which has one pv of swraid6
[17:09] <smb> if that sda is inside the guest and the vm is qemu/kvm and the host does not whine about swraid failing maybe the device emulation of qemu could be blamed
[17:09] <lamont> May  6 11:01:43 radicchio kernel: [46551.054088] 3w-xxxx: scsi0: Command failed: status = 0xc7, flags = 0x7f, unit #5.
[17:09] <lamont> May  6 11:01:43 radicchio kernel: [46551.054155] sd 0:0:5:0: WARNING: Command (0x28) timed out, resetting card.
[17:09] <lamont> would be relevant
[17:09] <lamont> sde (aka sd 0:0:5:0) is one of the members of the gargantuan swraid6
[17:10] <lamont> because every house needs a 10.74TB raid6, right?
[17:10] <smb> hm, yeah, though if there are no more failed ones this should in theory not bother the guest
[17:11] <smb> that is probably mdraid, so does cat /proc/mdstat help?
[17:13] <smb> lamont, not sure ... my house only has 8T raid5 :-P
[17:14] <lamont> -radicchio(root) 326 : grep F /proc/mdstat
[17:14] <lamont> -radicchio(root) 327 : 
[17:14] <lamont> I'm suspecting that the timeout in the host kernel is propagating to the guest, but it used to work just fine..
[17:16]  * lamont will get -52 on both host and guest, and see if that makes any difference
[17:16] <lamont> biggest pain is that the guest is the MTA for me
[17:16] <smb> lamont, It would be odd but with software anything is possible
[17:18] <lamont> true that
[17:20] <smb> lamont, otoh, thinking on it, with "normal" disks as part of the raid (not, say, those targeting nas use with a shorter recovery period), maybe this is a case of an unlucky sync waiting on an io to finish
[17:23] <lamont> these are your generic, everyday 3TB sata drives, with 2 partitions, the big one in the raid6, and the small one in a raid1
[17:23] <lamont> well, one of 3 raid1s, but whatever
[17:33] <smb> lamont, not sure how helpful that is (since to be sure one would need a failing device when one needs it, and not on anything one values either) but there seems to be a timeout tweak in /sys/block/sda/device (which I guess is there for the emulated (not virtio) device in the guest). maybe tweaking that to 60 (it seemed to be set to 30), and assuming that's seconds, allows the guest to be more forgiving
[17:34] <lamont> I shall give that a try
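[editor's note] smb's tweak can be sketched as a small shell helper. Assumptions: the guest sees its disk as an emulated SCSI device (`sda`), and the sysfs attribute value is in seconds with a default of 30. The sysfs root is parameterized here only so the function can be exercised against a scratch directory; on a real guest it is `/sys` and writing it needs root.

```shell
# Sketch: raise the per-command timeout the guest kernel applies to its
# emulated disk, so slow io on the host raid is less likely to surface
# as read errors inside the guest.
set_disk_timeout() {
    sysfs_root="$1"   # normally /sys
    dev="$2"          # e.g. sda
    secs="$3"         # timeout in seconds; the kernel default is 30
    f="$sysfs_root/block/$dev/device/timeout"
    [ -w "$f" ] || { echo "no writable $f" >&2; return 1; }
    echo "$secs" > "$f"
}

# On the guest, as root:
#   set_disk_timeout /sys sda 60
```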
[17:42] <smb> lamont, Another option could be to change the guest from using that lv as emulated device into virtio disk (though that has a bit of a risk if mounting inside the guest is not using uuid or label, because devices will rename from sdX to vdX). Would improve guest performance and get rid of assumptions made for real hw.
[17:44] <lamont> smb: syntax?
[17:44] <lamont>     <disk type='block' device='disk'>
[17:44] <lamont>       <driver name='qemu' type='raw'/>
[17:44] <lamont>       <source dev='/dev/RADICCHIO/mmjgroup-root'/>
[17:44] <lamont>       <target dev='hda' bus='ide'/>
[17:44] <lamont>       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
[17:44] <lamont>     </disk>
[17:44] <lamont> that's the current config
[17:45] <smb>    <disk type='block' device='disk'>
[17:45] <smb>       <driver name='qemu' type='raw'/>
[17:45] <smb>       <source dev='/dev/datavg/arg-trusty6401'/>
[17:45] <smb>       <target dev='vda' bus='virtio'/>
[17:45] <smb>       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
[17:46] <smb>     </disk>
[17:46] <smb> lamont, not sure one needs all lines 
[17:47] <lamont> smb: ok.  I'll smack that around later
[17:47] <smb> usually I attach virt-mangler err manager to the other side and be lazy. I would drop the address line as some numbering is done magically
[17:48] <smb> lamont, assuming you go virsh edit
[17:50] <lamont> virsh edit is love
[17:52] <smb> lamont, so maybe you need "<controller type='pci' index='0' model='pci-root'/>" ... but then index may need tweaking because of ide controller and slot for the disk needs not to conflict with other entries...
[17:53] <smb> lamont, meh I'll email you a complete definition
[17:58] <lamont> ta
[18:07] <NikTh> God! This kernel build is so frustrating (well, when you don't know how to do it correctly) :P
[18:08] <NikTh> apw: When you're around and have time, any explanation of why amd64 succeeded and i386 failed would be appreciated :-)
[18:08] <apw> NikTh, i'd need a log
[18:10] <NikTh> The fail log I guess: https://launchpadlibrarian.net/205750108/buildlog_ubuntu-trusty-i386.linux_4.0.1-999.201505061431_BUILDING.txt.gz
[18:11] <apw> NikTh, that is failing because you don't have the manual pages for the cloud tools in your upstream repo
[18:13] <NikTh> apw: Is this message related to manual pages for cloud tools? "cannot stat '/build/buildd/linux-4.0.1/tools/hv/*.8': No such file or directory" 
[18:13] <apw> NikTh, we carry a patch for those i think, to create some, so you might need that, or rip the bit which is trying to copy them
[18:13] <apw> yes
[18:14] <NikTh> apw: Ok. If I want to patch the kernel with Ubuntu patches, and for this specific version, are these the only patches ? http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.0.1-vivid/
[18:15] <NikTh> the three .patch files I see there ? 
[18:16] <NikTh> I suppose yes. http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.0.1-vivid/README
[18:18] <apw> those are just the packaging we used to build the kernel, but the kernel there is a mainline kernel with no ubuntu sauce on it
[18:18] <apw> those are not "ubuntu" kernels, but builds of mainline without ubuntu changes, for ubuntu
[18:19] <apw> they are merely debug builds to allow problem isolation, not production kernels
[18:20] <NikTh> apw: Thanks. I know the difference between Ubuntu and mainline kernels. But where could I find the patches you mentioned for the cloud tools, OR how can I disable the "bit which is trying to copy" the man pages? Thanks.
[18:20] <apw> the patches you need would be in our git repo
[18:22] <apw> 52ba18cffbebf208ceef5bbf9c830aac7b8c464e seems to be the skeleton man page, or you could just find and remove the command from debian/rules.d/*
[18:37] <NikTh> apw: I think I'll prefer the second option, for now. Thanks again for the valuable help.
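[editor's note] apw's first option (the skeleton man pages from commit 52ba18cffbeb) can be sketched like this: give the packaging something at `tools/hv/*.8` to copy. The daemon names below are assumptions based on what usually lives in `tools/hv`; run from the top of the kernel source tree.

```shell
# Sketch: create placeholder man pages so the rule that copies
# tools/hv/*.8 no longer fails with "No such file or directory".
mkdir -p tools/hv
for page in hv_kvp_daemon hv_vss_daemon hv_fcopy_daemon; do
    cat > "tools/hv/${page}.8" <<EOF
.TH ${page} 8
.SH NAME
${page} \- Hyper-V guest daemon (placeholder man page)
EOF
done
```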
[18:51] <lamont> grub-probe: error: disk `lvmid/jlLb2C-1kiF-tFKm-CvIK-sHDL-SJJZ-YE9fED/ftEZLq-JaxY-JSaD-CVuK-fPo4-mpOp-YQOOkD' not found.
[18:51] <lamont> well done
[18:51] <lamont> (looks like snapshots are poorly handled, again)
[18:52] <smb> lamont, err is that the same environment as before... 
[18:53] <lamont> totally
[18:53] <lamont>   /usr/sbin/grub-probe: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
[18:53] <lamont> about 8 of those per kernel, and 1 each of the disk-not-founds per kernel per lvm snapshot
[18:58] <smb> I am a bit confused ... kinda hard to grok without knowing the layout. But one thing is that probing is excessive so everything multiplies
[18:59] <smb> hm... dm-snapshot likely needs to be loaded
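[editor's note] If smb's guess is right, one fix would be to load the snapshot target once (`modprobe dm-snapshot`, as root) and persist it via the initramfs module list. A config sketch, under the assumption that dm-snapshot really is what grub-probe is missing:

```
# /etc/initramfs-tools/modules (sketch): modules listed here are loaded
# early at boot; run "update-initramfs -u" after editing this file.
dm-snapshot
```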
[21:26] <lazyPower_> ogasawara: ping
[23:34] <lamont> smb: switched, now shows /dev/sda1 for root, we'll see how well it behaves
[23:38] <lamont> smb: though it always says sda