[09:04] <apw> j4s0nmchr1st0s (N,BFTL), i don't think that makes any sense
[13:45] <tjaalton> is it normal that booting to a new kernel renders a drive in an md array broken? the server had an uptime of 197 days, and booting (trusty) to -39 caused issues
[13:49] <smb> tjaalton, no
[13:50] <tjaalton> I'll try an earlier one..
[13:50] <tjaalton> it's a remote system so kinda awkward to test things
[13:50] <smb> tjaalton, question is what exactly "broken" means
[13:52] <tjaalton> smb: trying to access /dev/sda fails
[13:52] <tjaalton> so no wonder it got dropped from the arrays
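(For context: a dropped md member shows up in /proc/mdstat as a `_` in the status bracket, e.g. `[U_]` instead of `[UU]` for a two-disk mirror. A minimal sketch of that check, using a hard-coded sample mdstat snippet that mimics the situation described here, with sda missing; the sample text itself is an assumption, not taken from the actual server.)

```shell
# Hypothetical /proc/mdstat excerpt for a two-disk RAID1 whose
# first member (sda1) has been dropped; "[2/1] [U_]" marks the gap.
mdstat='md0 : active raid1 sdb1[1]
      976630464 blocks super 1.2 [2/1] [U_]'

# Any "_" inside the status bracket means a member is missing.
status=healthy
if printf '%s\n' "$mdstat" | grep -q '\[U*_'; then
  status=degraded
fi
echo "$status"
```

Once the disk is reachable again, re-attaching it is typically `mdadm /dev/md0 --re-add /dev/sda1` (exact device names here are illustrative).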
[13:53] <tjaalton> it's a hp microserver that has worked ~4y without issues
[13:53] <smb> tjaalton, ok. but also the chance of a change in the kernel only affecting one drive sounds slim. I would rather believe in cosmic rays
[13:54] <smb> Some unfortunate coincidence with resetting the disk on reboot? 
[13:56] <tjaalton> yeah could be
[13:58] <smb> Sometimes weird shit happens. I had a SSD turning into a brick after resume... not any kind of pre-warning
[14:03] <tjaalton> yeah one of mine broke while I was at the (last) uds
[17:56] <bjf> zyga, trusty kernel and lts-hwe-trusty are being respun