[00:00] <JanC> lifeless: what I was pointing at, is that 2 drives failing at the same time usually points to a bad batch
[00:00] <lifeless> JanC: but thats the thing, it doesn't.
[00:01] <lifeless> JanC: *that* was the point of the google analysis
[00:02] <JanC> well, 2 drives from the same batch failing very prematurely
[00:02] <lifeless> JanC: if you run enough arrays - say 10x2TB arrays, even if you multi-batch the drives in every array...
[00:03] <JanC> probably the closer the serial number the more likely the correlation of errors
[00:06] <lifeless> I'm curious, did you read the google paper ?
[00:08] <JanC> no, but if you see almost all disks from a certain batch fail in less than 2 years, the chance that they fail "together" is quite high...
[00:09] <utter> JanC: thanks for the help, tracepath and mtr -u work just fine, traceroute refuses to work. I'll settle for that.
[00:09] <JanC> so UDP works but ICMP not?
[00:10] <utter> JanC: with mtr i need to use switch -u (UDP) yes
[00:10] <utter> i know its not the firewalls since i switched off both hardware router and ufw
[00:10] <JanC> sounds like a router or firewall blocking ICMP
[00:10] <utter> :P
[00:11] <JanC> *somewhere*
[00:11] <utter> could it be ISP blocking, yeah
[00:11] <JanC> if it happens for all hosts, sure
[00:11] <utter> since it always times out at hop 3.
[00:12] <utter> JanC: many thanks, i am happy now
[00:12] <utter> (admits i even switched ethernet cable on the server and eth port)
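A quick way to reproduce the difference utter describes is to run mtr in both probe modes and compare where the trace stalls (hostname below is a placeholder):

```
# mtr probes with ICMP echo by default; -u switches to UDP datagrams.
# If the ICMP run stalls at a hop where the UDP run does not, that hop
# (or something in front of it) is dropping ICMP specifically.
mtr --report --report-cycles 3 example.com      # ICMP probes
mtr --report --report-cycles 3 -u example.com   # UDP probes
```
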
[00:14] <lifeless> JanC: so they have data, you are speculating.
[00:15] <JanC> lifeless: they have statistics   ;)
[00:16] <utter> Good night Ubuntu <3
[00:16] <JanC> lifeless: did they split up statistics on batch, and provide worst/best case scenarios?  ☺
[00:18] <lifeless> JanC: they instrumented every drive in every server, with model age manufacturer service history
[00:18] <lifeless> JanC: including IO load
[00:19] <JanC> lifeless: but the only thing they care about is averages, as they have 100 mirrors to take over
[00:19] <JanC> or, more likely, thousands of mirrors
[00:20] <JanC> but if you have a link I'd happily read the paper  ☺
[00:21] <lifeless> its trivially googlable. The google paper doesn't talk correlation though; for that there are other papers
[00:21] <lifeless> like http://static.usenix.org/events/fast07/tech/schroeder.html
[00:23] <lifeless> anyhow, my point is that there is research on this, we don't need to rationalise or guess
[00:25] <zul> adam_g:  still around? https://code.launchpad.net/~zulcss/quantum/quantum-oslo-config/+merge/149184
[00:27] <JanC> lifeless: that paper says nothing about "bad batches"
[00:28] <JanC> which is explainable: for Google a bad batch is just a minor issue
[00:28] <lifeless> right; the schroeder paper makes a nod to it.
[00:28] <JanC> for a smaller company, a bad batch might be life or death  ;)
[00:31] <lifeless> JanC: you might find http://storagemojo.com/2007/02/26/netapp-weighs-in-on-disks/ an interesting read
[00:33] <lifeless> JanC: it has some further links.
[00:34] <JanC> lifeless: yes, will read it tomorrow
[00:34] <JanC> it's 1:30am here now ;)
[00:34] <lifeless> gnight!
[01:38] <hacosta> hi.. trying to use vmbuilder's existing chroot feature
[01:43] <deeprogram> I downloaded the ubuntu server version from "http://www.ubuntu.com/download/server/thank-you?distro=server&bits=64&release=latest" but I got "ubuntu-12.10-server-amd64.iso". Why amd64 ?
[01:47] <escott> deeprogram, because thats what you downloaded
[01:47] <escott> deeprogram, what did you expect?
[01:48] <deeprogram> escott: I don't understand the name "AMD"
[01:48] <deeprogram> is it same as AMD CPU ?
[01:48] <escott> deeprogram, its their architecture yes
[01:48] <escott> deeprogram, AMD made it Intel copied it
[01:49] <deeprogram> escott: OK. thank you
[02:29] <anon321123> hey guys I need some help: I need to set up access to a mysql server on an ubuntu server. I opened port 3306 and created a new user for them. Do I need to add my new user to the mysql group? This is a brand new server; mysql was already set up on it
[02:39] <anon321123> anybody home?
[02:41] <holstein> anon321123: yup...
[02:41] <holstein> anon321123: im not sure what you are doing... i wouldnt think you should expose mysql like that
[02:41] <holstein> anon321123: im no expert, which is why i didnt answer, but i typically have ssh access.. via keys, and open ports as needed for services
[02:42] <holstein> ive never exposed mysql and wouldnt have any idea how to do that securely
[02:42] <holstein> all i can say is... can you just give the user ssh access?
[02:42] <anon321123> holstein: Hello. Thank you for listening. I am in america. I have a user that sent me an email from the other side of the world saying they need mysql. I set up a login for them. They have ssh access already
[02:44] <holstein> anon321123: they should have access to what they need then
[02:45] <holstein> anon321123: i dont expose mysql like that, and i dont think you should lightly
[02:45] <anon321123> holstein: Oh okay. Thank you very much.
[02:45] <holstein> the question is, why do they need that port open? and what are you providing them? just a database?
[02:47] <anon321123> holstein: I am trying to set up an environment for them to work in. The message I got was pretty vague and I am very new to all this stuff.
[02:47] <anon321123> holstein: I have to run real quick. Be back in a bit if you're still here. Thanks
[02:52] <holstein> anon321123: i would ask for more specifics... what you are setting up seems to me to be very insecure
[03:09] <anon321123> holstein: I did a dpkg reconfigure on mysql and deleted the iptables rule for it. I am just going to set up ssh access for them and take it from there. If anything is wrong I am sure they will let me know. Thank you
[03:21] <holstein> anon321123: that sounds safer to me.. im sure you'll get it sorted .. good luck!
[03:34] <anon321123> holstein: thanks
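A sketch of the safer setup holstein is gesturing at: leave 3306 closed to the world and let the remote user reach MySQL over the SSH access they already have. Hostnames, ports, and account names below are placeholders, not anything from the conversation:

```
# Run on the remote user's machine: forward a local port over SSH to the
# server's loopback-only MySQL, then connect through the tunnel.
ssh -N -L 3307:127.0.0.1:3306 them@server.example.com &
mysql --host=127.0.0.1 --port=3307 --protocol=TCP -u dbuser -p
```

This keeps MySQL bound to localhost on the server, so no iptables hole or exposed 3306 is needed.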
[07:51] <koolhead17> hi all
[07:51] <cfhowlett> koolhead17, greetings
[07:51] <koolhead17> cfhowlett: hi there.
[08:14] <Daviey> Good morning
[08:16] <cfhowlett> Daviey, greetings
[08:20] <leotr> hello! In the ubuntu-server-12.04.2.iso image, in the preseed directory, I found the following files: cli.seed, ubuntu-server-minimal.seed, ubuntu-server-minimavm.seed and ubuntu-server.seed. In the isolinux/txt.cfg menu there is an Install Ubuntu Server option that refers to ubuntu-server.seed. Does anything refer to cli.seed or ubuntu-server-minimal.seed?
[08:21] <koolhead17> hi Daviey
[08:29] <Daviey> leotr: not from the cd menu superficially, i believe cli and minimal are implied
[08:30] <leotr> Daviey: what is the difference between ubuntu-server.seed and ubuntu-server-minimal.seed?
[08:30] <Daviey> leotr: diff -u :)
[08:33] <leotr> Daviey: do you have experience remastering installation CDs?
[08:34] <jamespage> morning all
[08:35] <jamespage> yolanda, review required if you have time - https://code.launchpad.net/~james-page/quantum/oslo-config/+merge/149217
[08:35] <yolanda> jamespage, sure
[08:35] <jamespage> yolanda, thanks muchely
[08:36] <leotr> seems like ubuntu-server-minimal is not so minimal :)
[08:36] <yolanda> jamespage, why is that new dep?
[08:37] <jamespage> yolanda, quantum is the first project to start using oslo-config (openstack shared library)
[08:37] <Daviey> leotr: yeah.. you'd hope so :)
[08:37] <jamespage> yolanda, it landed in raring yesterday
[08:38] <yolanda> i approved it
[08:40] <leotr> what do i need to add to the preseed file to make the installation of ubuntu-server minimal without any questions (just select the menu item and *everything* installs on its own)
[08:42] <Daviey> leotr: You need to look at preseeding.. it's much less complex than i think you are making it :)
[08:43] <Daviey> leotr: check out, https://help.ubuntu.com/12.04/installation-guide/amd64/appendix-preseed.html
[08:45] <leotr> Daviey: is it difficult to add additional packages to CD and make them installed during installation? Is it difficult to figure out what is required to be written to CD so that no Internet connection is required for that?
[08:45] <leotr> second question is about packages
[08:46] <Daviey> leotr: it's a bit dirty.. but reasonable. http://razvangavril.com/linux-administration/custom-ubuntu-server-iso/ (i wouldn't do the kickstart bit)
[08:48] <leotr> Daviey: thank you
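For reference, the kind of lines leotr would put in a seed file to suppress the remaining installer questions. These keys follow the example-preseed.txt in the installation guide Daviey links; the list is abridged and the values are illustrative, so check exact key names against the guide for the release in use:

```
d-i debian-installer/locale string en_US
d-i console-setup/ask_detect boolean false
d-i netcfg/choose_interface select auto
d-i mirror/http/hostname string archive.ubuntu.com
d-i mirror/http/directory string /ubuntu
d-i partman-auto/method string regular
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
d-i passwd/username string ubuntu
d-i passwd/user-password password insecure
d-i passwd/user-password-again password insecure
```
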
[10:07] <RoyK> JanC: it's simple statistics, really. nothing more fancy
[10:45] <psivaa> jamespage: I reported bug 1130029 for raring lxc server post install test failure (test_lxc_api) - both amd64 and i386 are impacted
[10:51] <jamespage> hallyn, ^^ can you take a look when you start please
[10:51] <jamespage> psivaa, ^^ FYI
[10:51] <psivaa> jamespage: thanks
[11:12] <Daviey> jamespage: Hey, are you uploading ceph to the grizzly CA?
[11:13] <jamespage> Daviey, will do
[11:13] <jamespage> I pushed a fix for the cluster resource agents last night
[11:13] <jamespage> to raring that is
[11:17] <Jeeves> I just heard Canonical finally has some ipv6 space!
[11:27] <jamespage> adam_g, roaksoax: lets discuss the approach to passing the vip between services for the openstack ha stuff later today
[11:47] <koolhead17> jamespage: http://www.mail-archive.com/ubuntu-bugs@lists.ubuntu.com/msg3983923.html
[11:47] <koolhead17> hope someone is showing some love to this openVswitch issue :D
[11:47] <jamespage> koolhead17, two days ahead of you - fixed it on sat - in proposed and verified
[11:48] <koolhead17> jamespage: awesome. so will take some time to land on the cloud archive?
[11:48] <jamespage> koolhead17, openvswitch is not in the cloud archive
[11:48] <koolhead17> jamespage: ooh ok.
[11:52]  * koolhead17 pokes zul 
[12:04] <jamespage> koolhead17, released to updates now
[12:06] <koolhead17> k
[12:06] <jamespage> yolanda, erm - I broke something yesterday - https://code.launchpad.net/~james-page/quantum/fixup-quantum-agent-conf/+merge/149256
[12:06] <jamespage> please could you +1
[12:06] <jamespage> ta
[12:06] <yolanda> ok
[12:08] <yolanda> done
[12:22] <jamespage> yolanda, ta
[12:49] <Ul_> hello everybody. I can't get qemu-kvm to use the rbd image as a disk. I've configured the kvm xml file to use the monitor, I've created a virsh secret and I've added the <auth> tag to define the authentication. When I do a virsh create of the xml file, it says "error connecting" to the monitors. I'm following the steps shown here http://wiki.skytech.dk/index.php/Ceph_-_howto,_rbd,_lvm,_cluster#KVM_-_add_secret.2Fauth_for_u
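For comparison, this is the general shape of a working rbd <disk> stanza in the libvirt domain XML (pool, image, monitor hostname, and secret UUID below are placeholders). An "error connecting" to the monitors usually means the <host> entries or the referenced secret don't match the running Ceph cluster:

```
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='pool/image'>
    <host name='mon1.example.com' port='6789'/>
  </source>
  <auth username='admin'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>
```
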
[13:19] <pmatulis> where can i download alpha2 server?  all i found were cloud images
[13:19] <ogra_> server didnt participate in alpha2
[13:20] <pmatulis> hm.  thanks ogra_
[13:21] <ogra_> like desktop and most other images ... we are moving away from milestones nearly everywhere
[13:32] <Koheleth> any reason why recent security updates of kernel are being held back?
[13:36] <pmatulis> Koheleth: what security updates?
[13:41] <Koheleth> just found out its for 10.04 lts, not 12.04 :)
[13:42] <cfhowlett> ... delivers a digital smack
[13:42] <cfhowlett> to the head
[13:42] <Koheleth> virtualmin still wants to install though
[14:01] <zul> jamespage: quantum broke again
[14:01] <jamespage> zul, hag!
[14:01] <zul> jamespage: hehe
[14:01]  * zul pokies jamespage with a stick
[14:01] <jamespage> ImportError: No module named netifaces
[14:02] <jamespage> blah!
[14:11] <hallyn> jamespage: huh, still trying to figure out what the actual error is supposed to be.  it says 0 failures.  is it actually whining if there is any output over stderr?
[14:12] <jamespage> hallyn, test.sh returned code 2
[14:12] <jamespage> but why? nothing in the output
[14:17] <hallyn> jamespage: I don't have permission to set priorities on utah test cases bugs
[14:17] <jamespage> hallyn, ping psivaa
[14:20] <hallyn> I think psivaa is mad at me for taking so much time at last UDS with libvirt :)
[14:20] <hallyn> jamespage: so it's been a while since i've done it - those two commands to run utah tests can just be done on any cloud instance right?
[14:20] <psivaa> hallyn: lol no :), i could set the priorities if you'd like, and i have asked the UTAH dev team to grant access in the meantime
[14:22] <hallyn> psivaa: :)   thanks.  i think that one probably should be high.  btw are you the maintainer of the utah code base?
[14:23] <hallyn> what is the preferred route for updates?  merge proposals?  debdiffs?
[14:25] <psivaa> hallyn: ok, the priority is set now, but i do not maintain utah code, i'll ask the UTAH dev team to answer that
[14:25] <psivaa> gema: ^^^ could you please ?
[14:27] <hallyn> psivaa: cool, thanks.  see you at uds :)  (btw, that bug - it's still cropping up in various ways!)
[14:28] <gema> hallyn: do you have a fix to submit to utah?
[14:28] <psivaa> hallyn: ack, see you :)
[14:28] <gema> hallyn: the preferred method would be a merge proposal
[14:29] <hallyn> gema: no i don't yet :)  but i will
[14:29] <gema> hallyn: excellent, thanks
[14:29] <gema> we will be looking out for it
[14:30]  * hallyn goes to hide 
[14:30] <gema> hehe
[14:30] <gema> hallyn: smoke testing bugs always get fasttracked :)
[14:30] <smb> zul, hallyn, One of you care to sponsor a little upload of libvirt to raring? chinstrap:~smb/4review
[14:31] <zul> smb:  sure
[14:31] <smb> zul, ta, the changelog should be obvious ... *growl*
[14:37] <Daviey> lolz
[14:39] <zul> smb: looks good to me do you want to have a look hallyn
[14:40] <hallyn> uh, ok
[14:43] <smb> hallyn, FWIW, I also tested it on my Xen box. ;)
[14:46] <hallyn> smb: zul: ok.  did builders actually refuse to apply the patch without that (seemingly trivial) refresh?
[14:46] <hallyn> smb: zul: in any case, looks good, thx
[14:46] <zul> hallyn: i might have disabled it by mistake
[14:46] <hallyn> right, i see that
[14:46] <hallyn> zul: oh hey,
[14:47] <smb> hallyn, zul, I guess it was an interruption while doing it and then, meh
[14:47] <hallyn> sheepdog - there's a request to enable it.  can it be optionally enabled, or would libvirt need to build-dep on something in universe?
[14:49] <zul> i think you would need to do a MIR for sheepdog
[14:51] <hallyn> couldn't just have it in Suggests?
[14:51] <hallyn> (i have no idea how it is hooked up...)
[14:51] <zul> neither do i
[14:51] <hallyn> oh aren't you the maintainer for sheepdog?
[14:52] <hallyn> ok well i don't have time to mess with that today, else i'd try a test build and run...
[14:52] <hallyn> i'll comment on the bug then (i just figured you'd know offhand :)
[14:52] <hallyn> thx - ttyl
[14:54] <zul> smb: uploaded
[14:55] <smb> zul, yay :)
[14:57] <jamespage> zul: https://code.launchpad.net/~james-page/horizon/g3-recompress/+merge/149290
[15:02] <jamespage> zul, I'm not promoting nodejs
[15:02] <jamespage> zul, I don't believe its supportable in main
[15:02] <zul> jamespage:  neither am i
[15:02] <leotr> Hello! I tried to create an installation disc following http://razvangavril.com/linux-administration/custom-ubuntu-server-iso/. I added extra packages but now i get an "unable to locate package-name" error (the package is in the extra directory)
[15:08] <jamespage> zul, fixing quantum now
[15:08] <zul> jamespage: cool im still stuck on quantum
[15:09] <jamespage> cinder?
[15:10] <zul> rtslib changes
[15:24] <Haris> Hello all
[15:25] <Haris> does ubuntu/debian name the kickstart file a "preseed" file?
[15:25] <Haris> from ( https://help.ubuntu.com/community/Installation/LocalNet#Advanced:_Hands-Off.2C_Preseeded_Network_Server_Install ) is this ( preseed/url=http://192.168.1.7/preseed-feisty.cfg ) the kickstart file mentioned under point #5 ?
[15:26] <Haris> I need to build a kickstart/preseed file, a basic one, for a minimal install on a remote box. I have an active pxe setup with 12.04.2 LTS imported via cobbler
[15:27] <leotr> Haris no
[15:29] <Haris> I was looking at -> https://help.ubuntu.com/12.04/installation-guide/example-preseed.txt
[15:30] <leotr> kickstart file is produced by kickstart utility, preseed is different thing. But both can be used at the same time
[15:31] <Haris> do we have an example kickstart for 12.04 lts ?
[15:31] <leotr> http://razvangavril.com/linux-administration/custom-ubuntu-server-iso/
[15:34] <jamespage> zul, yolanda: https://code.launchpad.net/~james-page/quantum/python-netifaces/+merge/149312
[15:34] <jamespage> seems I like fixing quantum
[15:34] <Haris> why do I need to have an ISO ? I don't have interactive access to the box I need to install 12.04 on
[15:34] <zul> jamespage:  heh just poke it with a stick and it will fall apart
[15:35] <zul> jamespage: there is a quantum-plugin-hyperv package?
[15:36] <jamespage> zul, yeah - I did that over the weekend - its currently empty as I managed to not include the install file in my branch
[15:36] <zul> lgtm
[15:36] <jamespage> zul, thinking about revisiting the way the plugins work to be a little more automatic
[15:37] <zul> jamespage: agreed
[15:37] <leotr> Haris: you don't need it... Just wanted to show you that kickstart and preseed are different things
[15:37] <Haris> ah, thank you!
[15:38] <Haris> checking it
[15:38] <leotr> Haris: but both can be used for unattended installations
[15:39] <Haris> I see
[15:39] <leotr> Haris, but currently i couldn't add extra packages... The way it's shown in the tutorial doesn't work
[15:40] <leotr> package not found error... but kickstart itself works
[15:40] <leotr> but you have network connection so it shouldn't be important for you
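The "unable to locate" error leotr keeps hitting is typically the on-disc package index: copying .debs into an extras directory isn't enough, because the installer resolves packages through the Packages index on the disc, which has to be regenerated. A hedged sketch, with paths assumed from the tutorial's layout rather than quoted from it:

```
# From the unpacked ISO root: rebuild the index over the extra packages,
# then compress it where apt/the installer expects to find it.
cd iso-root
apt-ftparchive packages pool/extras \
  > dists/precise/extras/binary-amd64/Packages
gzip -9c dists/precise/extras/binary-amd64/Packages \
  > dists/precise/extras/binary-amd64/Packages.gz
```
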
[15:50] <zul> jamespage: https://code.launchpad.net/~zulcss/cinder/cinder-refresh/+merge/149317
[15:53] <jamespage> zul, looking
[15:53] <jamespage> zul: https://code.launchpad.net/~james-page/keystone/grizzly-refresh-01/+merge/149320
[15:54] <zul> jamespage: looking
[15:57] <zul> jamespage: https://bugs.launchpad.net/ubuntu/+source/oslo-config/+bug/1130196
[16:01] <Haris> does having a separate partition for /boot help ?
[16:05] <jamespage> zul, looks borked "Starting cinder-volume node (version <cinder.openstack.common.version.VersionInfo object at 0x2a9be10>)"
[16:05] <zul> jamespage:  well that sucks
[16:21] <Haris> I need an example ks file for ubuntu. I have a template from centos, but it's not working. I've specified the language in it, but the 12.04 installer still asks me for the language. Also, it reports a cdrom failure, whereas I'm not looking to install via cdrom; I'm installing this box via pxe
[16:33] <Haris> also, why does the pxebooted installer of 12.04 ask me about the existence of a cdrom ?
[16:37] <eutheria> suggestions to which imap server would be fastest to deploy?
[16:37] <phunyguy> hey folks, I am trying to use motion to capture a security camera, and the only way it will work is with  LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l1compat.so before the command.  How can I add that to the init script in /etc/init.d ?
[16:49] <phunyguy> nevermind.  I got it.  `export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l1compat.so` in the init script
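phunyguy's fix, spelled out: the export has to happen before the daemon is launched so that motion inherits the variable. The library path is the one given above; the surrounding init-script context is an assumption:

```shell
#!/bin/sh
# Near the top of /etc/init.d/motion: export the v4l1 compatibility shim
# so the motion daemon started later in this script inherits it.
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l1compat.so
export LD_PRELOAD
# ... the rest of the init script starts motion as before ...
```

Plain `VAR=value command` on the daemon's start line also works, but exporting once near the top covers every invocation in the script.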
[17:06] <Haris> how do I specify a network server or archive from where ubuntu will fetch files for installing 12.04, rather than asking for a cdrom
[17:06] <Haris> is this something I can do in the kickstart file
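To answer Haris's question directly: yes, the mirror is set in the preseed file. These are stock d-i keys of the sort shown in example-preseed.txt, pointing a netboot install at an HTTP archive so no cdrom is needed; the hostname shown is Ubuntu's default archive and the values are illustrative:

```
d-i mirror/country string manual
d-i mirror/http/hostname string archive.ubuntu.com
d-i mirror/http/directory string /ubuntu
d-i mirror/http/proxy string
```
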
[17:13] <smoser> SpamapS, i'd really love some SRU team love for https://launchpad.net/ubuntu/precise/+queue?queue_state=1&queue_text=cloud-init
[17:18] <SpamapS> smoser: bug 1005551 needs a test case
[17:20] <smoser> SpamapS, i can do that.
[17:22] <SpamapS> smoser: ok, it looks good otherwise, will accept as soon as test case is there :)
[17:59] <hallyn> jamespage: when I try to reproduce the utah lxc failure, I get http://paste.ubuntu.com/1683416/
[18:11] <smoser> SpamapS, updated. i'll fix it up a bit, but there's a reasonable description/test case there now.
[18:11] <smoser> thank you
[18:26] <SpamapS> smoser: np, accepting now :)
[18:32] <smoser> smb, stupid question.
[18:32] <smoser> but how do i get the quantal/backport/whatever-its-called kernel in 12.04
[18:34] <RoyK> smoser: running quantal?
[18:34] <smoser> RoyK, 12.04
[18:36] <RoyK> why do you need another kernel?
[18:36] <RoyK> (and why do you run quantal on a server?)
[18:37] <smoser> RoyK, 12.04.2 installations now install a 3.5 kernel (ie, the one from quantal).
[18:37] <smoser> i'm asking how i can install that kernel into a system that was previously installed.
[18:37] <jcastro> smoser: there's a wiki page, sec
[18:38] <smoser> jcastro, http://askubuntu.com/questions/168218/will-ubuntu-12-04-1-include-the-new-linux-kernel <-- that didn't help me as much as it could have.
[18:38] <jcastro> I'll fix that once I find this page
[18:38] <smoser> (someone asked you about a kernel, and you told them about X)
[18:39] <jcastro> there's an entire page on how this works
[18:39] <jcastro> but unfortunately for us it's in the ubuntu wiki
[18:39] <RoyK> does 12.04.2 install with 3.5?
[18:40] <jcastro> yeah it's part of the enablement stack
[18:40] <RoyK> that doesn't make sense - the point of LTS is to be *stable*
[18:40] <smoser> https://wiki.ubuntu.com/PrecisePangolin/ReleaseNotes/UbuntuDesktop
[18:40] <jcastro> only new installs get the new kernel
[18:40] <smoser> RoyK, if you installed previously, you do not magically get the new kernels
[18:40] <smoser> new installs get new kernels.
[18:41] <RoyK> but why?
[18:41] <smoser> to support new hardware is the primary motivation
[18:41] <RoyK> the whole point of LTS is to remain stable
[18:41] <jcastro> https://wiki.ubuntu.com/Kernel/LTSEnablementStack
[18:41] <jcastro> there it is dude
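The wiki page jcastro links answers smoser's original question: an existing 12.04 install can opt in to the quantal-based enablement stack by name. The package names below are from memory of the LTS-enablement naming scheme of that era, so verify them against the wiki page; the X stack part only matters on desktops:

```
sudo apt-get install linux-generic-lts-quantal
# On a desktop, the matching X stack:
sudo apt-get install xserver-xorg-lts-quantal
```
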
[18:41] <adam_g> SpamapS: if you're still sitting at your SRU queue master console, there happens to be the openstack 2012.2.3 in queue for quantal as well (nova, glance, horizon, cinder, quantum)
[18:41] <smoser> those *$%& hardware companies keep making new stuff.
[18:41] <smoser> jcastro, yeah, i found that soon after you used the word 'enablement'
[18:41] <RoyK> smoser: still, it doesn't make sense
[18:42] <jcastro> well, the LTS needs new kernels to work on newer hardware
[18:42] <SpamapS> RoyK: the *only* point of point release LTS's is to enable new hardware.
[18:42] <RoyK> add new PCI IDs etc, but don't upgrade the kernel to something bleeding-edge
[18:42] <jcastro> otherwise, 6 months after an LTS release all of a sudden it doesn't install on an increasing number of systems
[18:42] <SpamapS> RoyK: if you don't like the new kernel, install w/ older point release.
[18:43] <RoyK> I still like the old model better
[18:43] <SpamapS> RoyK: I think the problem is the overhead of maintaining so many kernel trees.
[18:43] <jcastro> IMO the release notes should be clearer about that, I can imagine people installing the point release thinking they're getting the same thing as they did before but with slipstreamed updates.
[18:44] <SpamapS> adam_g: ouch, thats a much bigger ball of wax. Since its been 2 weeks, I'll carve out some SRU time tomorrow which is my normal day.
[18:46] <adam_g> SpamapS: thanks.  you might notice some changes to the way we're preparing changelogs + bug tags after discussion in #ubuntu-release a few weeks back. let me know if you have questions
[18:47] <RoyK> the problem with moving to a new kernel for an LTS release is new bugs. with new code, there's always new bugs. If there are new drivers, backporting them would be better. PCI IDs etc are added all the time, and doesn't take much time to add
[18:52] <jcastro> smoser: I've fixed up that AU answer, thanks.
[18:56] <SpamapS> RoyK: dunno if you've noticed, but people test things now. Its no longer about reducing change, its about managing it. But I do agree with you that the decision was probably made a little too lightly.
[18:57] <jcastro> I think it should have been more obvious in the release notes, etc.
[18:57] <jcastro> it took me way too long to find that wiki page
[18:57] <SpamapS> I wonder if that kernel breaks my macbook air's touchpad the way quantal/raring have.
[18:59] <RoyK> I'm still sceptical about introducing new kernels into an LTS release
[18:59] <jcastro> they're only for new installs on the new media
[19:00] <jcastro> LTS machines won't get an upgrade to a new kernel or anything like that.
[19:00] <jcastro> and I suppose the data from errors.ubuntu.com will let us know right away
[19:02] <RoyK> still sceptical - LTS should be *stable*
[19:03] <sarnold> the proliferation of UEFI on new hardware makes it a bit impractical to wait until 14.04 for a new LTS.. this did seem least bad of available options
[19:03] <jcastro> indeed
[19:16] <RoyK> sarnold: if redhat/centos gets away with it, why not ubuntu?
[19:19] <RoyK> imho the LTS releases should be rock stable, meaning no major kernel upgrades nor major package upgrades, just backports for fixes
[19:20] <RoyK> if this is changed to upgrading kernel just to add new hw support, it means LTS is no longer LTS
[19:20] <RoyK> it's moving towards the cutting edge
[19:20] <RoyK> that's what the non-LTS releases are for
[19:23] <patdk-wk> there are no more non-lts releases now
[19:24] <patdk-wk> at least from that blog post I was reading, lts was going to remain lts
[19:24] <patdk-wk> though the new model was supposed to be rolling releases, with backports to lts
[19:31] <RoyK> patdk-wk: well, if 12.04.2 has a new kernel, it's not really LTS, is it?
[19:34] <patdk-wk> hmm, mine doesn't, odd
[19:34] <sarnold> a fresh install does get the new kernel. updates have to ask for it by name.
[19:34] <patdk-wk> oh, I installed from 12.04.1 like a day before .2 came out
[19:35] <patdk-wk> sarnold, so how does that work?
[19:35] <patdk-wk> security patchs will go into both kernels?
[19:36] <sarnold> patdk-wk: I think so, what with it just being the quantal kernel it might not even be extra work. not sure. :)
[19:36] <patdk-wk> ya, but quantal support ends long before lts
[19:37] <sarnold> patdk-wk: based on a (too quick) skim of https://wiki.ubuntu.com/Kernel/LTSEnablementStack it looks a bit like we'd be offering stacks from the newer releases (if they happen) along the way
[19:40] <patdk-wk> yuk
[19:41] <patdk-wk> they are unsupported
[19:41] <patdk-wk> so if you install a 12.04.2+ cd, you will get a limited support kernel
[19:41] <patdk-wk> and will be forced to upgrade to the 14.04 stack to maintain support
[19:42] <patdk-wk> under item 9
[19:42] <patdk-wk> or, item 10
[19:53] <jcastro> yeah that sounds right
[19:54] <jcastro> you'd be on interim kernels until the next LTS
[19:55] <patdk-wk> if that was an installer-time option, I would be happy
[19:55] <jcastro> yeah, but it looks like at the time there were CD image issues
[19:56] <jcastro> I would expect in the future you'd choose at the installer level in your preseed or whatever
[19:56] <jcastro> but 2 ISOs isn't unmanageable
[19:56] <patdk-wk> guess for me, I have no point updating my local lib to .2
[19:56] <jcastro> just keep using the old ISO and you'll be fine
[19:57] <jcastro> existing LTS boxes won't get new kernel upgrades
[19:57] <RoyK> jcastro: that still doesn't make sense - LTS should be *stable* and no new kernels should arrive in such a distro
[19:57] <RoyK> even though it's in a new iso
[19:58] <patdk-wk> in the distro is fine, by default, I have issues with
[19:58] <patdk-wk> the option to use newer kernels has always existed
[19:59] <RoyK> well, the option of doing a kernel upgrade is fine
[19:59] <RoyK> but a new kernel being the default with 12.04.2 is *not* fine
[19:59] <patdk-wk> no, that makes sense even, that is when you know you need it
[19:59] <patdk-wk> but to do it without telling you, :)
[20:01] <patdk-wk> sounds like an, alt-cd image feature though
[20:19] <RoyK> patdk-wk: really, a new kernel in an LTS doesn't make sense
[20:32] <SpamapS> RoyK: as I said, I think the kernel team is stretched too thin to keep all of the hardware backporting going on so many LTS trees. Trying to auto-detect what kernel you will need is pretty close to impossible....
[20:33] <RoyK> SpamapS: ok
[20:33] <SpamapS> RoyK: so if you want old kernel -> 12.04.1 + updates. If you can't boot 12.04.1 because of new hardware.. try 12.04.2 ...
[20:33] <SpamapS> RoyK: a lot more supportable from Ubuntu's standpoint that way.
[20:33] <RoyK> SpamapS: is it that bad?
[20:37] <SpamapS> RoyK: with desktop LTS support extending to 5 years, yes I think it is
[20:39] <RoyK> server and desktop should be split in that sense
[20:44] <SpamapS> RoyK: yeah, I think having the two diverge a lot would be just as much of a nightmare though.
[20:45] <RoyK> imho LTS should be rock stable
[20:45] <RoyK> no new versions should be allowed
[20:45] <RoyK> only backports
[20:46] <RoyK> non-lts should have new things
[20:46] <lifeless> all software sucks
[20:46] <lifeless> software that sucks will have security bugs
[20:46] <RoyK> that's what it used to be
[20:46] <lifeless> so no new versions -> vulnerable software
[20:46] <RoyK> yes, but using new software in LTS breaks things
[20:46] <RoyK> and makes LTS != LTS
[21:08] <Combatjuan> Hello.  I have a server with some watchdog processes that are going nuts.  top shows them occasionally using 330% CPU and having logged more CPU time than anything else.
[21:09] <Combatjuan> The last thing I want to do is make this server reboot.  I'm not sure how to go about figuring out why they're mad, and I don't want to set off the watchdog restart bomb.
[23:02] <GeorgeTorwell> does anyone know where I can see a list of abstractions for apparmor
[23:03] <sarnold> GeorgeTorwell: ls /etc/apparmor.d/abstractions/
[23:04] <sarnold> cripes there's a lot :)
[23:04] <GeorgeTorwell> thanks
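For context, the abstractions sarnold lists are pulled into AppArmor profiles with #include lines. A minimal illustrative profile (the program path and rule are hypothetical, only the include syntax is the point):

```
/usr/bin/someapp {
  #include <abstractions/base>
  #include <abstractions/nameservice>

  /usr/bin/someapp mr,
}
```
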
[23:16] <sliddjur> When restoring files with duplicity restore, how do I restore all files from the latest backup and overwrite current files?
[23:17] <sliddjur> "Duplicity will not overwrite an existing file. Here's the output if a change is made to the script above to restore the file to /etc/apt/sources.list:" (https://help.ubuntu.com/community/DuplicityBackupHowto)
[23:17] <sliddjur> can I force overwrite?
[23:20] <holstein> sliddjur: you could remove the targets
[23:21] <sliddjur> you mean restore to another location?
[23:21] <holstein> or that...
[23:22] <sliddjur> holstein: what do you mean remove targets then?
[23:23] <holstein> sliddjur: if duplicity will not overwrite an existing file, then remove the existing file... otherwise i see some "force" options in the man pages
[23:27] <sliddjur> holstein: I only see force options on the delete backup switches
[23:27] <holstein> sliddjur: me too, thats why i suggested removing the targets, or just use rsync
[23:38] <sliddjur> hmm. what would be a good way to restore the /etc dir upon a system crash?
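One way through the limitation sliddjur hit, following holstein's restore-to-another-location idea (the backup URL and paths below are placeholders): restore the latest backup into an empty scratch directory, then copy it over the live tree, which also answers the /etc-after-a-crash question:

```
# Restore the latest backup into an empty scratch directory...
duplicity restore file:///var/backups/duplicity /tmp/etc-restore
# ...then overwrite the live files from it.
rsync -a /tmp/etc-restore/etc/ /etc/
```
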