[00:00] lifeless: what I was pointing at is that 2 drives failing at the same time usually points to a bad batch [00:00] JanC: but thats the thing, it doesn't. [00:01] JanC: *that* was the point of the google analysis [00:02] well, 2 drives from the same batch failing very prematurely [00:02] JanC: if you run enough arrays - say 10x2TB arrays, even if you multi-batch the drives in every array... [00:03] probably the closer the serial number the more likely the correlation of errors [00:06] I'm curious, did you read the google paper ? [00:08] no, but if you see almost all disks from a certain batch fail in less than 2 years, the chance that they fail "together" is quite high... [00:09] JanC: thanks for the help, tracepath and mtr -u work just fine, traceroute refuses to work. I'll settle for that. [00:09] so UDP works but ICMP not? [00:10] JanC: with mtr i need to use switch -u (UDP) yes [00:10] i know its not the firewalls since i switched off both hardware router and ufw [00:10] sounds like a router or firewall blocking ICMP [00:10] :P [00:11] *somewhere* [00:11] could it be ISP blocking, yeah [00:11] if it happens for all hosts, sure [00:11] since it always times out at hop 3. [00:12] JanC: many thanks, i am happy now [00:12] (admits i even switched ethernet cable on the server and eth port) [00:14] JanC: so they have data, you are speculating. [00:15] lifeless: they have statistics ;) [00:16] Good night Ubuntu <3 [00:16] lifeless: did they split up statistics on batch, and provide worst/best case scenarios? ☺ [00:18] JanC: they instrumented every drive in every server, with model, age, manufacturer, service history [00:18] JanC: including IO load [00:19] lifeless: but the only thing they care about is averages, as they have 100 mirrors to take over [00:19] or, more likely, thousands of mirrors [00:20] but if you have a link I'd happily read the paper ☺ [00:21] its trivially googlable. The google paper doesn't talk correlation though; for that there are other papers [00:21] like http://static.usenix.org/events/fast07/tech/schroeder.html [00:23] anyhow, my point is that there is research on this, we don't need to rationalise or guess [00:25] adam_g: still around? https://code.launchpad.net/~zulcss/quantum/quantum-oslo-config/+merge/149184 [00:27] lifeless: that paper says nothing about "bad batches" [00:28] which is explainable: for Google a bad batch is just a minor issue [00:28] right; the schroeder paper makes a nod to it. [00:28] for a smaller company, a bad batch might be life or death ;) [00:31] JanC: you might find http://storagemojo.com/2007/02/26/netapp-weighs-in-on-disks/ an interesting read [00:33] JanC: it has some further links. [00:34] lifeless: yes, will read it tomorrow [00:34] it's 1:30am here now ;) [00:34] gnight! [01:38] hi.. trying to use vmbuilder's existing chroot feature [01:43] I downloaded the ubuntu server version from "http://www.ubuntu.com/download/server/thank-you?distro=server&bits=64&release=latest" but I get "ubuntu-12.10-server-amd64.iso". Why amd64 ? [01:47] deeprogram, because thats what you downloaded [01:47] deeprogram, what did you expect? [01:48] escott: I don't understand the name "AMD" [01:48] is it the same as an AMD CPU ? [01:48] deeprogram, its their architecture yes [01:48] deeprogram, AMD made it, Intel copied it [01:49] escott: OK. thank you [02:29] hey guys I need some help: I need to set up access to a mysql server on an ubuntu server. I opened port 3306 and created a new user for them. Do I need to add my new user to the mysql group? This is a brand new server. mysql was already set up on it
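A hedged sketch of the direction the channel takes below: rather than leaving 3306 open to the internet, the remote user reuses the ssh login they already have and forwards the port. The host name and account names here are placeholders, not anything from the discussion.

    # run by the remote user on their own machine; keeps 3306 closed on the server
    ssh -N -L 3306:127.0.0.1:3306 remoteuser@server.example.com
    # then, still on their machine, connect through the tunnel
    mysql -h 127.0.0.1 -P 3306 -u dbuser -p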
[02:39] anybody home? [02:41] anon321123: yup... [02:41] anon321123: im not sure what you are doing... i wouldnt think you should expose mysql like that [02:41] anon321123: im no expert, which is why i didnt answer, but i typically have ssh access.. via keys, and open ports as needed for services [02:42] ive never exposed mysql and wouldnt have any idea how to do that securely [02:42] all i can say is... can you just give the user ssh access? [02:42] holstein: Hello. Thank you for listening. I am in america. I have a user that sent me an email from the other side of the world saying they need mysql. I set up a login for them. They have ssh access already [02:44] anon321123: they should have access to what they need then [02:45] anon321123: i dont expose mysql like that, and i dont think you should lightly [02:45] holstein: Oh okay. Thank you very much. [02:45] the question is, why do they need that port open? and what are you providing them? just a database? [02:47] holstein: I am trying to set up an environment for them to work in. The message I got was pretty vague and I am very new to all this stuff. [02:47] holstein: I have to run real quick. Be back in a bit if you're still here. Thanks [02:52] anon321123: i would ask for more specifics... what you are setting up seems to me to be very insecure [03:09] holstein: I did a dpkg reconfigure on mysql and deleted the iptables rule for it. I am just going to set up ssh access for them and take it from there. If anything is wrong I am sure they will let me know. Thank you [03:21] anon321123: that sounds safer to me.. im sure you'll get it sorted .. good luck! [03:34] holstein: thanks === work_alkisg is now known as alkisg [07:51] hi all [07:51] koolhead17, greetings [07:51] cfhowlett: hi there. [08:14] Guten Morgen === smb` is now known as smb [08:16] Daviey, greetings [08:20] hello! I found in the ubuntu-server-12.04.2.iso image, in the preseed directory, the following files: cli.seed, ubuntu-server-minimal.seed, ubuntu-server-minimalvm.seed and ubuntu-server.seed. In the isolinux/txt.cfg menu there is an Install Ubuntu Server option that refers to ubuntu-server.seed. Does anything refer to cli.seed or ubuntu-server-minimal.seed? [08:21] hola Daviey [08:29] leotr: not from the cd menu superficially, i believe cli and minimal are implied [08:30] Daviey: what is the difference between ubuntu-server.seed and ubuntu-server-minimal.seed? [08:30] leotr: diff -u :) [08:33] Daviey: do you have experience in remastering an installation CD? [08:34] morning all [08:35] yolanda, review required if you have time - https://code.launchpad.net/~james-page/quantum/oslo-config/+merge/149217 [08:35] jamespage, sure [08:35] yolanda, thanks muchely [08:36] seems like ubuntu-server-minimal is not so minimal :) [08:36] jamespage, why is that new dep? [08:37] yolanda, quantum is the first project to start using oslo-config (openstack shared library) [08:37] leotr: yeah.. you'd hope so :) [08:37] yolanda, it landed in raring yesterday [08:38] i approved it [08:40] what do i need to add to the preseed file to make the installation of ubuntu-server minimal without any questions (just select the menu item and *everything* installs on its own) [08:42] leotr: You need to look at preseeding.. it's much less complex than i think you are making it :) [08:43] leotr: check out, https://help.ubuntu.com/12.04/installation-guide/amd64/appendix-preseed.html [08:45] Daviey: is it difficult to add additional packages to the CD and have them installed during installation? Is it difficult to figure out what is required to be written to the CD so that no Internet connection is required for that? [08:45] second question is about packages
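For leotr's 08:40 question, a rough sketch of the kind of preseed file that gives a question-free ubuntu-server install. Every value below is illustrative and deliberately abbreviated; the linked example-preseed.txt and installation guide are the real reference, and the file name, web root and boot parameter are assumptions.

    # serve the preseed over HTTP and boot the installer with
    #   preseed/url=http://<your-server>/unattended.seed
    cat > /var/www/unattended.seed <<'EOF'
    d-i debian-installer/locale string en_US.UTF-8
    d-i console-setup/ask_detect boolean false
    d-i keyboard-configuration/layoutcode string us
    d-i netcfg/choose_interface select auto
    d-i netcfg/get_hostname string preseeded-server
    d-i mirror/http/hostname string archive.ubuntu.com
    d-i mirror/http/directory string /ubuntu
    d-i time/zone string UTC
    d-i partman-auto/method string regular
    d-i partman-partitioning/confirm_write_new_label boolean true
    d-i partman/choose_partition select finish
    d-i partman/confirm boolean true
    d-i partman/confirm_nooverwrite boolean true
    d-i passwd/user-fullname string Ubuntu Admin
    d-i passwd/username string ubuntu
    d-i passwd/user-password password changeme
    d-i passwd/user-password-again password changeme
    d-i grub-installer/only_debian boolean true
    d-i finish-install/reboot_in_progress note
    EOF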
[08:46] leotr: it's a bit dirty.. but reasonable. http://razvangavril.com/linux-administration/custom-ubuntu-server-iso/ (i wouldn't do the kickstart bit) [08:48] Daviey: thank you === histo1 is now known as histo [10:07] JanC: it's simple statistics, really. nothing more fancy [10:45] jamespage: I reported bug 1130029 for raring lxc server post install test failure (test_lxc_api) - both amd64 and i386 are impacted [10:45] Launchpad bug 1130029 in ubuntu-test-cases "testcase: test_lxc_api returns error in raring lxc server smoke tests" [Undecided,New] https://launchpad.net/bugs/1130029 [10:51] hallyn, ^^ can you take a look when you start please [10:51] psivaa, ^^ FYI [10:51] jamespage: thanks [11:12] jamespage: Hey, are you uploading ceph to the grizzly CA? [11:13] Daviey, will do [11:13] I pushed a fix for the cluster resource agents last night [11:13] to raring that is [11:17] I just heard Canonical finally has some ipv6 space! [11:27] adam_g, roaksoax: lets discuss the approach to passing the vip between services for the openstack ha stuff later today [11:47] jamespage: http://www.mail-archive.com/ubuntu-bugs@lists.ubuntu.com/msg3983923.html [11:47] hope someone is showing some love to this openvswitch issue :D [11:47] koolhead17, two days ahead of you - fixed it on sat - in proposed and verified [11:48] jamespage: awesome. so will take some time to land on the cloud archive? [11:48] koolhead17, openvswitch is not in the cloud archive [11:48] jamespage: ooh ok. [11:52] * koolhead17 pokes zul [12:04] koolhead17, released to updates now [12:06] k [12:06] yolanda, erm - I broke something yesterday - https://code.launchpad.net/~james-page/quantum/fixup-quantum-agent-conf/+merge/149256 [12:06] please could you +1 [12:06] ta [12:06] ok [12:08] done [12:22] yolanda, ta === megha is now known as unix [12:49] hello everybody. I can't get qemu-kvm to use the rbd image as disk. I've configured the kvm xml file to use the monitor, I've created a virsh secret and I've added the tag to define the authentication. when I want to do a virsh create of the xml file, it says "error connecting" to the monitors. I followed the steps shown here http://wiki.skytech.dk/index.php/Ceph_-_howto,_rbd,_lvm,_cluster#KVM_-_add_secret.2Fauth_for_u [13:19] where can i download alpha2 server? all i found were cloud images [13:19] server didnt participate in alpha2 [13:20] hm. thanks ogra_ [13:21] like desktop and most other images ... we are moving away from milestones nearly everywhere [13:32] any reason why recent security updates of the kernel are being held back? [13:36] Koheleth: what security updates? [13:41] just found out its for 10.04 lts, not 12.04 :) [13:42] ... delivers a digital smack [13:42] to the head [13:42] virtualmin still wants to install though [14:01] jamespage: quantum broke again [14:01] zul, hag! [14:01] jamespage: hehe [14:01] * zul pokes jamespage with a stick [14:01] ImportError: No module named netifaces [14:02] blah! [14:11] jamespage: huh, still trying to figure out what the actual error is supposed to be. it says 0 failures. is it actually whining if there is any output over stderr? [14:12] hallyn, test.sh returned code 2 [14:12] but why? nothing in the output
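Going back to the rbd/qemu-kvm question at 12:49: a loosely sketched version of the libvirt secret setup the linked Ceph howto walks through. The UUID, monitor address and pool/image name are placeholders, and it assumes the client.admin key is the one in use.

    # define a libvirt secret to hold the ceph key
    cat > ceph-secret.xml <<'EOF'
    <secret ephemeral='no' private='no'>
      <usage type='ceph'>
        <name>client.admin secret</name>
      </usage>
    </secret>
    EOF
    virsh secret-define ceph-secret.xml     # note the UUID it prints
    virsh secret-set-value --secret <uuid> --base64 "$(ceph auth get-key client.admin)"
    # the guest XML then ties the rbd disk to that secret, roughly:
    #   <disk type='network' device='disk'>
    #     <auth username='admin'><secret type='ceph' uuid='<uuid>'/></auth>
    #     <source protocol='rbd' name='rbd/myimage'>
    #       <host name='<monitor-ip>' port='6789'/>
    #     </source>
    #     <target dev='vda' bus='virtio'/>
    #   </disk>
    # "error connecting" to the monitors is usually the host/port in <source>
    # or the key/username in <auth> not matching the cluster.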
[14:17] jamespage: I don't have permission to set priorities on utah test cases bugs [14:17] hallyn, ping psivaa [14:20] I think psivaa is mad at me for taking so much time at last UDS with libvirt :) [14:20] jamespage: so it's been a while since i've done it - those two commands to run utah tests can just be done on any cloud instance right? [14:20] hallyn: lol no :), i could set the priorities, if you'd want me to. and i have asked the UTAH dev team to grant access in the meantime [14:22] psivaa: :) thanks. i think that one probably should be high. btw are you the maintainer of the utah code base? [14:23] what is the preferred route for updates? merge proposals? debdiffs? [14:25] hallyn: ok, the priority is set now, but i do not maintain utah code, i'll ask the UTAH dev team to answer that [14:25] gema: ^^^ could you please ? [14:27] psivaa: cool, thanks. see you at uds :) (btw, that bug - it's still cropping up in various ways!) [14:28] hallyn: do you have a fix to submit to utah? [14:28] hallyn: ack, see you :) [14:28] hallyn: the preferred method would be a merge proposal [14:29] gema: no i don't yet :) but i will [14:29] hallyn: excellent, thanks [14:29] we will be looking out for it [14:30] * hallyn goes to hide [14:30] hehe [14:30] hallyn: smoke testing bugs always get fasttracked :) [14:30] zul, hallyn, One of you care to sponsor a little upload of libvirt to raring? chinstrap:~smb/4review [14:31] smb: sure [14:31] zul, ta, the changelog should be obvious ... *growl* [14:37] lolz [14:39] smb: looks good to me, do you want to have a look hallyn [14:40] uh, ok === frojnd_ is now known as frojnd [14:43] hallyn, FWIW, I also tested it on my Xen box. ;) === shiny is now known as sh1ny === wedgwood_away is now known as wedgwood [14:46] smb: zul: ok. did builders actually refuse to apply the patch without that (seemingly trivial) refresh? [14:46] smb: zul: in any case, looks good, thx [14:46] hallyn: i might have disabled it by mistake [14:46] right, i see that [14:46] zul: oh hey, [14:47] hallyn, zul, I guess it was an interruption while doing it and then, meh [14:47] sheepdog - there's a request to enable it. can it be optionally enabled, or would libvirt need to build-dep on something in universe? === bean__ is now known as bean [14:49] i think you would need to do a MIR for sheepdog [14:51] couldn't just have it in Suggests? [14:51] (i have no idea how it is hooked up...) [14:51] neither do i [14:51] oh aren't you the maintainer for sheepdog? [14:52] ok well i don't have time to mess with that today, else i'd try a test build and run... [14:52] i'll comment on the bug then (i just figured you'd know offhand :) [14:52] thx - ttyl [14:54] smb: uploaded [14:55] zul, yay :) [14:57] zul: https://code.launchpad.net/~james-page/horizon/g3-recompress/+merge/149290 === leotr|2 is now known as leotr [15:02] zul, I'm not promoting nodejs [15:02] zul, I don't believe it's supportable in main [15:02] jamespage: neither am i [15:02] Hello! I tried to create an installation disc using http://razvangavril.com/linux-administration/custom-ubuntu-server-iso/. I added extra packages but now i get an "unable to locate <package-name>" error (the package is in the extra directory) [15:08] zul, fixing quantum now [15:08] jamespage: cool im still stuck on quantum [15:09] cinder? [15:10] rtslib changes
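On the custom-ISO question from 15:02: "unable to locate package" usually means the extra pool directory never made it into the disc's APT indices. A loose sketch of the indexing step, with directory names assumed to follow the linked tutorial's layout rather than taken from it verbatim:

    cd iso-root                                   # the unpacked CD tree being remastered
    mkdir -p dists/precise/extras/binary-amd64
    apt-ftparchive packages pool/extras \
        > dists/precise/extras/binary-amd64/Packages
    gzip -9c dists/precise/extras/binary-amd64/Packages \
        > dists/precise/extras/binary-amd64/Packages.gz
    # regenerate the Release file too, or the installer ignores the new component
    apt-ftparchive release dists/precise > Release.new
    mv Release.new dists/precise/Release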
=== ztane_ is now known as ztane [15:24] Hello all [15:25] does ubuntu/debian name the kickstart file as a "preseed" file? [15:25] from ( https://help.ubuntu.com/community/Installation/LocalNet#Advanced:_Hands-Off.2C_Preseeded_Network_Server_Install ) is this ( preseed/url=http://192.168.1.7/preseed-feisty.cfg ) the kickstart file mentioned under point #5 ? [15:26] I need to build a kickstart/preseed file, a basic one, for a minimal install on a remote box. I have a pxe active with 12.04.2 LTS imported via cobbler [15:27] Haris no [15:29] I was looking at -> https://help.ubuntu.com/12.04/installation-guide/example-preseed.txt [15:30] a kickstart file is produced by the kickstart utility, preseed is a different thing. But both can be used at the same time [15:31] do we have an example kickstart for 12.04 lts ? [15:31] http://razvangavril.com/linux-administration/custom-ubuntu-server-iso/ [15:34] zul, yolanda: https://code.launchpad.net/~james-page/quantum/python-netifaces/+merge/149312 [15:34] seems I like fixing quantum [15:34] why do I need to have an ISO ? I don't have interactive access to the box I need to install 12.04 on [15:35] jamespage: heh just poke it with a stick and it will fall apart [15:35] jamespage: there is a quantum-plugin-hyperv package? [15:36] zul, yeah - I did that over the weekend - its currently empty as I managed to not include the install file in my branch [15:36] lgtm [15:36] zul, thinking about revisiting the way the plugins work to be a little more automatic [15:37] jamespage: agreed [15:37] Haris: you don't need it... Just wanted to show you that kickstart and preseed are different things [15:37] ah, thank you! [15:38] checking it [15:38] Haris: but both can be used for unattended installations [15:39] I see [15:39] Haris, but currently i couldn't add extra packages... The way it's shown in the tutorial doesn't work [15:40] package not found error... but kickstart itself works [15:40] but you have a network connection so it shouldn't be important for you [15:50] jamespage: https://code.launchpad.net/~zulcss/cinder/cinder-refresh/+merge/149317 [15:53] zul, looking [15:53] zul: https://code.launchpad.net/~james-page/keystone/grizzly-refresh-01/+merge/149320 [15:54] jamespage: looking [15:57] jamespage: https://bugs.launchpad.net/ubuntu/+source/oslo-config/+bug/1130196 [15:57] Launchpad bug 1130196 in oslo-config "[MIR] oslo-config" [High,New] [16:01] does having a separate partition for /boot help ? [16:05] zul, looks borked "Starting cinder-volume node (version )" [16:05] jamespage: well that sucks [16:21] I need an example ks file for ubuntu. I have a template from centos. But its not working. I've specified the language in it. But the 12.04 installer asks me for language. Also, it prompts me about cdrom failure. Whereas I'm not looking to install via cdrom. I'm installing this box via pxe [16:33] also, why does the pxebooted installer of 12.04 ask me for the existence of a cdrom ? [16:37] suggestions for which imap server would be fastest to deploy? [16:37] hey folks, I am trying to use motion to capture a security camera, and the only way it will work is with LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l1compat.so before the command. How can I add that to the init script in /etc/init.d ? [16:49] nevermind. I got it. `export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l1compat.so` in the init script
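A sketch of where that export can live; the start-stop-daemon line is illustrative rather than copied from the stock /etc/init.d/motion, the point is just that LD_PRELOAD has to be exported in the script before the daemon is launched.

    # excerpt-style sketch for /etc/init.d/motion (not the stock script)
    LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l1compat.so
    export LD_PRELOAD
    start-stop-daemon --start --oknodo --exec /usr/bin/motion -- -c /etc/motion/motion.conf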
[17:06] how do I specify a network server or archive from where ubuntu will fetch files for installing 12.04, rather than asking for a cdrom [17:06] is this something I can do in the kickstart file [17:13] SpamapS, i'd really love SRU team love to https://launchpad.net/ubuntu/precise/+queue?queue_state=1&queue_text=cloud-init [17:18] smoser: bug 1005551 needs a test case [17:18] Launchpad bug 1005551 in cloud-init "update-grub-legacy-ec2 ignores kernels named -generic" [High,Confirmed] https://launchpad.net/bugs/1005551 === matsubara is now known as matsubara-lunch [17:20] SpamapS, i can do that. [17:22] smoser: ok, it looks good otherwise, will accept as soon as test case is there :) [17:59] jamespage: when I try to reproduce the utah lxc failure, I get http://paste.ubuntu.com/1683416/ [18:11] SpamapS, updated. i'll fix it up a bit, but theres a reasonable description/test case there now. [18:11] thank you [18:26] smoser: np, accepting now :) === wedgwood is now known as wedgwood_away === wedgwood_away is now known as wedgwood [18:32] smb, stupid question. [18:32] but how do i get the quantal/backport/whatever-its-called kernel in 12.04 [18:34] smoser: running quantal? [18:34] RoyK, 12.04 [18:36] why do you need another kernel? [18:36] (and why do you run quantal on a server?) [18:37] RoyK, 12.04.2 installations now install a 3.5 kernel (ie, the one from quantal). [18:37] i'm asking how i can install that kernel into a system that was previously installed. [18:37] smoser: there's a wiki page, sec [18:38] jcastro, http://askubuntu.com/questions/168218/will-ubuntu-12-04-1-include-the-new-linux-kernel <-- that didn't help me as much as it could have. [18:38] I'll fix that once I find this page [18:38] (someone asked you about a kernel, and you told them about X) [18:39] there's an entire page on how this works [18:39] but unfortunately for us it's in the ubuntu wiki [18:39] does 12.04.2 install with 3.5? [18:40] yeah it's part of the enablement stack [18:40] that doesn't make sense - the point of LTS is to be *stable* [18:40] https://wiki.ubuntu.com/PrecisePangolin/ReleaseNotes/UbuntuDesktop [18:40] only new installs get the new kernel [18:40] RoyK, if you installed previously, you do not magically get the new kernels [18:40] new installs get new kernels. [18:41] but why? [18:41] to support new hardware is the primary motivation [18:41] the whole point of LTS is to remain stable [18:41] https://wiki.ubuntu.com/Kernel/LTSEnablementStack [18:41] there it is dude [18:41] SpamapS: if you're still sitting at your SRU queue master console, there happens to be the openstack 2012.2.3 in queue for quantal as well (nova, glance, horizon, cinder, quantum) [18:41] those *$%& hardware companies keep making new stuff. [18:41] jcastro, yeah, i found that soon after you used the word 'enablement' [18:41] smoser: still, it doesn't make sense [18:42] well, the LTS needs new kernels to work on newer hardware [18:42] RoyK: the *only* point of point release LTS's is to enable new hardware. [18:42] add new PCI IDs etc, but don't upgrade the kernel to something bleeding-edge [18:42] otherwise, 6 months after an LTS release all of a sudden it doesn't install on an increasing number of systems [18:42] RoyK: if you don't like the new kernel, install w/ older point release. [18:43] I still like the old model better [18:43] RoyK: I think the problem is the overhead of maintaining so many kernel trees.
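For smoser's 18:32 question, the enablement stack on an already-installed 12.04 box is pulled in by the lts-quantal metapackages; these names are from memory of the wiki page linked above, so double-check them there.

    sudo apt-get update
    sudo apt-get install linux-generic-lts-quantal       # quantal-based kernel for precise
    # a desktop usually wants the matching X stack as well:
    # sudo apt-get install xserver-xorg-lts-quantal libgl1-mesa-glx-lts-quantal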
IMO the release notes should be clearer about that, I can imagine people installing the point release thinking they're getting the same thing as they did before but with slipstreamed updates. [18:44] adam_g: ouch, thats a much bigger ball of wax. Since it's been 2 weeks, I'll carve out some SRU time tomorrow which is my normal day. [18:46] SpamapS: thanks. you might notice some changes to the way we're preparing changelogs + bug tags after discussion in #ubuntu-release a few weeks back. let me know if you have questions [18:47] the problem with moving to a new kernel for an LTS release is new bugs. with new code, there's always new bugs. If there are new drivers, backporting them would be better. PCI IDs etc are added all the time, and don't take much time to add [18:52] smoser: I've fixed up that AU answer, thanks. [18:56] RoyK: dunno if you've noticed, but people test things now. Its no longer about reducing change, its about managing it. But I do agree with you that the decision was probably made a little too lightly. [18:57] I think it should have been more obvious in the release notes, etc. [18:57] it took me way too long to find that wiki page [18:57] I wonder if that kernel breaks my macbook air's touchpad the way quantal/raring have. [18:59] I'm still sceptical about introducing new kernels into an LTS release [18:59] they're only for new installs on the new media [19:00] LTS machines won't get an upgrade to a new kernel or anything like that. [19:00] and I suppose the data from errors.ubuntu.com will let us know right away [19:02] still sceptical - LTS should be *stable* [19:03] the proliferation of UEFI on new hardware makes it a bit impractical to wait until 14.04 for a new LTS.. this did seem the least bad of the available options [19:03] indeed [19:16] sarnold: if redhat/centos gets away with it, why not ubuntu? [19:19] imho the LTS releases should be rock stable, meaning no major kernel upgrades nor major package upgrades, just backports for fixes [19:20] if this is changed to upgrading the kernel just to add new hw support, it means LTS is no longer LTS [19:20] it's moving towards the cutting edge [19:20] that's what the non-LTS releases are for [19:23] there are no more non-lts releases now [19:24] at least from that blog post I was reading, lts was going to remain lts [19:24] thought the new model was supposed to be rolling releases, with backports to lts [19:31] patdk-wk: well, if 12.04.2 has a new kernel, it's not really LTS, is it? [19:34] hmm, mine doesn't, odd [19:34] a fresh install does get the new kernel. updates have to ask for it by name. [19:34] oh, I installed from 12.04.1 like a day before .2 came out [19:35] sarnold, so how does that work? [19:35] security patches will go into both kernels? [19:36] patdk-wk: I think so, what with it just being the quantal kernel it might not even be extra work. not sure.
:) [19:36] ya, but quantal support ends long before lts [19:37] patdk-wk: based on a (too quick) skim of https://wiki.ubuntu.com/Kernel/LTSEnablementStack it looks a bit like we'd be offering stacks from the newer releases (if they happen) along the way [19:40] yuk [19:41] they are unsupported [19:41] so if you install a 12.04.2+ cd, you will get a limited support kernel [19:41] and will be forced to upgrade to the 14.04 stack to maintain support [19:42] under item 9 [19:42] or, item 10 [19:53] yeah that sounds right [19:54] you'd be on interim kernels until the next LTS [19:55] if that was an installer-time option, I would be happy [19:55] yeah, but it looks like at the time there were CD image issues [19:56] I would expect in the future you'd choose at the installer level in your preseed or whatever [19:56] but 2 ISOs isn't unmanageable [19:56] guess for me, I have no point updating my local lib to .2 [19:56] just keep using the old ISO and you'll be fine [19:57] existing LTS boxes won't get new kernel upgrades [19:57] jcastro: that still doesn't make sense - LTS should be *stable* and no new kernels should arrive in such a distro [19:57] even though it's in a new iso [19:58] it being in the distro is fine; it being the default, I have issues with [19:58] the option to use newer kernels has always existed [19:59] well, the option of doing a kernel upgrade is fine [19:59] but a new kernel being the default with 12.04.2 is *not* fine [19:59] no, that makes sense even, that is when you know you need it [19:59] but to do it without telling you, :) [20:01] sounds like an alt-cd image feature though [20:19] patdk-wk: really, a new kernel in an LTS doesn't make sense [20:32] RoyK: as I said, I think the kernel team is stretched too thin to keep all of the hardware backporting going on so many LTS trees. Trying to auto-detect what kernel you will need is pretty close to impossible.... [20:33] SpamapS: ok [20:33] RoyK: so if you want old kernel -> 12.04.1 + updates. If you can't boot 12.04.1 because of new hardware.. try 12.04.2 ... [20:33] RoyK: a lot more supportable from Ubuntu's standpoint that way. [20:33] SpamapS: is it that bad? [20:37] RoyK: with desktop LTS support extending to 5 years, yes I think it is [20:39] server and desktop should be split in that sense [20:44] RoyK: yeah, I think having the two diverge a lot would be just as much of a nightmare though. [20:45] imho LTS should be rock stable [20:45] no new versions should be allowed [20:45] only backports [20:46] non-lts should have new things [20:46] all software sucks [20:46] software that sucks will have security bugs [20:46] that's what it used to be [20:46] so no new versions -> vulnerable software [20:46] yes, but using new software in LTS breaks things [20:46] and makes LTS != LTS [21:08] Hello. I have a server with some watchdog processes that are going nuts. top shows them as using 330% CPU occasionally and having logged more CPU time than anything else. === NomadJim_ is now known as NomadJim [21:09] The last thing I want to do is make this server reboot. I'm not sure how to go about figuring out why they're mad, and I don't want to set off the watchdog restart bomb. === peterrus- is now known as peterrus [23:02] does anyone know where I can see a list of abstractions for apparmor [23:03] GeorgeTorwell: ls /etc/apparmor.d/abstractions/ [23:04] cripes there's a lot :) [23:04] thanks [23:16] When restoring files with duplicity restore, how do I restore all files from the latest backup and overwrite current files?
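For sliddjur's question, a tentative sketch of the restore invocation; the backup URL and paths are placeholders, and my reading of the duplicity man page is that --force is what lets a restore overwrite existing files (restoring to a scratch directory first and copying over is the cautious route). The back-and-forth that follows goes over the same ground.

    # pull the latest backup over the live directory
    duplicity restore --force file:///var/backups/dup /etc
    # or restore just one subtree (path is relative to what was backed up)
    duplicity restore --force --file-to-restore apt file:///var/backups/dup /etc/apt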
[23:17] "Duplicity will not overwrite an existing file. Here's the output if a change is made to the script above to restore the file to /etc/apt/sources.list:" (https://help.ubuntu.com/community/DuplicityBackupHowto) [23:17] can I force overwrite? [23:20] sliddjur: you could remove the targets [23:21] u mean restore to another location? [23:21] or that... [23:22] holstein: what do you mean remove targets then? [23:23] sliddjur: if duplicity will not overwrite an existing file, then remove the existing file... otherwise i see some "force" options in the man pages === Guest30093 is now known as sweettea [23:27] holstein: I only see force options on the delete backup switches [23:27] sliddjur: me too, thats why i suggested removing the targets, or just use rsync [23:38] hmm. what would be a good way to restore the /etc dir upon a system crash?