[00:30] adam_g nice.
[00:31] smoser, hoping that is the source of my problems. the cinder issues i was hitting do not seem to happen when that bug is not in effect (precise)
[00:31] hm..
=== Gnubie is now known as Guest74935
=== Guest74935 is now known as Gnubie_
[01:19] ScottK: ping
[01:32] zul: pong
[01:32] ScottK: https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1223342
[01:32] Launchpad bug 1223342 in neutron "[FFE] neutron-vpn-agent and neutron-metering-agent" [Undecided,New]
=== unreal_ is now known as unreal
=== medberry is now known as med_
[02:03] zul: I'm unlikely to have time for New before the weekend.
[02:31] are the mount points in fstab mounted in parallel?
[02:32] if so, is there a way to specify dependencies?
=== freeflying is now known as freeflying_away
=== freeflying_away is now known as freeflying
[06:55] Hi, what is the alternate Ubuntu CD? is it different from, for example, Ubuntu Server? thx
[07:34] jamespage, Morning, when you are around, can you help me to figure out whether the recent jenkins failures in nova-compute are related to xen and, if yes, why?
=== BlackDex_ is now known as BlackDex
[08:30] hello all, i'm in a conundrum, I have a fairly nice server for what it is, and it's been spending a good deal of time at about 8-13 load.. I can't figure out why, the CPUs aren't bogged... the RAM is hardly swapping, meaning the disk isn't thrashing, and even the dual-gigabit net link isn't saturated.. can someone give me some pointers here?
=== freeflying is now known as freeflying_away
[09:01] gartral, you could use 'ps ax' and look for processes that are permanently in R or D state
=== freeflying_away is now known as freeflying
=== freeflying is now known as freeflying_away
=== freeflying_away is now known as freeflying
[12:00] hallyn_, jdstrand: can you help me with libvirt apparmor and backing store support in Precise? If I create an instance that uses a backing store, then apparmor denies me. I think I've tracked it down to this commit, which isn't in Precise (Saucy works fine). What do you think about an SRU? http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=2aca94bfd3691c492ce4b6e7f1dd73342774fefd
[12:01] Or is there something else I can do instead?
[12:27] rbasak: I'll let hallyn_ comment on the SRU. that patch should be fine but might have to be adjusted for precise's libvirt
[12:28] rbasak: that said, I'll mention that it is more of a feature than a bug fix
[12:28] rbasak: at least imo
[12:35] jdstrand: thanks. Yeah, I agree that the bug/feature thing is a bit dubious.
[12:37] From virt-aa-helper's view it's clearly a feature. From a holistic view I'm not sure, since libvirt has the functionality which the apparmor support "breaks"
[12:38] The problem for me is that I want backing stores to work for the cloud tooling that we want functional on Precise.
[12:38] smoser: ^^
[12:39] A workaround is to disable apparmor for libvirt altogether, which isn't great.
[12:39] Or perhaps a replacement virt-aa-helper under another name, and reconfigure libvirt to use that.
[12:40] rbasak, i'm kind of confused.
[12:40] what does openstack do on precise.
[12:41] smoser: good question. No idea!
[12:41] and how is a patch sent upstream by an ubuntu developer in 2010 not in 12.04?
[12:41] Not use backing stores, I guess?
[12:41] rbasak: disabling apparmor is not a viable workaround. it is critical to our security story for fully virtualized cloud guests
[12:41] openstack definitely does use qcow2, or at least can be configured to do so. i think it is actually even the default.
[12:42] rbasak, are you sure it's not just that you're calling it raw and it is actually a qcow?
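A quick way to check what smoser is asking here (whether the image really is qcow2, and what backing file it points at) is qemu-img info; the path below is hypothetical:

    # inspect the image format and any backing file
    qemu-img info /var/lib/ubuntu-cloud/libvirt/images/my-instance.img
    # look for "file format: qcow2" and a "backing file:" line in the output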
[12:42] jdstrand: right, agreed. I meant on a per-user basis, for someone who wants to use this specific tooling on precise for development or something. I wouldn't want to recommend doing that in production.
[12:42] it is. I don't think it uses backing stores by default
[12:42] https://bugs.launchpad.net/nova/+bug/837102
[12:43] Launchpad bug 837102 in nova "nova writes libvirt xml 'driver_type' based only on FLAGS.use_cow_images" [Low,Fix released]
[12:43] https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/470636
[12:43] Launchpad bug 470636 in libvirt "AppArmor security driver does not support backingstore" [Medium,Fix released]
[12:44] smoser: there is no mention of "backing" in src/security/* in 0.9.8-2ubuntu17.10
[12:44] oh, actually, I can't say if precise uses qcow2
[12:44] * rbasak looks at the bug
[12:52] smoser, jdstrand: it does look like the patch made it into Lucid, but I'm not clear on what happened after that. In Lucid, it looks like a lot of the code was reverted/replaced by 9900-CVE-2010-2237-2238-2239.patch.
[12:52] rbasak: Red Hat libvirt, possibly 0.6.1 through 0.8.2, looks up disk backing stores without referring to the user-defined main disk format, which might allow guest OS users to read arbitrary files on the host OS, and possibly have unspecified other impact, via unknown vectors. (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2010-2237)
[12:53] rbasak, your libvirt xml
[12:53] are you specifying that the disk is qcow?
[12:53] if you're not, AA will (correctly) not allow it.
[12:53] * rbasak checks
[12:53] if you do specify it as qcow, then it will.
[12:55] smoser: I'm specifying qcow
[12:56] smoser: volume: http://paste.ubuntu.com/6092363/; instance: http://paste.ubuntu.com/6092366/
[12:57] smoser: note that I think this works in Saucy
[13:00] rbasak, hm.. i'm not really sure. i'm 98% certain that on precise with libvirt and apparmor you can use a qcow disk.
[13:00] but i don't know what 'volume' is in that respect.
[13:00] smoser: use a qcow disk specifically with a backing store?
[13:01] i don't know what "backing store" is.
[13:01] but yes, specifically this works:
[13:01] qemu-img create -f qcow2 -b original-disk.img my-delta.img
[13:01] That's slightly different to what I'm doing.
[13:01] libvirt.... with 'my-delta.img' specified as a disk.
[13:01] right.
[13:01] you're (i think) asking libvirt to do that for you?
[13:01] Right.
[13:01] maybe just don't do that and do it yourself.
[13:02] which is what openstack does.
[13:02] it creates the qemu disk backed by another
[13:02] and then tells libvirt to use that created disk.
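A minimal sketch of the approach smoser describes here (create the overlay yourself, then hand the finished disk to libvirt); the paths, file names and domain name are hypothetical:

    # create a qcow2 overlay on top of the base image
    qemu-img create -f qcow2 -b /var/lib/libvirt/images/precise-base.img \
        /var/lib/libvirt/images/my-delta.img
    # then reference my-delta.img as the instance's disk in the domain XML,
    # declaring the driver type explicitly (e.g. <driver name='qemu' type='qcow2'/>)
    # so the generated AppArmor profile knows what it is dealing with
    virsh define my-instance.xml
    virsh start my-instance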
[13:03] That would be pretty messy and involve a pretty big refactoring. libvirt provides a tidy API that works on Saucy :-(
[13:04] The metadata about the connections between volumes can be held in the libvirt XML then, too. That makes deleting volumes easier.
[13:04] I don't like it, although I accept that it is one solution.
[13:07] rbasak, well, an SRU seems the only other option.
[13:07] * rbasak is investigating a third idea
[13:07] which would seem to me to be low regression likelihood, as it's just (securely) allowing something that wasn't allowed before.
[13:09] jdstrand: echo '/var/lib/ubuntu-cloud/libvirt/images/* r,' >> /etc/apparmor.d/abstractions fixes the issue for me, and will work for all my use cases. Would you consider this secure, and is there a way my package could drop this in in a pluggable way?
[13:09] (my package manages that directory)
[13:10] rbasak, are you able to add stuff into /etc/apparmor.d/libvirt ?
[13:11] ah. or local/usr.sbin.libvirtd
[13:11] smoser: those are generated though. Only TEMPLATE is not.
[13:11] Putting something in local/ might violate policy, I think
[13:11] Hence the question
[13:11] I'll also need to ensure that the directory only contains official images, and put the rw instance disk images elsewhere.
[13:12] I'm bundling both in the same place right now, and then instances could read each others' disks (with some kind of qemu exploit), which would be bad.
[13:12] rbasak /etc/apparmor.d/abstractions is a file, no?
[13:13] err. is a directory
[13:13] smoser: sorry. I meant /etc/apparmor.d/abstractions/libvirt-qemu.
[13:13] TEMPLATE includes that file.
[13:14] i'd just violate policy on the 12.04 backport.
[13:14] if in fact that violates policy.
[13:15] I'm not sure it'll work though
[13:15] oh. i thought you said it would.
[13:15] usr.sbin.libvirtd is the wrong file.
[13:15] really?
[13:15] what file is it ?
[13:15] The generated ones in /etc/apparmor.d/libvirt/
[13:16] I think.
[13:16] Those are per-instance (ie. per-qemu-process)
[13:16] rbasak: well, /var/lib/ubuntu-cloud/libvirt/images/* r,' >> /etc/apparmor.d/abstractions/libvirt-qemu will mean that all instances,
[13:16] if escaped, will be able to read all other instances' data,
[13:16] well, no.
[13:16] because he'd only put raw images there.
[13:17] so they'd be able to read their backing store or other stuff they could have just downloaded from http://cloud-images.ubuntu.com
[13:17] hallyn_: right now, that's true. However, I can arrange for instances to have their main disk images in a different directory, and for that directory to contain only official Ubuntu cloud images, which are public.
[13:17] assuming you mean raw vs qcow, what diff does that make?
[13:17] ah
[13:17] (since in my case the backing stores only need to be read-only public cloud images)
[13:17] then that sounds good.
[13:18] but, does http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=2aca94bfd3691c492ce4b6e7f1dd73342774fefd also fix the issue for you?
[13:18] rbasak, you could put your files in a subdirectory of that if you wanted.
[13:18] if you don't wildcard '**' then subdirs are restricted.
[13:18] I'm not sure if that patch fixes it. I've not tried yet - wanted to discuss first.
[13:19] smoser: I'm not sure that libvirt's API supports volume pool subdirectories like that, but I'll check - thanks.
[13:19] hallyn_: in particular I'm now concerned to understand why a security update seems to have reverted most of that patch in Lucid. And it doesn't appear to be present in Precise, but is in Saucy. So I'm quite confused about that patch now.
[13:40] zul, If you are around: there seems to be something wrong with the nova-compute jenkins tests, not sure if this is related to the xen upload. I am not bright enough to make any sense of the output
[13:40] smb: yeah, i saw that, it's on the list today
[13:41] zul, Ok, let me know if it is related.
[13:44] rbasak: re /var/lib/ubuntu-cloud/libvirt/images/* r> no, that is not secure because right now we have vm isolation. anything that was in /var/lib/ubuntu-cloud/libvirt/images/ would be available to all VMs, which would break that isolation
[13:45] jdstrand: right, but I'm suggesting that I limit that directory to published Ubuntu cloud images only, which are the only things I need as backing stores.
[13:46] I might rename the directory to make it clearer I guess. "public" perhaps.
[13:46] Or images/public
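Roughly what the "public images only" idea would look like as an abstraction rule; the public/ subdirectory name is hypothetical, and whether this is acceptable policy-wise is exactly the question being discussed:

    # allow every guest read-only access to the shared, public backing images only
    echo '/var/lib/ubuntu-cloud/libvirt/images/public/* r,' | \
        sudo tee -a /etc/apparmor.d/abstractions/libvirt-qemu
    # '*' matches entries directly in that directory; only '**' would also reach
    # subdirectories, which is the distinction hallyn_ points out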
[13:47] rbasak: I don't understand what isn't working. the security update didn't revert this -- the xml just has to specify the type. eg
[13:48] rbasak: and I wrote a tool that would migrate people automatically
[13:48] as part of the security update
[13:49] jdstrand: I'm doing that. See http://paste.ubuntu.com/6092366/ for my instance definition.
[13:49] jdstrand: the volume definition is: http://paste.ubuntu.com/6092363/
[13:49] jdstrand: it might be that I'm doing this a little differently from openstack and what direct qemu users might do. I'm doing everything through the libvirt API.
[13:50] rbasak: what is the apparmor denial?
[13:51] jdstrand: type=1400 audit(1378904893.099:36): apparmor="DENIED" operation="open" parent=1 profile="libvirt-a9ffce69-5593-9a1a-4f8d-60995f9dad8d" name="/var/lib/ubuntu-cloud/libvirt/images/Y29tLnVidW50dS5jbG91ZDpzZXJ2ZXI6MTIuMDQ6YW1kNjQgMjAxMzA5MDk=" pid=18276 comm="kvm" requested_mask="r" denied_mask="r" fsuid=106 ouid=106
[13:53] Hi, has the toolchain version changed from LTS 12.04 to the LTS 12.04 update 3 release? thx
[13:56] The code that creates the volume is: http://pastebin.ubuntu.com/6092576/
[14:04] rbasak: can you paste the output of: qemu-img info /var/lib/ubuntu-cloud/libvirt/images/foo ; qemu-img info /var/lib/ubuntu-cloud/libvirt/images/Y29tLnVidW50dS5jbG91ZDpzZXJ2ZXI6MTIuMDQ6YW1kNjQgMjAxMzA5MDk=
[14:04] rbasak: I have to go to a meeting
[14:05] jdstrand: will do, and I'll leave you a message here. Thanks.
[14:07] rbasak: actually, I'm back
[14:07] jdstrand: http://pastebin.ubuntu.com/6092619/
[14:07] That was a quick meeting :)
[14:10] jdstrand: a thought. smoser pointed out that I'm not decompressing the downloaded backing image, and that I should because it hurts performance. That's in my backlog. But everything works transparently. That isn't going to influence the code that looks at the backing volume, is it?
[14:10] (everything apart from this apparmor issue, that is!)
[14:11] I wouldn't think so, but all that is abstracted away from the apparmor driver
[14:11] OK
[14:11] This issue didn't affect me in more recent releases, btw.
[14:11] rbasak, no.
[14:13] rbasak: oh, which was the first release it worked on?
[14:13] jdstrand: currently unknown :-(
[14:14] There are many moving bits to the code I've written, so it's a bit awkward to test. If you need to know, I can reduce everything to a much smaller test case.
[14:15] I would like to know. I am trying a reduced test case now
[14:19] Hi! I'm trying to set up bind to override the domain "infected.no", I have to add a few local records. But i still need to be able to resolve the actual website. Can i do this in bind?
[14:20] * rbasak uses the tool we're trying to fix to quickly fire up some test instances
[14:21] halvors: look up bind "views". You can maintain a separate local copy of some particular zone. A warning though: it can lead to considerable confusion to run things that way.
[14:35] rbasak: ok, precise works with a simple qcow2 with backing store: http://paste.ubuntu.com/6092737/
[14:35] rbasak: ie, just using qemu-img and not the volume xml
[14:36] * jdstrand now tries with volume xml
=== freeflying is now known as freeflying_away
[14:50] hi, I have a vps running ubuntu. Initially it ran ubuntu 12.10; afterwards I upgraded it to 13.04. Anyway, the image of the vps by default boots a 2.6 version of linux. I've tried to update it to the latest on raring (3.8) but I get this error: http://paste.ubuntu.com/6092777/ . I think that I can't upgrade the kernel because of the particular setup of grub on a vps... so is there any other way to upgrade the kernel?
[14:52] jdstrand: I did http://pastebin.ubuntu.com/6092801/ by hand. I see: "2013-09-11 14:50:43.236+0000: 11812: warning : virDomainDiskDefForeachPath:13244 : Ignoring open failure on /var/lib/libvirt/images/foo: Permission denied"
[14:53] jdstrand: virt-aa-helper with sudo works. So is the problem that virt-aa-helper can't read that file, so it doesn't find out about the backing volume?
[14:56] mibofra: are you running an official Ubuntu image, or something that's been modified by your VPS provider and isn't really Ubuntu? I see no reason why /usr/share/initramfs-tools/hooks/fixrtc should fail except perhaps if something like /sbin/hwclock has been removed on your system.
[14:57] no, the executable is under /sbin/ as usual
[14:58] Is your disk full?
[14:58] on line 1010 (of the script) there is this: system ("run-parts --verbose --exit-on-error --arg=$version " . No, the disk isn't full
[14:59] I have all the necessary space
[14:59] Are you sure? How much space is that?
[15:00] rbasak, Filesystem 1K-blocks Used Available Use% Mounted on
[15:00] /dev/ploop13128p1 10319140 2094284 7700672 22% /
[15:01] OK, I agree that sounds OK
[15:01] rbasak: aha!
[15:01] Linux spf-virtualserver 2.6.32-042stab079.5 #1 SMP Fri Aug 2 17:16:15 MSK 2013 x86_64 x86_64 x86_64 GNU/Linux is the actual kernel
[15:02] rbasak: actually, no. that is a harmless error
[15:03] rbasak: the output from virt-aa-helper should be the same there
[15:03] jdstrand: it's not. When I run without sudo, I don't see the backing file. When I run with sudo, I do.
[15:04] rbasak: oh, right, 'cause it can't inspect the qcow2
[15:04] Right
[15:04] rbasak: but virt-aa-helper runs as root, so that shouldn't be the case
[15:04] rbasak: s/case/problem/
[15:05] jdstrand: it has its own apparmor profile though, doesn't it?
[15:05] rbasak: it does, but sudo wouldn't make it suddenly work
[15:06] This time I used a standard location (/var/lib/libvirt/images/) for the volume image, too.
[15:07] Perhaps it doesn't work in the non-standard location for that reason?
[15:08] rbasak: are there any apparmor denials?
[15:09] adam_g: http://people.canonical.com/~chucks/ca/ (a newer webtest is needed for ceilometer)
[15:10] jdstrand: yes. I think that's it. I apologise for not spotting this earlier - I was only pasting the most recent denial without checking timestamps, assuming that was the only one. It looks like there are denials for virt-aa-helper preceding them.
[15:12] rbasak: can you paste the apparmor denial?
[15:12] jdstrand: eg: Sep 11 13:08:12 ubuntu-cloud2 kernel: [504847.014007] type=1400 audit(1378904892.811:31): apparmor="DENIED" operation="open" parent=18180 profile="/usr/lib/libvirt/virt-aa-helper" name="/var/lib/ubuntu-cloud/libvirt/images/foo" pid=18263 comm="virt-aa-helper" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
[15:13] But I want to check that it's the correct case.
[15:13] rbasak: ok, add to /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper:
[15:13] /var/lib/ubuntu-cloud/libvirt/images/* r,
[15:13] then do: sudo apparmor_parser -r /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper
[15:14] and try again
[15:14] Right, will do.
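Spelled out, the fix jdstrand describes above (the images path is the one from rbasak's setup; note the rule needs to go inside the profile block, not just appended to the end of the file):

    # 1. edit /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper and add, next to the
    #    existing image rules:
    #        /var/lib/ubuntu-cloud/libvirt/images/* r,
    # 2. reload the profile
    sudo apparmor_parser -r /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper
    # 3. retry the instance; any remaining denials show up in the kernel log
    dmesg | grep 'apparmor="DENIED"' | tail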
[15:14] rbasak: that change is totally appropriate
[15:14] (I just want to catch up with my test instance first, since I think I need to destroy that)
[15:14] you can see we have accesses for libvirt, nova, eucalyptus, etc
[15:14] * rbasak removes his previous workarounds
[15:15] rbasak: interestingly, if you had used /var/lib/ubuntu-cloud/libvirt/images/foo.qcow2, it also would have worked
[15:15] /**.qcow{,2} r,
[15:16] guys, where is dumpe2fs normally located?
[15:17] jdstrand: success! Thank you!
[15:18] rbasak: so, you should be able to change the virt-aa-helper profile back to the original and generate your filenames to use .qcow2 and it should also work
[15:18] jdstrand: sorry I didn't spot the previous apparmor denial. That would have saved much wasted time. There were some other messages about qemu network bridges starting up in the middle, and I had assumed that the earlier denials were from a previous attempt rather than reading them through more carefully.
[15:18] ok
[15:18] no worries
[15:18] guys :D ?
[15:18] jdstrand: yeah. I think I'll do that to save having to modify stuff in precise
[15:18] jdstrand: many thanks for your help. I owe you much beer.
[15:19] ok rbasak, there is something mad
[15:19] there isn't a dumpe2fs on the system
[15:20] fixrtc uses both dumpe2fs and hwclock
[15:21] mibofra: e2fsprogs provides dumpe2fs. Try installing that. It should be installed already because it's marked "essential".
[15:21] thanks
[15:22] hi
[15:22] there's a getenv call in the postinst of python-cinder (see http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/saucy/cinder/saucy/view/head:/debian/cinder-common.postinst#L4)
[15:23] where does this command come from?
[15:24] toabctl: that looks like a bug to me. I think it should be "getent".
[15:24] zul: ^^
[15:25] ok rbasak, now the kernel was upgraded successfully... I wonder why the tool wasn't installed in the image yet...
[15:25] mibofra: sounds like a broken image. Where did it come from?
[15:25] toabctl: crap, please open up a bug in launchpad
[15:27] rbasak, I think from the provider of the vps
[15:28] zul, +1
[15:28] mibofra: please could you take it up with them? I don't mean to just fob you off - I'm concerned that others will have the same problem.
[15:29] mibofra: they should not be calling their own constructed image "Ubuntu" either.
[15:29] mibofra: exactly because of quality problems like this.
[15:30] omg
[15:30] rbasak, I've rebooted the vps
[15:30] but it rebooted with the same kernel version
[15:30] Linux spf-virtualserver 2.6.32-042stab079.5 #1 SMP Fri Aug 2 17:16:15 MSK 2013 x86_64 x86_64 x86_64 GNU/Linux
[15:31] mibofra: sounds like they're booting their own kernel from outside your VM.
[15:32] mibofra: on vps setups that use their own kernel, we've seen issues where apparmor is not compatible
[15:33] utlemming: also they're not shipping e2fsprogs, which is an essential package that should be installed on all Ubuntu systems, and thus breaks initramfs-tools, which causes kernel updates to fail.
[15:33] yikes... what is the vps?
[15:33] (regardless of whether the kernel updates work or not)
[15:33] so I have to re-make the image more or less xD
[15:33] upgrading and adding software
[15:34] really nice
[15:40] rbasak: it's an openvz setup
=== marcoceppi_ is now known as marcoceppi
[16:26] anyone know if dm devices can be used for a uswsusp resume device?
[16:45] sarnold: ping
[16:53] Hello. I have an ubuntu 10.04 server with a 300 MB boot partition and it is using 90% of its space.
how do I get rid of the old kernels in it without messing something up?
[16:55] currently I have from 2.6.32-21 to 2.6.32-51 in there
[16:59] I tried to use dpkg --list | grep kernel-image but it doesn't list anything
[17:03] cekimogloy: | grep linux- instead
[17:04] thanks
[17:04] clear
[17:07] If you're using drivers which use dkms, you might want to remove the linux-headers for the old ones as well
[17:11] cekimogloy: you could try (more) compression on your initrds also
=== rap424_ is now known as rap424
=== Jordan_U_ is now known as Jordan_U
[20:16] smoser: You around?
[20:18] hey.
[20:18] long time.
[20:18] Yeah! Sorry for the long silence; I've been buried in euca2ools 3 work. :-\
[20:19] I'm looking into using simplestreams as a new back end for eustore stuff, but I'm having trouble finding what actually generates the data it uses.
[20:19] What generates the data for stuff like cloud-images.u.c?
[20:20] gholms: which data?
[20:20] http://cloud-images.ubuntu.com/releases/streams/v1/
[20:20] gholms: look at lp:simplestreams
[20:20] Yes, I have been looking through that code.
[20:21] gholms: the AWS and download code is public, the Azure part has NDA bits
[20:21] Is that in the source tree and I'm just missing it or something?
[20:22] There's plenty of code that uses extant data, but precious little that actually writes it.
[20:22] tools/make-test-data does a little of that, but it looks like it's pretty much generating it all from the ground up.
[20:23] gholms: for some, yes it is
[20:23] gholms: give me a minute to look at the code...
[20:24] gholms, make exdata
[20:25] it scrapes / combines data from /query into simplestreams format.
[20:25] to create the aws and the download data.
[20:25] the other content_sources come from elsewhere.
[20:25] (like azure and hp)
[20:28] Okay, so the process really does involve that.
[20:28] That's useful.
[20:32] gholms...
[20:32] that's one of those things that you think... well, this won't last long.
[20:32] but it lasted long
[20:32] Oh?
[20:33] oh. not simplestreams. the generation bit
[20:33] the bit there that kind of scrapes other data.
[20:38] Ideally I just want to be able to have people dump a bunch of images and some metadata for each one into $dir using $layout and have things Just Work, so that seems similar in spirit.
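For reference, the generation path smoser describes can be tried out roughly like this; it assumes bzr is available and that the branch is still published as lp:simplestreams, and the make target name is the one mentioned above:

    # grab the simplestreams tree and build the example data
    bzr branch lp:simplestreams
    cd simplestreams
    # scrapes / combines the /query data into simplestreams-format metadata
    # for the aws and download feeds
    make exdata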
[21:31] alright, I'm in a pickle, I have a headless server that was working fine earlier today, now when I try to SSH into it i get "ssh_exchange_identification: Connection closed by remote host"
[21:33] this is even happening when I try to bounce the connection off a machine at a friend's house, not attached to my network x.x
[21:33] Time to power cycle it if you have no remote KVM or console capabilities
[21:34] maxb: I can't. the BMC isn't responding either and the power button lock is engaged, I'm locked out
[21:35] Time to phone someone up and get them to yank the power cable then
[21:35] maxb: short of tracing which of the 10 freaking power cables running through the cabinet, I'm stuck
[21:35] nah, It's a server in my possession and crontrol
[21:35] This is why it's important to use managed PDUs
[21:35] control*
[21:36] * gholms recommends labeling wires and managed PDUs
[21:36] gholms maxb I don't have a few hundred dollars for a managed PDU
[21:37] Do you have a few dollars for a roll of masking tape and a marker? :)
[21:37] gholms: I do, the cat keeps chewing the tape off, not chewing the wire, just the tape
[21:37] Ouch.
[21:38] Servers and cats should not be mixed :-)
[21:38] I think she gets high off the adhesive
[21:38] oh she's a good kitty, I can have a comp open doing diagnostics, she looks at it, then walks away
[21:39] Sounds like what you need is a mini-rack with doors.
[21:39] gartral: can you hook up a keyboard and blindly login, reboot?
[21:39] she made the mistake of sniffing a cpu fan once when she was a kitten, gave her a nice bloody nose, never wanted to put her face too close to a comp after that
[21:39] aww poor kitty
[21:40] sarnold: tried that, it just beeps when i hit keys
[21:40] Heh
[21:41] here's the screwed up part, my ZNC server is on this machine, which is connected to freenode, which in turn is how I'm talking to all of you, so I know it's not a kernel panic
[21:41] gartral: oh, it beeps on keypresses? that feels like a seriously wedged machine, I'm used to seeing that when the keyboard buffer is stuffed full and nothing is handling keyboard presses..
[21:42] gartral: Whaa?? wow.
[21:42] gartral: does znc give you any command execute abilities?
[21:42] sarnold: only for ZNC, not the machine
[21:43] gartral: normally that'd be a good thing.. hehe
[21:43] most of the websites and services are running, except apparently for SSH, ipmi, snmp, and webmin
[21:44] ergh. webmin. I wonder if it is someone else's computer now.
[21:44] so yea, I'm stuck between a rock and a hard place here
[21:44] sarnold: there's no outside connection to webmin, it's completely in-network on an out-of-band line
[21:44] disabling sshd, ipmi, and snmp would probably draw undue attention pretty quickly, but someone might just do that to defend their new machine
[21:45] gartral: ah, good, that's encouraging. :)
[21:45] and by out of band, I mean it's only accessible from a single network port, running over an un-bridged connection between my workstation and the server
[21:45] (I'm not dumb)
[21:46] i guess I'll pull power, see if that helps
[21:55] well I don't know what the hell happened, but I can log in now >.<
[21:56] gartral: check the logs, it'll be worth finding out what happened..
[21:56] my guess is the OOM killer went nuts. but that's just a guess.
[21:57] sarnold: on a server with 8 gigs of ram? <.<
[21:58] err.. this is odd, now it's saying I have a read-only FS
[21:58] I've had that happen on servers with 32G of RAM when people weren't being careful. ;)
[21:59] i gotta wonder if the HDD is dying
[22:00] sudo: unable to open /var/lib/sudo/name/6: Read-only file system; sudo: unable to execute /sbin/reboot: Input/output error
[22:00] brb again
[22:01] gartral: yikes, good luck
=== freeflying_away is now known as freeflying
[23:39] I just ran a sudo apt-get upgrade and have just looked back, and have a screen full of various errors, all relating to read-only filesystems
[23:39] I know this information is very vague and a bit useless, but any idea where to start troubleshooting and fixing?
[23:39] Your stuff is all backed up, right?
[23:40] Linux Ubuntu-Server 3.8.0-26-generic #38~precise2-Ubuntu
[23:40] most of my important stuff, yeah
[23:40] it's just a home server so most of it is more or less disposable anyway
[23:41] (yes, I am aware the first rule of everything is backups, but I currently can't even afford enough storage to keep my actual stuff, never mind full backups)
[23:42] MoleMan2: check dmesg, it might specify why the filesystem is read-only...
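A few checks along the lines sarnold suggests, before deciding between an fsck and a drive replacement; the device name is hypothetical, and smartctl is only present if smartmontools is installed:

    # look for the kernel messages that explain the remount
    dmesg | grep -iE 'remount|read-only|i/o error|medium error' | tail -n 40
    # confirm how / is currently mounted
    mount | grep ' / '
    # ask the drive itself about logged errors and reallocated sectors
    sudo smartctl -a /dev/sda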
[23:44] sarnold: http://pastebin.com/FeFKrvjb is the last chunk of info, I just had to guess at what is recent/useful though :/
[23:51] MoleMan2: those numbers in square brackets are timestamps since boot, measured in seconds
[23:52] MoleMan2: that paste covers less than a second of time, though it's hard to know exactly how far in the past it is.
[23:53] yeah, comparing with syslog, that entire chunk was around Sep 12 00:26:45, which was presumably when everything froze
[23:53] MoleMan2: "medium error" and "media error" look like bad news. it might be a dying drive, might just be a fussy controller / drive / driver that could be 'fixed' by a reboot.
[23:53] as the last entry in syslog was Sep 12 00:26:46 Ubuntu-Server kernel: [975496.222453] type=1400 audit(1378942006.370:42): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/tcpdump" pid=24281 comm="apparmor_parser", presumably due to the filesystem going read-only
[23:55] yeah, I just don't want to reboot if I don't have to, as I won't be home for quite a while, so if it just fails to boot I'm stuck for a few weeks and won't be able to do anything :/
[23:55] might I be able to just manually remount / change the mount to rw without a reboot, or is a reboot probably the best way to go?
[23:56] MoleMan2: oh man. :/ it'd be best if it could run a fsck before coming back online. I wouldn't force it to rw.
[23:56] but yeah, I'd picked up on those bits as a read fail, possibly linked to a drive death,
[23:56] hmm
[23:57] MoleMan2: .. but with the data, i'd be worried about a fsck removing something you care about, too. not a great situation. :(
[23:58] arrgh! >.< I can't figure out why this server is barfing like this! 13.04 0:- 1:* 2: 3: 4: 5: ▸904kB/s 53‼ 1h53m 50C 8.18 2x2.4GHz 3.9G39% s1.9G0% 292G67% gareth@kitsunet 192.168.1.4 2013-09-11 19:56:13
[23:59] load shouldn't be this high, I have NOTHING running
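Going back to the 'ps ax' suggestion from earlier in the day: on Linux the load average counts tasks that are runnable or stuck in uninterruptible (D) sleep, so a struggling disk can push load up while the CPUs sit idle. A rough sketch of what gartral could look at (iostat needs the sysstat package):

    # tasks stuck in D state (usually waiting on I/O) or permanently runnable
    ps axo stat,pid,comm,wchan | awk '$1 ~ /^[DR]/'
    # watch the I/O wait ("wa") column and per-device utilisation
    vmstat 1 5
    iostat -x 1 5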