[04:13] <styol> Who loves connection resets? I know I do. I was curious if anyone might have any ideas on what this sort of connection reset pattern might suggest: http://pastie.org/private/ikgecg5y7yjg7zvfxg8uww
[04:14] <styol> I don't particularly get the back and forth nature of it, but I make have lots learn to. Yes, intentionally poor english.
[04:16] <styol> perhaps a simple question: when [source] > [destination]: Flags [R] does that mean that the [source] caused the reset, or is it just acknowledging that [destination] attempted to write to a socket that was already closed?
[05:15] <sarnold> styol: another option -- source might not have sent the reset at all, it could be sent by a router or firewall somewhere in the middle
[05:16] <sarnold> styol: (it -could- also be sent by any other system, but they'd have to guess the sequence numbers; not that TCP is cryptographically strong, but it shouldn't be trivially bad either.)
[05:16] <styol> sarnold: mmm I see. Packet loss couldn't be related, could it? Basically the host is doing their best to refute that it might have anything to do with them
[05:17] <styol> they were like, oh, seems the client is sending the RST. I monitored eth1 with the public network and was like, oh, yeah about 5% of the time
[05:19] <sarnold> styol: it shouldn't be packetloss, TCP tries harder than that to keep connections alive :)
[05:20] <styol> I realize it is apples and oranges, but the most common thing we've seen with abnormal amounts of resets is packet loss
[05:20] <styol> the problem is this is beyond my expertise, though I am learning which is great and I love it, but still don't have a solid 'Aha! there is the issue' yet
[05:21] <styol> at this point i would probably trade allowing someone watch me eat a bag of rocks if they were willing to check it out haha
[05:25] <sarnold> styol: hrm. you know, it -might- be packet loss -- if the FIN packets aren't acknowledged quickly enough, the TCP timers on the host that sent the FIN will send another .. and another .. and if two FIN packets are delivered, the second one ought to generate an RST.
[05:26] <sarnold> styol: note that the connections with [R.] all have window sizes of 115, the ones with just [R] have window sizes of 0. which strikes me as strange.
[05:34] <styol> sarnold: good point regarding FIN, from what I've read regarding what can cause a RST, that could indeed make sense
[05:35] <styol> sarnold: that is a little odd. I hadn't noticed that. What is the significance of the period again?
[05:35] <sarnold> styol: . means 'ACK'
[05:37] <styol> sarnold: but.. wait huh. isn't this an RST? Or does R. mean the RST was in response to an ACK?
[05:38] <sarnold> styol: yes, it is an RST; the only real way to know what it is in response to would require widening the tcpdump to capture more data
[05:38] <sarnold> styol: (don't forget, TCP packets can have multiple flags set at once)
[05:39] <styol> sarnold: gotcha gotcha
[09:35] <jamespage> smb, decided on a slightly different course of action
[09:36] <jamespage> smb, without the major GRE restructure the GRE feature in the dkms module is not going to be enabled on a 3.11 kernel
[09:36] <jamespage> as the kernel has already registered a handler for the GRE protocol.
[09:36] <jamespage> so
[09:36] <jamespage> I'm going to say - if you want GRE - use the kernel openvswitch module
[09:36] <jamespage> anything else - use the DKMS module
[09:36] <jamespage> (NEWS added).
[09:39] <Daviey> jamespage: What does this mean for HWE next cycle?  Doing the same as raring hwe?
[09:39] <jamespage> Daviey, yeah
[09:40] <smb> jamespage, I saw the mails. I am not sure how much openvswitch is used in openstack (and what features that requires), the other user would be LXC.
[09:40] <Daviey> Standalone kvm is also a consumer
[09:40] <smb> ok
[09:41] <Daviey> Oh, and xen i suppose.
[09:41] <Daviey> But i don't know if anyone has tried this with our xen
[09:41] <smb> Daviey, Not that I would have knowingly used it
[09:47] <jamespage> smb, Daviey: you still get all the options but
[09:47] <jamespage> GRE -> native kernel module
[09:47] <jamespage> VXLAN -> dkms module
[09:47] <jamespage> its a little confusing and worthy of a release note
[09:47]  * jamespage aims to kill the dkms module for 14.04
[09:48] <jamespage> apparently vxlan will go native in kernel as well
[09:50] <smb> jamespage, Just sounds like you cannot get both at the same time. Unfortunately upstream is so unsure about the new code.
[09:50] <jamespage> smb, yeah
[09:50] <jamespage> smb, I hacked on it again this morning and I can't get the GRE feature to register on 3.11
[09:51] <jamespage> whichever way I try
[09:51] <jamespage> I don't really understand why that is - it appears that even in 3.10 the gre demux'er registered the GRE protocol handler
[09:51] <jamespage> and inet_add_protocol won't add a handler unless the existing entry is NULL
[09:57] <smb> Hm, weird. Ok, the build of dkms may have succeeded by things not being exported. But as you say at least registering would have failed. Except maybe if back then they have added it without registering a protocol handler. But that also sounds unlikely
[10:01]  * smb -> lunch+errands
[10:01] <caribou> people, would it be considered insanity to introduce new kdump functionalities in the 14.04 (i.e. LTS) cycle ?
[10:02] <caribou> I'm thinking of enabling networked kernel dump functionalities (i.e. sending the vmcore file to NFS or SSH)
[10:04] <caribou> for instance, RHEL6 can send a kernel core dump to NFS or SSH remote host
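For context, the RHEL6-style remote-dump configuration caribou is referring to looks roughly like the fragment below. This is a sketch from memory of RHEL's /etc/kdump.conf syntax, not Ubuntu's kdump-tools; the hostname and path are made up.

```
# /etc/kdump.conf (RHEL6 syntax, illustrative only)
# send the vmcore to an NFS export:
nfs dumphost.example.com:/export/vmcores
# ...or, instead of nfs, over SSH:
# ssh kdump@dumphost.example.com
# compress and filter the dump before sending:
core_collector makedumpfile -c -d 31
```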
[10:08] <ikonia> caribou: netdump is available already
[10:08] <caribou> ikonia: I don't know about ubuntu but it used to be notoriously broken
[10:08] <ikonia> do you feel more is needed ?
[10:09] <ikonia> it's always worked like a charm for me, I'm surprised to hear that
[10:09] <caribou> ikonia: I must admit I never used it on Ubuntu/Debian
[10:10] <ikonia> caribou: apologies for my lack of awareness of you as a person, do you work within the development team ?
[10:10] <caribou> ikonia: the idea was to add this to the default kdump-tools functionality. But maybe netdump is sufficient
[10:10] <ikonia> kdump-tools is a much more modern and "accepted" way, I'd much rather see kdump grow, however it depends on your desire for change
[10:10] <caribou> ikonia: not in the development team, I'm part of Canonical's sustaining engineering team
[10:11] <ikonia> caribou: that's why I was wondering if you "wanted" to look at change, or you were considering actually doing it
[10:11] <Daviey> jamespage: RE: killing the module for 14.04.. I think that would be ideal.. we'd need to monitor advancements in upstream development, and balance that.. It might be that for the latest crack, people will want to use dkms.
[10:11] <ikonia> caribou: I'd certainly like to see the kdump functionality grow, but I wouldn't be upset without as tools like netdump have served me well
[10:11] <Daviey> I have NFI what latest advancements will come out in ovs in the next year.
[10:12] <caribou> ikonia: I was more worried about introducing such a change in an LTS release
[10:12] <Daviey> caribou: I think what you are suggesting isn't full of crazy at all... but It would need discussion with the kernel team, smb ?
[10:12] <ikonia> caribou: do you see it as a risk ? surely it's minimal
[10:12] <caribou> ikonia: my idea would have been to work at the enhancement myself
[10:12] <ikonia> caribou: I'd certainly welcome it and offer any support I can
[10:13] <caribou> Daviey: kernel team has little to do with it; the dump happens in userspace
[10:13] <caribou> ikonia: the major concern is network availability and apparently the network is already up when kdump-tools kicks in
[10:13] <Daviey> caribou: Right, but they are much closer to this than us.. :)
[10:14] <ikonia> caribou: that should be manageable though with depends
[10:14] <caribou> Daviey: well, my previous kdump-tool blueprint got more attention from the server team than anybody else
[10:14] <ikonia> caribou: with good reason, the server team users are the ones who will benefit/be interested
[10:15] <caribou> I also thought that it might be a "nice to have" in a cloud context where kernel dump could be sent to a single dedicated instance
[10:15] <zetheroo> from one of our Ubuntu servers all hosts and gateway are pingable except for one 192.168.1.205 ... other hosts have no issue pinging that same IP or its hostname ...
[10:15] <Daviey> caribou: I noticed that makedumpfile, the source package for kdump-tools has been touched more by infinity than us.  Maybe worth discussing with him.. We'll be supportive where we can :)
[10:16] <caribou> Daviey: just so you know, I co-maintain the makedumpfile package on Debian ;-)
[10:16] <Daviey> caribou: Also, introducing a network service.. you may want the security team to have a brief look at your plan as well.  That wasn't a consideration for when it was first MIR'd
[10:16] <Daviey> caribou: Ah!  I didn't know that :)
[10:17] <caribou> Daviey: this is why I asked about the LTS specifics; modifications would be introduced in Debian as well, but if it means waiting a full cycle to get it on a non-lts release, then it changes the timing
[10:17] <Daviey> caribou: We aren't as polished as we should be for centralised logging :(
[10:18] <caribou> Daviey: indeed, the network side will need some specific attention; I'll make a note of that
[10:18] <Daviey> caribou: Hmm, I don't think what you are suggesting is inappropriate, personally
[10:18] <caribou> Daviey: ok, I'll do some preliminary hacking to see what is involved and will comeback maybe for a 14.04 blueprint
[10:20] <jamespage> smb, Daviey: uploaded ovs for saucy
[10:20]  * jamespage puts virtual networking down for the rest of the day
[10:20] <caribou> Daviey: ikonia: thanks for the comments
[10:34] <ikonia> caribou: if I can offer some help, please let me know, as this would be a useful function for me.
[10:51] <zetheroo> I have exported an nfs share with the options (rw,sync,no_subtree_check) and I can mount it on the host it's exported to, but when I try to do ls in the mount path I get "Permission Denied"
[10:52] <ikonia> zetheroo: what's the permissions on the file system ?
[10:53] <zetheroo> cd ..
[10:53] <zetheroo> drwx--x--x  3 root root
[10:53] <zetheroo> thats the permissions of the directory I am trying to share
[10:53] <ikonia> zetheroo: so only root can see that
[10:53] <ikonia> are you root on the server where it's mounted ?
[10:53] <zetheroo> yes
[10:54] <ikonia> really ?
[10:54] <zetheroo> yep
[10:54] <ikonia> can you do ls -la against it
[10:54] <ikonia> (not in it, against it)
[10:54] <zetheroo> on the host that is exporting?
[10:54] <ikonia> no, where it's mounted
[10:55] <zetheroo> drwx--x--x  3 root root
[10:55] <ikonia> that's odd
[10:55] <ikonia> does id confirm uid 0
[10:55] <zetheroo> uid=0(root) gid=0(root) groups=0(root)
[10:56] <ikonia> zetheroo: out of interest, on the host, change it to 775 say, can you then see it on the client ?
[10:56] <zetheroo> yes
[10:56] <zetheroo> change carries over
[10:57] <ikonia> ok, so it is a legitimate permissions problem
[10:57] <ikonia> seems odd though as you are the owner and the owner has full control,
[10:57] <zetheroo> am I using the wrong options?
[10:57] <zetheroo> this is my exports line: /mnt/neptune	mars(rw,sync,no_subtree_check)
[10:59] <ikonia> I don't see why that would impact the user permissions
[11:00] <zetheroo> fstab entry on the client: neptune:/mnt/neptune	/mnt/neptune	nfs	defaults	0 0
[11:02] <ikonia> don't see why that would impact the permissions like that
[11:02] <ikonia> seems a genuine problem
[11:02] <zetheroo> ah ... I changed defaults to rw and now I can ls inside ...
[11:03] <ikonia> why would that impact the file system permissions ?
[11:04] <zetheroo> so now the line in fstab is: neptune:/mnt/neptune	/mnt/neptune	nfs	   rw	0 0
[11:04] <zetheroo> I guess it was mounting it with option "defaults" ... which perhaps does not set any perms ... !?
[11:05] <ikonia> I don't see why that would matter though,
[11:05] <ikonia> I'll have to do a little digging, this is curious
[11:06] <zetheroo> ok, it's not working on another system with just changing the fstab mount option ... so it seems that the chmod to 775 also needs to be done :P
[11:07] <zetheroo> really odd ...
[11:10] <jamespage> smb, w00t dep-8 tests passing again
[11:10] <TJ-> zetheroo: Have you set an fsid on the root NFS share on the server? e.g.  "/srv         			10.254.0.0/16(rw,async,no_subtree_check,crossmnt,fsid=0)"
[11:11] <TJ-> zetheroo: also, is it NFSv3 or NFSv4?
[11:11] <zetheroo> TJ-: I don't know what that even is so I guess not ...
[11:12] <zetheroo> these are all identical systems which I just setup yesterday and today with identical Ubuntu Server 12.04.3
[11:12] <TJ-> zetheroo: For that /etc/exports line I just posted, the clients have "10.254.251.1:/Library /home/all/Library nfs4 _netdev,auto,exec 0 0"
[11:12] <zetheroo> usually nfs shares work without a hitch and without any crazy configs etc ...
[11:13] <zetheroo> so I could try adding the fsid=0 option?
[11:14] <TJ-> zetheroo: Check what it does. It's if the F.S. doesn't have a UUID
[11:16] <zetheroo> thanks!! fsid=0 seems to have done the trick :)
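Putting the working configuration from this thread together for reference; the hostnames and share path are the ones mentioned above, and fsid=0 is the option that resolved it. This is a sketch of the end state, combined with the chmod 775 on the exported directory.

```
# /etc/exports on neptune (the server)
/mnt/neptune    mars(rw,sync,no_subtree_check,fsid=0)

# /etc/fstab on mars (the client)
neptune:/mnt/neptune    /mnt/neptune    nfs    rw    0 0
```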
[13:01] <caleress> hello guys, i have an upstart script that works on 12.04 and fails on 10.04, any ideas ?
[13:01] <caleress> exec start-stop-daemon --start -c dev --exec /opt/dev/test.sh
[13:01] <caleress> this is the script
[13:02] <smoser> jamespage, Daviey would you find it appropriate / inappropriate for me to add a ubuntu-cloudimg-keyring to ubuntu-cloud-keyring source package ?
[13:02] <smoser> i think as a separate binary package
[13:03] <Daviey> smoser: ubuntu-cloud-keyring is such a trivial package, that i'd probably say make it a separate source
[13:03] <Daviey> i don't think it makes it any easier to combine them
[13:04] <smoser> fair.
[13:05] <Daviey> smoser: If this is blocking, please let me know when it's in the queue and i'll review it as a priority
[13:19] <jamespage> smoser, sounds OK to me
[13:20] <smoser> jamespage, so this will end up in cloud archive
[13:20] <jamespage> smoser, why? would it not be better in the main archive?
[13:20] <smoser> it will be in the main archive of course.
[13:20] <smoser> but not in 12.04
[13:21] <smoser> sans flux capacitor
[13:22] <zul> jamespage:  based on the MIR for oauth2 im going to see what it takes to replace oauth2 with oauthlib
[13:23] <Daviey> smoser: erm
[13:23] <Daviey> smoser: It /should/ be in primary archive of 12.04
[13:24] <Daviey> Oh, i suppose there is the trust path - as being in the Cloud Archive means it was signed by ubuntu-cloud-keyring.. but anyway, that is convoluted :)
[13:25] <smoser> Daviey, the new package you're telling me to create is not in 12.04
[13:25] <smoser> and sru policy would block said package from being in 12.04
[13:31] <Daviey> smoser: Nah, we went through this for ubuntu-cloud-keyring.  It is okay to introduce something like this to -updates.
[13:31] <Daviey> ubuntu-cloud-keyring wasn't part of 12.04, it was introduced via SRU
[13:32] <smoser> k
[13:38] <rtg> what source package contains the ubuntu server guide ? I'd like to assign bug #1215665 to the right package
[13:42] <rtg> Daviey, ^^
[13:44] <Daviey> rtg: assign it to the 'serverguide' project..  https://bugs.launchpad.net/serverguide
[13:46] <rtg> Daviey, done. thanks.
[13:53] <Daviey> rtg: thanks
[13:53] <caribou> Daviey: ikonia: remember the networked kdump discussion of earlier today
[13:53] <ikonia> yes
[13:54] <caribou> Daviey: ikonia: I think that there should be some provision there to also send it to some cloudy storage, like ceph or swift
[13:55] <caribou> not sure how easy it would be though
[13:55] <Daviey> caribou: I remember it fondly :)
[13:56] <caribou> Daviey: I just think that in a cloud context, that might be nice to be able to send the vmcore to some kind of 'shared storage'
[13:56] <Daviey> caribou: Be careful not to overscope, and deliver nothing :)
[13:57] <caribou> Daviey: true, the basic options would be sufficient but could be a nice evolution
[13:57] <caribou> just food for thoughts
[13:57] <Daviey> yeah
[13:58] <caribou> Daviey: the existing options would allow for one cloud instance to handle receiving dumps through SSH and store them
[13:58] <Daviey> caribou: smoser might have some guidance on storing in swift, and jamespage for ceph.  Maybe they have some pointers
[13:58] <caribou> Daviey: well, let's get the basic bits in first then I can go crazy
[13:58] <Daviey> caribou: This sounds like it could be well handled as a subordinate charm :)
[13:59] <caribou> indeed
[14:03] <linuxtech> I emailed LaMont 2 days ago about doing a sync request for his new bind 9.9.3 package and if I don't hear from him tomorrow, I am going to request it.  Any comments on using the newer Extended Support Release bind 9.9.3-P2 for saucy?
[14:05] <linuxtech> I have been running it a couple days on multiple machines, both authoritative and recursive servers.
[14:07] <lamont> linuxtech: I'll be uploading -3 to debian and syncing today
[14:07] <lamont> though it'll be late tonight
[14:07] <linuxtech> Cool, Thank you!
[14:08] <lamont> the only issue I ran into was that there were ubuntu changes to merge, and that led me to actually looking at some other bugs to go with the new debian upload.
[14:08] <lamont> amusingly, the merge consists only of changes to debian/changelog, since I NAKed one piece of one upload
[14:09] <linuxtech> I was looking at some of the Debian bugs and was wondering about commenting on some, it looks like a lot of the old ones are fixed or not relevant anymore.
[14:09] <lamont> and one of those things (doko wants make -j for parallel=), has it ftbfs atm
[14:10] <lamont> linuxtech: relevant and useful comments on any of my debian bugs are always welcome
[14:11] <lamont> gah
[14:38] <__mp> hello
[14:39] <zul> jamespage:  tevent approved
[14:39] <jamespage> zul, good-oh - thanks for doing that
[14:40] <zul> jamespage:  np
[14:40] <jamespage> zul, openvswitch is now sorted out as well btw
[14:40] <zul> jamespage:  sweeeet!
[14:41] <__mp> I have an upstart script https://dpaste.de/kGOBZ/ and I want to add respawn functionality. Any idea on how to solve this? I want to monitor the pid.
[14:49] <jamespage> __mp, you need to let upstart monitor the process directly - right now its not actually doing that
[14:50] <__mp> jamespage: Yes, that's what I figured. I found this suggestion but I find it quite problematic: http://tad-do.net/2013/07/24/writing-upstart-script-for-forky-java-application/
[14:51] <jamespage> __mp, can you run your application in the foreground?
[14:52] <__mp> jamespage: I can't (well I could but I don't want to since I don't want to handle rake stuff by hand).
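If the application can be kept in the foreground, a minimal upstart job that lets upstart track and respawn the PID directly might look like the sketch below. The job name, user, and path are assumptions based on the snippets pasted earlier; 'setuid' needs a reasonably recent upstart, and older releases would keep using start-stop-daemon's -c instead.

```
# /etc/init/test.conf -- illustrative only
description "test daemon"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
respawn limit 10 5
setuid dev
exec /opt/dev/test.sh
```

A daemonizing (forking) process would additionally need a correct 'expect fork' or 'expect daemon' stanza, which is the fragile part the linked blog post works around.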
[15:04] <qman__work_> I have some questions about trying to backport packages
[15:04] <qman__work_> I used the backportpackage tool to try and backport shibboleth 2.5.2 from saucy to precise
[15:04] <qman__work_> my build failed and says it's waiting on a dependency
[15:05] <ivoks> jamespage: ping
[15:05] <qman__work_> I assume this means the system just needs more time to compile my package, but I need this faster; I don't mind running the compile on my own hardware but I don't know what I need to do
[15:06] <qman__work_> This is the package page on launchpad: https://launchpad.net/ubuntu/+source/shibboleth-sp2  This is my build: https://launchpad.net/~cs-cracker/+archive/shibboleth-ppa/+build/4895264
[15:06] <zul> adam_g/jamespage: http://people.canonical.com/~chucks/ca
[16:07] <dumb_questions> don't worry, I'm planning to change the backup strategy. Right now I have a Tar backup of an Ubuntu Server. If this machine dies can I restore that backup to any other hardware assuming 64-bit? Anything special I need to know if it's intel vs. AMD?
[16:13] <resno> dumb_questions: for the most part my experience has been that you can swap installs between multiple physical machines with relative ease
[16:20] <qman__work_> dumb_questions: as long as your replacement hardware is supported by the version of ubuntu you're running, it will just work
[16:21] <qman__work_> if it's too new you might have to upgrade your kernel or backport drivers, etc
[16:26] <dumb_questions> thank you both of you
[16:27] <dumb_questions> it's a sticky backup situation, but I want to be sure to plan for the future and do it right
[16:29] <dumb_questions> another question, what's the best way to test a large backup? This one is 100GB and I want to be sure the backups work.
[16:29] <dumb_questions> right now I'm creating a VM and untarring into it. But, 100GB takes some time....
[16:30] <patdk-wk> hmm? that is the only way?
[16:30] <dumb_questions> ideally the client would be able to test their own backup.
[16:31] <dumb_questions> I was afraid of that. in the future I hope to have incrementals set up both on rotated ext. HDDs and at a remote location (s3?). But I'll still have to test them regularly.
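Short of a full restore, two cheaper checks on a tar backup are worth knowing: 'tar -t' proves the archive is readable end-to-end, and 'tar -d' (--diff) compares it against the live filesystem. The paths below are a throwaway demo; point them at the real archive instead. Neither replaces an actual restore test, but they catch truncated or corrupt archives quickly.

```shell
# Build a small demo archive (stand-in for the real 100GB backup).
mkdir -p /tmp/backup_demo/src
echo "data" > /tmp/backup_demo/src/file.txt
tar -czf /tmp/backup_demo/backup.tar.gz -C /tmp/backup_demo src

# Check 1: the archive is readable end-to-end.
tar -tzf /tmp/backup_demo/backup.tar.gz > /dev/null && echo "archive readable"

# Check 2: archive contents match the filesystem (prints differences, if any).
tar -dzf /tmp/backup_demo/backup.tar.gz -C /tmp/backup_demo && echo "no differences"
```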
[16:35] <dumb_questions> ok, now for a dumb question: I restored the thing in Virtualbox. Now when I reboot the VM it keeps rebooting to the live disc instead of the the VDI I restored. I removed the CD drive from STorage, but can't figure out how to tell it to boot from the HDD. Anyone know?
[16:44] <qman__work_> dumb_questions: you need to boot a live CD and install grub to the master boot record
[16:44] <qman__work_> while you restored your files, you didn't restore the bootloader
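The usual chroot sequence for reinstalling grub from a live CD looks roughly like this; the device names are a guess (adjust to the VM's actual disk layout), and this assumes a classic MBR setup rather than EFI.

```
# boot the live CD, then:
sudo mount /dev/sda1 /mnt
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt grub-install /dev/sda
sudo chroot /mnt update-grub
```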
[16:44] <smoser> jamespage, ping
[16:45] <smoser> so with virtual-maas... trying to use that with juju on precise.
[16:45] <smoser> 2 things.
[16:49] <zul> adam_g: mind +1ing http://people.canonical.com/~chucks/ca
[16:50] <ivoks> ah, is that novnc fix? :)
[16:51] <ivoks> nope :(
[16:53] <smoser> http://paste.ubuntu.com/6018470/
[16:55] <dumb_questions> qman__work_: I'll look up a tutorial. Thanks.
[16:56] <dumb_questions> found one. The same I used for restoring, just had to keep reading: rtfm, right?
[17:06] <adam_g> zul, lgtm
[17:07] <zul> adam_g:  cool thanks
[17:11] <smoser> jamespage, do you know why we use bind9 in virtual-maas ?
[17:11] <jamespage> smoser, thats what maas uses
[17:11] <smoser> oh. that's right.
[17:11] <smoser> and we just add the forwarders to it.
[17:11] <jamespage> yep
[17:12] <jamespage> dnsmasq is a lot simpler of course
[17:12] <smoser> so i think it would be trivial to just install dnsmasq
[17:12] <smoser> it will get updated with resolvconf
[17:12] <smoser> and we can point maas's forwarder at *that* ?
[17:12] <smoser> maybe
[17:15] <smoser> hm.. no. that seems to conflict with maas (package level conflict, joy)
[17:40] <adam_g> smoser, any options for getting nested kvm  on a cloud image other than installing the generic kernel and rebooting?
[17:41] <smoser> you shouldn't have to reboot
[17:42] <smoser> is that enough?
[17:46] <adam_g> smoser, doh, nevermind. just needed the linux-extras-`uname -r`
[17:47] <kurt_> Is there a good post-deployment guide for configuring/setting up openstack via openstack-dashboard (horizon)?
[17:47] <smoser> oh. adam_g simpler, linux-image-extra-virtual,
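For the record, the no-reboot path being described might look like the commands below; the module reload step and the nested parameter apply to Intel hosts (kvm_amd on AMD), and the exact sysfs value shown varies by kernel version.

```
sudo apt-get install -y linux-image-extra-virtual
# reload the module with nesting enabled (guests must be stopped first):
sudo modprobe -r kvm_intel && sudo modprobe kvm_intel nested=1
# confirm it took effect:
cat /sys/module/kvm_intel/parameters/nested
```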
[18:04] <RoyK> hm... I'm helping a friend to manage his home server, and lately, errors have shown up on the root fs, which resides on an md-raid mirror. one of the drives in the mirror has a single pending sector, as reported from smartctl
[18:04] <RoyK> could this be a memory issue? I see nothing in the logs from the drives, and the single pending error has been stable for months
[18:05] <RoyK> s/pending error/pending sector/
[18:05] <ikonia> RoyK: why do you feel it's possibly memory ?
[18:10] <RoyK> ikonia: I don't really understand what else it can be. if it were the drives, I should have seen I/O errors in the logs, which I don't
[18:10] <ikonia> RoyK: what sort of errors is he getting ?
[18:10] <RoyK> ikonia: if it were a single incident, no problem, but I saw filesystem errors and the root remounted r/o a couple of days after he rebooted and fsck stepped in
[18:11] <ikonia> RoyK: so, i've seen issues on md (i'm assuming software raid too) with disk controllers
[18:11] <RoyK> http://paste.ubuntu.com/6018739/
[18:12] <RoyK> ikonia: sure, but that also will produce i/o errors in the logs
[18:12] <ikonia> RoyK: depends, I've seen it switch straight to a ro file system
[18:12] <ikonia> RoyK: I admit it's normally on cheap/poor controllers
[18:14] <RoyK> ikonia: disk 0-1 are for the root, and disk 2-5 are for the local raid-6. disks 1-4 are on the controller on the mobo. no issues with the raid
[18:15] <RoyK> and btw samba segfaulting every now and then, sometimes daily
[18:15] <RoyK> another factor
[18:15] <ikonia> RoyK: I'd guess because the file system under it has problems
[18:15] <RoyK> ikonia: the filesystem under ext4?
[18:15] <ikonia> RoyK: under samba
[18:16] <RoyK> samba doesn't share out anything from the root
[18:16] <ikonia> maybe not then
[18:16] <RoyK> and the raid hasn't shown any problems
[18:16] <ikonia> RoyK: i wonder if it's worth running iostat, and just seeing if there is any load/high scan rate or writes before it has a problem
[18:18] <RoyK> ikonia: last time this happened (remount-ro), was while I was cleaning up, removing an old backup, so yes, it happens during load
[18:19] <RoyK> ikonia: samba doesn't seem to follow this pattern, it seems to die randomly
[18:19] <RoyK> even at times I know the server is idle
[18:19] <ikonia> RoyK: something seems pretty messed up
[18:19] <RoyK> I know :P
[18:30] <ikonia> RoyK: is there anything in the syslog when it swaps to r/o mode
[18:30] <ikonia> RoyK: it has to log something ?
[18:33] <RoyK> what I pasted was from dmesg output. it hasn't been setup with a remote log server (yet), although I'll do that when the system gets rebooted. no reason to reboot it now - fsck will probably just stop the bootup by asking silly questions
[18:40] <ikonia> RoyK: that may be worthwhile - see if it gets something extra sent out before it goes read only
[18:40] <RoyK> ikonia: sure, but all that is kernel stuff, so it should be in dmesg
[18:41] <ikonia> RoyK: you'd hope/expect, but there is nothing obvious jumping out
[18:41] <RoyK> well, all kernel messages go into the circular kernel log (as read by dmesg), and then to syslog
[18:41] <ikonia> but there is nothing obvious there
[18:42] <RoyK> I'll try whenever I get write access to the box, but I'm willing to bet some money it won't help much. the cleanup I did was with rm, locally, no samba involved
[18:47] <ikonia> from what you're saying, I agree
[18:47] <RoyK> looks like someone suggested removing the journal, fsck and then re-add it http://www.linuxquestions.org/questions/linux-newbie-8/aborted-journal-and-volume-remounted-read-only-812216/
[18:49] <ikonia> that doesn't seem logical to me
[18:49] <RoyK> ok - why not?
[18:49] <RoyK> should fsck fix it?
[18:59] <ikonia> RoyK: I'd hope fsck would be enough
[18:59] <RoyK> we'll see
[19:00] <ikonia> RoyK: it's an easy get out of jail, but that hardware doesn't seem happy
[19:03] <RoyK> then why no I/O errors? they should have appeared *before* the actual fs error
[19:04] <ikonia> RoyK: I agree, but disk problems, samba problems....
[19:06] <RoyK> yes, that's why I think a good, long memory check would be relevant
[19:06] <RoyK> since bad memory can cause all sorts of errors
[19:15] <RoyK> hm... I know I should never run fsck on a mounted filesystem, but should it be safe to fsck a filesystem remounted RO by the system after errors? I've fscked filesystems remounted RO earlier without issues
[19:19] <ikonia> RoyK: touch autofsck and bounce the box
[19:21] <RoyK> no chance to touch anything on a RO FS
[19:26] <ikonia> of course...
[19:26] <ikonia> idiot, didn't think
[21:05] <StereoChild> anyone got an opinion on the best torrenting software i.e for a seedbox
[21:05] <StereoChild> when I say best I mean your preferred
[23:50] <koolhead17> Daviey: