[00:04] jpds: we can, I think, write a single update process to spit out N sets [00:07] New bug: #409192 in libvirt (main) "could not be installed during automatic synptic update" [High,Invalid] https://launchpad.net/bugs/409192 [00:12] New bug: #553676 in samba (main) "nautilus gvfs-smb" [Undecided,New] https://launchpad.net/bugs/553676 === dendro-afk is now known as dendrobates [00:23] I'm having a problem using ssh. When I simply try to connect to an SSH server, as I've done in the past, I get the error message "Privileged ports can only be forwarded by root." I am not trying to forward. I tried to run the following: ssh [server] [00:36] mathiaz: ping [00:36] kirkland: o/ [00:36] mathiaz: https://bugs.edge.launchpad.net/ubuntu/+source/libvirt/+bug/219249 [00:36] Launchpad bug 219249 in libvirt "virsh completion only works at the shell prompt" [Wishlist,Incomplete] [00:36] mathiaz: can you re-test that bug, or close it? [00:38] kirkland: hm - that would require access to a lucid libvirt [00:38] kirkland: which I don't have [00:38] mathiaz: not on your mini10v? [00:38] kirkland: nope [00:40] mathiaz: is the bug just that virsh can't expand ~ ? [00:40] mathiaz: i just tested here on Lucid, works fine [00:42] kirkland: I think what I meant was that when I type the file completion works [00:42] kirkland: but when I hit enter it fails [00:43] mathiaz: virsh # define ~/foo2.xml [00:43] error: Failed to open file '~/foo2.xml': No such file or directory [00:43] mathiaz: like that? 
[00:43] kirkland: yes [00:44] kirkland: with you typing only ~/fo and then completed it to ~/foo2.xml [00:56] New bug: #239107 in libvirt "kvm - guest freebsd 7 - reboot problems" [Wishlist,Fix released] https://launchpad.net/bugs/239107 [01:02] New bug: #235386 in libvirt (main) "Permissions of files in /etc/libvirt/qemu are too restrictive" [Wishlist,Invalid] https://launchpad.net/bugs/235386 [01:16] New bug: #368084 in libvirt (main) "internal error creation of non-raw images is not supported without qemu-img" [Medium,Fix released] https://launchpad.net/bugs/368084 [01:27] New bug: #273748 in libvirt (main) "postinst fails with addgroup: The group `libvirtd' already exists as a system group." [High,Confirmed] https://launchpad.net/bugs/273748 [02:22] New bug: #351183 in libvirt (main) "libvirtd crashed with SIGSEGV in virStateActive()" [Low,Invalid] https://launchpad.net/bugs/351183 [02:28] mathiaz: hey [02:28] kirkland: o/ [02:28] mathiaz: what release is your kvm host these days? [02:28] mathiaz: still hardy? [02:29] kirkland: yes [02:29] mathiaz: with the kvm/libvirt backport? [02:29] mathiaz: can you still reproduce https://bugs.edge.launchpad.net/ubuntu/+source/libvirt/+bug/193451 ? [02:29] Launchpad bug 193451 in libvirt "poweroff in the guest doesn't stop the kvm process on the host" [Low,Confirmed] [02:29] kirkland: 1:84+dfsg-0ubuntu12.4~hardy1~ppa1 [02:29] kirkland: 0.6.1-0ubuntu5.1~rc1ppa1 [02:30] kirkland: let me try that [02:30] mathiaz: cool, thanks [02:31] jdstrand and I have cut libvirt's open bugs from 78 to 41 \o/ [02:31] kirkland: seems like it's working correctly now [02:32] mathiaz: would you mind marking that bug fix-released? or would you like me to? [02:32] kirkland: I'll update the bug [02:37] mathiaz: thanks dude [02:59] kirkland: have you had good irc sessions with screenbin? I'm thinking about using screenbin for a session, but realized that logging it might be uncool [02:59] * kees ponders how to tie it to the irc channel... 
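The completion-vs-enter confusion in bug 219249 above comes down to who expands the tilde: at a shell prompt the shell expands ~ before virsh runs, while inside the virsh prompt the completed string "~/foo2.xml" is passed verbatim to open(2). A minimal sketch of the difference (throwaway HOME and file name, not from the bug):

```shell
set -e
cd "$(mktemp -d)"
export HOME="$PWD"      # throwaway HOME for the demo
touch "$HOME/foo2.xml"  # stand-in for the domain XML

# At the shell prompt, the shell expands ~ before the program runs,
# so the program receives an absolute path:
ls ~/foo2.xml

# Inside the virsh prompt, completion inserts the literal text
# "~/foo2.xml" and virsh hands it straight to open(2); a literal
# tilde path fails the same way outside virsh:
ls '~/foo2.xml' 2>/dev/null || echo "No such file or directory"
```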
[03:06] hi ubuntu server [03:06] what is the name of package of text mode ubuntu installer? like ubiquity but in text mode [03:07] seyacat: "debian-installer" [03:08] kees: ty, i see that [03:09] kees: how can i use debian-installer? [03:13] <_Techie_> !deb [03:13] deb is the Debian package format, also used by Ubuntu. To install .deb files, simply double-click (in Ubuntu) or click (in Kubuntu) on them to start the GDebi utility. [03:13] <_Techie_> !dpkg [03:13] dpkg is the Debian package maintenance system, which together with apt forms the basic Ubuntu package management toolkit. [03:14] <_Techie_> someone help me, my mind's gone blank [03:14] <_Techie_> syntax from installing from a .deb please [03:14] <_Techie_> for* === erichammond1 is now known as erichammond [03:36] _Techie_, dpkg -i package.deb [03:36] see also: dpkg --help; man dpkg [03:42] <14WAAGF8Z> can anybody help me how to exclude a folder from the rsync command. i have a folder called "Search" inside the folder called "charts.windows" that i do not want rsync to copy, the rest of the folders inside the "charts" folder i do want them to be copied: this is what i have: rsync -v -r -d --delete /home/rgotten/charts.windows /home/rgotten/Prueba/Myplasticare/Charts >> /home/rgotten/logs/respaldo-`date +%d%m%Y`.log [03:47] 14WAAGF8Z: Try adding an option like: --exclude charts.windows/Search [03:49] ahoy, just looking at lucid. it seems lighttpd can't bind to port 80. is anyone looking at this? [03:50] <14WAAGF8Z> erichammond: in what part of the command should i add that [03:50] 14WAAGF8Z: Anywhere before the source directory [04:36] i have my server setup with my ssh key so i don't need to login when i access it. however, i used my .pub key...since that key is on lp, should i generate a new key to use for accessing my server? or does it still need the private key on my system anyway? [04:38] oh, when i logged on it asked me to unlock my private key.
so i guess that answers my question =) === JanC_ is now known as JanC [06:07] kirkland, what about lp? [06:07] getting the release from lp? [06:07] i just fired up a test vm i had in vbox, i zipped up the /var/www folder and want to transfer it over my network...last time i ran this vm i remember networking was functioning correctly [06:07] eth0 will not go up, i've checked my interfaces they are good, the e1000 network module is loaded... [06:07] sudo ifup eth0 returns an error about no device [06:08] SIOCSIFADDR: No such device [06:08] eth0: ERROR while getting interfaces flags: No such device [06:08] any suggestions? [06:08] sounds like the virtual adapter isn't connected [06:09]  [06:09] Network [06:09] Adapter 1: [06:09] Intel PRO/1000 MT Desktop (Bridged adapter, eth0) [06:09] it's connected [06:09] no clue then [06:09] hmmm [06:10] where's j he's usually good at bailing me out :) [06:15] is there a command like time that will give me an idea how much iowait time the process had, I have a problem where the first time I execute commands on a VPS it takes like 300 ms to start executing. [06:18] billybigrigger, are you sure that the interface exists in the guest? (i.e. does it show on ifconfig -a ) ? [06:25] it shows up as eth3 [06:25] sudo ifup eth3 [06:25] ignoring unknown interface eth3=eth3 [06:25] jeffesquivel, ? [06:25] any ideas [06:28] billybigrigger, probably your mac address changed, which udev interpreted as you inserting a new interface, thus giving it a new persistent name (that would almost always have been correct in a physical machine) [06:29] yeah, that makes sense [06:29] i generated a new mac address in vbox [06:29] billybigrigger, if that's it...
then you can edit /etc/udev/rules.d/70-persistent-net.rules [06:29] billybigrigger, all you need to do is delete the old entries [06:29] and then replace eth3 for eth0 [06:30] billybigrigger, other solution would be to edit /etc/network/interfaces [06:30] already edited my interfaces [06:30] billybigrigger, and replace eth0 for eth3 and use ifup eth3 just like before [06:30] billybigrigger, ok, nevermind then [06:30] ;) [06:30] jeffesquivel, thanks [06:30] billybigrigger, no problem [06:31] now to finish transferring /var/www and exporting my mysql tables...fun stuff [06:33] anyone knows if there is a way to recover an encrypted partition left over from a previous install [06:35] I accidentally borked my system yesterday and upgraded to 10.04 today. My old ~ is still in another partition, but after mounting it I can't access its contents - they were encrypted. [06:35] gwynddyllyd, do you have the key? [06:35] While this is a good sign for security, it's bad for my local confs... [06:35] jeffesquivel: I know the pass, but if by key you're referring to a file that serves as key then I'm out of luck. [06:36] Also, does this means that I should keep backups of .Private at all times? [06:37] gwynddyllyd, you should always keep backups of your important stuff :-) [06:37] gwynddyllyd, but what you really need to have is the info on that file that serves as key [06:37] jeffesquivel: all my documents were archived somewhere as well, it's just the conf files that I have lost [06:38] gwynddyllyd, IIRC, just after installation you're warned about it... but not everything is lost... [06:38] gwynddyllyd, I think kirkland is THE guy to ask about encrypted home though... [06:39] jeffesquivel: any way to get the contents back, or am I out of luck? [06:39] ,seen kirkland [06:39] (do we have a bot here?) [06:40] jeffesquivel: but even then, thanks a lot. 
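jeffesquivel's persistent-net fix above can be sketched against a sample rules file. The MAC addresses below are invented for illustration; on a real system the file is /etc/udev/rules.d/70-persistent-net.rules and a reboot (or udev reload) follows the edit:

```shell
set -e
RULES=$(mktemp)   # stand-in for /etc/udev/rules.d/70-persistent-net.rules
# After VirtualBox regenerated the MAC, udev kept the stale eth0 entry
# and named the "new" NIC eth3 (example MACs, not from the log):
cat > "$RULES" <<'EOF'
SUBSYSTEM=="net", ATTR{address}=="08:00:27:aa:bb:01", NAME="eth0"
SUBSYSTEM=="net", ATTR{address}=="08:00:27:aa:bb:02", NAME="eth3"
EOF
# Delete the stale entry and hand eth0 back to the current MAC:
sed -i -e '/08:00:27:aa:bb:01/d' -e 's/NAME="eth3"/NAME="eth0"/' "$RULES"
cat "$RULES"
```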
[06:40] gwynddyllyd, ok, lets see if I can help while we wait for his reply [06:41] right now you have a .Private directory on your home with all your encrypted stuff, right? [06:41] also, lucid beta comes with great defaults. My console is setup for 1280x1024 and I have done zilch for it to happen [06:41] on a folder at /media/local/olduser [06:41] my current ~ is a different partition [06:42] entirely different from the one I borked [06:42] gwynddyllyd, ok... do you happen to also have your old ~/.ecryptfs ? [06:42] let me check [06:43] i have some troubles with cups, when trying to modify a printer, I get this error message in the logs: E [02/Apr/2010:13:30:30 +0800] Unable to change permissions of "/etc/cups" - Permission denied [06:43] permissions are already set to 755, and user:group is root:lp [06:43] yes I do [06:43] can someone help ? [06:43] gwynddyllyd, do you have a wrapped-passphrase in there? [06:44] gwynddyllyd, if you do... try this: ecryptfs-unwrap-passphrase wrapped-passphrase [06:44] jeffesquivel: yes, there is [06:44] gwynddyllyd, it will ask you for your password (the one you have) and then the output should be a lot of letters and numbers [06:45] jeffesquivel: done. [06:46] jeffesquivel: numbers and letters at the prompt [06:46] gwynddyllyd, now, you need to use mount -t ecryptfs to mount your encrypted directory unto a decrypted one using the key you just got... [06:46] gwynddyllyd, give me a sec to see if I can find a tutorial for that [06:48] jeffesquivel: i'm reading ecryptfs' man page. Seems ok to follow. [06:48] gwynddyllyd, ok... then my work's done :-) [06:50] jeffesquivel: I really appreciate your help. I was about to wipe the partition and buy a new hd to hold my confs on separate media. [06:50] (which for the latter i might still do) [06:50] gwynddyllyd, you're welcome [06:55] jeffesquivel: to be completely fair I see no point in encrypting configs. As soon as I manage to copy the old files somewhere else I'll wipe the partition. 
[06:56] gwynddyllyd, I guess it depends on the contents of those configs [06:56] gwynddyllyd, for example, pgadmin3 stores passwords for your database in cleartext somewhere in your home [06:57] jeffesquivel: editors, compilers, git, etc [06:57] gwynddyllyd, or you could have a recently used file named "something deeply personal you don't want to share with others" and that name is in a config file... [06:57] gwynddyllyd, but I see your point :-) [06:57] lol [06:57] yes that could happen, but that's not the case. [06:58] gwynddyllyd, I know... I think I just have my $HOME encrypted to mess with big brother... ha hahaha [07:01] jeffesquivel: hahhahahhahah [07:01] * gwynddyllyd thinks about GoDaddy's recent brawl with China [07:02] jeffesquivel: Do you happen to know what kind of cipher Ubuntu uses by default? It's asking me right now. [07:03] gwynddyllyd, hmm... that should be on my fstab [07:04] no, it's not there... but mount does know [07:04] ecryptfs_cipher=aes,ecryptfs_key_bytes=16 [07:05] jeffesquivel: aes did the trick - it was the default [07:05] gwynddyllyd, ok, great [07:06] but apparently .Private on that partition is a symbolic link to /home/.ecryptfs/olduser/.Private [07:07] nevermind, it's on media too, I just have to relink it. [07:09] gwynddyllyd, I was just reading about that! http://dustinkirkland.wordpress.com/2009/08/06/moving-your-encrypted-home-meta-data-out-of-varlibecryptfs/ [07:12] jeffesquivel: Copying that link will be hard. I'm on the console and I haven't used screen to spawn this emacs instance... :( [07:13] let me grab a pen... lol [07:13] ewwwww [07:13] hehe [07:14] gwynddyllyd, ha haha... it may help... but basically says what you already know... that some stuff is really located at /home/.ecryptfs/$USER [07:15] jeffesquivel: yes, I have relinked it and managed to mount it... but the contents are not there! 
[07:15] something is amiss [07:15] * gwynddyllyd scratches head [07:17] gwynddyllyd, did you check the encrypted files are where they should be? [07:18] jeffesquivel, you see nothing on your mounted directory or you see everything but with scrambled names? [07:18] jeffesquivel: I saw the files at /media/local/.ecryptfs/olduser/.Private [07:18] and they're scrambled [07:18] but now I've linked them there [07:18] oh, wait [07:19] maybe I should try mounting at the .ecryptfs directory [07:19] instead of at the 'old location' [07:19] since it's not my home anymore [07:19] no point in linking anything [07:23] jeffesquivel: no luck. Everything's scrambled after mounting with my passphrase. That's sad, and a proof that security works. Not even the owner can access his files if he's careless. [07:23] gwynddyllyd, wait [07:24] gwynddyllyd, if you try to open one of the files, is it readable? [07:24] gwynddyllyd, IIRC, the system does two things: 1) encrypt your data 2) change the name of everything so that you can't get info out of the names [07:24] gwynddyllyd, it does that with two different keys [07:25] but I think you can get the key for the filename stuff from the key you already have... [07:26] jeffesquivel: let me try. The names are definitely hard to read and type. [07:29] jeffesquivel: I complains about not finding a key with description XXXXXXXXXXXX [07:29] gwynddyllyd, hmm... you could try doing ecryptfs-add-passphrase before trying to mount [07:29] s/I/It [07:30] it asks me to when I'm mounting interactively. [07:30] I figured I shouldn't since I don't want to get some global file dirty. [07:31] (it probably holds info on the current ~ too) [07:31] gwynddyllyd, hmm... what you need is your fnek sig... 
now I remember [07:32] gwynddyllyd, that is stored on ~/.ecryptfs/Private.sig [07:33] I think that when you are mounting the encrypted dir it asks you if you want to use fnek (file name encryption key) or something like that [07:33] let me try to remount it [07:33] gwynddyllyd, you have to say yes and use one of the sigs on that file I told you [07:34] * jeffesquivel should write about of this process on his blog for when he needs it again in the future [07:34] wait, scratch that... [07:34] * jeffesquivel should start a blog [07:34] ;) [07:39] jeffesquivel: mounting with the fnek for the passphrase did nothing apparently. It complains that it couldn't find a matching auth token for the damn thing, though it gives me the matching pass as error [07:39] But I'm assuming it will be impossible to retrieve it. [07:40] jeffesquivel: If I manage to make this work I'll write a script to automate the backup of all keys and restore them. [07:40] gwynddyllyd, I know it is possible... I just can't remember exactly how... he he. [07:40] This is such a headache... lol [07:41] And the worst part is that I'm doing just for the challenge now, since I could've reconfigured everything I needed already. [07:41] gwynddyllyd, yep... that's exactly why I want to see it done... ha haha... [07:41] hahhahahha [07:42] you sadistic bastard [07:42] :p [07:44] ok, wait... I [07:44] 'll try this... but that means logging out [07:44] brb [07:54] gwynddyllyd, got it! [07:55] jeffesquivel: how?? [07:55] gwynddyllyd, this is what I did (everything as root): [07:55] gwynddyllyd, ecryptfs-add-passphrase --fnek [07:56] gwynddyllyd, that should insert two sigs into your kernel or something like that... both of them are the ones that you can find on ~/.ecryptfs/Private.sig [07:57] gwynddyllyd, did that work? [07:57] Sort of. They come out differently from what's in Private.sig [07:58] I wonder if I have the passphrase wrong [07:58] gwynddyllyd, hmm... [07:59] gwynddyllyd, well, just in case... 
[07:59] gwynddyllyd, after that, I did mount -t ecryptfs /encrypted/directory /mnt [07:59] it asked me for the passphrase again [07:59] Should I use as input the long one I got from unwrap-passphrase or as plaintext? [08:00] Did it work? [08:00] gwynddyllyd, the long one you got from unwrap-passphrase [08:00] in both cases... for add-passphrase and for mount -t ecryptfs [08:00] jeffesquivel: that might be it then [08:02] yes, it worked for me logged in as another user (and then sudoing a shell) I got to mount this user's home [08:03] The key matched now! [08:03] gwynddyllyd, great [08:04] gwynddyllyd, now you do mount -t ecryptfs /encrypted/ /mnt [08:04] give it the long unwrapped passphrase [08:04] after that, answer aes, 16, y, y [08:05] then it will ask you for one of the sigs... in my case I had to use the second one on Private.sig [08:05] and that was it [08:06] jeffesquivel: IT WORKS! [08:06] gwynddyllyd, :-) [08:07] That's one long passphrase to input by the way [08:07] It's safe to assume that whoever has access to someone else's password still needs a lot of determination to carry it through. [08:08] jeffesquivel: I can't express my gratitude enough. This has been driving me insane. [08:08] gwynddyllyd, you're welcome [08:08] The 'entire passphrase' nailed it. [08:09] Man, my old ~ was very messy. [08:09] gwynddyllyd, ha haha... that tends to happen [08:10] jeffesquivel: I'm making a tarball of everything right now. No more crazy ecryptfs in my backups. [08:11] gwynddyllyd, ha ha ha [08:12] just kidding [08:12] Will you write that blog post, then? It would help a lot of people. [08:12] gwynddyllyd, as soon as I get my blog up again, I'll do it ;-) [08:15] It's almost 5am here. Maybe I should call it a day. [08:15] gwynddyllyd, where's that? [08:17] gwynddyllyd, well... good night (day?) anyways... he he [08:17] jeffesquivel: ipanema, rio de janeiro [08:18] gwynddyllyd, Brasil! cool! [08:18] jeffesquivel: Where you're at? 
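Pulling jeffesquivel's recovery steps together, the whole sequence looks roughly like the script below. The paths are assumptions reconstructed from the conversation, and DRY_RUN=1 (the default here) only prints the privileged commands; a real run needs root and DRY_RUN=0:

```shell
#!/bin/sh
# Sketch of the manual eCryptfs recovery discussed above.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "+ $*" || "$@"; }

OLDHOME=/media/local/olduser                 # old (mounted) home partition
META="$OLDHOME/.ecryptfs"                    # wrapped-passphrase, Private.sig
LOWER=/media/local/.ecryptfs/olduser/.Private  # encrypted lower directory

# 1. Recover the long mount passphrase (prompts for the login password):
run ecryptfs-unwrap-passphrase "$META/wrapped-passphrase"
# 2. Insert it into the kernel keyring, with --fnek so the filename
#    encryption key is loaded too; the two signatures it prints should
#    match the ones in Private.sig (paste the long passphrase, not the
#    login password):
run ecryptfs-add-passphrase --fnek
# 3. Mount, again giving the long passphrase, then answering aes, 16,
#    yes to filename encryption, and the second signature from
#    Private.sig as the fnek sig:
run mount -t ecryptfs "$LOWER" /mnt
```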
[08:19] gwynddyllyd, San José, Costa Rica [08:19] morning, got my hands on a second machine which I wish to turn into a home web server, however, I am wondering should I just cluster both machines together or just use something like mogilefs to distribute the data between them. I just don't want two separate web servers running at the same time even knowing I have also thought of turning the second into a web proxy or where the database will be held. Anyway what do people here [08:19] suggest as I am also googling for information? [08:20] jeffesquivel: Never been to Costa Rica! How is it? The stereotype we have here is even sunnier than Rio. [08:21] gwynddyllyd, well... on summer it is... [08:22] gwynddyllyd, but we have rain about ~8 months of the year [08:23] jeffesquivel: We have only summer here, the entire year. I'm originally not from here, so I tend to be pretty pissed at the temperatures. [08:23] 25-35 C all the time [08:23] On summer it's 30-45 === simplexi1 is now known as simplexio [08:24] gwynddyllyd, yeah... I don't like hot climate either... but we have really nice beaches not so far away (~4 hours) ;-) [08:25] Oh, San José is on the countryside? [08:25] gwynddyllyd, nop... San José is at the center of the country.... it is the capital city [08:25] gwynddyllyd, but CR is really small [08:26] gwynddyllyd, you can get from here (the center) to south the border in about 7 hours [08:26] any suggestions on how I can make better use of the second machine? [08:26] and that's too far by our standards (because CR is "taller" than "wider") [08:26] does someone know a good wiki on how to install xen on karmic ? [08:28] Pirate_Hunter, how about a HA cluster? (and helping the HA team while at it :-) [08:29] maxagaz, just wondering... any special reason why to prefer Xen than KVM? [08:29] jeffesquivel, not a problem once I look into it, if it does what I need... [08:30] Pirate_Hunter, what is that you need? 
[08:30] jeffesquivel, I'm using kvm too, I'd like to try Xen [08:30] jeffesquivel: I might visit one day. I always wanted to go to Vancouver by car. [08:30] gwynddyllyd, wow! that's a long ride! [08:30] Anyway, I'm off to bed. It's really late here. [08:31] jeffesquivel: a jeep and supplies would be enough :) [08:31] ahhh can you just read the long post I posted, you just have to scroll up a bit [08:31] gwynddyllyd, well... if you get to come, be sure to visit Guanacaste's beaches... :-) [08:31] jeffesquivel: I might, I might... [08:32] maxagaz, oh, I see... well, got no idea about Xen (besides that it seems to not be supported on Ubuntu anymore), sorry... [08:33] jeffesquivel, not supported anymore? what about ubuntu cloud computing system provided with karmic server ? [08:33] isn't is using xen ? [08:33] maxagaz, you mean eucalyptus (or Ubuntu Cloud Environment)? [08:33] Pirate_Hunter: I don't have an opinion really. Do whatever you feel like it. It all depends on usage etc. [08:34] jeffesquivel, yes [08:35] maxagaz, I'm not completely sure, but I think it is based on Qemu-KVM... other people on this channel may know more about it [08:35] mazagaz: the xen forums, xen site and http://bderzhavets.wordpress.com/ [08:35] i think kvm is current hot virtualization system, xen isnt sunc with latest kernel releases (and what i have understood, it has some problems) [08:35] Ubuntu is not so good for Xen really. [08:36] Xen seems pretty good, and the only real option for systems without h/w VT i.e. a lot of systems. [08:37] To play with Xen, it might be easier to install Debian Lenny and update to the Debian dom0. 
sherr, I get what you mean, however, I have no clue what I want to do with the second machine, just wish to make good use of the second machine instead of having two web servers running independently, well in the end I'll figure something out [08:37] but don't all the latest systems have it? [08:37] hello i have a problem with an ubuntu hardy under selinux, which channel is the best for this ? [08:38] simplexio: Most - but that still leaves MASSIVE numbers of older systems in the world. Anyway - Xen is good on its own merits as well. [08:38] Pirate_Hunter: you could try run db on another.. if you want to learn something, then you could try install www+db to both and then try some HA stuff [08:38] perhaps there is a more specific ubuntu channel than this one [08:39] Montjoie: if you are running ubuntu server, this is the channel ... [08:39] ok [08:39] i have a strange selinux problem [08:39] http://pastebin.fr/7398 [08:39] after a fresh install [08:39] pam_console_t is certainly not the good context for a cron script [08:40] all system correctly labeled [08:40] Montjoie: No idea, sorry (no selinux). If no one else can help, post in the forum/security. [08:43] simplexio, yeah I been told to make the second hold database, cache etc that way it will balance the load if I ever decide to run some intensive sites, also again the ha cluster will have to read on that [08:47] Pirate_Hunter: it depends what you need or want to try in home servers [08:48] Pirate_Hunter: like i currently play with GIS databases and mapnik + rendering maps [08:53] simplexio, don't think I'll be doing that even knowing it does sound interesting, recently been suffering from extreme boredom and need to think of something to do. The server is intended for hosting some personal pages, a few projects made by me or other people. I will definitely not be using much resources out of the server but do have others that may require some resources specially when doing group projects.
Altogether about 5-10 people max working on the server, doing group or individual projects and hosting a few sites; what I require is stability, redundancy and, well, a working server(s) [08:56] sorry for the long text [09:06] is it possible to mount a kvm image in order to take files from the system installed on it ? [09:13] maxagaz: Yes, you can do that. You can mount the images with -o loop, but I forgot the rest, but there's good documentation on the subject. [09:29] so no people here do selinux on ubuntu ? [09:35] Montjoie: I don't think many use that, no. ubuntu mostly relies on apparmor instead === sebsebseb_ is now known as xylos [11:59] installing the server now [12:00] any problems known about 8.04 [12:04] ? [12:05] * RoyK doesn't know any [12:06] working on installing now [12:06] i found it interesting that even if i choose lamp install i have to install apache2 [12:08] that is, being two years old, it uses older software than newer distros, so you might be missing something [12:08] i don't mind i just appreciate the stability [12:09] but again, for most stuff, you don't really need the latest stuff [12:09] huh?
that should be part of lamp [12:09] just remember to apt-get update && apt-get dist-upgrade when you're done installing [12:09] thanks [12:09] ubuntu 8.04 is stable [12:09] imho [12:09] that is exactly what i was going to ask [12:09] running it on a vpc for de-bugging then will clone onto server [12:10] ok [12:10] local VM for development is nice [12:17] so far it works alright [12:17] using sun virtualbox [12:17] decent training platform [12:17] just putty my way in or just use the screen itself === 20QAALGOK is now known as Rafael [12:53] I have the following command to backup my ubuntu server into a linksys NAS: rsync -v -r -d --delete /home/rgotten/Prueba/Myplasticare /home/rgotten/mnt/ >> /home/rgotten/logs/respaldo-`date +%d%m%Y`.log [12:53] my problem is that if i try to add -t (timestamp) or -va, i receive something like: rsync: failed to set times on "/home/rgotten/mnt/Myplasticare/Charts/charts.windows/R/Ruiz/Ruiz Roberto": Operation not permitted (1) [12:53] please help [12:56] Rafael: does the filesystem support that? [12:56] oh [12:57] Rafael: try to just use rsync -av [12:57] and wtf is -d? [12:57] 'transfer dirs without recursing'?? [12:59] RoyK i tried -va and did not work..you want me to try instead -av? [12:59] the filesystem i believe is ext3 [13:00] Rafael: if -av doesn't work, we'll try to find the error :) [13:00] I use rsync regularly and I haven't seen those problems [13:00] sorry i am a newbie what is the difference between va and av? [13:00] same thing [13:01] -av [13:01] -a -v [13:01] -v -a [13:01] -va [13:01] the order is irrelevant [13:01] when i try it i get permission denied ..i can try it again..what do you want me to show you after i do it? [13:01] do you own the target directory? [13:02] erm [13:02] run with sudo? [13:02] you're backing up to a linksys NAS?
[13:02] I guess that nas has another filesystem that doesn't support it all [13:03] let me run it [13:03] one sec [13:11] RoyK: very interesting, when i run it through rsync -n -va --delete /home/rgotten/Prueba/Myplasticare /home/rgotten/mnt/ |tee -a /home/rgotten/logs/respaldo-`date +%d%m%Y`.log i get sent 174209 bytes received 18825 bytes 8579.29 bytes/sec [13:11] total size is 1611734614 speedup is 8349.49 (DRY RUN) [13:11] You have mail in /var/mail/rgotten [13:11] but when i get the permission error is if i do a script to execute it? [13:11] oh [13:12] Rafael: -n won't try to create the file on your NAS [13:12] so no errors will occur [13:14] I guess the problem is the NAS using some obscure filesystem internally [13:15] so when i do it this is what i get: ./respaldo-archivos.sh rsync: failed to set times on "/home/rgotten/mnt/Myplasticare": Operation not permitted (1) [13:15] rsync: failed to set times on "/home/rgotten/mnt/Myplasticare/Charts": Operation not permitted (1) [13:15] rsync: failed to set times on "/home/rgotten/mnt/Myplasticare/Charts/charts.windows": Operation not permitted (1) [13:15] but if i remove the -a..no problem [13:18] how do you connect to the nas? [13:18] nfs? smb?
http://en.wikipedia.org/wiki/NAS200 says that one only supports CIFS (aka SMB) [13:19] with CIFS/SMB, timestamps can be an issue [13:19] just try rsync -rv [13:20] that will copy the data, but won't care about times or flags [13:20] rgotten@myplasticare:~$ df -T /home/rgotten/mnt /home/rgotten/Prueba/Myplasticare/Charts [13:20] Filesystem Type 1K-blocks Used Available Use% Mounted on [13:20] cifs 976267392 150326416 825940976 16% /home/rgotten/mnt [13:20] /dev/md3 ext3 955253868 3914068 912826224 1% /home [13:21] bingo :) [13:21] on a cell phone internet connection the installation of the ubuntu server takes forever when it comes to configuring apt [13:21] so either get a firmware that supports nfs, or use rsync -r [13:21] remember i am a newbie..i am pulling information from here and there [13:21] what is -r [13:22] facetiousproxy: I really wouldn't recommend installing ubuntu with cell phone connectivity [13:22] Rafael: man rsync [13:49] RoyK: thanks..ok i guess i have toleave with this for now until i find a way of doing nfs .. question..i was looking at the logs and they are 2 mb..does this make sense?
[13:49] sorry to live with it [13:50] Rafael: impossible for me to say - it depends on how many files you're backing up, and keep in mind that you're appending to the log file, so it'll continue growing [13:50] that is, today's log file will grow [13:51] but I wouldn't worry about that, really [13:51] * RoyK has moved most data storage over to opensolaris [13:53] so in the future when is 100 mb..that is some memory that is taking away...is any way to delete automatically logs older than xxx [13:57] you're creating new logs each day, so you should be able to remove all logs easily with something like find /path/to/logs -ctime something [13:57] s/all logs/old logs/ [13:57] see man find === dendrobates is now known as dendro-afk [14:02] Rafael: in essence it's find /path -mtime +10 -exec rm -f {} \; [14:02] +10 means 10 days [14:02] meaning all files older than 10 days [14:33] ttx: thank you for your edits [14:33] hggdh: I'm doing an install to validate the steps right now [14:33] there are a few subtle changes :) [14:34] yeah, I saw. I was still working on them, and I copied the original text as a framework [14:34] but it was far from complete === dendro-afk is now known as dendrobates [15:15] RoyK: i have been testing webmin..will it be the same if i go to the log folder and delete them?? i was trying to find a way of automatically deleting the ones older than let say a month [15:17] I don't use webmin, but for normal log rotation (removal of old logs), logrotate is the thing. with your new-name-per-log, logrotate won't work, but a script running find with -atime etc will work well, and you can schedule it with cron - see man find / man cron [15:17] find /path -mtime +30 -exec rm -f {} \; [15:18] will remove 30 days or more old files [15:18] man 5 crontab is also nice [15:33] kirkland, smoser: hi!
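RoyK's cleanup one-liner behaves like this (temporary directory and artificially aged file; GNU `touch -d` assumed):

```shell
set -e
LOGS=$(mktemp -d)   # stands in for /home/rgotten/logs
touch -d '2000-01-01' "$LOGS/respaldo-01012000.log"   # stale log
touch "$LOGS/respaldo-today.log"                      # fresh log

# Remove anything last modified more than 30 days ago:
find "$LOGS" -type f -mtime +30 -exec rm -f {} \;

ls "$LOGS"   # only the fresh log is left
```

To run it nightly, a crontab entry (via `crontab -e`) along the lines of `0 3 * * * find /home/rgotten/logs -mtime +30 -exec rm -f {} \;` would do; see man 5 crontab for the field layout.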
[15:33] hi [15:33] kirkland, smoser: so I investigated the libvirt 0.7.7 hotplug/scsi behavior and discussed it with upstream [15:34] kirkland, smoser: you can read about it in https://bugzilla.redhat.com/show_bug.cgi?id=578975 [15:34] bugzilla.redhat.com bug 578975 in libvirt "libvirt device_add does not work for scsi hotplug" [Medium,Closed: notabug] [15:34] kirkland, smoser: the bottom line is that scsi hot add is not pci hot add [15:35] kirkland, smoser: as such, with scsi hot add, the guest must rescan the bus [15:35] kirkland, smoser: once the guest does that, the disk is available [15:36] kirkland, smoser: in the past, libvirt created a new scsi controller for every new device. this was a pci add which is why it showed up magically [15:36] suck [15:37] working around the change in a guest is not acceptable [15:37] kirkland, smoser: this is incorrect behavior and now libvirt adds disks to an existing bus, like what would happen in the real world [15:37] yeah [15:37] so, it is not a bug, but yet it is still a regression [15:37] lovely really [15:37] those are my favorite kind of bug fixes :) [15:38] smoser: euca will break in lucid+1 for sure [15:38] smoser: I've not tried, but attaching a virtio disk should still work [15:38] * jdstrand goes to try [15:38] so we just need to force old behavior, adding a scsi controller [15:38] and the comment about "real world"... [15:39] smoser: also, aliguori (upstream qemu) reiterated that we should not be using scsi hot add [15:39] i'd think that it is hugely more likely to add a scsi device to an existing scsi controller [15:39] but if your pci has hot plug, it is entirely possible to add a new scsi controller [15:39] * jdstrand nods [15:40] smoser: you know, we could adjust the xml to add a new controller, but then euca would have to keep track of that state [15:40] so not ideal [15:41] let me see if virtio shows up in the guest... [15:41] it does [15:41] well, it used to [15:41] :) [15:41] with 0.7.7?
[15:41] ah, right [15:41] 0.7.5 [15:42] yeah, all this does not affect 0.7.5, which has the old behavior [15:42] s/0.7.5/all versions prior to 0.7.7/ [15:42] jdstrand, have to think about this... [15:42] well 0.7.5 and lower, 0.7.6 might be affected too (haven't checked) [15:43] it will absolutely break guests if devices don't "just show up" (you're right about 0.7.6, i don't know) [15:43] agreed [15:44] so either hack up libvirt or euca to add a controller, or move to virtio (which I am still testing) [15:45] kirkland, howdy!! [15:47] smoser: virtio hot add still works [15:47] [15:47] [15:48] smoser: use that xml ^ and same virsh attach-device syntax and it all works [15:48] smoser: of course, kirkland mentioned euco is resistant to vd* devices [15:48] s/euco/euca/ [15:49] RoAkSoAx: howdy [15:49] jdstrand, it's something we're going to be forced to deal with (moving away from scsi) [15:49] kirkland, http://paste.ubuntu.com/408130/ [15:50] but not something that we can do now. [15:50] again, it's guest breaking [15:50] RoAkSoAx: \o/ [15:50] basically either that, or scsi hot/add versus pci hot/add is an api change [15:50] RoAkSoAx: well done! [15:50] from the guest's point of view [15:50] kirkland, i used that before, I just woke up and remembered... lol [15:50] RoAkSoAx: that's awesome, dude [15:51] RoAkSoAx: okay, so let's do this ... [15:51] smoser: yes, when 'm' opens euca will break [15:51] yeah. and we're going to have to leave scsi [15:51] *have to* [15:51] unless one of us wants to fix it [15:51] RoAkSoAx: throw that into a testdrive function, something like testdrive.ubuntu.dev() [15:51] smoser: well, or do the accounting to keep adding controllers [15:51] * smoser doesn't raise hand [15:52] smoser: I looked at the patches, they aren't trivial [15:52] RoAkSoAx: oh, better yet ...
[15:52] we have to leave scsi because its broken and "wont fix" from upstream [15:52] in 'm' [15:52] RoAkSoAx: yeah, well, whatever [15:52] smoser: the accounting would have to be within euca, not libvirt [15:52] RoAkSoAx: just make it so that it's a one liner in the config file [15:53] smoser: so if they *really* want it, they would need to add it themselves [15:53] RoAkSoAx: and make sure that it handles no-networking gracefully, so that you can still use testdrive against your local cache on the plane [15:53] kirkland, ok [15:53] jdstrand, yeah. so what are you thinking for this ? [15:53] smoser: in 0.7.5 libvirt did the accounting on the number of new controllers. that all went away with the new scsi impl [15:54] smoser: I'm sure I could code up something to get it to work again, but it is a bit late in the game [15:55] smoser: I think 0.7.7 needs to be deferred [15:55] kirkland, ok will do that for current release, though for modularization I'll handle that differently :) [15:55] smoser: I will see if I can backport my blueprint items to 0.7.5-- but I'm pretty sure one needs to be postponed. I might be able to get the other two to work, with some 'cunning cleverness' [15:56] too bad really, the patches were pretty small against 0.7.7 [15:56] oh well [15:56] :-( [15:57] RoAkSoAx: sure [15:57] kirkland: ^ [15:58] jdstrand: did we hear back from upstream on the bug you filed? [15:58] jdstrand: i didn't get any email about it [15:58] kirkland: see backscroll [15:58] * kirkland reads [15:58] jdstrand: oh, sorry, i missed all of that [15:58] jdstrand: reading now [16:01] jdstrand: smoser: okay, i'm caught up now [16:01] jdstrand: smoser: one other idea ... what about USB? [16:01] jdstrand: smoser: in terms of "real world", usb flash disks are plugged and unplugged all the time [16:01] its an idea [16:01] jdstrand: smoser: they show up as sdb, sdc, etc. 
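(The disk XML jdstrand pasted at 15:47 was eaten by the logger and is not recoverable. For reference only, a virtio disk element passed to virsh attach-device generally has this shape; the image path and the vdb target below are placeholders, not the XML from the log:)

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/extra.img'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```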
[16:02] jdstrand: smoser: seems like eucalyptus could be taught to add "usb" disks to guests [16:02] kirkland: that should work fine. let me try [16:02] so, fwiw, the 'sdb' isn't as big of a deal [16:02] because there is already a change required for that [16:02] i'm not sure about qemu's performance of usb disks, versus scsi versus ide [16:02] in ec2, you attach a volume as xvdX [16:02] but i do know that all of those suck compared to virtio [16:02] smoser: really? then going virtio would be recommended [16:02] in euca, its sdX [16:03] yeah... i just realized that. [16:03] jdstrand: smoser: *we* are all in agreement about virtio [16:03] smoser: or do I not understand your point? [16:03] jdstrand: smoser: it's nurmi that we have to convince to take that change [16:03] jdstrand, you do understand it. [16:03] or we need to roll our own for UEC [16:03] k [16:03] kirkland, well, yes and no. [16:03] * kirkland smacks smoser for being cryptic [16:03] for root device, its painful and api breaking between ec2 [16:03] smoser: speak your mind dude :-) [16:04] there will be existing fstab entries that have '/dev/sda1 /' [16:04] ah, "root device should be on sda" ... ? [16:04] and any image that moves across will break if you change that (by moving to virtio) [16:04] for hot plug (euca-add-volume, euca-detach-volume) there is already an api change.
you cannot specify 'xvdX' as you do in ec2, because xvdX is XEN [16:05] kirkland, smoser: so the decision comes down to: do we stick with the old 0.7.5, lose the bug fixes but maintain deprecated scsi behavior and not touch euca for an LTS, or move to the new 0.7.7 with its fixes, but new scsi behavior and updating euca [16:05] kirkland, smoser: I can't answer that question [16:05] I defer to you guys [16:05] jdstrand: here's my opinion [16:05] shoot [16:05] i'm blowing smoke [16:05] you can do sdX [16:05] jdstrand: there are a bunch of bug fixes in the changelog, as you and i reviewed [16:05] in xen [16:06] or ec2 [16:06] jdstrand: but there's about an equivalent number for any libvirt release [16:06] however, i still do think that , while a PITA, changing this portion is less concerning than root= [16:06] jdstrand: while the "features" section is small, the "improvements" section is *huge* and gives me pause [16:06] and would also make data on the EBS volumes safer [16:06] which is important [16:06] jdstrand: https://bugs.edge.launchpad.net/ubuntu/+source/libvirt/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&field.status%3Alist=INCOMPLETE_WITH_RESPONSE&field.status%3Alist=INCOMPLETE_WITHOUT_RESPONSE&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=FIXCOMMITTED&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_ [16:06] supervisor=&field.bug_commenter=&field.subscriber=&field.tag=fixed-in-0.7.7&field.tags_combinator=ANY&field.status_upstream-empty-marker=1&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on [16:06] yoicks [16:07] yikees [16:07] jdstrand: http://tinyurl.com/y9abpkr [16:07] * RoyK hands kirkland a link to http://shorturl.com [16:07] the one thing that *does* change from ec2 -> euca is 
that in ec2, you can say "attach this as /dev/sdi" and, because its xen, the hypervisor can definitively say "yes" or "no" [16:07] or tinyurl, perhaps [16:07] in kvm, the hypervisor can't really promise that. [16:07] jdstrand: there are currently two bugs tagged fixed-in-0.7.7 [16:08] (from us, yes, but most LTS users probably aren't running lucid yet...) [16:08] jdstrand: one high, one medium, but neither of which is milestoned as a blocker for release [16:08] jdstrand: fair enough [16:08] so, its very late in the game. [16:08] (meaning we don't know how many of the fixes we'd miss would affect users) [16:08] but if this were even 1 month ago, i would heavily be suggesting moving [16:08] to attachment via virtio [16:08] jdstrand: my vote is for 0.7.5, and we cherry pick the fixes we can from git [16:09] kirkland, your link is bogus [16:09] as 523148 is fixed in our 0.7.5 [16:09] jdstrand: note that i added a kvm and a libvirt to hardy-backports about a year after hardy's release (ie, jaunty's kvm and libvirt) [16:09] smoser: the tinyurl one? [16:09] yes. [16:09] kirkland: you won't be able to do a kvm/libvirt backport for lucid without breaking euca (or backporting it too) [16:10] wait. [16:10] i dont think that is fixed in 0.7.7 [16:10] jdstrand: ah, good point [16:10] jdstrand, that will always possibly be the case [16:10] jdstrand: okay, well, honestly, we can't break euca at this point [16:11] there is no promise from anywhere that there are no massively painful changes in 0.7.8 that would prohibit easy backport either.
[16:11] there's no reason to make up reasons to change [16:11] there are very good reasons [16:11] kirkland, smoser: if you recall, I had expressed the opinion in backscroll to stay with 0.7.5 [16:11] smoser: true [16:11] jdstrand: right-o [16:11] we just *know* it will break now [16:11] jdstrand: i'm coming to that conclusion, too [16:11] the primary one, in my opinion, is that EBS (the functionality exposed by attach/detach) is supposed to promise that your data is SAFE [16:11] and as we know, with buggy scsi, that is probably not the case. [16:12] kirkland: smoser talking about virtio and you usb got me talking again [16:12] jdstrand: just explaining my logic getting there, so that it's a thoroughly considered opinion [16:12] jdstrand: yeah, so USB ... [16:12] i say no to usb [16:12] i'd suspect performance and other issues. [16:12] smoser: why's that? [16:12] I would tend to agree-- it is a hack [16:12] you'd have a hard sell. [16:13] smoser: jdstrand: alrighty ... how about this .... [16:13] i'm open to a tested virtio solution because it is the "accepted solution" for hotplug [16:13] either fix euca to move to the upstream recommended virtio or stay with deprecated scsi [16:13] smoser: jdstrand: if in Lucid+1, euca insists that they can't do virtio, but libvirt has deprecated the pci_add scsi call, USB will be their only option for adding scsi disks [16:13] smoser: I've got packages in my ppa and xml in backscroll for virtio [16:14] can we test that? [16:14] smoser: I don't have the testing ability to thoroughly test euca [16:14] (I don't have a cloud, the experience with euca, etc, etc) [16:14] smoser: just add jdstrand's ppa, and upgrade libvirt-bin and libvirt0 on your NC's [16:15] I'm betting you will see a performance gain over scsi as well [16:15] kirkland, but you have to change euca [16:15] right ? [16:15] smoser: yeah [16:15] to make it say "add a virtio device" [16:15] thats what we want to test. [16:15] and see how bad it is.
[16:15] jdstrand: oh, the performance gain is huge [16:16] between performance and stability/integrity, it may be something to consider regardless (I don't know) [16:16] smoser: one sec, let me pastebin a crude grep [16:16] jdstrand: we've been through this with nurmi several times now [16:16] kirkland: you have the backscroll xml, right? [16:16] jdstrand: we're going to insist on it for lucid+1 [16:16] * jdstrand nods [16:16] i am using this command and get the error: find /home/rgotten/logs/ -mtime +30 -exec rm -f {} \ [16:16] error: find: missing argument to `-exec' [16:17] Rafael: terminate with \; [16:17] not just \ [16:18] smoser: http://pastebin.ubuntu.com/408145/ [16:18] kirkland, smoser: so, I am going to work on my libvirt bp stuff. let me know how to proceed after your testing. I don't have much more to offer atm [16:18] smoser: jdstrand: a gross grep shows that those are the places that need changing in the euca code [16:18] jdstrand: i do have the xml in backscroll, yes [16:20] kirkland, jdstrand i think, for maintainability, that this is probably something worth testing [16:20] kirkland: is that a grep on "name='phy'" (I think that might show more, if things were hiding) [16:21] jdstrand: http://pastebin.ubuntu.com/408149/ [16:21] (but it could potentially miss stuff-- I've not looked at the code) [16:21] kirkland: seems a reasonable place to start [16:22] jdstrand: smoser: i don't mind trying this, but i'm not planning on hacking on it today (today's a swap day for me) [16:22] yeah, it's monday before i could really test this. [16:23] kirkland: uhm, they are using 1024 as a hard limit there.
please tell me they are checking the lengths of source and target dev [16:23] otherwise we've got an overflow [16:23] sorry for slipping on my security hat :) [16:24] s/overflow/potential overflow/ [16:26] jdstrand: char xml [1024]; [16:27] jdstrand: don't see any other sanity check :-( [16:28] kees: [16:28] heh [16:28] jeebus, yeah, they should be using PATH_MAX all over in here, all of it hardcoded to 1024 :-/ [16:30] kirkland: well, for it to mean something, local_iscsi_dev, localDevReal and/or remoteDev need to be attacker controlled [16:30] (from a security POV, not coding style) [16:31] I have no idea how that stuff is fed into euca, but you might want to bring it up to upstream and let them determine that (since they presumably know the code better than us, certainly me) [16:31] kirkland: ^ [16:31] jdstrand: yeah, i'll file a bug upstream [16:31] jdstrand: the path_max thing they should definitely solve [16:32] jdstrand: there's a bunch of 1024 char full paths in there [16:32] * jdstrand nods [16:34] hey guys im having an interesting issue with vi over ssh [16:34] for some reason when im working on vi it seems to drop my connection entirely to the server while im sshed into the remote machine. [16:34] the problem is [16:35] it only occurs when using vi [16:35] anyone have any ideas as to the issue [16:35] eagles0513875: using a non-standard terminal and arrow keys? [16:36] eagles0513875: if just using hjkl instead of arrow keys, can you reproduce the problem?
[16:36] im using a windows machine with xshell to ssh into the machine [16:36] also, try to enable keepalives in ssh - it might help [16:36] let me try with hjkl and see if it happens [16:36] RoAkSoAx: already is enabled [16:36] * RoyK guesses that was meant for RoyK [16:37] ya [16:37] sry [16:37] damn failure to tab complete [16:37] * eagles0513875 fires up xshell and connects [16:46] RoyK: also it happens when typing up a clean vi file [16:47] eagles0513875: echo $TERM [16:50] RoyK: output is xterm [16:53] eagles0513875: weird - working with vim over an ssh link is something I do every day [16:53] RoyK: 64bit karmic or 32 [16:53] kirkland, ok I think this is how i'll go: 1. Function on testdrive to check local cache. 2. If no cache, get current dev release from LP. 3. If cache, check if expired. If it is, update it from LP. 4. If cache not expired, do nothing. 5. Config file will always load the release from the cache [16:54] RoAkSoAx: sry for ping before btw [16:54] eagles0513875, no prob ;) [16:54] eagles0513875: karmic too - 32/64 shouldn't make a difference [16:54] RoAkSoAx: perfect! [16:54] RoAkSoAx: give the cache, say, a 1-day expiration [16:56] RoyK: will keep working with it and see if it happens [16:57] kirkland, on every dev cycle, when do they release the first ISO for testing? As soon as they open the dev cycle? [16:58] RoAkSoAx: a few days (or weeks) later [16:58] kirkland, so we could extend the cache expiration then [16:58] RoAkSoAx: it could be 7 days [16:59] RoAkSoAx: actually, yeah, let's just make it 7 days [16:59] kirkland, yeah that's what I was thinking, because either way, it will check for a new release, and if there is a new release, it will update the cache even though it has not expired [17:00] anyways, I can manage that later [17:01] RoyK: did it with no error message but logs are still there since last year [17:02] RoAkSoAx: cool [17:02] Rafael: huh - try without -exec ...
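RoAkSoAx's five steps with the 7-day expiration kirkland suggests could be sketched roughly like this; the function names and the fetch step are hypothetical, not testdrive's actual code:

```shell
#!/bin/sh
# cache_fresh FILE DAYS: true if FILE exists and was modified
# within the last DAYS days (-mtime -N matches "younger than N days").
cache_fresh() {
    [ -n "$(find "$1" -maxdepth 0 -mtime -"$2" 2>/dev/null)" ]
}

# sync_release CACHE: steps 1-4 -- use the cached dev-release name if
# fresh, otherwise (re)fetch it; the "fetch" here is a stand-in for
# asking Launchpad.
sync_release() {
    cache="$1"
    if cache_fresh "$cache" 7; then
        echo "using cached release: $(cat "$cache")"
    else
        # hypothetical fetch -- the real source would be Launchpad
        echo "lucid" > "$cache"
        echo "refreshed cache: $(cat "$cache")"
    fi
}
```

Step 5 (the config file always reading the release from the cache) then never needs the network, which also covers the no-networking-on-the-plane case kirkland raises earlier.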
[17:03] that'll list the files matching -atime or -ctime or whatever you were using [17:05] RoyK: everything seems fine now [17:07] eagles0513875: what did you do? [17:08] RoyK: nothing [17:08] i was doing what i normally do when i encounter the problem [17:08] which was type up some c++ examples im doing [17:08] it used to freeze while in the middle of typing them but it hasnt [17:08] * RoyK is a magick wizzard beaming out magick rayz to fix vim problemz [17:09] haha [17:09] RoyK: was just thinking that it was me talking to u that fixed it [17:11] RoyK: look what i am getting now: rgotten@myplasticare:~$ find /home/rgotten/logs/ -mtime +30 rm -f {} \; [17:11] find: paths must precede expression: rm [17:11] Usage: find [-H] [-L] [-P] [-Olevel] [-D help|tree|search|stat|rates|opt|exec] [path...] [expression] [17:16] <_ruben> Rafael: he meant you should remove anything following -exec as well [17:16] <_ruben> so stop at '+30' [17:16] thanks [17:18] good, it worked [17:21] is 2.6.33 or 2.6.32 possible on karmic server? [17:21] (is there a PPA anyone knows about for that, is what i'm really getting at) [17:22] <_ruben> spowers: http://kernel.ubuntu.com/~kernel-ppa/mainline/?C=M;O=D [17:22] <_ruben> these are vanilla kernels, no ubuntu patches in 'em [17:22] that will be fine for my purposes [17:23] does the ubuntu server project patch the kernel for anything important?
[17:23] <_ruben> most patches are backports from newer versions afaik, dont know of any -server specific patches [17:24] ok, neat [17:24] <_ruben> and patches that are expected to end up in later versions [17:24] i just want to test btrfs [17:25] spowers: as a rule of thumb, patches are sent upstream, accepted (hopefully), and come back downstream for packaging [17:25] spowers: some "quirks" may be added ubuntu-side for weird stuff [17:25] <_ruben> been meaning to get my hands on some spare ssd drives and play with stuff like btrfs and nilfs, wont happen anytime soon though [17:25] so vendor patches, at least for ubuntu server, are carefully limited to bugfixes and that sort of thing? [17:26] apart from "quirks" [17:26] New bug: #554063 in amavisd-new (main) "Calling dspam from amavisd-new fails" [Undecided,New] https://launchpad.net/bugs/554063 [17:26] spowers: what is a vendor patch? [17:26] i'm not sure on the preferred nomenclature [17:27] spowers: use plain terms then [17:28] what i meant was to clarify if -server kernels were patched conservatively, and so the patches usually make their way upstream easily [17:29] spowers: all kernels are patched conservatively, nothing is thrown in just for shits and giggles [17:31] RoyK: i have a script called respaldo-archivos.sh that runs the rsync.
and this is executed with cron, for the path: find /path -mtime +10 -exec rm -f {} \; this should go on the .sh file or in the cron directly [17:35] Rafael: doesn't really matter so long as cron eats it [17:35] Rafael: I usually use a cron job running a script in case I need to add more logic later [17:37] RoyK: so this is what i have right now on the script: #!/bin/bash [17:37] rsync -v -r -d --delete /home/rgotten/Prueba/Myplasticare /home/rgotten/mnt/ >> /home/rgotten/logs/respaldo-`date +%d%m%Y`.log [17:37] rsync -va --delete --exclude=/charts.windows/Search /home/rgotten/charts.windows /home/rgotten/Prueba/Myplasticare/Charts >> /home/rgotten/logs/respaldo-`date +%d%m%Y`.log [17:37] and i will just add a line with : #!/bin/bash [17:37] rsync -v -r -d --delete /home/rgotten/Prueba/Myplasticare /home/rgotten/mnt/ >> /home/rgotten/logs/respaldo-`date +%d%m%Y`.log [17:37] rsync -va --delete --exclude=/charts.windows/Search /home/rgotten/charts.windows /home/rgotten/Prueba/Myplasticare/Charts >> /home/rgotten/logs/respaldo-`date +%d%m%Y`.log [17:37] sorry [17:38] i will just add a line with find /path -mtime +10 -exec rm -f {} \; [17:52] kees: hi! [17:53] kees: I'm currently investigating bug 292971 [17:53] Launchpad bug 292971 in libnss-ldap "nscd leaking memory using libnss-ldap" [High,Triaged] https://launchpad.net/bugs/292971 [17:54] kees: and I currently get the following valgrind log: http://paste.ubuntu.com/408201/ [17:54] kees: I'm not sure how to interpret that log [18:06] mathiaz: hello! [18:07] mathiaz: reading... [18:07] mathiaz: valgrind is saying that the ldap_create function is leaking 68 bytes of memory via calls to ber_memcalloc_x [18:08] mathiaz: I'd install libnss-ldap-dbgsym to get a better traceback, though [18:08] kees: how do I build libnss-ldap-dbgsym [18:08] kees: ? [18:08] mathiaz: just install it from pitti's ddeb repository. let me find the docs...
[18:10] mathiaz: https://wiki.ubuntu.com/DebuggingProgramCrash [18:10] kees: thanks [18:11] mathiaz: does the patch in comment 1 work? [18:11] New bug: #554094 in samba (main) "Windows clients cannot connect unless Samba manually restarted" [Undecided,New] https://launchpad.net/bugs/554094 [18:11] kees: well - that's what I'm trying to figure out [18:11] kees: I'm trying to reproduce the issue [18:11] kees: and figure out if the patch fixes the problem [18:11] ah-ha, excellent. [18:12] kees: however my C knowledge/libnss is not strong enough to figure out whether the patch is supposed to work [18:12] kees: so any help in reviewing the patch is welcome [18:13] kees: I think I'll be able to confirm by experimenting that the leak goes away - whether the patch is actually the correct way is out of my area of expertise [18:14] heh [18:14] mathiaz: well, a visual inspection of the patch looks good to me. i.e. the implied logic makes sense, assuming it works. :) [18:17] kees: so I'll confirm it works and that should be enough to confirm the fix [18:18] mathiaz: yeah. if you develop a test, please consider adding it to the qa-regression-testing bzr tree [18:19] kees: hm - I'm not sure how I can develop an automated test for a memory leak [18:19] kees: the actual test is: look at the valgrind log [18:19] kees: it shouldn't have the libber leak [18:20] mathiaz: sounds easy enough: run valgrind, check for "bad" text in output. 
:) [18:21] kees: :) [18:42] kees: http://people.canonical.com/~mathiaz/valgrind.nscd.exit.15674 [18:42] kees: ^^ this is with the current version of libnss-ldap with the memory leak [18:42] kees: http://people.canonical.com/~mathiaz/valgrind.nscd.exit.15949 [18:42] kees: ^^ this is with the patch applied [19:03] kees: http://people.canonical.com/~mathiaz/valgrind.nscd.exit.16237 [19:04] kees: ^^ this is the correct valgrind log for the patched version of libnss-ldap [19:04] kees: it seems that the library so numbers were bumped for the new patched version [19:05] hi there, I am having problems with samba and DHCP, it keeps reloading the samba.conf file on every DHCP renew, thing is, my DHCP just uses mac addresses to assign a static IP [19:17] addisonj: are your clients also losing connection to the samba server? [19:19] server isn't in production yet, so I can't really say yet [19:20] but if samba is reloaded won't it kill everything? [19:20] Then how do you know? The messages on the console? They are irritating, but harmless. [19:21] hm? so you are saying a samba reload every 5 minutes is not going to cause problems? [19:21] Reloading is not restarting, so no. [19:22] I think there is a bug somewhere about the messages. If you are also losing connection, go here: bug 435061 [19:22] Launchpad bug 435061 in dhcpcd "smbd loses connection on DHCP lease" [Medium,In progress] https://launchpad.net/bugs/435061 [19:22] mathiaz: why was it bumped? [19:23] it shouldn't change the ABI [19:23] kees: that's a very good question [19:23] kees: shouldn't the ABI be bumped manually? [19:23] kees: I've just applied the patch and that's all [19:26] mathiaz: usually -- I guess it depends on the build system [19:27] well, it seems like a lot of the memory leaks are gone, but I see this added: +==PID== Warning: invalid file descriptor -1 in syscall close() [19:27] +==PID== at 0x587F99D: ??? (syscall-template.S:82) [19:27] +==PID== by 0x627EC6A: ???
(in /usr/lib/liblber-2.4.so.2.5.4) [19:27] +==PID== by 0x627EB3A: ber_int_sb_close (in /usr/lib/liblber-2.4.so.2.5.4) [19:27] +==PID== by 0x627F783: ber_sockbuf_free (in /usr/lib/liblber-2.4.so.2.5.4) [19:27] +==PID== by 0x6046747: ldap_ld_free (in /usr/lib/libldap_r-2.4.so.2.5.4) [19:27] looks like the atexit handler needs to be more careful? [19:27] mathiaz: btw, my comparison was: diff -u <(perl -pe 's/^(==|--)[0-9]+(==|--)/${1}PID${2}/' valgrind.nscd.exit.15674) <(perl -pe 's/^(==|--)[0-9]+(==|--)/${1}PID${2}/' valgrind.nscd.exit.16237) [19:28] * mathiaz 's brain just exploded [19:28] the good news from the valgrind is: [19:28] ==PID== LEAK SUMMARY: [19:28] -==PID== definitely lost: 632 bytes in 1 blocks [19:28] -==PID== indirectly lost: 32,904 bytes in 6 blocks [19:28] +==PID== definitely lost: 0 bytes in 0 blocks [19:28] +==PID== indirectly lost: 0 bytes in 0 blocks [19:28] kees: yeah - that's the key part IMO [19:29] mathiaz: has upstream commented on the patch at all? [19:29] kees: not yet :/ [19:32] mathiaz: perhaps ask the patch submitter about the atexit do_close issue, but other than that, ship it! ;) [19:41] hi [19:44] i am trying to setup kvm with bridged networking. br0 on the host is working. i pass --bridge br0 to vm-builder. yet, the guest still seems to use nat. what am i missing? [19:57] RoyK: how can i get an email notification when a log has an error so i do not have to be checking them... everything else is working beautifully [19:58] <_ruben> Rafael: tons of tools for that: logcheck, sec, .... [20:04] hi - could someone help me with a quick postfix config issue? [20:05] !ask | DrDeth [20:05] DrDeth: Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply.
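kees's suggested automated leak check ("run valgrind, check for 'bad' text in output") plus his PID-normalization trick could look something like this sketch; the grep pattern assumes valgrind's standard LEAK SUMMARY wording:

```shell
#!/bin/sh
# leak_free LOGFILE: succeed iff the valgrind LEAK SUMMARY reports
# nothing definitely lost.
leak_free() {
    # normalize ==PID== prefixes so logs from different runs compare
    # equal (the same idea as kees's perl one-liner above)
    sed 's/^==[0-9]*==/==PID==/' "$1" \
        | grep -q 'definitely lost: 0 bytes in 0 blocks'
}
```

With logs normalized this way, a plain diff -u of before/after runs shows only real differences, which is how kees produced the comparison in the backscroll.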
:-) [20:06] I've set up postfix with virtual domains, and dovecot for IMAP - I can receive mail on the virtual domains, but can't send mail out [20:06] I get the dreaded Relay access denied [20:08] What's the exact error? [20:09] Also please pastebin the output of postconf -n [20:09] NOQUEUE: reject: RCPT from deth-serv[85.10.192.78]: 554 5.7.1 : Relay access denied; [20:10] http://pastebin.com/SmDx16VZ [20:12] DrDeth: I don't see where you've authorized any clients to relay in your configuration? [20:12] Just mynetworks = 127.0.0.0/8 [20:13] I don't see any smtp auth. [20:13] ok - if the clients use authentication to their virtual domain, how would I set that up ? [20:14] I'm pretty sure the server guide covers this reasonably well. [20:15] Ok, I'll go have a look. I was following this community guide: https://help.ubuntu.com/community/PostfixVirtualMailBoxClamSmtpHowto [20:15] A lot of the community docs are really out of date. [20:15] i noticed :( [20:16] I'd recommend https://help.ubuntu.com/9.10/serverguide/C/postfix.html#postfix-smtp-authentication (or the appropriate one for the release you're running) [20:16] The bits in that about integrating clamsmtp are probably OK as that hasn't really changed. [20:16] excellent - thanks ScottK === kklimonda is now known as kklimonda|G1 [20:23] I'm having problems with libvirt 0.7.7 on Lucid that very much match this bug that is claimed closed: http://www.mail-archive.com/kvm@vger.kernel.org/msg21669.html [20:24] The upshot is that suspend/resume looks like it's working from libvirt's perspective, but the guest VM is hung. [20:24] Anyone seen this? [20:24] kindjal: We're going to stay with 0.7.5 for Lucid. [20:24] I just updated from 0.7.5 two seconds ago to just see if this happens to have been fixed... [20:24] it exists on 0.7.5 too. [20:25] Perhaps it's a guest kernel thing? What kernels should dom0 and domU be using?
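(For DrDeth's relay question above: the fix ScottK points at is SMTP AUTH. A hypothetical main.cf fragment in the shape the server guide describes, using dovecot as the SASL backend -- the values are illustrative, not taken from DrDeth's pastebin:)

```
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
smtpd_recipient_restrictions = permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination
```

With permit_sasl_authenticated in place, clients that log in with their virtual-domain credentials may relay, while everyone else still hits reject_unauth_destination.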
[20:25] my dom0 is just a default Lucid install, and the guest was built from ubuntu-vm-builder [20:26] You've exhausted my knowledge on the topic [20:26] d'oh [20:26] I can't figure out why I'd be special here, I'm just trying to get started with libvirt [20:35] Will we be able to use virsh reboot in Lucid with 0.7.5? And shutdown the host? [20:37] alvin: That's this bug? https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/368962 [20:37] Launchpad bug 368962 in libvirt "Can't reboot kvm virtual machines using virsh" [High,Confirmed] [20:37] Yes [20:38] I'm hoping for that one too. I see recent traffic on it, but don't know the status. [20:38] Does anyone have suspend/resume working? [20:44] lo everyone [20:46] There was mail today to (IIRC) ubuntu-devel on it. [21:03] i've got a computer with two network interfaces connected to unique networks, both with a default gateway, and therefore have a handful of "ip route ..." commands that need to be run at boot so things behave as desired. where do these commands belong so this happens automatically at boot? [21:03] in /etc/network/interfaces [21:03] oh? i'll have another look at the man page. i didn't know ip route commands were appropriate in that file. [21:04] Like this : up route add -net x.x.x.x netmask x.x.x.x gw x.x.x.x dev $IFACE [21:04] you put that under: iface ethx inet static [21:05] and to beautify: down route del -net x.x.x.x netmask x.x.x.x gw x.x.x.x dev $IFACE [21:05] oh - it's other commands too though - like ip route add default via 10.3.60.1 table vlan-3; ip rule add from 10.3.3.1 table vlan-3; and so on. [21:06] I have no experience with those, but I suspect they can go in the same file. After all, you don't want them at boot, but at ifup. [21:07] It's better than putting stuff in rc.local, in case you need those routes earlier [21:08] Might I suggest the excellent Debian documentation on the matter? [21:08] yeah, i'm not a big fan of rc.local. :) [21:08] ah, good idea. thanks, i'll have a look.
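(Putting alvin's up/down lines together, a complete stanza for lunaphyte_'s case might look like the sketch below; the interface name, addresses, and the vlan-3 table are placeholders built from the examples in the conversation:)

```
# /etc/network/interfaces
iface eth1 inet static
    address 10.3.3.1
    netmask 255.255.255.0
    up route add -net 10.3.60.0 netmask 255.255.255.0 gw 10.3.60.1 dev $IFACE
    down route del -net 10.3.60.0 netmask 255.255.255.0 gw 10.3.60.1 dev $IFACE
    # 'ip' commands work here too (they run at ifup/ifdown, not boot)
    up ip route add default via 10.3.60.1 table vlan-3
    up ip rule add from 10.3.3.1 table vlan-3
    # (a named table like vlan-3 must first be declared in
    #  /etc/iproute2/rt_tables)
```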
[21:09] right, i shouldn't have said boot. that's a bit ambiguous. at interface up/down. [21:18] hm, kvm insists on configuring the network as 10.0.2.15 -- why [21:18] ? [21:23] hi everyone, anyone here using nfs4 in karmic? I have some weird problems with it [21:23] !anyone [21:23] A large amount of the first questions asked in this channel start with "Does anyone/anybody..." Why not ask your next question (the real one) and find out? [21:23] Well, me :-) [21:25] ok, the problem is: I can mount the exported directory, but when I try ls /exported-dir in the client, this takes more than 1 minute to show anything [21:26] I tried to do a strace ls /exported-dir and everything worked fine, without any delay [21:27] when I tried to mount the same directory in a notebook with an arch linux install, the problem disappears. [21:30] whoa [21:30] Is df also completing slower? [21:31] nope [21:31] just ls or any direct access to any file/directory in the exported directory [21:32] Hmm, weird. I'm not seeing this behaviour. Have you tried (wild guess) disabling apparmor? [21:32] and checked your domain? [21:33] You're not using autofs, right? Just mount -t nfs4 jadajada... [21:33] yes, not using autofs [21:33] checked my domain? [21:35] how can I disable apparmor? [21:36] I'm making the tests in terminal mode (because it is impossible to use gnome in this way. the system hangs for 1 minute every few seconds) [21:38] maybe it's better to abandon nfs. it has just given me headaches. first it was sqlite not working with nfs3, now this problem. [21:40] the sad thing is: in this network (a lan in a non-profit in a brazilian favela), the same exported directory via samba works completely fine on windows clients. [21:41] walrus, IIRC, why not use smb to mount the directory on linux too? [21:42] jeffesquivel: because I don't know how to give write permission to the users in the nis domain. [21:43] walrus, I see...
[21:43] I can create a file with a normal user in his home directory, but the file has root ownership, and then I can't edit the file anymore. [21:44] walrus, I guess you're exporting /home and then mounting it somewhere else, right ? [21:45] jeffesquivel, yes, exactly [21:46] walrus, IIRC, what we did (but we weren't using NIS, we were with LDAP) was to use pam_mount [21:46] walrus, do you have several users logged in at the same time on your linux machines? or just one? [21:46] jeffesquivel: at the moment just one. === dendrobates is now known as dendro-afk [21:47] I'm trying to mount an nfs share between 2 nfs systems. I used to do mount 192.168.0.2:/public public/ - but this doesn't appear to be working. any tips ? [21:47] jeffesquivel: you are using pam_mount to mount the samba share automatically when a user logs on? [21:47] walrus, well, that was some time ago (several years, actually)... but at the time yes, that's what we did [21:47] foo: Why is it not working? what are the error messages? [21:48] walrus, it was more secure also, because we wouldn't have the whole /home mounted all of the time [21:48] walrus: wrong fs type, bad option, bad superblock on 192.168.0.2 , might be something else going on [21:48] foo: maybe you are using the wrong nfs version, try mount -t nfs4 or mount -t nfs [21:49] walrus: hm, negative [21:49] alvin: thanks for the help. that seems to be working well, by way of the up/down statements in the interfaces file. [21:50] lunaphyte_: You're welcome [21:50] jeffesquivel: yes, maybe this is a good idea. I started to configure an openldap server but never finished configuring the clients. maybe this is a good reason [21:50] jeffesquivel: and thanks for the ideas [21:51] walrus, it may be... I really have never used NIS (it seemed outdated even at that time, but mostly because I needed to have windows clients too) [21:51] Don't use samba for sharing between Linux clients. NFS is meant for that.
Let's go over the details [21:52] NFS4 is just different. (but needed these days for all those blasted desktop applications that need their databases) [21:52] alvin: yes, this is the reason I've been trying to solve the problem for a long time [21:52] but I'm really stuck at the moment [21:53] What is your /etc/exports on the server? [21:54] Did you put a domain in /etc/idmapd.conf? And activated it in /etc/default/nfs-common? (NEED_IDMAPD=yes) [21:54] nope [21:55] Well, those are the first things you need to do. Do it on both machines. (same domain, idmapd) [21:55] alvin, may I know what is the problem with using samba for sharing between Linux clients? :-) [21:55] jeffesquivel: It's Windows file sharing. No need to install it if you don't use Windows [21:56] Also, POSIX rights... [21:57] NFS is also faster [21:57] alvin, oh, I thought I read that walrus also used that same share on windows clients... [21:57] alvin, I see your point... thanks [21:58] There's another thing. Suppose your 'client' is also a Linux server. You can reboot your server. When the server is back, your shares will not have reached a timeout and will still be mounted [21:58] well, I changed the idmap domain and nfs-common, restarted nfs, remounted the shared directory [21:58] (although you may have trouble doing stuff on that client) [21:58] nothing has changed [21:59] Still, post your /etc/exports. NFS4 needs a filesystem with fsid=0 [21:59] 90 seconds to output the ls results [21:59] I'll give an example: [21:59] /srv/archive 192.168.1.0/24(rw,async,no_subtree_check) [22:00] "/export 192.168.0.0/24(rw,root_squash,insecure,no_subtree_check,async,fsid=root)" [22:00] alvin, I see... I recall reading that nfs has some problems with security (for example, here: http://www.hiit.fi/it/tech/config/pam_mount.html)... do you think that is true, or may it be that they don't understand the protocol well enough? [22:00] Hmm, it's not coming through [22:00] alvin, just curious... 
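alvin's checklist above (same NFSv4 domain in /etc/idmapd.conf on both machines, idmapd enabled in /etc/default/nfs-common) can be sketched as the following configuration fragment for Karmic-era Ubuntu. `example.lan` is a placeholder domain; the file paths are the ones named in the conversation:

```shell
# /etc/idmapd.conf -- set the SAME NFSv4 domain on server and client:
#   [General]
#   Domain = example.lan          # placeholder; must match on both machines

# /etc/default/nfs-common -- make sure the id-mapping daemon is started:
#   NEED_IDMAPD=yes

# Restart the NFS services so the change takes effect:
sudo /etc/init.d/nfs-common restart            # on client and server
sudo /etc/init.d/nfs-kernel-server restart     # on the server
```

If the domains do not match, NFSv4 id mapping falls back to `nobody`/`nogroup` ownership on the client, which is a common source of confusing permission behaviour.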
as I've never used NFS before [22:01] "/export/home 192.168.0.0/24(rw,root_squash,insecure,nohide,no_subtree_check,async)" [22:01] NFS3 is very simple. There's little security, but it's easy to set up if you don't need the security and locking. NFS4 can be secured in different ways [22:01] walrus, good. What's the value for your /export export? [22:01] ah, I didn't see it there [22:02] alvin, ok... I'll read more about it... thank you :-) [22:03] doesn't look wrong. Try adding crossmnt to the options of /export [22:03] nfs4 on linux doesn't work too well [22:03] and change fsid=root to fsid=0 [22:04] I agree that Solaris does a better job [22:05] last I tried using nfs4 with linux against solaris, it worked, but uid mapping was out of the question [22:05] alvin: tried, same problem :/ [22:05] For that reason, I share ZFS with sharenfs=anon (root=... should also work, but for a strange reason it doesn't on Linux) [22:06] Hmm, I really wonder what your problem is there [22:06] Was it equally slow with NFS3? [22:07] no, but I needed to stop using nfs3 because of a problem with sqlite, which made it impossible to use any program that used it [22:07] I know, just wanted to know if the issue was NFS4 related [22:08] Does no_root_squash make a difference? [22:08] Oh, and after altering /etc/exports, don't forget exportfs -av [22:09] tested, didn't make any difference [22:10] why the hell is strace ls /home not slow??? this is completely crazy to me [22:10] Home on your server is /export/home ? [22:10] no, it's /home [22:10] oh, yes, you said that [22:11] /export/home is a mount --bind [22:11] and that's faster? Hmm, I wonder if apparmor is discriminating against certain mountpoints [22:12] and the problem is ubuntu related, because this doesn't happen with arch linux. maybe it's a good idea to try a fresh ubuntu installation. [22:12] walrus: could this be firewall filtering on the client or server? [22:13] something maybe taking time to timeout, or auth trouble. 
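Pulling the advice above together, walrus's setup is the classic NFSv4 pseudo-root layout: `/export` carries `fsid=0`, and `/home` is bind-mounted underneath it. A sketch as a configuration fragment, using the addresses, paths, and export options that appear in the conversation:

```shell
# Server: build the NFSv4 pseudo-root; /home is bind-mounted under it,
# exactly as walrus describes.
sudo mkdir -p /export/home
sudo mount --bind /home /export/home

# /etc/exports -- fsid=0 (not fsid=root) marks /export as the NFSv4 root;
# crossmnt on the root and nohide on the child let clients cross the bind mount:
#   /export      192.168.0.0/24(rw,root_squash,insecure,no_subtree_check,async,fsid=0,crossmnt)
#   /export/home 192.168.0.0/24(rw,root_squash,insecure,nohide,no_subtree_check,async)

# Re-export after every edit, as alvin notes:
sudo exportfs -av

# Client: NFSv4 paths are relative to the fsid=0 root, so /export/home
# is reached as /home (192.168.0.2 is the server address from the log):
sudo mount -t nfs4 192.168.0.2:/home /mnt
```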
[22:13] Nothing in syslog on server or client? [22:13] smells like auth trouble to me, but I'm really not sure. [22:13] Try wiping the firewall on both [22:13] no firewall here [22:14] just ip masquerade [22:14] I'm still on NFS3, works fine. [22:14] Yes, but not for databases :-( [22:14] but are you using /home? [22:14] walrus: is anything showing up in the logs on your nfs server? [22:15] walrus: anything in logs? tail -f on both. Maybe /var/log/auth.log as well [22:15] just the message: [22:15] svc: failed to register lockdv 1 RPC service (errno 97). [22:16] lockd? or lockdv? [22:16] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory [22:16] lockdv [22:16] NFSD: starting 90-second grace period [22:17] walrus: What versions of Ubuntu are you using? [22:17] karmic [22:18] 9.10 [22:18] OK - lockdv gets lots of google hits... worth a look [22:18] try $ sudo service portmap start [22:19] job already running [22:24] well guys, thanks for the ideas. I will try a fresh Ubuntu install on the client. I tried again to use the exported dir on my arch linux notebook and everything went fine again. [22:25] weird, but good luck [22:25] walrus: good luck anyway. Please come back and let us know. [22:25] thanks, I will. [22:26] Does anyone have experience with kvm restore from suspend resulting in a hung guest OS? dom0 and domU both ubuntu Lucid with default installs. I can't get restore to work. [22:26] kvm at 100% cpu, vnc session unresponsive, yet virsh shows "running" state and no errors in logs. Symptom looks a lot like this: http://www.mail-archive.com/kvm@vger.kernel.org/msg21669.html [22:27] I have seen those symptoms before, but in different situations [22:28] alvin: you mean not after a resume? [22:28] Maybe. Those issues are fixed. 
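Before giving up on a lockd registration error like the one above, a few standard RPC diagnostics (all stock tools of that era; `192.168.0.2` is the server address from the log) can show whether the locking and mount services are actually reachable -- an unreachable lockd is a classic cause of long per-file timeouts that strace-level tracing hides:

```shell
# Which RPC services are registered with portmap on the server?
# nlockmgr (lockd), mountd, and nfs should all be listed.
rpcinfo -p localhost

# Run the same query from the client against the server:
rpcinfo -p 192.168.0.2

# Watch logs on both machines while reproducing the slow ls:
sudo tail -f /var/log/syslog /var/log/auth.log
```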
I'm looking now [22:29] There is bug 457716 [22:29] Launchpad bug 457716 in libvirt "apparmor denies save and restore" [High,In progress] https://launchpad.net/bugs/457716 === dendro-afk is now known as dendrobates [22:29] Yes, I found that one too... [22:30] I had that problem yesterday and updated the apparmor config; now save and restore work, but suspend and resume do not... [22:30] I think I'll disable apparmor and see what that does. [22:30] I remember trying this a long time ago. Never tried it after that. (it didn't work) [22:30] Please do and tell us [22:32] no change [22:32] stopping apparmor had no effect. [22:32] alvin, nfsv4 seems nice... I thought nfs was a dead protocol... it seems I was wrong.. I'll have to try it out, thanks for the info! [22:32] re: nfs4, nfs4.1 is supposed to have pnfs clustering [22:33] alvin: so have you not seen working suspend/resume with kvm? [22:33] Well... I didn't ever try it, but I have seen it working by accident [22:34] When you do a live migration, your guest will be suspended [22:34] ah, no, paused. It's probably not the same. (It's getting late) [22:35] kindjal: I have never heard of NFS4.1 :-) cool [22:35] my plan is to set up some kind of shared file space for live migration, but I'm still just trying a basic suspend/resume, and failing... maybe I'll move on to migration. [22:38] Good luck. It should work (except for the pause thing) [22:39] * alvin goes to sleep [22:41] kindjal: yeah, apparmor doesn't need to do anything with suspend/resume of guests -- that would be a libvirt issue [22:41] jdstrand: I just tried your 0.7.7 packages too [22:42] kindjal: is it broken there too? [22:42] 0.7.7-4ubuntu1~jdstrand2 [22:42] yes [22:42] hmm, sounds like an upstream issue then :( [22:42] I have not yet seen working restore from suspend... is it supposed to work? 
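For reference, the two operations being compared in the discussion are distinct virsh commands: `suspend`/`resume` pauses a running guest in host memory, while `save`/`restore` writes guest state to a file on disk (the on-disk path is what bug 457716's AppArmor rules had to permit). A sketch, with `myguest` and the state-file path as placeholders:

```shell
# Pause and unpause a guest in place; state stays in host RAM:
virsh suspend myguest
virsh resume myguest

# Save guest state to a file and restore it later; the libvirt
# AppArmor profile must allow access to the state file:
virsh save myguest /var/lib/libvirt/save/myguest.state
virsh restore /var/lib/libvirt/save/myguest.state
```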
[22:42] upstream [22:42] qemu-kvm 0.12.3+noroms-0ubuntu4 [22:42] tbh, I don't use that feature [22:43] istr that it was problematic [22:43] and it didn't do what one would expect [22:43] It would make me feel better to know that I'm not special, and nobody uses it, or it's broken for everyone. [22:43] What I really want is migration, which I haven't gotten to try since I'm not yet using shared storage. [22:43] kindjal: I'm pretty sure you're special wrt this issue [22:44] err [22:44] s/special/not special/ [22:44] it would help if I could type [22:44] :) [22:44] have you seen this? http://www.mail-archive.com/kvm@vger.kernel.org/msg21669.html [22:44] well, I was just passing through -- feel free to file a bug and, if you can, link it to the upstream bug [22:45] ok [22:45] thanks [22:45] no -- made a note of it. gotta go! === dendrobates is now known as dendro-afk [23:32] Seem to be running into this regression (an issue with the installer/grub2 not coping with hardware RAID1 in 9.10 and later): https://bugs.launchpad.net/ubuntu/+source/grub-installer/+bug/485604 [23:32] Launchpad bug 485604 in grub-installer "karmic server 64 bit installer fails at GRUB when installing with RAID1" [Undecided,New] [23:33] Any solution other than reverting to 9.04 like I had on this box before? The failure prompted a fresh install.