=== gary_poster is now known as gary_poster|away
[00:40] window 12
[00:40] oops
[00:42] * jkitchen throws ikonia a /
[00:43] noted
[01:28] hi, setting up a vps I haven't touched for a couple weeks... trying to upgrade core but getting "Failed to fetch" errors. Don't think I'm doing it wrong (apt-get update, from root). Is there a server that's down? or?
[01:28] Nautilus: which server?
[01:29] "precise"
[01:30] if that's what you meant?
[01:30] Ubuntu 12.04.3 LTS
[01:30] Nautilus: oh, sorry :) I meant "which update server", sorry
[01:31] this?
[01:31] WaVeR: Failed to fetch http://ca.archive.ubuntu.com/ubuntu/dists/precise-updates/multiverse/binary-i386/Packages 404 Not Found [IP: 91.189.91.13 80]
[01:31] oops, it nick expanded to Waver
[01:33] Nautilus: interesting, there's a .gz and .bz2 version there, but no uncompressed version
[01:33] .. same with the us.archive.ubuntu.com mirror
[01:34] .. same with the mirror.anl.gov mirror
[01:34] oh I see the page you're looking at
[01:34] Nautilus: did apt try to download a .gz or .bz2 version and have those fail?
[01:34] .. or did it only try the one, uncompressed, version?
[01:35] seems like just the one
[01:35] maybe I should just try tomorrow, heh
[02:05] Hi, does anybody know how I can change the printer on a print job that's already been sent? I want to change it from a real printer to 'print to a file'
[02:23] anyone know a channel about SSL(/TLS)?
[02:34] Nautilus: yassl
[02:34] guessing
[02:34] ok thanks
[02:34] Nautilus: there's ##crypto, nice chaps in there, but it's more about theory than implementation... if you've got a question about configuration, this is probably your best bet
[02:35] i might have someone in another channel, let's see how this goes
=== airtonix_ is now known as airtonix
[03:07] can someone point me towards a good article about proper permissions for /var/www and sub folders? I keep having issues every time I try to install drupal or vanilla forums, this is my first VPS... Never had an issue installing on shared hosting.
[03:09] MavKen: if you see 'chmod 777' in a guide, please find a different guide :)
[03:09] yeah, that is what I keep running into and I know better
[03:10] woot
[03:10] I know I wrote something a few years back but stackoverflow is making it hard to find. bah.
[03:11] MavKen: well, okay, some guidelines: the webserver should have write access to only its log files and a database socket, so avoid setting files to www-data
[03:11] the install scripts are not able to write to php files... I used chmod 775 and www-data as the owner
[03:11] oh
[03:11] you could create a new owner just for them, or you could use your user account..
[03:12] so far I have done everything as root, have not created a user yet
[03:12] aha, that's probably the next thing to address :) hehe
[03:13] yeah
[03:16] I've never had to worry about permissions before... this is a pain
[03:25] learning the unix permissions and the theory behind them can take a while. but they are so much simpler than so many other permissions systems that have been tried over the years, and they can still accomplish so much, it's worth the time to get it right
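An illustrative sketch of the layout being described to MavKen above: a regular account owns the tree, www-data only gets read access, and group write is limited to the few paths the application genuinely writes to. The "deploy" user, the /var/www/drupal path and the sites/default/files directory are assumptions for the example, not details from the channel.

    # create a normal account to own the site files, instead of root or www-data
    sudo adduser deploy
    sudo chown -R deploy:www-data /var/www/drupal
    # owner can write; the web server (group www-data) can only read/traverse
    sudo find /var/www/drupal -type d -exec chmod 750 {} \;
    sudo find /var/www/drupal -type f -exec chmod 640 {} \;
    # only the upload area needs group write (Drupal's files directory, for instance)
    sudo chmod -R g+w /var/www/drupal/sites/default/files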
[03:28] trying to work through the SSL stuff. I'm generating a cert and have 4 choices, the most applicable seem to be "S/MIME and Authentication Certificate" and "Web Server SSL/TLS Certificate". I don't understand the distinction
[03:28] Nautilus: aha, the certificates have an "allowed use" field
[03:29] Nautilus: s/mime is for email
[03:29] right. don't need the Jabber or Object types
[03:29] Nautilus: ssl/tls is what you're after, for a web server
[03:29] actually the idea here is to use it with postfix/dovecot so eg: my pop password isn't sent in plain-text. Don't plan on https at this time.
[03:30] and I thought postfix+dovecot uses SSL/TLS
[03:32] ah, ubuntu server docs say "next, generate or obtain a digital certificate for TLS"
[03:33] (for smtpd). So I want SSL/TLS, right?
[03:33] sounds like that's for web and email both
[03:35] sarnold: ^ perhaps you prefer I use your nick always
=== i3luefire__ is now known as i3luefire
[03:54] in 13.10, when I add a new user I have a public_html folder in /etc/skel/... is there an easy way for a file to be generated in the sites-available folder with the path each time a new user is added?
[03:54] I only have 9 hosting clients right now, but would like to streamline the process
[04:16] I went with SSL/TLS
=== lickalott_ is now known as lickalott
[08:33] hey guys, if i add my user to the www-data group, can i leave a website which is in development in my home directory and still be able to modify files in it without the need to change the user and group back to my system user and group?
[09:04] hello all, i'm looking for a webmin alternative - i just want some kind of monitoring and mail notifications on service failures, package updates etc... i haven't heard good things about webmin...
[09:18] StathisA: nagios is good for monitoring of network failures, hardware monitoring and much more
[09:18] you can be emailed about it or smsed but that is something to ask in the nagios channel
[09:22] eagles0513875 thanks for the suggestion, i was looking for something not that centralized, as to avoid certain single-point-of-failure scenarios.. afaik nagios is located somewhere centrally and works with agents.
[09:24] i could get away with some manual cron jobs checking space & whether the service i need is running, i guess
[09:24] but i'd like to have a tool for that
[09:25] StathisA: there is cacti potentially
[09:31] i'll have a look, thanks
[09:32] are there any release candidates for ubuntu 14.04 LTS out yet?
=== _thumper_ is now known as thumper
[09:33] iclebyte2, https://wiki.ubuntu.com/TrustyTahr/ReleaseSchedule
[09:38] ogra_, I see here the feature definition freeze on november 21st. Is there anywhere I can actually find that list?
[09:40] http://status.ubuntu.com/ubuntu-t/
=== Mez_ is now known as Mez
[10:28] hey guys is anyone alive in here? If i add the user i use to login to my server to the www-data group and leave the group for a website i'm running as the user, can i still edit the files etc in the directory (which is currently in my home directory) without needing to change user and group?
[10:47] eagles0513875: this is basic permissions
[10:48] eagles0513875: does the user you create have permissions to access your home drive ?
[10:48] the user is my user, my home dir
[10:48] the only other one accessing it would be the www-data user, to access the website
[10:52] zul, hallyn, Whoever gets there first: chinstrap:~smb/4review has an updated libvirt package for Trusty to fix some xen related bugs.
=== olegb_ is now known as olegb
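Since Nautilus only wants TLS for postfix/dovecot (keeping the POP password off the wire) rather than a browser-trusted HTTPS certificate, a self-signed certificate is a common shortcut to the CA form above. A minimal sketch, with illustrative file names; the dovecot side has matching ssl_cert/ssl_key settings in its own configuration.

    # generate a key and a self-signed certificate valid for one year
    sudo openssl req -new -x509 -days 365 -nodes \
        -out /etc/ssl/certs/mailserver.pem \
        -keyout /etc/ssl/private/mailserver.key
    sudo chmod 600 /etc/ssl/private/mailserver.key
    # tell postfix to offer STARTTLS with that certificate
    sudo postconf -e 'smtpd_tls_cert_file = /etc/ssl/certs/mailserver.pem'
    sudo postconf -e 'smtpd_tls_key_file = /etc/ssl/private/mailserver.key'
    sudo postconf -e 'smtpd_use_tls = yes'
    sudo service postfix reload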
[11:03] eagles0513875: can the user you have created access the data in your home dir
[11:03] eagles0513875: just "yes/no"
[11:04] yes
[11:04] the user owns that home directory yes
[11:04] eagles0513875: then that user can edit / maintain the website
[11:04] ikonia: but don't the user and group need to be set to www-data so apache can work with the website?
[11:05] eagles0513875: is apache working currently ?
[11:05] ikonia: yes because i have the sites set to www-data:www-data
[11:05] eagles0513875: ok, so as long as the user/group www-data OR world access can read the files, that's all you need
[11:05] you don't have to change the ownership if you don't want to, it's up to you
[11:06] ok. will test things out :)
[11:06] thanks ikonia :)
[11:57] zul: ping
[11:57] jamespage: hi there
[11:57] hey koolhead17
[11:57] how's swift?
[11:58] jamespage: swift is rock solid :)
[11:58] jamespage: how have you been. i need help from zul, seems like i'm not able to find him all this while cos of timezone
[11:58] koolhead17, all good
[11:59] koolhead17, wassup?
[11:59] jamespage: learning learning and more learning :)
[12:06] jamespage, do you know if there are some more reqs on rabbit for active/active? i copy all the passwd files with unison, but i still get the AMQPLAIN invalid credentials error
[12:07] yolanda, I'm not 100% sure tbh
[12:09] i'm checking and cinder.passwd on the slave machine matches the one in cinder.conf, so i should be missing something
[12:11] jamespage, seems it's missing the users, rabbitmqctl list_users only shows the guest user
[12:12] yolanda, are you relating other services before or after forming the rabbitmq cluster?
[12:13] jamespage, before
[12:13] but i'm copying the passwd files when a node is joined, so it looks like i need to sync something more
[12:13] yolanda, try it the other way around - I suspect that when the cluster is created, all the data gets dropped
[12:14] yolanda, if that works then we know that after a cluster creation operation, we have to re-create the usernames and passwords for access.
[12:14] yolanda, oh - while you are working on the rabbitmq charm - please can you add a source: config
[12:14] we have rabbitmq-server 3.x in the icehouse cloud archive and no way to actually use it right now :-)
[12:15] ok
=== highvolt1ge is now known as highvoltage
[12:30] jamespage, seems we aren't doing something right with clustering, i'll take a look at the doc again, users aren't replicated in any case
=== gary_poster|away is now known as gary_poster
[13:49] Hi
[13:49] Q: How can I expand my partition /dev/sda1 on 12.04 LTS ? Thank you! Is it safe ?
[13:49] * found http://blog.chapus.net/ubuntu-server-increase-disk-space/
[13:51] shwaiil, properly done it's safe, but, of course, backup ^3
[13:52] cfhowlett: thanks for looking! Where can I find a good manual ? that I can follow ?
[13:52] in case you know..
[13:52] shwaiil, a server manual?
[13:52] !manual
[13:52] The Ubuntu Manual will help you become familiar with everyday tasks such as surfing the web, listening to music and scanning documents. With an emphasis on easy to follow instructions, it is suitable for all levels of experience. http://ubuntu-manual.org/
[13:53] cfhowlett: Sorry, I mean a tutorial on how to resize the partition would be awesome. But I'm now looking into the link I found
[13:53] shwaiil, ah, good.
[13:54] not sure if it's going ok or not as it's already failing and not absolutely sure why.
[13:54] shwaiil, perhaps an immediate data backup and then pinpoint the cause of the failure before you start repart'ing?
[14:07] who's hacking at the cinder charm these days ? is it still adam_g ?
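If jamespage's suspicion above is right, that forming the RabbitMQ cluster throws away the joining node's user table, then the fix on yolanda's side would be to re-create the service credentials on the cluster after it is formed, rather than copying passwd files around. A rough sketch; the user name and the password source are only illustrative.

    # run on any node once the cluster is up
    sudo rabbitmqctl list_users                      # likely shows only "guest" at this point
    sudo rabbitmqctl add_user cinder "$(cat /var/lib/cinder/cinder.passwd)"
    sudo rabbitmqctl set_permissions -p / cinder ".*" ".*" ".*"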
[14:36] smoser: for bug 1188610, AIUI, making *any* changes to sshd_config is still a policy violation. I think the one sensible exception might be cloud-init.
[14:38] But even in that case, we should have a /etc/ssh/sshd_config.d/ or something, as a wishlist item.
[14:40] rbasak, i don't know. i don't know that i really believe that.
[14:40] vi changes config files all the time for me.
[14:40] shall i file a bug ?
[14:40] sometimes sed does it too.
[14:41] my snarky comments aside, i don't really understand such a policy.
[14:41] puppet does this too, that's basically its purpose in life.
[14:46] If it does it because a user directly requested it, then that's different.
[14:46] If it's indirect (say you installed another package that needs it), then that's a violation.
[14:46] Here, it's similarly indirect. The correct solution is clearly a .d/ directory.
[14:47] It's not really about the package that did it (sed, puppet, whatever). It's that the *distro* doesn't do it; the user does.
[14:47] /etc is the user's (well, sysadmin's) realm.
[14:47] smoser: ^^
[14:49] For example: if I install logrotate, clearly I'm asking for a crontab entry. But the logrotate postinst doesn't edit /etc/crontab for me. That's why /etc/cron.d/ was invented.
[14:49] yeah. that's reasonable.
[14:49] (and daily.d, weekly.d, etc)
[14:49] as i said in the bug, i don't understand that the change is actually needed.
[14:49] do you understand it ?
[14:51] If I had a server that was always port forwarded ssh through a NAT gateway, then I'd need it. So I get that in such an infrastructure, it makes sense to have it, and a package should be able to turn it on.
[14:51] as to why 'ClientAliveInterval 180' would do some traffic that 'TCPKeepAlive' would not.
[14:51] TCPKeepAlive is by default pretty slow, isn't it?
[14:51] Days
=== kirkland` is now known as kirkland
[14:54] rbasak, do you see that documented anywhere?
[14:55] smoser: you mean the defaults?
[14:56] It's a sysctl thing I think.
[14:56] * rbasak looks
[14:57] rbasak, it's a setting in sshd_config. (boolean).
[15:00] smoser: http://kernel.ubuntu.com/git?p=ubuntu/ubuntu-trusty.git;a=blob;f=Documentation/networking/ip-sysctl.txt;h=8a984e994e61616a9dba0a4f425a5b4371b3ae99;hb=HEAD#l240
[15:00] smoser: kernel default is two hours.
[15:00] thanks rbasak
[15:08] is it safe in general to edit files on a live guest virtual machine filesystem from the basesystem? (xen, lvm)
[15:09] thelamest: if said virtual machine is not booted, sure, do as you like =) but make sure you unmount it cleanly, etc.
[15:09] thelamest: and as usual if you break things, you get to keep both pieces =)
[15:09] He said "live".
[15:09] So: no.
[15:09] it's a missing security/limits.conf, that's kind of preventing me from sshing into the guest
[15:10] Shut down the VM first.
[15:10] rbasak: ok, i understood "live" differently =) (as in squashfs based / overlay system)
[15:10] Then edit it.
[15:11] It's even worse if it's a file the VM has actually hit, because then it'll be cached. Changing the file on disk will not change the cache of the file in memory. I hope that demonstrates why it's bad.
[15:11] ok, thanks
[15:11] good enough, it's a demo system
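For thelamest's case, the safe sequence is the one rbasak describes: stop the guest, mount its volume in dom0, make the change, unmount, boot. A sketch assuming an xm/xend toolstack and a guest whose root filesystem sits directly on an LVM volume; the guest and volume names are made up, and if the LV contains a partition table you would need kpartx first.

    sudo xm shutdown demo-guest        # 'xl shutdown' on an xl toolstack
    # wait for it to disappear from 'xm list', then mount the root LV in dom0
    sudo mount /dev/vg0/demo-guest-root /mnt/guest
    # restore the missing file, e.g. copy in a stock limits.conf
    sudo cp /etc/security/limits.conf /mnt/guest/etc/security/limits.conf
    sudo umount /mnt/guest
    sudo xm create /etc/xen/demo-guest.cfg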
[15:16] smb: (still getting my morning going) could you also toss a debdiff in there so i can just look locally?
[15:17] hallyn, sure. and good morning then :)
[15:18] hallyn, done
[15:28] smb: The first part of the changelog - I don't see where that's in the debdiff
[15:29] hallyn, That is the debian/patches/ubuntu-xend-probe.patch that changes
[15:29] as for cherrypicking of two patches from upstream, if you need that to fix xen and it comes from upstream then as far as i'm concerned you should have upload rights to just push that :)
[15:29] smb: ah - ideally that would be made explicit in the changelog
[15:30] smb: but that patch was already in series prior to the debdiff
[15:30] hallyn, I thought it was explicit enough. But apparently not. :)
[15:30] look at the debian/patches/series diff hunk
[15:30] hallyn, Right, that was broken when someone modified it for newer libvirt versions
[15:31] hallyn, Hence "refresh/fix"
=== chmurifree is now known as chmuri
[15:33] nm, i see. sorry
[15:33] smb: so /usr/lib/xen-common/bin/xen-toolstack will return some path. you're looking for '..../xm'. If it's not xm, what else can it be?
[15:34] smb, https://bugs.launchpad.net/cloud-init/+bug/1236531
[15:34] look at that comment and see if you can explain it.
[15:34] smb: debdiff looks good to me.
[15:36] hallyn, It could be xl (or xapi), I'd have to check the latter, but for now this would put the patch back into the state it was before moving to the newer libvirt version
[15:39] smb: +1 from me, thanks.
[15:39] smoser, was eatmydata only suppressing fsyncs or more?
[15:40] hallyn, ok, will you sponsor? Yeah, yeah, I should get upload rights for that and some other packages at least...
[15:40] smb: yes, please apply asap :)
[15:41] smb: I'll push the 1.2.0... what about the other .dsc you have in there? that goes into a ppa?
[15:41] (which I assume you can do? if not, which ppa, i'll try)
[15:42] hallyn, No, that was prepared as a kind of one-shot mre for precise.
[15:42] ok
[15:42] smb, eatmydata just ldpreloads fsync and sync
[15:42] did the shutdown, edit, create, and still got fsck on boot
[15:42] (to my knowledge that is all it does)
[15:42] yeah i use eatmydata to make btrfs useful...
[15:43] smoser, Hm, so things were actually quicker without eatmydata if I read those numbers right
[15:43] smb, correct.
[15:44] which doesn't make any sense
[15:44] obviously '2' is not statistically useful.
[15:44] but the only way i could explain it is if the fsync was reducing memory taken by the filesystem cache
[15:44] and without that, stuff was just interacting poorly.
[15:45] note, there is only 670M on those systems.
[15:47] smoser, or in some way the background writeback stealing more process time from the apt-get than one expects
[15:48] smoser, Did you do those tests with the very latest 3.13 or a bit ago when we still had 3.12
=== Mikaela_ is now known as Mikaela
[15:57] smb, let me check that.
[15:57] it was a build from yesterday (0108)
[16:01] smb,
[16:01] $ uname -r
[16:01] 3.12.0-7-generic
[16:02] smoser, Ah ok, then it is not related in any way to the slowdowns we / Tim were observing
[16:05] smoser, But yeah, there is more in the cache. On the other hand that should increase the chance of larger sequential writes
=== `petey`_ is now known as `petey`
[16:46] jamespage/roaksoax: https://code.launchpad.net/~zulcss/ceilometer/bug-catchup/+merge/201041
=== medberry is now known as med_
[16:52] zul, comments
[17:03] jamespage: fixed
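For reference on the eatmydata timing discussion above: eatmydata is nothing more than an LD_PRELOAD shim that turns fsync()/fdatasync()/sync() into no-ops around whatever command it wraps, while dpkg's --force-unsafe-io only skips dpkg's own internal syncs. Roughly, as root; the bare library name is illustrative, since the real wrapper script fills in the library's full path, which varies between releases.

    # wrap a command so its sync calls become no-ops
    eatmydata apt-get -y install build-essential
    # which is essentially just an LD_PRELOAD of a small shim library
    LD_PRELOAD=libeatmydata.so apt-get -y install build-essential
    # dpkg's weaker knob, persisted system-wide
    echo force-unsafe-io > /etc/dpkg/dpkg.cfg.d/force-unsafe-io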
[17:18] smoser, So thinking about your eatmydata oddness... I see that with or without it dpkg seems to use --force-unsafe-io, which removes some slow fsyncs. Adding eatmydata to the command stack certainly adds additional context switches. Makes me wonder whether the remaining sync calls filtered by eatmydata just make little difference (probably hidden by caching outside the guest), so in the end you only see time getting stolen by the additional context switches.
[17:23] smb, nah. my experience is that --force-unsafe-io does very little.
[17:24] and that eatmydata does a lot.
[17:26] shoot.
[17:31] smoser, Hm, at least it looks like there is always a 1s difference in favour of not using eatmydata between "eatmydata sudo apt-get update" and "sudo apt-get update". Though also only tested that on two (real ones but not the most performant) hosts. One running Precise and the other Trusty...
[17:31] smb, see my last comment there. my data was bogus.
[17:31] but your data doesn't make sense either.
[17:31] oh ok
[17:31] * smb reloads the page
[17:34] smoser, Ah, so you basically paid the price for wrapping another layer of eatmydata
[17:34] but that doesn't really make sense.
[17:34] because eatmydata is LD_PRELOAD
[17:35] i suppose it's possible that LD_PRELOAD=mylib1.so:mylib2.so is oddly slow. but i don't think that makes much sense.
[17:36] also realize that the download time is part of this, which stinks.
[17:36] ie, that is heavily network limited for a lot of that time.
[17:36] i'll get better numbers here.
[17:41] smoser, ok. well my mental model might be wrong but I think of it as having loaded another eatmydata which loaded the apt-get. So I imagine syscall interception to be going through two levels. Anyway, let's see the better data. And yeah that can vary too (if not cached by something local)
[17:41] but ld_preload doesn't work like that.
[17:42] but anyway.
[17:53] jamespage: FYI, I just pushed an MP on the cinder charm with the config-flags functionality
[17:54] jamespage: do I need to add someone explicitly to the reviewer list ?
[17:55] jamespage: hmm, hold on a bit, I'll create a public bug to track it
[18:06] jamespage: ok, that should be better : LP: #1267554
[18:06] jamespage: bug 1267554
[18:13] Hello, everyone. I have a question. I have reason to believe my software RAID is experiencing problems under high load. I wanted to put them under some load to test, and I was looking at using the stress utility. The problem is, I have SSDs in the system I do not want to stress. Is there a way to put IO load on only some drives and not others?
[18:22] pdavison: if they are part of the software raid, no, not really.
[18:23] dcosnet: No. They're not part of the RAID. They're separate.
[18:23] then what's the problem?
[18:25] I don't see a way to specify which "device" to use when running stress. Am I missing something, or is there another utility that will work better?
[18:26] well you could just use a different tool altogether, like dd
[18:26] dd if=/dev/zero of=/some/virtual-partition-here/some-big-ass-file.dd
[18:26] then rm some-big-ass-file.dd after
[18:26] That's going to consume a lot of space, but not really add the io load I'm looking for, I think.
[18:27] o
[18:27] well it would measure a basic MB/s imo
[18:27] maybe not good enough though since it adds a possible bottleneck i guess
[18:28] if you're looking for any statistics beyond MB/s it's not ideal
[18:29] dcosnet: I don't need stats. I need to stress the controller.
[18:29] I expect it to fail, but I don't want to redeploy it back to production to see.
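One crude way to get sustained, concurrent write load onto just the RAID-backed filesystem, along the lines of the dd suggestion above, is a handful of parallel dd writers that clean up after themselves. A sketch only: the mount point, job count and file size are illustrative, and as noted later in the discussion it exercises writes rather than reads.

    #!/bin/bash
    # throw N parallel writers at one mount point, leaving the other disks alone
    TARGET=/mnt/raid/stress     # must live on the array under test
    JOBS=32
    SIZE_MB=2048
    mkdir -p "$TARGET"
    for i in $(seq 1 "$JOBS"); do
        (
            dd if=/dev/zero of="$TARGET/junk.$i" bs=1M count="$SIZE_MB" conv=fsync
            rm -f "$TARGET/junk.$i"
        ) &
    done
    wait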
[18:30] ah
[18:32] http://unix.stackexchange.com/questions/72563/is-there-a-good-drive-torture-test-tool
[18:33] has some good responses
[18:34] Looking...
[18:34] i actually just learned of a tool because of that link: HDAT2
[18:34] * dcosnet bookmarks
[18:35] you probably should just look into bonnie++
[18:35] which that url mentions
[18:36] I was looking at bonnie++. It looks like more of a benchmark tool. Not sure if that will do what I need.
[18:37] http://sourceforge.net/projects/bonnie/
[18:37] well, good luck
[18:38] Thanks.
[18:43] well alternatively you could write a script that does a bunch of concurrent dd if/of commands at once to stress it out
[18:44] autodeleting its child files as it finishes one
[18:44] would take, what, 10 minutes to code?
[18:48] That's an option.
[18:48] pdavison: a lot of these are designed to stress the filesystems, if not the block layer, but it might be worth a look: http://nfsv4.bullopensource.org/doc/testing_tools.php
[18:49] sarnold: Thanks. I'll iterate through them. If I can find one that will let me only stress the RAID volume, I can run my test.
[18:50] The one other criterion is that I need to do this locally, not over NFS.
[18:50] or anything network-centric.
[18:50] pdavison: since those stress filesystems, I have an idea most would take a directory name as an argument somewhere..
[18:50] .. so you could stress e.g. /mnt/btrfs but leave / and /home alone :)
[18:51] That's a good point. That's the problem I had with "stress"
[18:52] pdavison: yeah, I just got lucky to find the whole list of things on one page :) hehe
[18:53] fstress was the one I went searching for, hooray for google for getting past my poor memory of how to spell it.
[18:53] i would recommend a makefile
[18:53] for true concurrency
[18:53] That one looks NFS-related. Can it stress things locally?
[18:54] pdavison: aw crap. then my memory is more broken than I thought.
[18:54] dcosnet: That's a good test, but this is an 8-drive SATA array, and I'd have to run quite a few of them concurrently to get enough load.
[18:54] ya maybe 32 instances
[18:54] or more
[18:54] do 2gb files at a time
[18:55] or well, you know its throughput more than me, so pick a relevant size
[18:56] this is only going to stress the write though, not the read, or read/write
[18:59] pdavison: hey, I finally found the thing I really -was- looking for, and now I'm not so sure it'll be helpful: http://codemonkey.org.uk/projects/fsx/ oh bother. back to bonnie* and the like :)
[19:00] Right. Well, maybe I just throw everything and the kitchen sink at it until I either give up or it does.
[19:06] pdavison: lol, I like that
[19:30] smb, better looking data
[19:30] http://paste.ubuntu.com/6722546/
[19:31] those are "hostname" (which implies eatmydata and size) and then 'total time', 'download time', and 'total - download' (ie, the install time)
[19:31] utlemming, ^
[19:39] Hi, can someone help me with a mysql issue I'm running into?
[19:39] Every couple of minutes this shows up in my syslog: kernel: [1790217.415429] init: mysql main process (29718) terminated with status 1
[19:40] Followed by a handful of lines about /etc/mysql/debian-start running checks. Every time those checks run it maxes out the CPU and causes issues with the websites hosted on the server
[19:42] Where can I look to find why mysql keeps getting terminated?
[19:43] hushnowquietnow: is there anything interesting in the mysql logs?
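On an Ubuntu MySQL install of that era the error log usually lands in /var/log/mysql/error.log, but most logging stays off until it is switched on, which is what the channel suggests below. A hedged sketch of turning it on via a conf.d snippet; the file name is illustrative, and the general log is noisy enough that it should only be left on briefly.

    printf '%s\n' '[mysqld]' \
        'log_error        = /var/log/mysql/error.log' \
        'general_log      = 1' \
        'general_log_file = /var/log/mysql/mysql.log' \
      | sudo tee /etc/mysql/conf.d/logging.cnf
    sudo service mysql restart
    tail -f /var/log/mysql/error.log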
[19:43] The mysql logs are all empty
[19:44] /var/log/mysql.log, /var/log/mysql.err, /var/log/mysql/error.log, and /var/log/mysql/mysql.log have nothing in them at all.
[19:48] hushnowquietnow, dmesg have anything ?
[19:48] could also have stuff in syslog. could be the OOM killer.
[19:49] Nothing about memory in syslog
[19:50] dmesg has the same lines about mysql being terminated with status 1, interspersed with: [1790718.914174] type=1400 audit(1389296799.109:14528): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=2536 comm="apparmor_parser"
[19:50] hushnowquietnow: Are you running apparmor in complain mode or enforce?
[19:51] How do I check that?
[19:51] hushnowquietnow: https://help.ubuntu.com/community/AppArmor
[19:51] Those may be unrelated, but it'd be nice to rule that out.
[19:52] hushnowquietnow: Then, make sure you have all possible logging enabled: http://www.pontikis.net/blog/how-and-when-to-enable-mysql-logs
[19:52] When I run apparmor_status it tells me that there are 2 processes in enforce mode, both /usr/sbin/mysqld
[19:58] hushnowquietnow: that apparmor message is just loading the profile
[20:01] So it doesn't have anything to do with what's killing my mysql process
[20:05] Something else interesting: When I run ps aux | grep mysql, I see what I think is the mysql service started 3 or 4 times
[20:13] I have logging enabled now, and mysqld has been restarted a few times but nothing new or informative is showing up in the logs
[20:33] Curiouser and curiouser
[20:33] The server rebooted (I think I fat-fingered something) and now the problem isn't happening any more
[20:35] sarnold, smoser, markthomas, jdstrand: thanks for your help in trying to figure this out
=== chmurifree is now known as chmuri
[22:24] hallyn, heya
[22:25] stgraber: hey, do you know offhand how i create a new 'parent' for nih allocations?
[22:25] adam_g: hey
[22:26] oh, nih_new?
[22:26] hallyn, so i just hit bug #1244694 while verifying an openstack SRU. the fix you proposed on the bug resolves it for me.
[22:26] https://bugs.launchpad.net/nova/+bug/1244694
[22:26] bug 1244694
[22:26] hallyn, any chance that update to the apparmor profile is queued for upload to trusty anytime soon?
[22:27] no bot eh
[22:27] botfail
[22:27] adam_g: which release is that on
[22:27] hallyn, i'm hitting it on saucy
[22:27] adam_g: just a sec,
[22:27] hallyn: I know we have an update with some fixes coming, let me check
[22:28] jjohansen: what is that in reference to?
[22:29] adam_g: ok, i can push that to trusty right now. are you in the mood to write a SRU justification for it for saucy?
[22:29] hallyn: oh sorry, wrong nick, that was meant for adam_g
[22:29] jjohansen: ok
[22:30] jjohansen: so you mean a security update to saucy libvirt?
[22:30] hallyn: I actually haven't been tracking it closely, I know there are updates to several profiles, and some misc other bug fixes
[22:31] there was a ftbfs fix for the library for example
[22:33] anyways I am guessing this bug isn't in that change set so feel free to push it separately
[22:34] ok - thanks
[22:40] adam_g: libvirt_1.2.0-0ubuntu3_source.changes pushed
[22:41] hallyn, to trusty?
[22:41] yeah. working on the saucy one now. i don't see anything in -proposed, so we should be good
[22:42] i picked up what i thought was a quick elance job helping a guy move to a new server - both ubuntu 10.4
[22:43] he took the server over from a previous admin - it collects data from a bunch of devices that are behind nat.
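Back on hushnowquietnow's question about checking AppArmor: the mode is visible from apparmor_status, and a profile can be flipped to complain mode to rule it out, which is roughly what the wiki page linked above walks through. A short sketch; aa-complain and aa-enforce come from the apparmor-utils package.

    sudo apparmor_status                  # lists profiles in enforce vs complain mode
    sudo apt-get install apparmor-utils
    sudo aa-complain /usr/sbin/mysqld     # temporarily log-only for mysqld
    # ...retest, then put it back:
    sudo aa-enforce /usr/sbin/mysqld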
[22:43] adam_g: ok, libvirt_1.1.1-0ubuntu8.3_source.changes pushed to saucy.
=== NomadJim_ is now known as NomadJim
[22:43] the natted devices do a reverse ssh tunnel, like "ssh -R1234:localhost:22 sshuser@server.com"
[22:43] that is, to -proposed, so awaiting a SRU justification from you :)
[22:44] hallyn, cool.
[22:44] then the server ssh's to localhost:1234 and poof, he's connected to the natted device
[22:44] the problem is on the old server, sshuser has no shell, and no .ssh directory
[22:45] hallyn, i can test as soon as it's accepted
[22:45] how can sshuser set up a tunnel on the server as a user that has no login shell, authorized_keys, etc?
[22:45] adam_g: that's always nice :)
[22:45] the natted device runs the ssh command as root, but as sshuser@server
[22:46] and the server then ssh's to localhost:1234 as root
[22:49] what on the server is allowing sshuser to set up a remote tunnel on the server, if his account is basically disabled? we are having a tough time figuring this out
[22:51] regreddit: I'd look for a Match block in the sshd configuration file first..
[22:52] hmm, we called ourselves looking at the old server's sshd_config to see if that's where it was happening
[22:52] and it looked pretty stock
[22:55] i'll have the client check again
[22:58] hrm.
=== freeflying is now known as freeflying_away
[23:01] he claims no match block, or includes of other files
[23:02] but if you do the ssh tunnel interactively, it works as expected from a user that has no shell
[23:02] you don't get a shell, but from the server you can now ssh to the remote client over the tunnel
[23:02] so the previous admin set up some voodoo we can't find
[23:14] this is the exact command that the remote natted devices run: autossh -f -nNT -R 28012:127.0.0.1:22 tunneluser@remotehost & > /dev/null
[23:15] and when we look at that user, he has /usr/bin/nologin as his shell, and /home/tunneluser is 100% empty
[23:15] not even a .ssh folder
[23:16] regreddit: is there a home directory listed in /etc/passwd?
[23:16] and i'll be danged if we can get that to work on the ner server by creating the same user account with the same configuration
[23:16] yes, /home/tunneluser
[23:17] and tunneluser is in no other groups in /etc/group
[23:17] s/ner/new/
[23:33] http://www.amazon.co.uk/Official-Ubuntu-Server-Book-Edition-ebook/dp/B00DW7PLHA <- any opinions on that book? :)
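A note on regreddit's tunnel mystery: because the devices connect with -nNT, the client never asks sshd for a shell or a command, so the remote forward can be set up even though the account's shell is nologin; what still needs explaining is how the client authenticates (a plain account password, or an AuthorizedKeysFile in sshd_config pointing somewhere other than ~/.ssh would both be worth checking). A conventional way to build such a tunnel-only account looks roughly like the sketch below; whether the previous admin did exactly this is not visible in the log, and the names mirror the quoted commands rather than anything confirmed.

    # server side: an account that can authenticate but never gets a shell
    sudo adduser --shell /usr/sbin/nologin tunneluser   # give it a password when prompted
    # optional hardening in /etc/ssh/sshd_config (then: sudo service ssh restart)
    #   Match User tunneluser
    #       AllowTcpForwarding yes
    #       X11Forwarding no
    #       PermitTunnel no
    # client side (the natted device), as quoted above:
    autossh -f -nNT -R 28012:127.0.0.1:22 tunneluser@remotehost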