=== Metacity is now known as ChewToyOfFailvil
=== ChewToyOfFailvil is now known as Metacity
=== markthomas is now known as markthomas|away
=== bilde2910|away is now known as bilde2910
[02:50] How do I go about updating my openssl?
[02:51] I have version 1.0.1f and I should have version 1.0.1g
[02:51] acmehandle: sudo apt-get update && sudo apt-get -u upgrade
[02:51] Hmm, I ran update. This is a fresh install.
[02:52] acmehandle: check that the version that is installed matches the most recent release http://www.ubuntu.com/usn/usn-2385-1/
[02:52] Yeah, '0 upgraded'
[02:53] acmehandle: you can check the version with dpkg -l openssl 'libssl*'
[02:54] Yeah, it says 1.0.1f
[02:55] Which is a January release
[02:55] First thing I did when I started this vps was upgrade
[02:55] oh I'm sorry, I forgot dpkg -l cuts off the version numbers. sigh.
[02:55] update then upgrade rather
[02:55] acmehandle: dpkg -l openssl 'libssl*' | cat
[02:56] the pointless |cat means output isn't a terminal, so it won't truncate the version numbers. look for the 1.0.1f-1ubuntu2.7 or whatever...
[02:56] or just do an apt-get install openssl libssl
[02:56] and it will force it
[02:56] Right, it all says 1.0.1f
[02:56] acmehandle: that's not the part that matters.
[02:57] oh
[02:57] acmehandle: the part that matters is _after_ the hyphen
[02:57] but openssl website says he needs g or h :)
[02:57] -1ubuntu2.7
[02:57] you're fine
[02:57] acmehandle: yay :) you've got the most recent
[02:57] What Patrick said. :/
[02:58] So why does openssl say g
[02:58] acmehandle: the OpenSSL website may say that you need g or h, but the security patches to fix those vulnerabilities have already been applied to 1.0.1f-1ubuntu2.7
[02:58] Ah, I see.
[02:58] acmehandle: openssl upstream will always recommend using the latest release to get all the bug fixes
[02:59] acmehandle: because they think everyone downloads openssl source and recompiles it all the time, when in reality, almost no one compiles their own openssl, because that's how you get regressions :)
[02:59] but the security team takes the upstream patch commits and applies them to the older revisions (like 1.0.1f) and patches the vulnerabilities, in accordance with security triage procedures.
[02:59] and what sarnold says.
[02:59] acmehandle: but rest assured, so long as the full version string (1.0.1f-1ubuntu2.7) is installed, you're fine, as it has those patches
[02:59] I mean, I'm glad for the folks who do run upstream openssl, because someone has to find the regressions :)
[03:00] .. same as I'm glad someone runs Linus's -rc kernels :)
[03:01] the only upstream that maintains stuff is bash :)
[03:03] Chet was amazing during the whole shellshock thing.
=== Metacity is now known as DiedN0AsAlways
=== DiedN0AsAlways is now known as DiedNight0AsAlwa
=== DiedNight0AsAlwa is now known as Metacity
[03:04] sarnold: indeed.
[03:05] Patrickdk: heheh
[03:05] hey, it made life easy for me to backport that crap to debian v4
[03:12] urgh, debian 4...
[03:13] makes me glad I use Ubuntu, I don't have to deal with massive version changes from one release to another, as much...
[03:13] well, not my fault
[03:13] company I contract for bought another company
[03:13] heh
[03:13] they were working on a new product (fully deployed on 13.10? why? not lts?)
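For reference, here is the version check the channel converges on, spelled out as commands; the 1.0.1f-1ubuntu2.7 string is only the example release being discussed:

```sh
# Show the full installed version, including the Ubuntu revision after the
# hyphen. Piping through cat makes dpkg's stdout a non-terminal, so the
# Version column is no longer truncated to the terminal width.
dpkg -l openssl 'libssl*' | cat

# An alternative that never truncates, using dpkg-query's explicit format:
dpkg-query -W -f='${Package} ${Version}\n' openssl 'libssl*'

# The security fix level is the part after the hyphen (e.g. 1.0.1f-1ubuntu2.7);
# compare it against the version listed in the USN advisory linked above.
```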
[03:14] and the old system, which was in self-manage mode, was left from years ago, on debian4
[03:14] Patrickdk: at least you aren't having to take 14.04 patches and take them back to Hardy, or god forbid Dapper, versions
[03:14] * teward had a case where he had to do that :/
[03:14] Patrickdk: zounds...
[03:14] I took over maintenance a month ago
[03:14] and hadn't even learned where everything was yet
[03:15] teward, I had already backported around 30 things to trusty, 2 months before it was released
[03:15] Patrickdk: tell me about it, during the Trusty dev cycle I was already backporting entire packages to Precise just for my own needs, let alone nitpicking security patches
[03:16] no, I mean, to trusty, before release
[03:16] to precise, ya, still doing that
[03:16] I have dropped support for lucid though
[03:16] half my stuff is on trusty
[03:16] the other half is likely never to upgrade, but will be replaced
[03:17] or run in parallel, till precise dies
[03:17] Patrickdk: funny story: when I took over the nginx PPAs, it was around 12.04 that I took over almost exclusively, and the first thing I did was drop all Lucid support - that was causing headaches upon headaches for me... and I had bad experiences with the interim dev releases so I just started sticking to LTSes
[03:17] makes life easier on production systems, sticking to the LTSes
[03:18] (so long as you backport software where necessary to support the applications you have to run)
[03:21] why must the rhel installer be so annoying compared to ubuntu
[03:22] +1
[03:27] Any advice on where I should point the 'root' path on my server? I often hear that /var/www/ is not a good place
[03:29] and I always thought it was /root
[03:29] Patrickdk: lol
[03:29] acmehandle: what's wrong with an htdocs of /var/www/?
[03:29] it depends on a crapload of things
[03:29] nothing to do with not a good place :)
[03:30] The great thing about the internet is that anyone can be an admin.
[03:30] it is as good a place as any other, depending on how you *configure* your server
[03:30] sarnold: I honestly don't know. For the django framework I hear one thing,
[03:30] for rails I hear another.
[03:30] when talking to apache it's another.
[03:30] that is cause they all have their own defaults
[03:31] just adjust it, and make sure you maintain proper security
[03:31] though, with django/rails/...
[03:31] they will be working as fastcgi likely
[03:31] so they don't even have to care
[03:31] as long as you direct the aliases for their static content correctly
[03:32] they could even be on totally separate servers, as far as apache cares
[03:33] acmehandle: aha. :) there's a fair amount of cargo-culting in some of those communities. It might not hurt to ask "why?" when something seems arbitrary :)
[03:34] :)
[03:34] Right, the 'why' is where I have to remember to put on an asbestos suit
[03:34] normally the answer is, cause you have to change it so many times!
[03:34] sometimes yes :) hehe
[03:35] I'm a grumpy old grouch so I don't much care one way or the other, hehe :)
[03:35] I'm admining my own vps webserver. So honestly I don't care. I'm going to experiment with nginx this time around and hopefully experiment with django and rails
[03:35] and some javascript
[03:36] I personally don't even care about nginx or apache for that matter, but from what I gather so far if I want to do any kind of web sockety stuff I need nginx
[03:36] But I thought there was some kind of genuine security concern, the way everyone makes it sound, about /var/www/ or wherever.
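A hypothetical sketch of the split Patrickdk describes, where the docroot choice barely matters: apache serves the static aliases itself and proxies everything else to the framework, which could live on another machine entirely. The hostname, paths, and backend address are invented for illustration (apache 2.4 syntax):

```sh
sudo tee /etc/apache2/sites-available/myapp.conf >/dev/null <<'EOF'
<VirtualHost *:80>
    ServerName myapp.example.com
    # Apache only serves the static assets directly...
    Alias /static/ /srv/myapp/static/
    <Directory /srv/myapp/static>
        Require all granted
    </Directory>
    # ...and hands everything else to the app server; exclusions must
    # come before the catch-all ProxyPass.
    ProxyPass /static/ !
    ProxyPass / http://127.0.0.1:8000/
    ProxyPassReverse / http://127.0.0.1:8000/
</VirtualHost>
EOF
sudo a2enmod proxy proxy_http && sudo a2ensite myapp && sudo service apache2 reload
```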
[03:38] acmehandle: I found nginx easier to configure than apache; I've never pushed either one far enough to worry about their performance
[03:39] acmehandle: I really don't like the debian style of having the apache or nginx process owned by user www-data --- the name encourages people to set the owner of their web contents to www-data. But you don't want the web server to have write access to anything, beyond its log files and maybe a database / fcgi socket ...
[03:40] acmehandle: I wish the web server ran with a username like www-exec or www-prog or something that didn't scream "chown all your files to me"
[03:40] sarnold: and, in the case of dynamic PHP apps like forums, the forums' cache folder is sometimes ok to write to.
[03:41] Yes, thus far that's what I hear quite often. Only thing that bothers me is I spent time on figuring out the proper settings I need for apache on one vps and somehow by magic all my settings were rolled back. So now I'm in the process of transferring to another vps and am starting from scratch, so to speak. At least this vps runs ubuntu 14 whereas the other one was 10.04.
[03:41] teward: ahh, yes, I always forget about php. (It's not like I _try_, I just don't think of it often. :)
[03:41] acmehandle: yikes and yikes :)
[03:41] acmehandle: that can sometimes happen when they've got some helper frontend like cpanel or whatever. blech.
[03:42] sarnold: with regards to www-data. Isn't it user: apache if compiled from source?
[03:43] acmehandle: or httpd or something, yeah
[03:44] acmehandle: this is a failing in debian policy, a failing ubuntu has inherited.
[03:44] Ah right.
[03:46] The thing that bothers me most about nginx is its thin license.
[03:47] I get this sense like they can yank the public license at any time
[03:47] then all those big fancy lovely websites running on nginx would be the only ones who could afford nginx
[03:48] heh?
[03:48] why would that matter?
[03:48] the older releases would still be available
[03:48] and can be forked
[03:52] a great many projects have contributor license agreements that allow relicensing to e.g. BSD or MIT -- which amounts to much the same thing
[05:37] I got docmgr up and working but it will not index word documents
[06:56] hi, I installed ruby2.0 on my 12.04 server using the brightbox PPA, but currently those are kept back from upgrade as there seems to be a dep issue "ruby2.0 : Depends: ruby (>= 1:1.9.3.1)". is anyone here also using brightbox?
=== Lcawte|Away is now known as Lcawte
[08:18] Good morning.
[09:45] ubuntu saves the data in /var/lib/postgresql/9.3/main; what exactly happens when postgres version 9.4 comes out?
[09:45] would the data directory then change too?
=== zz_DenBeiren is now known as DenBeiren
[12:19] morning
[12:21] Hey pmatulis, how are you doing?
[12:30] lordievader: tired, I need some ginseng
[12:39] Not coffee?
[12:47] coffee 1st
=== ihre is now known as ihre`bnc
=== ihre`bnc is now known as ihre
=== TonyL is now known as Guest41932
=== Lcawte is now known as Lcawte|Away
=== Lcawte|Away is now known as Lcawte
=== Guest41932 is now known as TonyL
[15:37] Is there a way in apt to set an alternate mirror for a repository should the primary one be unavailable for some reason? What I'm trying to do is force clients to use our internal package repo when onsite but still be able to get updates offsite since the internal mirror will not be facing outside the firewall.
[15:37] K4k: Yep, just have another 'deb' line for the repo in sources.list.
[15:38] K4k: apt ignores the things it can't get to.
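A sketch of the ownership scheme sarnold argues for, with a made-up deploy account and paths: the web server reads content through the group and can write to exactly one cache directory, nothing else:

```sh
# Content belongs to a separate deploy account; www-data only gets group read.
sudo adduser --system --group deploy
sudo chown -R deploy:www-data /srv/www/mysite
sudo chmod -R u=rwX,g=rX,o= /srv/www/mysite
# The one deliberate exception: a cache folder the PHP app really must write to.
sudo chmod g+w /srv/www/mysite/cache
```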
[15:38] Is there a way to set a priority on the deb entries or does it just pick the one that's listed in the file first?
[15:38] K4k: It takes what it can.
[15:38] For example, could I use two different sources.list.d files with 01-internal and then 02-external as the source file names?
[15:38] !pinning
[15:38] pinning is an advanced feature that APT can use to prefer particular packages over others. See https://help.ubuntu.com/community/PinningHowto
[15:39] genii: thanks!
[15:39] K4k: Yer welcome :)
[15:41] K4k: Pinning is something different.
[15:42] K4k: If the repos have the same packages, it doesn't matter.
[15:42] jpds: Its usual usage is to freeze a package at a particular version or to only use one from a particular repository. But it is more flexible than people think.
=== MeltedDed is now known as MeltedLux
[15:43] jpds: yeah, was just reading that... it can set priority but doesn't look like you can pin priority based on repo, only per package.
[15:43] K4k: Another thing you can do is a DNS hijack.
[15:44] Would have to be on the client side using dnsmasq... which sounds ugly and error prone
[15:44] K4k: Have, like, gb.archive.u.c go to an internal IP as opposed to the real one.
[15:44] What's wrong with dnsmasq?
[15:44] Having to muck with DNS resolution client side just seems like a bad idea to me
[15:45] Well, It Works.
[15:45] How would I do that anyway? I would need some sort of conditional based on their interface IP?
[15:46] does apt-cacher-ng help achieve your goal?
[15:46] apt-cacher-ng is so unreliable.
[15:46] squid-deb-proxy++
[15:47] Was looking at approx, apt-cacher-ng and apt-proxy(?) and none of them seem to do what I need the way I need to do it. They all do some part of it though
[15:48] K4k: You tell dnsmasq: if you see a request for archive.ubuntu.com, give it this A record -> 10.0.0.2, etc.
[15:48] Where that A record is your internal mirror.
[15:48] and when they're not on the internal network, how would it fall back to using the actual archive.ubuntu.com address?
[15:49] K4k: Yes.
[15:49] K4k: You set that on your LAN's DNS server.
[15:49] K4k: Nothing special on the clients.
[15:49] I don't have control over the LAN DNS unfortunately :(
[15:49] Well... let me rephrase that
[15:50] It's a windows DNS server. I'm not sure if it can do that
[15:50] Hm. Conceivably you could just have a post-up directive for the ethernet adapter which decides where it's connected, and sets the Dir::Etc::sourcelist "sources.list"; variable to something appropriate
[15:50] if only upstart had a network-changed event you could toggle between sources.list files using it.
[15:51] what genii said.
[15:51] I forgot about post-up
[15:52] some sort of client side resolution timeout would be all I'd need really. `if archive.ubuntu.com; then go 10.0.0.2; redirect after 30s back to archive.ubuntu.com proper`
[15:52] but I'll investigate all of these possibilities. They all sound good.
[15:55] K4k: Apologies for not properly understanding your original question, had to go back up and carefully read it first :)
[15:55] K4k: Could you do a transparent proxy on the LAN?
[15:56] jpds: I don't think so, not easily.
[15:57] afternoon :)
[15:59] can anyone recommend a way to get a file from server1 to server2 using scp as root without hardcoding the password into the script that runs it?
[15:59] NigeyS: SSH key.
[15:59] pubkeyauth?
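Both approaches from the discussion, sketched side by side; mirror.internal.example and 10.0.0.2 are placeholder values:

```sh
# Fallback mirrors: two deb lines for the same suite. apt tries each at
# update time and simply skips a mirror it cannot reach.
sudo tee /etc/apt/sources.list.d/mirrors.list >/dev/null <<'EOF'
deb http://mirror.internal.example/ubuntu trusty main restricted universe
deb http://archive.ubuntu.com/ubuntu trusty main restricted universe
EOF

# The DNS-hijack variant: a single line on the LAN's dnsmasq resolver that
# answers for the archive hostname with the internal mirror's address.
#   address=/archive.ubuntu.com/10.0.0.2
```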
[16:00] NigeyS: And whatever you do, use a forced command: http://binblog.info/2008/10/20/openssh-going-flexible-with-forced-commands/
[16:01] jpds we use ssh keys currently .. if I use an ssh key via a bash script, and it prompts for a password, does that interrupt the script at all? .. trying to scp apache configs to server2 after creating them on server1, but don't want to use a hardcoded password in the script, or passwordless ssh keys, work will fire me for that !
[16:01] NigeyS: The SSH key has a passphrase?
[16:02] no, not by default on AWS instances, if I enable it, it enables passwords for all users right ?
[16:03] NigeyS: You're talking about two different things.
[16:03] oh sorry, see what you mean, passphrase on the key itself
[16:03] NigeyS: for that purpose I typically use Git, actually
[16:03] NigeyS: If the key has no passphrase, it shouldn't prompt for one in the script.
[16:03] currently it doesn't, no. I could add that to server2's ubuntu user, but how do I sudo to get that file in /etc/apache2 within the script ?
[16:04] you can do the transfer with a non-root user and then use a git-hook to put the file from the local git repo into the web directory using root locally on the system
[16:04] oh, that's something I haven't heard of before
[16:05] I guess the other option is to put configs in a dir that doesn't require root access
[16:05] * jpds wonders why system1 should be poking at server2's apache config.
[16:05] Others may have a different opinion on that but that's how I manage all of my websites so that I don't have to deal with sftp or scp when I update site content
[16:05] jpds cluster of web servers, configs have to be kept in sync
[16:05] or you could configure ACLs for limited access to the directory by an unprivileged user
[16:05] NigeyS: Well, use something like Puppet for that.
[16:06] ^^^
[16:06] can't have test.com exist on server1 and not server2, as they're load balanced.
[16:06] puppet
[16:06] NigeyS: Puppet, Chef, salt, ansible, are all built for this kind of thing.
[16:06] that's a bit overkill for something that's only going to happen a few times a month at the most.
[16:07] Your life will be a happier place than having root run around with shell scripts.
[16:07] that's a fair point
[16:09] ideally I'd like the configs on the nfs mount so they don't have to be copied anywhere, but damned if I can find how to tell apache to look there for them, on ubuntu at least.
[16:09] symlink?
[16:09] NigeyS: Yeah, and there's the NFS server dying.
[16:09] NigeyS: And your HA cluster going along with it.
[16:09] I really don't want to symlink to nfs for that very reason
[16:09] Just HA all the things
[16:10] Automate all the things.
[16:10] so far everything but this is automated :)
[16:10] soooo... then we're back to puppet jpds?
[16:10] :P
[16:10] Why not.
[16:10] lol ok! I'll go look at puppet :)
[16:10] If not puppet then, personally, I'd use the git-hooks but even that's kind of iffy
[16:11] NigeyS: And with puppet, you can tweak a lot more than just Apache.
[16:11] that's true, I will go read :) thanks for the advice
=== Xbert is now known as Guest12533
[16:12] while I'm here, any of you ever had a situation where your gss.d and statd logs were filling up with lines of "y" to the point where it uses 30GB in a few hours ?
[16:13] Is anyone here using foreman? I am working on our package management systems, since we have to re-vamp everything for RHEL7 anyway, and saw that Foreman can manage both Redhat and Ubuntu packages, but some material was talking about using Katello as well; is that something that works with Ubuntu or is that solely a RHEL thing?
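A sketch of the forced-command setup from the linked post, assuming a dedicated passphrase-less key that is allowed to do one thing only; the key name, account, and paths are hypothetical:

```sh
# On server2, in the receiving user's ~/.ssh/authorized_keys, pin the key
# to a single scp sink and strip everything else it could do:
#   command="scp -t /home/deploy/incoming/",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-rsa AAAA... deploy@server1
# On server1, the script side stays an ordinary scp with that key:
scp -i ~/.ssh/deploy_key /etc/apache2/sites-available/test.com.conf server2:/home/deploy/incoming/
# A root-owned cron job or hook on server2 can then move the file into
# /etc/apache2 after validating it.
```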
[16:14] And how do you like foreman if you are using it?
[16:15] I've been meaning to try it but haven't had time to do so yet.
[16:15] hi
[16:16] anyone know if pxelinux.0 is added on the FIX?
[16:16] Ameurux: FIX?
[16:16] it's a bug
[16:16] Which bug?
[16:16] pxelinux.0 is missing on 14.10
[16:16] Ameurux: Erm, no.
[16:16] Ameurux: pxelinux.0 is a file you're supposed to create for your PXE server.
[16:17] ok, thx, I'm just trying to get a PXE server working on 14.10
[16:17] will give it a try
[16:17] thx
[16:17] genii: Looks like a post-up.d script will do what I need. I can have it try to resolve the address for our internal mirror and if successful I can set a line in /etc/hosts to re-point archive.ubuntu.com to our internal server
[16:18] * jpds really doesn't like the sound of network stuff poking files in /etc.
[16:19] * or set up something in dnsmasq
[16:20] One day you'll wake up and find your /etc/hosts file is empty.
[16:20] heh... yeaaahhhh
[16:20] PUPPET!
[16:20] :P
[16:23] Warning: Do not use this module on an existing Apache setup. It will purge any Apache configurations that are not managed by Puppet.
[16:23] that's not very good of puppet.. lol
[16:24] NigeyS: It is.
[16:24] NigeyS: If it's not managing it, it shouldn't be there.
[16:24] but I have already set up and installed apache..
[16:25] It's to avoid config conflicts.
[16:27] NigeyS: Last thing you want is to have added "www.test.com" by hand.
[16:27] NigeyS: Then add a vhost in Puppet for "www.test.com".
[16:28] NigeyS: And then Apache dying as there's two configs for that domain.
[16:31] yup, there's that !
[16:33] It's fairly straightforward to tell puppet "deploy this config file". Though it is proper to use the Apache modules, you can just say "put this file here"
[16:34] If all you're using it for is to deploy a couple of configs to two different systems, that's going to be the path of least resistance to get it working and then you can worry about migrating to the "proper" way later
[16:36] NigeyS: chown the config files you want to copy to some non-root sentinel account and scp using that?
[16:36] jrwren: ...
[16:37] XD
[16:37] NigeyS: are you using an ssh-agent?
[16:39] OH! You don't need a puppet server to do what you want. You could put the puppet manifest that manages the config file on an NFS share and then there is a flag for puppet-agent you can use on the client to just read from that "local" manifest file
[16:39] I just remembered that
=== webwiz is now known as jturek
[16:48] sorry just got back.. let me read up :)
[16:49] jrwren good idea, turns out this script I've been writing doesn't want to work properly anyway lol maybe that's a sign ;)
[16:54] anyone care to take a look and see why I'm getting some funky errors? http://pastebin.com/HL36G0Tp
[16:54] ./Test2.sh: 10: ./Test2.sh: function: not found
[16:54] ./Test2.sh: 13: [: =: unexpected operator
[16:54] ./Test2.sh: 21: [: =: unexpected operator
[16:57] not really
[16:57] but likely cause the script was written for bash and not dash
[16:58] well, it works fine without my new $restartapache commands, but they're just a duplicate of $needdb .. so I don't get why it doesn't work
[16:58] NigeyS: bash v. dash?
[16:59] wouldn't that cause it to not work at all in bash though?
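A hypothetical reconstruction of those Test2.sh errors; the function and variable names echo the paste but are guesses. Run under dash (which is what you get when the #!/bin/bash line goes missing and the script is invoked via /bin/sh), the bashisms fail exactly as quoted:

```sh
#   function createsite { ... }      -> "function: not found" (bash-only keyword)
#   if [ $restartapache = "yes" ]    -> "[: =: unexpected operator" when the
#                                       variable is empty or unset
# The portable spelling below works in both bash and dash:
createsite() {
    echo "creating site $1"
}
restartapache="yes"
if [ "$restartapache" = "yes" ]; then   # quoting keeps [ from seeing a missing operand
    createsite test.com
fi
```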
[16:59] How can I find out if openssl was built with tls compression enabled?
[16:59] NigeyS: no. anyway, I think you want function createsite() {.. }
[17:00] Sorry if this sounds like a stupid question
[17:00] jrwren okies, I'll keep fiddling
[17:03] jrwren works fine; I had removed #!/bin/bash by mistake
=== markthomas|away is now known as markthomas
[17:09] NigeyS: :)
[17:10] but just realised that script will cause apache to fail
[17:10] * NigeyS needs more coffee
[17:11] acmehandle why does it matter?
[17:14] NigeyS: http://paste.ubuntu.com/9355999/
[17:14] NigeyS: That was easy.
[17:15] jpds legend! lol
[17:15] Because I don't want RC4 enabled on my server.
[17:15] what does compression have to do with rc4?
[17:15] but there's a few extra things I'd have to get puppet to do as well, like insert the user data to the auth database etc
[17:15] NigeyS: Though, I've not tested it, and you'll probably need to load a CGI module.
[17:16] patdk-wk: it is similar in nature.
[17:16] heh?
[17:16] rc4 is a cipher
[17:16] compression is, well, compression
[17:16] totally different in nature
[17:16] jpds I need to change that script quite a bit as far as the vhost settings go, we don't use cgi anymore for example, and I don't think all those options work with 2.4
[17:17] patdk-wk I'll admit I'm not an expert but: https://community.qualys.com/blogs/securitylabs/2013/03/19/rc4-in-tls-is-broken-now-what
[17:17] I can understand rc4, it's not very secure anymore
[17:17] In the System Administrators section
[17:17] jpds: It looks like dnsmasq, using the -y flag, can select an IP from /etc/hosts for a given hostname that is on the same subnet. That might work for what I was trying to do earlier.
[17:18] acmehandle, yes, but that website is talking about scope
[17:18] the only tls compression attack is crime
[17:18] and that requires you to send repeated data at the start of the session
[17:18] it doesn't apply to openvpn
[17:19] hi all. I have a wee problem here, not really related to ubuntu server, but I hope it's not too offtopic. I login to this host, call it A, and I run xfreerdp from there and to a windows server on a closed-off network
[17:20] now, xfreerdp and xquartz aren't good friends, so the keyboard doesn't work. I can't use rdesktop, since the windows servers require crypto not supported by rdesktop. I don't have a linux machine here atm, so wonder if it's possible to do this with some ssh tunnel magick
[17:21] the host A is heavily firewalled and only answers to 22/tcp. From there on, it's fairly open
[17:22] sure, as long as they didn't disable ssh tunnels/forwarding
[17:23] patdk-wk: I didn't :P
[17:23] it's common for me to disable those now :)
[17:23] had too many people abusing them
[17:23] patdk-wk: not many have access to this box
[17:24] I had some users' passwords get compromised
[17:24] and the new *owners* used ssh to portforward and attack other systems
[17:24] patdk-wk: it requires both key and password, so it's a bit hard that way
[17:24] patdk-wk: and company policy is to require password protected keys
[17:29] RoyK: ssh -n -N -L 3389:windows-server:3389 A ; remote desktop to localhost
[17:32] if you use remmina, it has an option under ssh to do it for you :)
=== guampa_ is now known as guampa
[17:43] ok soooo after I got this thing up and running and everything seems to be working right, I transferred the vm to the production server, and although it seems to be up and working correctly I can't log into the web interface. The ip address is different, is there something I need to change ...listening address or such on the alfresco server?
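The compression checks patdk-wk points at, spelled out; example.com stands in for a real host:

```sh
# Build-time check: if the flags printed by "openssl version -a" include
# -DOPENSSL_NO_COMP, compression support was compiled out entirely.
openssl version -a | grep -i comp

# Run-time check against a live server; the handshake summary should read
# "Compression: NONE" when TLS compression is disabled.
openssl s_client -connect example.com:443 </dev/null 2>/dev/null | grep -i compression
```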
[18:24] I have never set up an email server. can someone look at this tutorial and tell me how I should set up the outgoing server on my Mail client? http://www.krizna.com/ubuntu/setup-mail-server-ubuntu-14-04/
[18:24] I used the standard default setting (port 25 with password auth)
[18:26] ok... don't know why but it just... started working
[18:26] hadifarnoud: Use this one: https://help.ubuntu.com/community/Postfix
[18:27] is it possible to connect this to an existing filer? We have a samba based cifs server that already has a bunch of documents on it.
[18:27] tyhicks, kirkland: have you seen this bug ? https://bugs.launchpad.net/ecryptfs/+bug/1328689
[18:27] Launchpad bug 1328689 in ecryptfs-utils "ecryptfs-utils does not work with Ubuntu 14.04.1" [Undecided,Confirmed]
[18:32] jamespage: I've seen the bug report but haven't had a chance to look into it
[18:33] bekks: my postfix config for smtp is "submission inet n - - - - smtpd"
[18:33] not sure what "chroot" is for but it is not set to "n"
[18:46] what is starttls in postfix? right at the end of this tutorial, there is an example conf for the mail client. I have no option for "STARTTLS" on OSX Mail.
[18:47] hadifarnoud: might be 'SSL/TLS' or just 'TLS'
[18:47] (at least in OSX mail)
[18:49] teward: I've got 'Use SSL' next to port and 'TLS (External client certificate)' in authentication.
[18:49] bit confused. that means I have to provide a certificate to OSX Mail?
[18:51] no idea
[18:51] funky osx
[18:51] teward: also, there is an option for TLS certificate.
[18:51] bloody OSX Mail
[18:52] I guess the SSL check box next to the port is sort of TLS.
[18:52] fault might be with my server setup
[19:06] kirkland: re: bug #1328689> When running the adduser --encrypt-home command, it proceeds to try to mount the home directory before prompting for the user's password
[19:06] Launchpad bug 1328689 in ecryptfs-utils "ecryptfs-utils does not work with Ubuntu 14.04.1" [Undecided,Confirmed] https://launchpad.net/bugs/1328689
[19:07] kirkland: so a valid auth tok obviously isn't in the kernel keyring yet
[19:09] seems like my server blocks connections from other IPs. I get this error in syslog "SSL_accept error from unknown[31.159.97.167]:"
=== Lcawte is now known as Lcawte|Away
[19:35] hello :)
[19:35] hello there
[19:36] guys, I have a problem with the Unity desktop
[19:36] I can't find resolution
[19:36] abrams: try #ubuntu , this is the channel for ubuntu server
[19:36] see topic ↑
[19:36] When I try to drag and drop an icon from unity to the desktop
[19:37] anyone know, if I put an IncludeOptional into apache2.conf and it points to vhost configs, do I still have to run them through a2ensite?
[19:37] ok
[19:37] sorry :)
[19:37] abrams: it's ok
=== markthomas is now known as markthomas|away
=== markthomas|away is now known as markthomas
=== bilde2910 is now known as bilde2910|away
[21:32] what's up all. How can I compare many text files' contents in two directories?
[21:33] diff -r dir1 dir2?
[21:33] will that compare the files' contents or just whether the files exist?
[21:51] anyone know if there's software out there that can basically let me run processes with the GPU instead of the onboard CPU?
[21:57] Probably the closest thing would be anything compiled using CUDA, but you'd also need an NVidia card for that
[21:59] that's ok, so I'm not familiar with CUDA, is that a compiler?
[22:02] elliotd123: It's a parallel-processing library from NVidia. It uses the cores of their GPUs.
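The master.cf line hadifarnoud quotes, annotated column by column; the -o overrides shown are a common pattern for the submission port, not necessarily what that tutorial configures:

```sh
# service    type  private unpriv chroot wakeup maxproc command
# submission inet  n       -      -      -      -       smtpd
# "-" means "use the built-in default"; the fifth column is the chroot
# flag that came up above. Typical overrides under the submission line:
#   -o smtpd_tls_security_level=encrypt   # require STARTTLS: a plain TCP
#                                         # session upgraded to TLS on 587
#   -o smtpd_sasl_auth_enable=yes         # then password (SASL) auth
# Mail clients usually label STARTTLS-on-587 as plain "TLS" or "Use SSL";
# "SSL/TLS" proper (TLS from the first byte) is port 465 instead.
postconf -M submission/inet   # print the live submission service definition
```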
[22:02] elliotd123: If an app is compiled from source with CUDA enabled, it will use the NVidia card to run on.
[22:03] Sounds intriguing. I'll look into that. Thanks, genii
=== Lcawte|Away is now known as Lcawte
[22:19] Is there a difference if I install something using just apt-get versus something from a ppa?
[22:20] I often see suggestions to install something using a PPA and I am wondering how necessary that might be
[22:20] it is the same thing
[22:20] just ppa normally means not maintained by ubuntu
[22:21] Some PPAs are more trusted than others, like for instance xorg-edgers
[22:21] I trust my ppa a lot
[22:21] I have a problem with Bind9.8 and Samba4 (latest git) on Ubuntu Server 12.04.5 LTS. I'm trying to get DNS_DLZ working. The DNS server was starting without the dynamic zones, and doing lookups fine, but integrated it hasn't started; AppArmor is throwing a permissions error on /usr/local/samba/private/dns/sam.ldb (just wants r.) I see the line for that file in /etc/apparmor.d/local/usr.sbin.named (named is the bind user accoun
[22:22] how do I give a different user access to a single file
[22:22] I still need to allow the access to the file from the original group and user
[22:23] make a new group
[22:23] enable and use ACLs?
=== MeltedLux is now known as MeltedDed
[22:32] mapleton: you were cut off at "bind user accoun"
[22:33] mapleton: if you have a line /usr/local/samba/private/dns/sam.ldb r, in your /etc/apparmor.d/local/usr.sbin.named file and your main /etc/apparmor.d/usr.sbin.named file has an #include line, then you just need to reload the profile; apparmor_parser --replace /etc/apparmor.d/usr.sbin.named should do it
[22:33] thanks, sorry.
[22:33] was basically complete.. gonna give that a shot
[22:45] okay... got a little further. "could not create /var/run/named/session.key" I'm guessing it's a permissions issue, since I'm no longer running bind as the default. The samba wiki mentions (for the zone files) to chown named:named and chmod 640. Does that apply here?
=== bilde2910|away is now known as bilde2910
[22:46] mapleton: sorry, dunno there; it could be apparmor again.
[22:46] mapleton: check again for more DENIED lines in dmesg
[22:53] No AppArmor DENIED now, just a couple of permission errors, both in that directory. Is it safe to add the chown and chmod 640 permissions to /var/run/named/?
[22:53] mapleton: it's probably safe
[22:54] .. I'm not an expert on either one, but some user account has to own them, and it could either be bind or samba, depending upon how they are modified..
[22:54] one more thing, I guess: how do I find the current ownership and permission stats of a file
[22:54] mapleton: ls -l is the easiest
[22:55] mapleton: stat /path/to/filename can also show you
[22:55] Hrmmm, does postfix filter relay recipients only during an actual connection to the relay?
[22:56] many thanks for your help, btw.. sarnold.. it's my second day on ubuntu
[22:56] mapleton: welcome aboard :)
[22:56] * keithzg_ is seeing a server just deferring mail to addresses that in theory should be filtered out by our relay_recipients
[22:57] mapleton: could you file a bug against apparmor (ubuntu-bug apparmor) once you've gotten it sorted? we may want to add the rules you needed to the default profile
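sarnold's local-override flow as concrete commands, assuming the stock profile layout; the sam.ldb path is the one from the log:

```sh
# Append the rule to the local override file (kept across package upgrades):
echo '/usr/local/samba/private/dns/sam.ldb r,' | \
    sudo tee -a /etc/apparmor.d/local/usr.sbin.named
# The main profile only honours that file via its matching
# "#include <local/usr.sbin.named>" line; then reload the profile:
sudo apparmor_parser --replace /etc/apparmor.d/usr.sbin.named
# Any remaining denials show up as "apparmor=DENIED" lines in dmesg.
```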
[22:57] will do, thanks
[22:59] although I do feel a bit of a 'computer user, non-technical' so was thinking it's more user error than anything ;)
[23:02] mapleton: hehe, not a bad first instinct, but it could be that the others who have done this setup before you didn't report bugs either, hehe :)
=== Lcawte is now known as Lcawte|Away
[23:08] screw it... chmod -R 666 /
[23:10] hehe
[23:23] okay.. I've exhausted the troubleshooting steps I could guess at. I even added (and reset) the directory to apparmor (** rwk) in case... "Could not open '/var/run/named/named.pid'", "could not create /var/run/named/session.key". I used stat, and I'm not entirely sure I know what I'm looking for, but its mode is 664
[23:25] I changed ownership to named. I assume the error "named[1913]" means that's the executable context (the daemon named)
[23:28] mapleton: correct, the 'named' comes from the process's "comm" field (first 16 bytes) and the 1913 is the pid of the process
[23:31] chown -R named /var/run/named did it. one more error, but hey.. probably similar
=== jvwjgames_ is now known as jvwjgames
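A recap of the fix that closed out the session, with a quick verification; the named:named ownership follows the samba wiki guidance mentioned above:

```sh
sudo chown -R named:named /var/run/named
# bind9 writes named.pid and session.key here at startup, so confirm the
# daemon's user can actually create files in the directory:
sudo -u named touch /var/run/named/probe && sudo rm /var/run/named/probe
stat -c '%U %G %a %n' /var/run/named
```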