[00:08] Hi.
[00:08] I'm having a problem with nfs on my server.
[00:10] Is anyone here familiar with nfs?
[00:11] Hello?
[00:11] you should usually explain the problem :/
[00:11] in order to get help
[00:11] then if anyone has experience they'll respond
[00:11] Ok, sorry, I was waiting to see if there was anyone here.
[00:11] alternatively, #ubuntu
[00:13] The server is running Ubuntu 11.04 server. The client, which is running Mac OS X, is not recognizing the files and folders on the NFS mount. Here are the settings on the server and client.
[00:13] SERVER SETTINGS:
[00:13] Export Folder:
[00:13] Export Folder Owner:
[00:13] jon:80 (me and admin group)
[00:13] Export Folder Permissions:
[00:13] -rwxrwxrwx
[00:13] Export Options:
[00:13] CLIENT SETTINGS:
[00:13] Remote NFS URL
[00:13] nfs://10.0.1.100/jbondhus
[00:13] Mount Location
[00:13] Advanced Mount Parameters
[00:13] None
[00:13] stop
[00:13] pastebin
[00:13] !pastebin
[00:13] For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://imagebin.org/?page=add | !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic.
[00:14] Ok, here's the URL, sorry, I'm new to IRC.
[00:14] http://paste.ubuntu.com/730553/
[00:16] jbondhus: the OS X NFS stack is somewhat sloppy, using netatalk works better
[00:17] How do I use that?
[00:18] The reason I'm using NFS is because it's the fastest protocol.
[00:23] I have to make NFS work.
[00:26] jbondhus: apt-get install netatalk
[00:26] On the server?
[00:26] NFS might be a bit faster, but you won't notice the difference in most circumstances
[00:26] yes
[00:27] You know what, does ubuntu use afp?
[00:27] Can it support it as a client?
[00:27] ubuntu supports AFP
[00:28] I really doubt using AFP as a client from ubuntu will be the best choice
[00:28] My other laptop uses Ubuntu and my main one uses Mac OS X, and the server runs Ubuntu Server. It has to be compatible with them all.
[00:28] but the file ownership will be in sync whether you use afp or nfs or smb or whatever
[00:29] But smb is slower, and AFP is more compatible with mac os x.
[00:30] I use AFP for my mac and NFS for unices and SMB for windoze
[00:30] For a heterogeneous environment Samba will be the least painful
[00:30] MAYBE NFS if you have OS X and Linux, but I wouldn't want to bet on it
[00:30] Well I have to have it shared. I found a great article on how to set up AFP on ubuntu.
[00:31] twb: mixing AFP and NFS works well
[00:31] it's just file ownership and modes after all
[00:31] NFS had permissions problems. Check that, HAS permissions problems. I'm just going to purge the nfs packages and rm -rf the export. That way I can start with a clean slate instead of having a junk folder that's chowned to nobody.
[00:31] RoyK: I was assuming you didn't want the hassle of maintaining two whole network fs stacks
[00:31] I think I'm going to go with AFP. It sounds the most painless.
[00:31] jbondhus: well, unless you're going to run kerberized CIFS, NFSv4 or AFS, you are going to have permissions issues.
[00:32] Unkerberized network filesystems simply do not enforce access restrictions.
[00:32] What do you mean by unkerberized? I haven't heard that term before.
[00:32] Meaning you don't have kerberos authentication set up
[00:32] without Kerberos
[00:32] Ok, that makes more sense.
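[Editor's note: for reference, a minimal sketch of the netatalk route suggested above, as it worked with the netatalk 2.x packages of that era; the export path and volume name here are hypothetical, not taken from the paste:]

    # on the Ubuntu server
    sudo apt-get install netatalk
    # publish a directory as an AFP volume (netatalk 2.x AppleVolumes syntax)
    echo '/home/jbondhus/share "Share" options:usedots,upriv' | \
        sudo tee -a /etc/netatalk/AppleVolumes.default
    sudo service netatalk restart
    # on the OS X client, use Finder's "Go > Connect to Server":
    #   afp://10.0.1.100/Share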
[00:35] Is AFS the same thing as AFP?
[00:36] !afs
[00:36] What?
[00:36] afs is andrew file system, not really compatible with AFP, which is apple file protocol
[00:36] !afp
[00:37] this bot is too stupid
[00:37] Ok, nevermind then.
[00:40] Bye.
[00:40] Indeed; not compatible at all, completely separate protocol :P
[00:41] I only mentioned AFS because it's kerberized; I wouldn't recommend it for anyone that isn't a university
[00:41] He just logged off.
[00:41] ersi: I was speaking for the benefit of lurkers :P
[00:42] 'k. :-)
[01:19] stupid issue: got two sata four-port pci-e cards, and when I boot, the disks connected to them change address. can I lock in the address?
[01:21] yaboo: sorry, are these ethernet cards or sata cards?
[01:21] twb sata cards
[01:21] So what address is changing, the pseudo-SCSI bus address?
[01:21] four port, both cards have disks connected to them
[01:22] two, e.g. on one boot disk is /dev/sdc, next boot it is /dev/sdg
[01:22] and boot after that its back to /dev/sdc
[01:22] trying to do software raid
[01:23] yaboo: that is how disks work. It is only chance that you have never had this problem until now
[01:23] yaboo: to use software raid, just refer to the array by its UUID rather than by the device names of its array nodes
[01:24] twb so disks will change address during boot
[01:24] Yes
[01:24] twb, sort of stupid, even using the disks normally means I cannot put them in fstab, because next boot the id has changed
[01:25] yaboo: fstab also supports UUIDs
[01:25] two I understand, can I just lock the disks down
[01:25] I doubt it
[01:25] ok
[01:25] If I had to guess I would say it's because you have two identical cards, so they are more likely to race
[01:26] two yes they are the same cards
[01:26] But in theory it could happen on any hardware, and the Right Thing is to use UUIDs
[01:26] ok
[01:26] If you're installing Ubuntu it usually should use UUIDs by default
[01:27] twb, thats cool, but when I reboot the array now comes up as /dev/md127 and fails to mount, even using uuid
[01:28] I do not know why that is.
[01:28] that is???
[01:28] Did you update /etc/fstab and /etc/mdadm.conf and run update-initramfs -u -k all?
[01:28] no
[01:28] Do so.
[01:29] so update /etc/fstab with the uuid and the /etc/mdadm/mdadm.conf then run update-initramfs -u -k all?
[01:29] OK, pastebin your current fstab and mdadm.conf
[01:29] ok
[01:30] yeah, UUID by default was nailed down years ago
[01:30] I remember dealing with it in 5.10
[01:30] When was the sg transition?
[01:30] qman__: ^^
[01:30] well, dealing with this problem
[01:30] qman__: I'd have said more like 2007, but whatever.
[01:31] in any case, long enough
[01:31] qman__: I expect he's just following an ancient howto or something
[01:32] twb http://pastebin.com/DSgqZ4xh
[01:33] yaboo: you have no arrays in mdadm.conf
[01:33] yaboo: you need something along the lines of this:
[01:33] ARRAY /dev/md0 level=raid1 num-devices=3 UUID=aaebe741:68a1b213:7234de3b:cd66fef8
[01:34] two ok, seems I am following an old howto then
[01:34] You can get the UUID by doing mdadm --detail /dev/md127 or so
[01:34] change raid=6 for me I guess
[01:34] also, my fstab doesn't use that UUID format for the array
[01:34] it uses the same format as the rest of the disks
[01:35] Yes, in fstab it would look like this:
[01:35] UUID=58d2c859-912a-4937-bbb4-d9f9edd16232 /boot ext2 noatime,nodev,nosuid,noexec,ro,sync 0 2
[01:35] Note the UUIDs will be different -- mdadm takes the *array* UUID; fstab takes the *filesystem* UUID.
[01:35] The filesystem UUID can be obtained with "blkid /dev/md127" or "tune2fs -l /dev/md127"
[01:36] yaboo: also you seem to have the root filesystem defined twice in fstab
[01:36] ok
[01:36] yeah, that's a big problem
[01:36] In future, you should check the date on the howto before you follow it :-)
[01:37] while the concepts haven't changed much, the little things have
[01:40] two change my line to RRAY /dev/md0 level=raid6 num-devices=6 UUID 8c4113cd:9dee4f90:bc191129:c5bc8b2a /home/storage ext4 errors=remount-ro 0 1
[01:40] RRAY /dev/md0 level=raid6 num-devices=6 UUID 8c4113cd:9dee4f90:bc191129:c5bc8b2a /home/storage ext4 errors=remount-ro 0 1
[01:40] two changed my line too ARRAY /dev/md0 level=raid6 num-devices=6 UUID 8c4113cd:9dee4f90:bc191129:c5bc8b2a /home/storage ext4 errors=remount-ro 0 1
[01:40] is this correct
[01:41] My name is "twb" not "two"
[01:45] sorry twb
[01:46] is this correct syntax or not
[01:47] that is not correct
[01:48] the ARRAY bit goes in mdadm.conf, the /home/storage blah blah belongs in fstab
[01:51] ok have to wait till server reboots
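[Editor's note: pulling the above together, a minimal sketch of the fix twb is describing; the device and mount point are taken from the conversation, but the UUIDs must of course come from your own array:]

    # array UUID -> /etc/mdadm/mdadm.conf (one ARRAY line per array)
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
    # filesystem UUID -> /etc/fstab
    sudo blkid /dev/md0
    # then add a line like this to /etc/fstab (UUID from blkid, NOT from mdadm):
    #   UUID=<filesystem-uuid> /home/storage ext4 errors=remount-ro 0 2
    # rebuild the initramfs so the array assembles as /dev/md0 at boot
    sudo update-initramfs -u -k all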
[02:32] Hello Everyone.
[02:32] I was wondering if there was someone that could help me with an external hard drive mounting issue
[02:36] !ask
[02:36] Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
[03:42] I have my modem, a Speedtouch 516, and a router, a DLink DIR-615. I am having a problem with port forwarding. It is not working. I seem to be having an issue with the modem. It sees my router as 192.168.1.64, where my router's address is actually 192.168.0.1. Not sure what the issue is here.
[03:45] Unless your router or modem is running Ubuntu, that sounds like a problem for your router/modem vendor, not us
[04:06] Maybe a bad modem?
[04:18] All modems are bad modems
[04:31] Bridge mode versus PPP mode maybe the issue with the modem?
[07:50] have installed debian squeeze xfce4, seems there is no kernel source, I mean there is no /usr/local/linux directory. how to apt-get it?
[07:53] * greppy looks at the channel name
=== jussio1 is now known as jussi
=== sanderj_ is now known as sander^work
[09:21] New bug: #887035 in bacula (main) "bacula director killed because of an "out of memory" condition" [Undecided,New] https://launchpad.net/bugs/887035
[10:26] New bug: #887060 in unixodbc (main) "'./usr/share/doc/odbcinst1debian2/NEWS.Debian.gz' is different from the same file on the system" [Undecided,New] https://launchpad.net/bugs/887060
[11:19] hey, I downloaded ubuntu-10.04-server-cloudimg-i386.tar.gz and SHA256SUMS.gpg from https://uec-images.ubuntu.com/releases/10.04/release/, but I get a bad signature when I try to verify. Any idea what's up?
[11:23] runasand: sounds like a corrupt source or corruption in transit
[11:23] runasand: (or the image has been updated and the signature hasn't)
[11:24] ikonia: ok, so it's not just me doing it wrong. The checksum in SHA256SUMS matches, so that's something.
[11:24] ikonia: any idea who I should poke to have the signature updated?
[11:25] a good question,
[11:25] I don't know who maintains that stuff, at one point I didn't even think it was official
[11:26] https://bugs.launchpad.net/ubuntu-on-ec2 seems like the best place
[11:26] ikonia: hah, ok, the FAQ actually points users to this IRC channel :)
[11:26] I'm sure that is correct, I just don't use those images, so have never really got involved
[11:29] ikonia: heh, figured it out, seems like I was just doing it wrong :)
[11:30] explain ?
[11:31] basically, you fetch SHA256SUMS, SHA256SUMS.gpg and the tarball, verify SHA256SUMS with SHA256SUMS.gpg, and then check that the sha256sum of the tarball matches what's in the SHA256SUMS file
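[Editor's note: a minimal sketch of the verification recipe runasand describes, assuming the Ubuntu image-signing key is already in your GPG keyring:]

    # fetch checksum list, detached signature, and the image itself
    wget https://uec-images.ubuntu.com/releases/10.04/release/SHA256SUMS
    wget https://uec-images.ubuntu.com/releases/10.04/release/SHA256SUMS.gpg
    wget https://uec-images.ubuntu.com/releases/10.04/release/ubuntu-10.04-server-cloudimg-i386.tar.gz
    # verify the checksum file against its detached signature
    gpg --verify SHA256SUMS.gpg SHA256SUMS
    # then compare the tarball's sum with the now-trusted list
    sha256sum ubuntu-10.04-server-cloudimg-i386.tar.gz
    grep ubuntu-10.04-server-cloudimg-i386.tar.gz SHA256SUMS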
[11:41] howdy.
[11:42] what's the use of the "backup" user automatically created on an ubuntu install?
[11:42] Is there any harm done if I make use of it to set up an rsync from a remote server?
[11:42] (ubuntu server, 10.04 lts)
[11:43] I was planning on doing key-based login, just wondering if I should create a new user or reuse the existing backup user.
[11:43] RoyK: you alive? :)
[12:53] hello! Is the maintainer of puppet/facter around? I think the last upgrade in maverick broke it
[12:55] I get this: http://pastealacon.com/29045 (worked last friday) i guess it might be because of https://launchpad.net/ubuntu/maverick/+source/facter/1.5.7-1ubuntu1.2
[12:59] if I use the version from main instead of proposed, it works.... (sudo apt-get install facter=1.5.7-1ubuntu1)
[13:14] yann2: Can you file a bug for that real quick, please?
[13:15] will do. nice to see you around here soren :)
[13:15] cjwatson: Looks like an SRU regression ^ I forget what the exact steps are from here :-/
[13:17] https://bugs.launchpad.net/ubuntu/+source/facter/+bug/885998 actually already reported
[13:17] Launchpad bug 885998 in facter "facter upgrade crashes puppet" [Undecided,Confirmed]
[13:17] seems I was pointing at the wrong commit
=== medberry is now known as med_temp
[13:19] too bad the package got pushed into -updates despite having this bug reported against it a few days ago :(
[13:21] yann2: Yes, that is quite unfortunate. :(
[13:37] soren: https://wiki.ubuntu.com/StableReleaseUpdates#Regressions
[13:38] adam_g: ^- regression in your facter upload to at least lucid and maverick
[13:38] * cjwatson bumps bug 885998 to critical, though somebody who actually knows their way around facter will need to investigate
[13:38] Launchpad bug 885998 in facter "facter upgrade crashes puppet" [Critical,Confirmed] https://launchpad.net/bugs/885998
[14:09] cjwatson: Ok, thanks, I'll save that for another time (I don't suppose it'll help much at this point?).
[14:14] soren: probably not, no
=== sanderj is now known as sander^work
=== d1b is now known as db
=== db is now known as Guest74077
=== Guest74077 is now known as d1b
=== the-mgt_ is now known as the-mgt
[15:51] you know how device letters start at sda, and go up through sdb, sdc, etc?
[15:51] I'm up to /dev/sdik :/
[15:51] (not a typo)
[15:51] (not useful either)
[16:16] cwillu_at_work: indeed, it's important to have something to make logical sense of those arbitrary device names
[16:16] SpamapS, there's a grand total of 2 drives in this system :p
[16:17] cwillu_at_work: lots of hot swapping?
[16:17] the usb adapter I use to write out images to new drives loses its mind on a regular basis
[16:17] but this is new behaviour as of 3.0
[16:18] hmm, or is it
[16:18] * cwillu_at_work pokes btrfs with a stick
[16:31] Morning all, I've been running my web server since around 2002 (most recent server since 2008). I have been given an updated machine and would like to migrate my sites over slowly. How can I tell Apache or my router (Tomato Linux WRT54GL) which site goes to which machine?
[16:32] I would like to move one site at a time as I learn how to get things like I need (also moving from Wordpress to Drupal)
[16:32] maybe Joomla?
[16:33] nineteen67comet: if you have the sites on different subdomains then you can move them over one by one by changing your dns entries
[16:33] like joomla.mydomain.com
[16:34] if you have the different sites as subfolders like www.mydomain.com/joomla
[16:34] then you need to tell the old apache to forward all requests for each subfolder to the new server
[16:34] you could use apache mod_proxy to do this
[16:35] Okay .. they are all base URLs (www.domainname1.com www.domainname2.com etc etc) .. so Apache can hand a site off to another machine on my network?
[16:35] my folders are all /var/www/domain1 /var/www/domain2 /var/www/domain3 etc etc
[16:36] by using apache mod_proxy you can add ProxyPass /app1/ http://internal1.example.com/app1/
[16:36] to your apache configuration
[16:36] nineteen67comet: right, so you probably have www.domainname1.com pointed to the IP of your old server, you should point it to the IP of your new server.
[16:36] proxy is not really necessary
[16:36] aha .. unkay .. I'll jump on mod_proxy ..
[16:36] nineteen67comet: there are usually several ways to solve this task
[16:37] All my URLs, once they go through afraid.org, hit my router and are sent to my web server (current/old one)...
[16:37] if each site has its own domain
[16:37] yes
[16:37] simply update the dns entries
[16:37] to make each domain's ip point to the new server
[16:37] all domains are in the same box; my ip (external) is not static so I use afraid.org as my DNS.
[16:38] You can probably just copy the old configs to the new server and then just move each DNS pointer one by one.
[16:38] nineteen67comet: oh you only have one real IP ?
[16:38] SpamapS: yes ..
[16:38] well thats different then!
[16:39] my router hands off all port 80, 81, 8080, 8081 to the web server (and 21, 22 etc) .. the IP is a dynamic typical home user IP ..
[16:39] nineteen67comet: for that you probably want to use mod_proxy from the old server to the new.
[16:39] okay .. I'll jump on mod_proxy ..
[16:39] nineteen67comet: inside each <VirtualHost> you can define a ProxyPass and ProxyPassReverse that will send all the traffic to the new server.
[16:40] nineteen67comet: there's an option you'll need so you can use your old configs on the new server...
[16:40] Okay ..
[16:40] nineteen67comet: what version of apache do you have on your old server?
[16:41] Looking now .. it's running Ubuntu Server 10.04.1 .. with Apache .. 2.2.14 ..
[16:42] nineteen67comet: ok good
[16:42] I put 11.10 with apache 2.2.20 on board (was going to stick with LTS but new stuff is just so shiny) ..
[16:43] the new one has 11.10
[16:44] nineteen67comet: yeah, on the 10.04 box you'll want to set ProxyPreserveHost On ...
[16:44] K .. in there now looking ..
[16:44] nineteen67comet: that way the new server will get the same Host: header so the <VirtualHost> sections will work
[16:50] For the "Allow from" portion of ProxyRequests .. do I put the URL that I want forwarded to? in my case www.justinsteiger.com .. seems like this one I found is for incoming Proxy control ..
[16:51] http://httpd.apache.org/docs/2.0/mod/mod_proxy.html is the site I found on Apache .. I'm looking at the basic examples .. SpamapS
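[Editor's note: a minimal sketch of what SpamapS is describing, for one migrated site on the old 10.04 box. Note that ProxyRequests (and its Allow from block) configures a forward proxy and should stay Off here; a reverse proxy only needs ProxyPass/ProxyPassReverse. The internal IP of the new server is hypothetical:]

    # /etc/apache2/sites-available/domainname1 -- on the OLD server
    <VirtualHost *:80>
        ServerName www.domainname1.com
        ProxyRequests Off           # reverse proxy, not a forward proxy
        ProxyPreserveHost On        # pass the original Host: header through
        ProxyPass        / http://192.168.0.50/
        ProxyPassReverse / http://192.168.0.50/
    </VirtualHost>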
[16:52] hi. is there any tool in ubuntu for notifying admins (by mail) when a server gets rebooted without a proper shutdown (like a power outage or reset/power switch etc)?
[16:54] SpamapS: and xranby thank you for the direction .. I'm going to go tinker with it all a bit and see if I can't make it work ..
[17:13] SpamapS: ping
[17:15] roaksoax: pong, sup!
[17:19] SpamapS: just asked pitti to reject a package from oneiric's -proposed... unless you can beat him to it so I can upload a new one
[17:21] roaksoax: ecryptfs-utils ?
[17:22] SpamapS: redhat-cluster
[17:24] roaksoax: guess pitti got it first
[17:26] SpamapS: ok, thanks though ;)
[17:44] zul: ping
[17:44] zul: do you have the list of patches you are forwarding to cobbler?
[17:44] roaksoax: yo whats up?
[17:45] roaksoax: not handy
[17:45] actually handy
[17:45] roaksoax: https://github.com/zulcss/cobbler-oneiric
[17:45] zul: may I take a look? so I can work on re-writing those that can be made less Ubuntu-specific
[17:45] roaksoax: please
[17:45] zul: cool thanks
[17:45] roaksoax: 2.2.2 should be coming out soon so ill rebase and upload
[17:46] zul: awesome!
[17:46] roaksoax: im going to suggest we move away from the git snapshots
[17:47] zul: agreed
[17:52] zul: https://github.com/zulcss/cobbler-oneiric/commit/71ba77b0578c751804643695a71849fdd739da2b this commit should be separated into two commits
[17:53] roaksoax: k
[17:53] zul: other than that, it looks good to me
[17:53] zul: I think you can forward those, then rebase 2.2 to manage the delta better
[17:54] roaksoax: ack
[17:54] zul: cause otherwise it might become a mess :)
[17:57] SpamapS: you did miss something: https://github.com/zulcss/cobbler-oneiric (they are still pending)
[17:57] zul: alright cool!
[18:17] zul: so are we gonna propose upstream cobbler to work with us on adding ITSM concepts to cobbler systems?
[18:18] roaksoax: i dunno i havent thought about it
[18:18] zul: I'll contact upstream and dig into that
[18:19] roaksoax: k
[18:22] hello there
[18:22] this is what i got on my ubuntu server
[18:22] http://pastebin.com/tDS02ine
[18:23] what do i have to do?
=== ppetraki_ is now known as ppetraki
[20:03] hello
[20:03] there
[20:03] i got an HP DL380 ProLiant G7 server
[20:04] since this morning i have been getting an error like this:
[20:04] http://ubuntuforums.org/showthread.php?p=11425367
[20:05] [1131516.069427] ACPI Error: SMBus or IPMI write requires Buffer of length 66, found length 32 (20110112/exfield-285 )
[20:05] [1131516.069433] ACPI Error: Method parse/execution failed [\_SB_.PMI0._PMM] (Node ffff880ea888e7f8 ), AE_AML_BUFFER_LIMIT (20110112/psparse-536 )
[20:08] azert: does it have an addon ipmi card or does it just use onboard ilo?
[20:11] onboard ilo
[20:11] just only
[20:12] pmatulis:
[20:13] azert: i would check what acpi settings are to be found on your box, bios and os level
[20:13] azert: tinker around with that for a while...
[20:14] azert: and study bug #578506
[20:14] Launchpad bug 578506 in linux "[Kernel] ACPI: EC: input buffer is not empty, aborting transaction" [Undecided,Confirmed] https://launchpad.net/bugs/578506
[20:22] is it enough to take out the battery and replace it?
=== allison_ is now known as wendar
[20:26] pmatulis:
[21:01] Hi, I'm running Ubuntu Server 11.10 and I'm struggling to SSH into the server. I've disabled the firewall, and still nothing. SSHd is on and listening (connecting to user@localhost works fine), but remote connections won't connect. What can I try to get it to connect?
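[Editor's note: the 16:52 question about mailing admins after an unclean reboot never got an answer in-channel. One common do-it-yourself approach, sketched here with a flag file that a clean shutdown removes; the paths and address are hypothetical, and a working local MTA is assumed:]

    #!/bin/sh
    # boot-time check, e.g. called from /etc/rc.local (hypothetical setup)
    FLAG=/var/lib/unclean-shutdown.flag
    if [ -e "$FLAG" ]; then
        # flag survived the reboot: the clean-shutdown hook never ran
        echo "$(hostname) came up after an unclean shutdown" \
            | mail -s "unclean reboot: $(hostname)" admin@example.com
    fi
    touch "$FLAG"
    # pair this with a shutdown hook (e.g. an init.d stop script) that does:
    #   rm -f /var/lib/unclean-shutdown.flag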
[21:04] zul, hey, do you see any problems with http://people.canonical.com/~serge/l.debdiff (libvirt debdiff for precise) - the logrotate stuff is new to me, but it does the right thing in my tests.
[21:11] randomcake: see if the client can sense the open port on the server (telnet, nc, nmap)
[21:12] randomcake: look at the server logs (/var/log/syslog)
[21:13] I don't see the port as being open, and nothing is logged in syslog
[21:18] what should I try next pmatulis?
[21:29] how do i permanently remove dhcp3
[21:31] when i do an apt-get remove dhcp3-client it wants to remove ubuntu-minimal as well
[21:50] pmatulis, I'm getting 'no route to host' now I'm doing SSH from another linux box (was using a Windows laptop before)
[21:50] Matrix3000, why do you want to remove DHCP?
[21:51] randomcake: is this ssh server behind a router?
[21:52] or are you doing a local-to-local connection? what does your networking look like
[21:53] all 3 computers are inside the same local network, it's a WiFi router and a switch (the server and my laptop are connected by WiFi)
[21:54] all are able to use the internet, the server is able to connect to my Linux NAS via SSH, but the NAS and laptop are unable to connect to the server
[21:55] pastebin the output of /sbin/iptables -L
[21:55] please
[21:56] on your server w/ sshd
[22:02] randomcake: you doing that or no? i gotta go soon
[22:03] the internet connection seems to have gone a little odd, I can't get to the pastebin, it's empty
[22:03] * hallyn out
[22:04] pastebin is used for you to paste the data in, apply, give me the link so i can read the output of the command i asked for without you flooding the channel
[22:04] so just do /sbin/iptables -L
[22:04] cut, go to pastebin.com
[22:04] paste it in
[22:04] submit
[22:04] give me the url
[22:04] think of it as notepad or something
[22:05] no rules, but iptables-save does give the following: *filter\n:INPUT ACCEPT [3091:2105891]\n:FORWARD ACCEPT [0:0]\n:OUTPUT ACCEPT [2928:277972]\nCOMMIT
[22:06] it's network related whatever it is, are these both linux boxes?
[22:07] I'm familiar with pastebin, but I'm having trouble with new network connections
[22:07] no ip conflicts?
[22:08] yes, both linux boxes, I don't believe so, the IPs are all allocated by the router (with the server specifically given an IP of my choosing, but still DHCP)
[22:10] can you do a telnet to the box port 22
[22:10] see if it picks up
[22:10] if it's no route you're getting i doubt the telnet will have any different results
[22:10] what kind of router is this
[22:10] nope, gives 'ssh: connect to host 192.... port 22: No route to host'
[22:11] it's a BT HomeHub (I think the manufacturer is Thomson)
[22:11] can you pastebin ifconfig results as well as your routing table please
[22:11] can you ping the host?
[22:11] nope
[22:12] and the machine w/ the sshd can send and receive internet data?
[22:12] can you ping the other box from the problematic server?
[22:16] sorry, I'm not used to Xfce (not sure Xfce is the part I'm struggling with), I'm struggling to copy and paste from the terminal :S
[22:17] hilight, copy, paste bud
[22:18] i gotta jet though, look at your router, see if you can install nmap on a working linux server and do nmap -sT ip.address.x.x and see if it sees ssh open
[22:18] it's something routing related
[22:19] maybe the WiFi dongle is rubbish. hilight, fine, copy, fine, paste within the same terminal seems fine, but paste elsewhere, and nothing
[22:20] well, if he gets a 'no route to host' error, it's not very likely nmap would show any open ports.
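[Editor's note: on a single LAN, "no route to host" usually means ARP is failing (for example, client isolation on the Wi-Fi access point) or an actively rejecting host firewall, rather than an actual routing-table problem. A small client-side diagnostic sketch; the server IP is hypothetical:]

    # can we even resolve the server at layer 2?
    ping -c3 192.168.1.100
    arp -n | grep 192.168.1.100   # "(incomplete)" => ARP / AP-isolation problem
    # compare the routing tables on both machines
    ip route
    # once layer 2/3 works, probe the ssh port
    nmap -sT -p 22 192.168.1.100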
[22:56] New bug: #887361 in facter (main) "facter facter_1.5.6-2ubuntu2.2: /usr/lib/ruby/1.8/timeout.rb:60:in `open': execution expired (Timeout::Error)" [Undecided,New] https://launchpad.net/bugs/887361
[23:07] Hello
[23:07] .... . .-.. .-.. ---
[23:10] I work for a small company that does a lot of file sharing. Currently we use a program called DropBox, which can be found at DropBox.com. I am looking to expand my Ubuntu knowledge by setting up a server with Ubuntu and creating a file share with permissions and user folders etc. Does Ubuntu Server handle this well?
[23:11] CantWinn, yes, Ubuntu Server is ideal for this
[23:12] CantWinn, https://help.ubuntu.com/11.04/serverguide/C/samba-fileserver.html should help you get started. You can have your logins based on a Windows server's accounts, but that would be more complicated.
[23:12] CantWinn: ubuntu can do all of that, but dropbox is specialized for it; using ubuntu, you can use tools like rsync to do the same
[23:13] New bug: #887364 in samba (main) "package samba 2:3.5.8~dfsg-1ubuntu2.3 failed to install/upgrade: ErrorMessage: package samba is not ready for configuration cannot configure (current status `half-installed')" [Undecided,New] https://launchpad.net/bugs/887364
[23:13] it depends, are you using dropbox within a single office? or are your staff spread out?
[23:13] Ok, yeah I know DB is specialized but, when we have a lot of PDFs etc, every time someone moves computers it takes them about an hour to D/L and sync the files
[23:14] your average rsync -za will probably work if it's in-house
[23:14] They are mostly located in the main building with a couple of very small satellite offices of about 5 ppl ea
[23:15] CantWinn: what amount of data?
[23:15] RoyK, if it's a small company, and small office, why not a Samba share? CantWinn, is there a VPN or other tunnel connecting the offices?
[23:15] VPN
[23:15] randomcake: depends if they're using windows or not
[23:16] and what sort of existing servers do you have CantWinn?
[23:16] CantWinn: doesn't say anything about bandwidth
[23:16] We have a couple of servers that are running Windows 2008 R2 virtual servers on them.
[23:16] a server per site?
[23:16] CantWinn: samba in AD mode?
[23:16] RoyK, they have a 10Mbps fiber line
[23:17] RoyK, Yes
[23:17] ok
[23:17] The clients' workstations have Win7 on them
[23:17] CantWinn: why not bacula?
[23:17] We have an older Dell PowerEdge R200 server not being used, I thought about trying to set that up and experiment
[23:17] dropbox checks the checksums of files, and avoids overloading your upstream if the file already exists at dropbox (part of their deduplication).
[23:18] personally, i don't like the idea of sharing my files with the world, but it does save lots of upstream bandwidth.
[23:19] They want to get out of DropBox for 2 reasons.. Number 1 is it's not as secure as they'd like, because we are a medical facility handling patient data. Number 2 is because if the internet or DP goes down we can't transfer files to where they need to go
[23:19] RoyK, .. bacula?
[23:19] seriously.
[23:19] CantWinn: set up a backup box with some 2TB drives in RAID, sw raid should suffice well, and use bacula to back them up
[23:19] !bacul
[23:19] !bacula
[23:19] you throw patient data on dropbox?
[23:20] CantWinn: see bacula.org
[23:20] air_, I just started here about 5 days ago.. believe me, I am on this REAL fast
[23:20] air_, any better than rsync does? I'd consider a server per site, rsync to each site, and at the sites access the files using Samba, letting the windows users have mounted drives
[23:21] or (hope this isn't against the rules!) consider the options your existing servers provide, such as Distributed File System in Server 2008 R2...
[23:22] I think what they want and what i would like to do is set up a file share, because this is what happens: (bear with me while I type)
[23:22] randomcake: if you mean to compare dropbox to rsync, sure, you could probably get the same behavior, but you'd have to care about hard links and find duplicate files yourself.
[23:23] randomcake: but then again, I'm against putting anything of value onto dropbox. :)
[23:23] yeah air_, handing anything of serious value to a 3rd party isn't something to be done lightly...
[23:24] no problem CantWinn, understanding your use case is our best way to give decent advice :)
[23:24] Nurse scans chart, that chart gets sent to a program running RDP on the client's computer to the server, after which the file is then uploaded to DP, where another person will take the scanned PDFs and import them to a splitting program; when she is done those splits go to another DP folder where some are picked up by a records person, and the others, if there is an RX, go to the pharmacy, which has a DP for RXs
[23:25] I'd sue you for this.
[23:25] :P
[23:25] CantWinn: then use bacula or some other backup system which keeps the data private
[23:26] RoyK, we want to.. I wanted to implement a system so they can share files into folders that users are granted permission to access via in-house or VPN, then have a SQL backup offsite.. which they currently do NOT have >:(
=== negronjl_ is now known as negronjl
[23:28] CantWinn: for medical data, you'll need to set up a VPN solution to make the data available off-site
[23:28] *believe me when I say this is only scratching the surface of things wrong..* I haven't even gotten into: No redundant switches, No redundant DB Servers.. the list grows
[23:28] anything else is bogus
[23:28] doesn't sound like any compelling reason to be using Dropbox CantWinn, was it lack of technical skills to implement a better solution that caused them to choose DB? Rather than any specific features of DropBox?
[23:29] RoyK, Yeah, that's why I was wondering how Ubuntu server will handle it.. I use Desktop myself and have always enjoyed many flavours of Linux, but I have never worked with the server side.
[23:29] Well, except RH virtual
[23:29] CantWinn: try bacula
[23:29] CantWinn: bacula can back up most OSes
[23:30] randomcake, I will give you the reason.. it all boils down to $$. Management sees $600/mon for secure online backup and freaks out, apparently..
[23:30] RoyK, I saved the link
[23:30] CantWinn: apt-get install ......
[23:31] CantWinn: explain to management what it will cost when the first patient sues you for losing their information on dropbox.
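[Editor's note: a minimal sketch of the in-house "rsync -za" approach RoyK mentions at 23:14, pushing a scan folder to a central file server over SSH; the host names and paths are hypothetical:]

    # -z compresses in transit, -a preserves permissions/ownership/times
    rsync -za --delete /srv/scans/ backup@fileserver.internal:/srv/shares/scans/
    # run from cron, e.g. every 15 minutes:
    #   */15 * * * * rsync -za --delete /srv/scans/ backup@fileserver.internal:/srv/shares/scans/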
[23:31] So what would my best solution be, roughly, without causing you detail grief? Old server running Ubuntu, set up for file-permission shares and VPN, then implementing bacula?
[23:31] CantWinn: bacula is a PITA for starters, the installation can be a bit hard, but once it's up and running, it's rock stable
[23:31] RoyK, so bacula is standalone then?
[23:32] if you are in an AD environment with access to running an extra virtual windows, why not consider running just another w2k8 for the file shares?
[23:32] CantWinn: can be, but it can run on any linux or unix machine
[23:32] then back it up to anything.
[23:33] air_, I was wondring that too, but when I asked there was groaning about the extra $$ for W2K8 keys.
[23:33] *wondering
[23:33] CantWinn: bacula is a backup service, it can run on most things, with a database backend, preferably postgresql
[23:34] CantWinn: again, just compare the costs to the damage done with the current way of working.
[23:34] RoyK, Ok, so bacula does sound like a good option to solve my backup issues (which is currently a HD plugged into an external mount device), but in a quick expl, how does bacula help with the file share?
[23:35] air_, I know that, and you know that.. remember, the rich get their by being greedy
[23:35] grr *there
[23:35] CantWinn: bacula is a backup system, not a file sharing system
[23:35] << Too many things on the go at once
[23:35] RoyK,
[23:36] RoyK, ok, I thought so, I have been jumping back and forth to the site
[23:36] CantWinn: the rich understand when you tell them they are fucked. I tried that at my last workplace; they told me they were insured against all things that could happen.
[23:36] CantWinn: for file sharing, use samba or whatever is appropriate
[23:36] I gave them some nice scenarios where they would still be fscked no matter what they were insured against. :D
[23:36] air_, Yeah, insurance is the blanket that most companies are hiding behind...
[23:37] CantWinn: whatever insurance, it won't hold your data......
[23:37] Well, the good news I hope is I got hired by a new director of IT here, and he's more a people person than tech, so when I told him the issues he told me he's going to get them to realize what kinda d00 d00 they are in
[23:37] and it won't hold your reputation.
[23:39] air_, nope.. that's why I'm working late right now trying to come up with something I can work on.. I just need to make sure that when people update a file in the share, it's updated so all people can use it
[23:39] if you lose customer data, get sued, have insurance to cover the legal fees, you will still make the news prime time and get a big bad reputation for being careless.
[23:39] yup
[23:39] explain this to the big bosses.
[23:39] My director is
[23:39] they don't want to be on the news for losing patient data.
[23:40] CantWinn: what are you trying to set up? a file server or a backup server?
[23:40] RoyK, I NEED both
[23:40] CantWinn: those are two different services
[23:40] RoyK: summary. they share patient data through shared dropbox.com folders.
[23:40] I'm trying to set up a local file server so they can stop using DP
[23:41] and they don't do backups.
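[Editor's note: for the file-server half of that, a minimal sketch of a standalone Samba share on the spare Ubuntu box; the share name, path and group are hypothetical, and the server guide linked above covers the AD-mode setup instead:]

    # /etc/samba/smb.conf -- add below the [global] section
    [scans]
       path = /srv/shares/scans
       valid users = @medstaff       # hypothetical Unix group
       read only = no
       create mask = 0660
       directory mask = 0770

    # then create users and restart:
    #   sudo smbpasswd -a nurse1
    #   sudo service smbd restart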
[23:41] :D
[23:41] air_, BINGO
[23:41] well, at least you have lots of things to improve here :D
[23:41] CantWinn: for windows, samba should do well, preferably in AD mode if you have AD at the site
[23:41] we are running AD on the main domain controller server
[23:42] CantWinn: for backup, bacula or something; bacula is cheap (i.e. free) and can back up any unices and win2k3/winxp and forward without issues
[23:43] RoyK, so with bacula running the backups I would just need to find an off-site secure upload solution then?
[23:43] CantWinn: with bacula, you back up things on-site, and then you can replicate that off-site if needed
[23:44] CantWinn: how much data is this?
[23:44] a few gigs? a few terabytes?
[23:45] ok, because I told them with data that they are supposed to keep until death, there needs to be an off-site solution to copy to as well. because if there was ever a fire in the server room, which has no A/C *groan*, then they need to be able to get backups back
[23:45] Not much.. right now about 400GB MAX.. the DB is showing about 230GB in use
[23:45] CantWinn: tape backup, then, and off-site storage for those
[23:46] RoyK, that would require them to have a tape backup drive
[23:46] CantWinn: I know people doing bacula backups on tape for the full backups and disk for the differential/incremental backups
[23:46] Right now I have a server running Symantec backup to an external drive that has an internal HD loaded in it like a tape.
[23:47] a server room without A/C? :| tape backup can't be fully automated, surely a network backup, which is then synced offsite
[23:47] connected via USB.. lol
[23:47] better than connected via Dropbox :P
[23:47] randomcake, that's why I'm looking at uploading to a secure site.. and yes, you read that right.. NO A/C..
[23:47] randomcake, LOL
[23:48] CantWinn: just set up a remote machine with some large drives as the bacula SD
[23:49] RoyK, I told them to utilize their small off-site offices now and put in a small server that is direct VPN to us here, that way we have our OWN off-site location.. still waiting on THAT idea for a yes or no
[23:49] or then, just tape backup, with remote storage of the tapes
[23:50] CantWinn: you still would want a backup solution where you can restore from earlier backups, in case someone overwrites a file and wants the old one back a week later
[23:51] My ideal solution I gave them was this: an on-site secure version of a DP, then a dedicated backup server with VPN to an off-site server for backup.
[23:52] CantWinn: the ideal solution is to have local snapshotting and then some backup off-site
[23:52] CantWinn: setting up a zfs storage system will help with the first part, such as openindiana
[23:53] the latter is simple, just a bacula server somewhere else
[23:54] sorry, zfs? (getting tired)
[23:54] CantWinn: you won't believe how many hours we have saved by moving to ZFS storage with snapshots instead of restoring from backups......
[23:54] !zfs
[23:54] For information concerning ZFS and Ubuntu, see: https://wiki.ubuntu.com/ZFS
[23:55] CantWinn: zfs on ubuntu is slow and not what I'd recommend - using OpenIndiana is better, but then, it's another OS, with other things, so it's up to you
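[Editor's note: the snapshot workflow RoyK is praising looks roughly like this on any ZFS platform; the pool and dataset names are hypothetical:]

    # take a cheap point-in-time snapshot of the share dataset
    zfs snapshot tank/shares@2011-11-07
    zfs list -t snapshot
    # a user can pull back an overwritten file without touching backups:
    cp /tank/shares/.zfs/snapshot/2011-11-07/chart.pdf /tank/shares/chart.pdf
    # replicate off-site with send/receive over ssh
    zfs send tank/shares@2011-11-07 | ssh offsite zfs receive backup/shares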
[23:57] what about nexentastor, openfiler, etc?
[23:57] how do they stand up compared to openindiana?
[23:57] So far it sounds pretty damn impressive
[23:57] air_: nexentastor is expensive, openfiler I don't know, but I think it's not updated as frequently as openindiana
[23:58] RoyK: IIRC nexentastor community is free up to some 18TB storage.
[23:58] air_: and the open nexenta isn't updated very frequently, hardly at all, according to my nexenta contact in .no
[23:59] well, yeah, that seems to be the case actually.
[23:59] So instead of using ubuntu you guys think I should use AD for the file share?
[23:59] I haven't liked openfiler
[23:59] how did openfiler get zfs? last I saw it was centos based
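[Editor's note: to round off the backup thread above, a rough sketch of the bacula pieces RoyK describes. Bacula splits into a director (bacula-dir), a storage daemon (bacula-sd) and per-machine file daemons (bacula-fd); the resource names below are hypothetical, and a real configuration needs matching Client, Storage, Pool and Schedule resources:]

    # sudo apt-get install bacula-director-pgsql bacula-sd bacula-console
    # /etc/bacula/bacula-dir.conf -- fragment only
    FileSet {
      Name = "PatientShares"
      Include {
        Options {
          signature = MD5
          compression = GZIP
        }
        File = /srv/shares
      }
    }
    Job {
      Name = "BackupShares"
      Type = Backup
      Level = Incremental
      Client = fileserver-fd        # hypothetical file daemon name
      FileSet = "PatientShares"
      Schedule = "WeeklyCycle"
      Storage = File
      Pool = Default
      Messages = Standard
    }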