[03:35] hi anybody
[03:48] hello
[03:48] How are you tonight, Ahmuck_Sr
[03:51] a bit of a headache
[03:51] kinda feeling foggy
[03:52] and you?
[03:56] * Ahmuck_Sr nudges HedgeMage
[04:21] sorry about that :)
[04:21] I was dragged away for a nummy snack
[04:41] * Ahmuck_Sr wants a snack
[05:10] hehe
[05:30] so, how's the dvd coming along?
=== ogra_ is now known as ogra
[15:00] Morning all
[15:41] For non-LTSP labs with new PCs with lots of hard disk space, would NFS or AFS be a better choice (along with LDAP, of course), if we want the users to be able to sit at any workstation and work with their files?
[15:42] (each lab has about 8-12 PCs)
[16:06] alkisg, try and let us know :-)
[16:06] nubae|work: I'm searching for a way to organize non-LTSP labs... do you have any links that explain how you do things there?
[16:07] E.g. I was thinking of the standard NFS / LDAP combination, but like you, some labs here don't have a reliable network
[16:08] ...so I'm also thinking of developing scripts that rsync dirs at logon...
[16:08] I've never used AFS myself.
[16:09] People at #openafs told me it's not really a good choice for home directories. So I guess I can scratch that...
[16:09] NFS has a laundry list of problems, but they're well-known problems. :)
[16:09] Ah, did they say why?
[16:09] I use NFS for $HOME
[16:10] Yes, it's mostly for many servers - not much benefit if you use it with one server,
[16:10] and it can only expose 1 volume RW, and lots of others RO, so it won't be able to function as a distributed system between the client PCs (that's how I had imagined it reading their wiki)
[16:11] sbalneav: do you think NFS would be a reliable choice for small labs? Or should I try to make scripts that use sshfs or rsync?
[16:12] I have about 120 users running off of an NFS server here.
[16:12] It's definitely workable.
[16:12] 2 things you need to make NFS as pain-free as possible.
[16:13] 1) A kick-*ss IO subsystem on the server, i.e. 6 disks in RAID10 or better; go SAS/SCSI or better. Don't cheap out on this.
[16:13] 2) Gigabit interconnect for the servers.
[16:13] alkisg, we have a little NFS hack here that makes it much more reliable
[16:13] Oh?
[16:14] kind of a local/server cache addition
[16:14] Do tell
[16:14] AAAAHHHHHHGH!
[16:14] http://radiocontempo.files.wordpress.com/2009/03/hrp_4c_6.jpg
[16:14] sbalneav: each lab here has only 1 server and 8-12 clients. So I guess I don't really need (2)... but,
[16:14] Japan terrifies me yet again.
[16:14] AFS is great, but it forces you to have at least three reliable nodes constantly online
[16:14] basically just syncing based on cron or when the user wants
[16:15] suppose I use NFS and LDAP. And the server crashes. Is there *any* way for the clients to work even with *guest* users?
[16:15] Sure
[16:15] I would just use a fat-client LTSP setup then
[16:15] Just don't put guest users on the NFS share.
[16:15] with ldap
[16:16] So LDAP and /etc/passwd users can both be used simultaneously? And I just need to put the guest users in /etc/passwd with their homes in /home.local?
[16:16] Yup
[16:16] nsswitch.conf handles that.
[16:17] Nice!!!
[16:17] yeah, with pam tweaking
[16:17] You can have both local (/etc/passwd + local home) and remote (LDAP + NFS $HOME)
[16:17] one before the other, or the other before the one
[16:18] Not much, and it's not difficult.
[16:18] :-)
[16:18] nubae|work: unfortunately, not all labs have gigabit switches. E.g. this year I'm in a lab with Core 2 Duo / 2 GB RAM / 400 GB hd machines, with a *10 Mbps hub*!!!
[16:18] ...so I'd like a non-fat-client, non-LTSP choice...
[16:18] well, it's not really LTSP anymore
[16:18] it's more like netbooting
[16:18] No, I'm talking about local Ubuntu installations
[16:18] and booting is fine... it takes it little by little
[16:19] works fine even over wireless
[16:19] yes I know
[16:19] netbooting and then installing it locally
[16:19] or how were you thinking of cloning beforehand?
[16:19] I just tar'ed the first installation, burned it to a DVD, and untar'ed it... nothing over the network
[16:20] that's fine, but when you have larger deployments...
[16:20] if you need to keep burning disks for every little change
[16:20] Sure, I'm *only* looking for a solution for 8-12 PC labs...
[16:20] you're gonna find it a pain
[16:20] ah ok
[16:21] I'll be updating the PCs separately, not by cloning them again
[16:21] I had understood 8-12 PCs per lab
[16:21] Yes, e.g. 100 such labs
[16:21] well, Clonezilla does a good job too
[16:21] But each one of them will be maintained by a different teacher
[16:21] well, then definitely look at a solution that involves a netboot initial install... so that the user has a choice of what to install at startup over PXE
[16:22] if you have that luxury, of course
[16:22] disks break, scratch, etc.
[16:22] nubae|work: with 10 Mbps hubs? That would take weeks...
[16:23] at least... with 100 labs, I would not even consider DVDs as an installation mechanism
[16:23] nah...
[16:23] we do it here even with wireless
[16:23] it doesn't take weeks, especially not on 8-12 machines ;-)
[16:23] 10 Mbps => ~1 MB/s => ~4000 seconds for cloning a 4 GB installation
[16:23] for *one* pc
[16:24] that's a lot of coffee breaks :)
[16:24] all I can tell you is, practically... it doesn't take that long
[16:26] nubae|work: how do you update/upgrade your clients?
[16:26] net
[16:26] Cloning, or just apt-get update/upgrade etc?
[16:26] mostly packaged
[16:26] not cloning
[16:26] that would indeed be unviable, I think
[16:27] needs to happen in bits
[16:29] but perhaps a USB stick approach, NAND, would be best
[16:30] but NAND has life issues
=== alkisg1 is now known as alkisg
[19:35] is there Ubuntu for older people?
[20:06] regular Ubuntu doesn't work?
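The cloning-time estimate in the exchange above can be sanity-checked with quick shell arithmetic. This is only a back-of-envelope sketch: it assumes ~1 MB/s of effective throughput on a 10 Mbps hub (real shared hubs deliver less, due to collisions and protocol overhead) and a hypothetical 4 GB installation image.

```shell
# Back-of-envelope cloning time for one PC over a 10 Mbps hub.
# Assumption: ~1 MB/s effective throughput; real-world rates vary.
image_mb=4000          # ~4 GB installation image, in MB
rate_mb_per_s=1        # assumed effective transfer rate, MB/s
seconds=$((image_mb / rate_mb_per_s))
minutes=$((seconds / 60))
echo "~${seconds}s (~${minutes} min) per PC"
# prints: ~4000s (~66 min) per PC
```

At that rate, cloning an 8-12 PC lab one machine at a time is roughly a working day rather than weeks; pushing the image to all clients simultaneously over a shared hub would divide the bandwidth and take proportionally longer per machine.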
[23:43] Hello, I am having a problem with SCIM that I haven't been able to find help on
[23:43] I can only use it with Text Editor
[23:44] It should work with all GTK apps, I believe
[23:44] I was here yesterday, also asked on Ubuntu and Ubuntu Japan, but didn't get anywhere
[23:44] I hope this sounds familiar to somebody
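For the "SCIM works in only one app" symptom, a common first check on Ubuntu systems of that era was whether the input-method environment variables were exported for the whole session rather than for a single application. A sketch under that assumption (these are the conventional SCIM variable names; where best to set them, e.g. via im-switch or a session startup file, depends on the setup):

```shell
# Make GTK and Qt applications load the SCIM input-method module
# session-wide. If these are only set in one app's launcher, SCIM
# will appear to work in that app alone.
export GTK_IM_MODULE=scim
export QT_IM_MODULE=scim
export XMODIFIERS=@im=SCIM
```

If only gedit (Text Editor) picks SCIM up, it is worth checking whether these variables are visible to the rest of the desktop session.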