[01:35] kirkland: still there? === swift_ is now known as swift === swift_ is now known as swift [01:49] heh! [01:49] i installed nginx with aptitude install nginx... and removed it with aptitude --purge remove nginx... but guess what? /etc/init.d/nginx is still there, as is /etc/nginx and all the manpages etc [01:58] steven_t: file a bug then [01:59] lol [02:10] ScottK: sup? [02:11] lamont: I was thinking about the new postscreen tool in 2.7. [02:11] Upstream has clearly labled it "Experimental" in 2.7. [02:11] Should it be split out into a separate binary that's not installed by default? [02:12] hrm... possibly [02:12] I was thinking that experiments shouldn't be part of the default mail server task for an LTS. [02:12] I would not be averse to such a thing [02:12] ah, come on.... where's your sense of ADVENTURE?? [02:12] er, I mean, I agree [02:12] Heh. [02:13] you wanna work up a diff? [02:13] I know that 'experiment' in the default install is now an Ubuntu Desktop tradition, but for Server, I think not so much a great idea. [02:13] harsh dude [02:13] Accurate. [02:15] OK. Let me see what I can do. [02:17] hmmm [02:17] Oh, how is spam filtering in the mail task going? [02:19] I lost track of how far we got on that. [02:19] ivoks would know, but he's not around. [02:20] In any case, amavisd-new with spamassassin and clamav is the standard, documentation approach. [02:22] awesome [02:22] I've been trying with dovecot-antispam, because it seems it would give the best result, if I could get it working (: [02:25] (I.e monitors a spamfolder, and on movement out of the folder, calls the spamfilter automatically to mark as 'ham') [02:26] However, documentation on it is nonexistent :( [02:27] cloakable: Start with the Ubuntu Server Guide documentation on spam filtering. [02:28] dovecot-antispam would be an advanced part you might bolt onto it later. [02:28] ScottK: That's a little clunky to train for nonspam, though. [02:28] Needs ssh-ing into the server to call manually. [02:29] And while I can do that, I'd rather not have to, and there's users on my server that cannot :) [02:31] With a good set of RBLs + amavisd-new/spamassassin/clamav you get rid of an awful lot of it without having to mess with bayesian filter training. [02:31] I'd get that set up first and then see if you want to bother. [02:34] Mmmmm. [02:34] And when I get spam in my ham and ham in my spam? :P [02:35] First see how much of it is before you solve the problem. [02:47] * cloakable finds out what was wrong with dovecot-antispam >.> [02:48] It's been compiled with the wrong backend :) [02:49] * cloakable gets the source, comments out 'mailtrain' and puts in 'dspam' === cyphermox_ is now known as cyphermox [03:22] There, added 'dovecot' to the dspam trusted list [03:32] New bug: #559745 in eucalyptus (main) "NC failed to start a session with a libvirt internal error" [Undecided,New] https://launchpad.net/bugs/559745 [03:41] New bug: #559752 in samba (main) "package samba-common 2:3.4.0-3ubuntu5.6 failed to install/upgrade: el subproceso script post-installation instalado devolvió el código de salida de error 1" [Undecided,New] https://launchpad.net/bugs/559752 [03:49] lamont: Never mind. Apparently it's so experimental Wietse didn't include it in the tarball. No wonder I couldn't find it. 
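A side note on the nginx leftovers steven_t reports above: one thing worth checking is whether the package is still sitting in the removed-but-not-purged ("rc") state. A minimal sketch using only stock dpkg/aptitude:
    # packages whose conffiles survived a remove without purge show up as "rc"
    dpkg -l | grep '^rc'
    # purge one package's remaining conffiles explicitly
    sudo dpkg --purge nginx
    # or let aptitude purge everything still in the config-files state
    sudo aptitude purge '~c'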
[04:04] New bug: #553853 in samba (main) "(Kubuntu) Samba shares (fstab) slow down system shutdown/reboot" [Undecided,New] https://launchpad.net/bugs/553853 [04:11] ScottK: he tends to be more pedantic than me about experimental vs official - which lets me ignore that aspect without thinking much about it [04:11] to the point that you made me go "wut, yeah kill that" thinking you'd actually seen it there already [04:11] Which is a good thing to have in an MTA author. [04:12] very much so [04:12] on that note, sleep time. [04:12] I thought I'd seen it referred to as being in the release onthe mail list. [04:12] head->pillow [04:12] Good night. [05:50] I just killed two birds [05:50] good hello [05:51] I wish I would use the right channel [06:01] It's more fun for us when you don't. === swift_ is now known as swift [07:00] hey, i added a scsi drive to my live server and it isn't showing up, do i have to reboot it? === swift_ is now known as swift [07:10] ZenMasta_: are you there? [07:11] need some help installing pdo and pdo_mysql i get a message sh: phpsize not found [07:12] looks like pdo is part of the php code now and you should just need to skip to pdo_mysql [07:12] i see, let me try and see what happends [07:13] histo same error [07:13] ZenMasta_: yeah what version of ubuntu are you using? [07:13] 9.10 [07:14] on a side note, when I try to install pdo_mysql it downloads pdo_mysql and then after it downloads pdo still [07:14] and you're using sudo pecl install pdo [07:14] yep [07:15] do you have php5-dev installed? [07:16] I think so how can i find out without trying to install it again [07:16] dpkg -l | grep php5 [07:17] just decided to install before you typed that [07:17] should show a php5-dev package but like I said i think pdo is obsolete [07:17] so we'll see what happends [07:18] yeah pdo has been moved into the php source [07:18] histo that did it [07:18] installing now so i'll try the web app when its done and hopefully it will progress [07:18] did you try the webapp prior to running pecl [07:19] !info php [07:19] Package php does not exist in karmic [07:19] !info php5 [07:19] php5 (source: php5): server-side, HTML-embedded scripting language (metapackage). In component main, is optional. Version 5.2.10.dfsg.1-2ubuntu6.4 (karmic), package size 1 kB, installed size 20 kB [07:20] ZenMasta_: http://pecl.php.net/package/PDO/php-src/pdo [07:20] see [07:21] thanks === swift__ is now known as swift === swift__ is now known as swift [07:51] how do I edit php.ini? when I try to open it with vi it's as if it doesn't exist so it pretends to make a new file [11:59] Hi [13:10] Hello - i would like to test out the ubuntu enterprise cloud with more than 1 or 2 nodes, so i was considering if you can run UEC on Amazon Ec2 for testing? It seems this is the only way to "rent" a lot of computers for a short period of time [13:32] hi all. seems I'm doing something strange here. I try to mount an nfs filesystem on the host from a virtualbox VM, but I get 'mount.nfs: mount to NFS server 'rpcbind' failed: RPC Error: Program not registered' - 'mount.nfs: internal error' - /etc/exports looks right, services are started and ufw has 'allow from x.x.x.x' (the VMs address). The VM runs in bridge mode. Any ideas? [14:05] RoyK^: is mountd running on the server? [14:06] hm. no. what starts that? 
thought that should be in nfs-kernel-server or something [14:08] Start by checking "rpcinfo -p" [14:14] http://pastebin.com/98hsiw0x [14:14] KristianDK: You could use public cloud from Eucalyptus [14:15] binBASH, but i want to check out the configuration :-) Not how the instances works [14:15] Isn't the whole point of EUC that it's backwards-compatible with Amazon? [14:15] Er, UEC [14:16] KristianDK: It's possible to run everything on one node. [14:16] Not really a need for multiple nodes....... [14:16] New bug: #560011 in ntp (main) "Time cannot be fixed with ntpdate" [Undecided,New] https://launchpad.net/bugs/560011 [14:16] binBASH, both controller and node in one box? [14:16] yeah [14:16] twb, pmatulis any ideas? [14:16] have the same here [14:17] RoyK^: pastebin "exportfs -vra" [14:17] KristianDK: I have 7 nondes, including the one with cluster and cloud controller [14:18] twb: exporting 213.236.233.67:/var/www [14:18] planning to have 150 nodes ;) [14:18] tried with * as well [14:18] binBASH, cool :-) Well, i want to test things out before deploying it in a big scale [14:19] Like me then ;) [14:19] binBASH, do all your nodes have the VT extension as its recommended? [14:19] yup [14:19] KristianDK: http://www.hetzner.de/en/hosting/produkte_rootserver/eq6/ [14:20] have those as nodes [14:20] 150 nodes??? how many racks? [14:20] binBASH, i was actually considering http://www.hetzner.de/en/hosting/produkte_rootserver/eq8/ :-D [14:20] im already a customer there for some other servers [14:21] KristianDK: It will be a problem with network configuration ;) [14:21] still stucked at this...... [14:21] yeah, i guess because of the IP adresses being bound the MAC and the limitation of the 100mbit router, right? [14:22] Yeah, the ips are bound to server. [14:22] RoyK^: Dunno, this providers does tower hosting...... [14:23] KristianDK: With 150 nodes, there will be a lot of ram anyways, don't need eq8 really ;) [14:23] binBASH: if you need 150 nodes, I'd guess hosting it locally may be a lot cheaper [14:24] RoyK^: Don't think so. [14:24] don't have money to buy all those servers ;) [14:24] binBASH, true :) I think i'll end up with an ESXi more on the EQ8 anyway, since everything else seems complicated with hetzner :( [14:24] binBASH: but .... 150 nodes? you can run like 10 VMs on a single node - perhaps more - what do you need this for? [14:25] KristianDK: Well, I'll try to start vms now with vnc option and configure networking inside there manually. [14:25] RoyK^: Need them not for the vms, but for storage [14:25] binBASH: how much storage do you need? [14:26] binBASH, i've configured a router VM which the IPs are bound to, it forwards the IPs [14:26] RoyK^: 200 TB [14:26] binBASH: I just got an offer for such a box - NOK 250k [14:26] KristianDK: I don't want to NAT, because 2 TB Traffic Limit per node [14:26] binBASH: and storage should be done on zfs imho [14:26] NOT on a VM [14:26] but on hardware [14:27] RoyK^: I'll use GlusterFS [14:27] why not just a big supermicro box stuffed with 2TB drives and a SAS expander and some extra chassises for disks? [14:27] RoyK^: Like I said, I'm limited in Finances ;) [14:27] it'll be cheaper [14:28] binBASH, i dont use NAT, its Ip forwarding, i use the router as gateway in the network config - but i don't think you can get around the 2tb limit anyway? Its bound to the IPs [14:28] you need 200TB and can't afford it? [14:28] RoyK^: It's a single point of failure. 
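For the NFS error above, "Program not registered" usually means rpc.mountd is not registered with the portmapper on the host, which matches mountd not running. A rough server-side checklist; the mount target below is a placeholder, since only the export path appears in the log:
    rpcinfo -p                                   # should list mountd and nfs
    sudo apt-get install nfs-kernel-server       # the package that ships rpc.mountd
    sudo /etc/init.d/nfs-kernel-server restart
    sudo exportfs -vra                           # re-export and show what is exported
    # then from the VM, using whatever address the VM sees the host as:
    sudo mount -t nfs <host-ip>:/var/www /mnt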
[14:28] binBASH: then get two of them and use zfs send/receive to keep them in sync [14:28] KristianDK: if you forward to one server it will count there [14:29] RoyK^: Atm I'm using 20 TB Raid 6 NFS Server. [14:29] binBASH: I REALLY doubt you can get something cheaper from somewhere else [14:29] binBASH: we bought this box some time back with 30TB net storage - it just cost like USD 10k [14:29] RoyK^: Everything I was looking for was more expensive. like NetApp or Isilon [14:29] binBASH, as i understood from hetzner you need one router VM per physical server [14:30] binBASH: hah - use supermicro hardware, cheap drives, and opensolaris with zfs (with compression and dedup) [14:31] binBASH: where I work, we have rather high storage demands - windfield and other satellite data takes up space, and we're getting more and more all the time [14:31] KristianDK: If it would be like this, I wouldn't use the vm for routing. [14:32] Would just use the main box itself, because the ip is unusable anyways from within vms. [14:32] binBASH, i was told this was the only option - what else would you do? [14:32] true [14:32] i was thinking ESXi again [14:32] sorry :P [14:32] KristianDK: I already, started vms manually and they had a usable ip address. [14:32] but I dunno how to automate it within Eucalyptus, that's the only problem [14:32] yeah [14:33] binBASH: really, using VMs for storage is a BAD idea [14:33] RoyK^: I don't wanna use vms for storage ;) [14:34] the storage will be on the real server itself, though I will embedd it from inside the vms [14:34] binBASH: but - please - give opensolaris+zfs a try - it's well worth it. no raid controller, just zfs doing it all [14:34] zfs rocks rather loudly [14:35] RoyK^: If I would use opensolaris on some boxes I wouldn't be able to use their processors. [14:35] I have rather high demand on cpu [14:35] storage speed is not that important [14:36] for your needs, I would say separate storage and computation [14:36] use storage computer cpu for compression and dedup [14:37] Don't need compression [14:37] depending on the data, both can give you quite a bit of gain without much cpu use [14:37] for jpgs it's useless [14:37] what sort of data is this? [14:37] indeed [14:37] RoyK^: We're hosting image agencies. [14:37] Things like www.gettyimageslatam.com [14:37] but stuff like zfs snapshotting is quite priceless [14:38] btrfs and LVM do snapshotting [14:38] btrfs is NOT stable [14:38] Granted [14:38] LVM snapshotting is crap [14:38] LVM snapshotting is adequate for my purposes [14:38] LVM snapshotting moves data out of the original place for each write instead of writing new data and moving pointers [14:39] meaning if you have lots of snapshots, everything will be very, very slow [14:39] Um, both LVM and ZFS snapshotting are block COW. [14:39] I grant you that LVM is probably a lot slower. [14:39] lvm moves data out before overwriting them - not like zfs, which writes new data [14:40] Shrug. [14:40] CoW is two different things - either write new data and move pointers, which is what ZFS and NetApp does, or move the old data prior to overwriting the old ones, which is what LVM does [14:40] maybe I'll take Strato HiDrive Pro for Storage ;P [14:40] At the end of the day, ZFS is not enough to make me adopt osol. [14:40] 5 TB mirrored = 149 Eur / Month [14:41] twb: heh - then you really haven't looked into it [14:41] I'm running a 2TB osol server for ZFS right now. 
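The zfs send/receive mirroring suggested a few lines up looks roughly like this; pool, dataset and host names are made-up placeholders:
    # initial full copy to the second box
    zfs snapshot tank/data@2010-04-10
    zfs send tank/data@2010-04-10 | ssh backupbox zfs receive tank/data
    # afterwards, ship only the changes between snapshots
    zfs snapshot tank/data@2010-04-11
    zfs send -i tank/data@2010-04-10 tank/data@2010-04-11 | ssh backupbox zfs receive -F tank/data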
[14:42] RoyK^: If you really can afford a big storage you should go to Isilon ;) [14:42] But as soon as btrfs is ready, it will die [14:42] It's a much better technology [14:43] btrfs is decent, but lacks a lot of what's in zfs atm. give it a year or two and it might catch up [14:43] but yes, I will also switch to btrfs once it's there [14:43] Exactly [14:44] Which will unfortunately not be until 2012 (for LTS) :-( [14:45] KristianDK: I really don't know how to master that network problem ;) [14:45] but then, I can't wait two years for a storage solution, and then opensolaris is the way togo [14:45] binBASH, i think we need to talk to hetzner [14:45] they are kind of blocking for allowing this [14:45] :p [14:46] KristianDK: Like they're blocking gigabit as well [14:46] exactly, i asked them for gbit [14:46] and you can actually get that [14:46] KristianDK: You can have it, just costs........ [14:46] yep [14:46] you need to have flexipack and another nic [14:47] and additionally a switch [14:47] and additionally 69 Eur for moving all your servers so they are beside each other. [14:49] binBASH: what makes me wonder is why you (or your company) are hosting terabytes of data and can't afford a decent (and quite cheap) storage solution, like the osol-based one we have. It can be expanded quite easily with a SAS expander and won't cost a lot using WD Green drives or so [14:50] RoyK^: Because agencies don't pay that much ;) [14:51] RoyK^: And they pay monthly. Not a year in advance :p [14:51] well, loan some money and it'll pay back quite quickly [14:51] for EUR 10k, you get 30-35TB, which I guess will be sufficient for some time [14:52] RoyK^: Like I said we have already 20 TB. [14:52] no, wait, more - wait... [14:52] lol [14:52] and we bought it for 9K 2 years ago [14:52] though it's a single point of failure and not mirrored [14:53] hi leonel [14:54] ea binBASH [14:54] binBASH: just got this offer - supermicro box with 36x2TB disk and an ok motherboard, a bunch of memory, some cpus etc, meaning if you use three RAIDz2 groups of 12 drives each, it gives you 20x3=60TB -> price NOK 86k, around EUR 10k, and possibly cheaper outside Norway [14:55] 34 2TB drives, that was, but still (forgot about the root SSDs) [14:55] 10K for one box I assume ;) [14:55] yes, but it's still cheap [14:55] Here we pay 8500 Eur for a box with 24 x 2 TB [14:55] with 3xraidz2, you can loose six drives in total [14:56] wierd - that's _more_ expensive :) [14:56] * RoyK^ thought Norway was meant to be the expensive place [14:56] though if a box fails raid is useless ;) [14:57] binBASH: then get two and use zfs send/receive to mirror the two [14:57] and when storage fills up, get a SAS expander and an extra chassis and some drives [14:58] osol not supporting ext2 is a real pain in the arse. [14:58] over 3-5 years, I would guess you would save LOTS of doing this yourself instead of paying others to do the same [14:58] twb: why should it??? [14:58] So that I can seed the osol box by sneakernet instead of our shitty 100baseT and ADSL lines [14:58] RoyK^: Well I would lack then cpu power for video processing [14:59] binBASH: don't! [14:59] binBASH: use NFS [14:59] or iSCSI [14:59] or CIFS [14:59] RoyK^: We're having NFS already ;) [15:00] nfs performs well enough for that - for those storage needs, it would be silly to put all services in one place [15:00] huh? [15:00] get a storage server with sufficient memory and cpu for the storage alone and get compute nodes to do the ugly stuff [15:01] RoyK^: what's his use case? 
Just normal office documents and such? [15:01] twb: images and video [15:01] I think getting 150 Nodes which offer 1200 cpu cores + 200 TB Storage is better ;) [15:02] if planning for 3-6 months, sure, but if you are planning to be in business for a long time, buying hardware will save you a lot of money [15:02] Yeah, NFS over 1000baseT to a single NAS or SAN is probably the Right Thing. [15:03] RoyK^: You forget the fact you normally throw out servers every 2 years [15:03] binBASH: not really - storage servers can last a LONG time [15:03] especially with zfs - autogrow is nice [15:04] take a zfs mirror, replace one part with a larger drive, resilver, replace the other, resilver, and zfs says 'oops - I'm bigger' [15:04] same with raidz volumes [15:04] Raid rebuild will take ages [15:05] not really - for 30TB a scrub takes a couple of days [15:05] I already takes 4 hours to rebuild the current raid ;p [15:05] resilver about the same [15:05] and replacing drives isn't what's done daily [15:05] but hey, I've just been working with storage for 10+ years, do as you please [15:06] I think a distributed storage architecture is much better. [15:06] Companies like NetApp or Isilon doing it as well. [15:06] how much do they charge you per month for 200TB? [15:06] binBASH, have you, btw, checked for hetzner alternatives? [15:06] KristianDK: Yup [15:07] KristianDK: But only in Germany [15:07] binBASH, and they have the same sucky setup? :P [15:07] binBASH, are you german? [15:07] They are even mor worse. [15:07] NetApp is doing quite well, yes, but they charge you EUR 100k for a few terabytes [15:07] KristianDK: I live in Switzerland :) [15:08] binBASH, ok - cool :-) I'm from Denmark, so i speak a bit German, but sometimes i really don't get what they are trying to tell me at hetzner :P [15:08] RoyK^: Things like GlusterFS are working like this. [15:08] KristianDK: I moved from Germany to Switzerland in 2007 [15:09] does that support stuff like versioning or snapshotting? [15:10] binBASH, well, the problem is i've been searching for alternatives to hetzner, but they seem remarkably cheap compared to everything else [15:10] and im actually satisfied with everything but their network setup :P [15:11] binBASH, however - they recently introduced the failover IP thing [15:12] which redirects and IP to another server [15:12] maybe we can work something out with this thing? [15:12] binBASH: but how much for 100T? [15:12] or 200 [15:17] New bug: #560047 in dovecot (main) "new upstream version available" [Undecided,New] https://launchpad.net/bugs/560047 [15:17] RoyK^: Like I said each node gives 2,7 TB [15:17] For mirroring you need 2 nodes. [15:18] A node costs 69 Eur / month [15:18] and provides 8 cpu cores which I can use for video rendering and image processing [15:19] because we're a swiss company we don't have to pay German VAT [15:19] so it's cheaper. [15:20] so it costs like 9500 Eur / Month. [15:23] KristianDK: For the failover ip you need flexipack, which is 15 Eur / Month [15:24] RoyK^: I would agree as pure storage it's too expensive. [15:27] good morning - I have a Dell Precision 650. I want to run SATA drives on it as boot devices. Can I install a 3rd party SATA PCI card and boot from a drive connected to it? thanks [15:28] You should be able to. [15:29] Absolute worst case scenario you unplug the installed drives from the built in controller, install, and then reconnect them. [15:29] ScottK: how do I get the BIOS to see the PCI card? [15:30] The one time I've had to worry about it, it just did. 
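A sketch of the in-place mirror growth RoyK^ describes earlier in this exchange; the device names are placeholders, and on pools that have the autoexpand property it may need to be switched on first:
    zpool set autoexpand=on tank              # only where the property exists
    zpool replace tank c1t2d0 c1t2d0_big      # swap one half of the mirror for a larger drive
    zpool status tank                         # wait for the resilver to finish
    zpool replace tank c1t3d0 c1t3d0_big      # then the other half, and resilver again
    zpool list tank                           # the pool now reports the larger size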
[15:30] (never booted from a add-in card) Cool - I'll give it a try - Thanks ! === martin- is now known as zx === zx is now known as martin- [15:33] RoyK^: http://gluster.com/community/documentation/index.php/Main_Page#Gluster_Filesystem [16:06] binBASH: :P [16:07] koolhead17: ? [16:07] gluster [16:07] koolhead17: It works here without problems so far. [16:07] binBASH: it rocks [16:11] koolhead17: are you using it? [16:12] binBASH: my friend owns the company behind this project :D [16:13] ohh :p [16:14] binBASH: he is the lead developer too :D [16:14] very cool [16:14] glusterfs is very good design I think. [16:15] Too bad I can use it with 100 Mbit only koolhead17 :p [16:15] binBASH: heh. poke them [16:15] i think #gluster [16:15] koolhead17: It's not a gluster issue, servers only have 100 mbit ;) [16:16] ScottK: absolute worst case is a wincontroller :-P [16:16] twb: True. [16:17] Or my boss's favourite trick -- buy a server with hotswap bays, but forget to buy the RAID5 chip for the hardware RAID controller [16:18] lol [16:18] In which case I could create up to two RAID0 arrays of one drive each, so I couldn't even make an md RAID5 [16:18] twb: Sounds more like epic fail than a trick ;) [16:18] binBASH: he has done it TWICE [16:19] twb: So he didn't learn? [16:19] And we're still running Pentium IIIs, so you can imagine how rarely we buy new gear [16:19] Pentium 3 omg [16:20] twb: How much people you are in company? [16:20] Probably about ten [16:21] ok, more than us then ;) [16:21] It's hard to tell because some spend months pimped out, and some ex-employees continue to lurk on the lists [16:21] We replaced the LaserJet 4 last month, and the sysadmin deploying it went "oh, cool, the NEW unit has only been EOLd by HP since 2008" [16:21] lol [16:22] Having said that, I loved that little LJ4 [16:22] sounds like a lack of money [16:23] There's a policy of handing most of the profits to the engineers instead of the company [16:23] But it's also a mindset thing. [16:24] We have a pair of Q9550 with 2TB of storage, but one got stolen to run rpppoe. [16:25] twb: We had such a policy as well twb [16:25] All money to personal [16:25] ;) [16:25] Get a project manager for 100K Eur / Year [16:25] kick him out 9 months later because he sucked [16:26] and have a second boss, which was also not very useful [16:26] * cloakable eyes gluster storage platform >.> [17:16] hi id like to set up monthly bandwidth quotas for my home network something like http://www.digirain.com/en/trafficquota-overview.html but ive been googling like crazy and i cant find anything like that for ubuntu, any suggestions? [17:28] pjp3rd: You can use iptables to set a quota [17:28] http://linuxgazette.net/108/odonovan.html [17:28] pjp3rd:http://linuxgazette.net/108/odonovan.html [17:29] pjp3rd: https://help.ubuntu.com/community/UFW [17:33] brianherman, thanks that looks like a good place to start [17:33] pjp3rd: why you want quote in home network? [17:34] pjp3rd: Use the ubuntu one it seems the simplest [17:34] Quota not quote [17:34] quota yeah ;) [17:35] brianherman, but id need to set up a seperate quota for each user, a way for user to check his quota and to automate it everymonth. so it would be nice if someone has already done that work.. [17:35] binBASH, cus the ISP gives me a quota and im sharing it with 16 people, seems like the most sensible way to avoid fights when we are running out after 2 weeks each month [17:36] pjp3rd: If you share the line with 16 people why not limit bandwidth? 
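Picking up the iptables suggestion above, a minimal per-host monthly cap could look like this; the address and the 10 GB figure are examples, and the quota counters are not calendar-aware, so a monthly cron job would need to flush and re-add the rules:
    sudo iptables -A FORWARD -s 192.168.1.10 -m quota --quota 10737418240 -j ACCEPT
    sudo iptables -A FORWARD -s 192.168.1.10 -j DROP
    sudo iptables -L FORWARD -v -n -x          # shows how many bytes each rule has matched so far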
[17:37] pjp3rd: http://manpages.ubuntu.com/manpages/karmic/man8/tc.8.html [17:37] binBASH, im not sure what you mean? [17:37] pjp3rd: http://manpages.ubuntu.com/manpages/hardy/man8/tc-cbq-details.8.html [17:38] binBASH, im not looking to shape the bandwidth for speed im looking to make sure we dont exceed the monthly limit [17:38] ok [17:42] pjp3rd: there's something called "bandwidthd" that might help in some way. [17:43] pjp3rd:http://bandwidthd.sourceforge.net/ [17:44] sherr, bandwidthd helps half the problem if it can moniter the bandwidth. id like to enforce limit as well [17:44] pjp3rd: At least then all would know who steal the traffic :p [17:45] binBASH, yip it would tell me but too late rather than fighting with people every month id just like to divide it equally and no more worries [17:46] pjp3rd: if you limit bandwidth for everyone it will be equal [17:48] pjp3rd: Think limiting bandwidth is better than having no internet for half of the month ;) [17:48] k, based on brianherman's tip ive found www.linuxquestions.org/questions/linux-networking-3/iptables-to-stop-bandwidth-completely-592827 which is what im basically looking for [17:48] but your decission ;) [17:49] problem is it doesnt seem like such a polished solution [17:49] binBASH, im not sure what you mean [17:49] pjp3rd: You said you have a quota, what if you exceed it? [17:50] my isp enforces a quota when we exceed it the connection is throttled to basically unusable speeds [17:51] pjp3rd: So why don't you distribute a max. bandwidth equally? [17:51] binBASH, what do you mean? [17:51] I mean, I wouldn't accept the fact if I'm amongst the 16 people, and one causing so much traffic, so I would have no internet then for half a month [17:52] pjp3rd: With the tc links I posted, you can assign each user equal bandwidth. [17:53] so it's not possible traffic limit will be exceeded [17:55] binBASH, correct me if im wrong but what i can understand from the link you posted tc/cbq can shape my connection meaning how much is been used by any given user/protocol at a given time, thats not going to help me to stop total monthly usage from exceeding the isp quota is it? [17:56] pjp3rd: With that you can limit the bandwidth for each user. So every user has equal line. [17:56] and you can setup a max. bandwidth rule as well, so with that you can't exceed your providers traffic limit. [17:57] binBASH, oh i didnt see details about that? how can i set up a monthly limit? [17:58] pjp3rd: You don't set a traffic limit. You set a bandwidth limit. [17:58] The Linespeed will be slower though [17:59] binBASH, can you give me more details? [18:00] pjp3rd: http://www.oamk.fi/~jukkao/lartc.pdf [18:00] read this ;) [18:00] binBASH: do you really need like 5k cores for this? I thought you were doing storage [18:01] or 800 cores, that is [18:01] RoyK^: Storage and Processing ;) [18:02] 1200 cpus actually [18:02] yeah, but you were talking about hosting images [18:02] yeah [18:02] 1200 cores [18:02] with 1200 cores you can do some rather fancy stuff, but then, what is it you're going to do with them? [18:02] RoyK^: Hosting images, resize them, watermark them, recalculate videos, etc...... [18:03] you might need 8 cores for that [18:03] not 1200 [18:03] RoyK^: If you don't wanna wait ages, you need more ;) [18:04] a resize normally can't be shared amongst cores [18:04] and I somehow doubt you have 1200 concurrent resizes [18:04] binBASH, i just skimmed through the whole book can you point me to which chapter should help me? 
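What the tc links boil down to, sketched here with htb rather than cbq; the interface, rates and client address are placeholders:
    sudo tc qdisc add dev eth0 root handle 1: htb default 20
    sudo tc class add dev eth0 parent 1:  classid 1:1  htb rate 8mbit                 # whole-link ceiling
    sudo tc class add dev eth0 parent 1:1 classid 1:10 htb rate 512kbit ceil 512kbit  # one user's share
    sudo tc class add dev eth0 parent 1:1 classid 1:20 htb rate 1mbit ceil 8mbit      # everyone else
    sudo tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dst 192.168.1.10/32 flowid 1:10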
[18:05] pjp3rd: Chapter 9 [18:06] binBASH: what are you using for this - imagemagick? can you upload some files for me to test? [18:07] RoyK^: ImageMagick for the images, yup [18:07] binBASH, im sorry i must be misunderstanding you but how is shaping traffic going to limit total usage per month per user? [18:07] RoyK^: One jpg is like 128 MB in worst case;) [18:07] binBASH: and how often are these uploaded/resized? [18:08] RoyK^: Very often ;) [18:08] seems to me 1200 cores for this job is like shooting sparrow with heavy artillery [18:08] RoyK^: Getty Images uses it for their editorial press content. [18:08] binBASH: define 'very often' [18:08] RoyK^: The problem is, the images will be transfered to news agencies. [18:09] well, it's only resized on upload, so how many images do need to convert per second? [18:09] RoyK^: The faster the better. [18:10] well, of course, but wasting a ton of money is useless [18:10] like I said it's editorial press content. Means, someone makes a photo of a football match. [18:10] it should be transfered immediately to news agencies when uploaded. [18:10] time counts...... [18:10] if you have 1200 concurrent file uploads in general, 1200 cores might be worth it, but 600 will probably do well, even 300 [18:11] then again, I somehow doubt you have 1200 _concurrent_ jobs in such a system [18:11] most likely 10+ [18:11] RoyK^: Yup, though we have some more customers, ;) [18:11] have you monitored your current system to see its load? [18:11] no [18:12] it should tell quite easily how much is needed [18:12] if load average at peak times is 4, give it 4 cores, etc [18:13] we have a 40 core compute farm at work for doing models, and that's eating some data. 1200 cores must be overkill for your use [18:13] RoyK^: There is another problem as well. The company develops some visual search engine atm. And noone knows what it will consume ;) [18:14] binBASH: then get a separate box for that as well [18:14] binBASH: it will probably need a truckload of RAM and fast disk access to its index, but not a lot of cpu [18:14] RoyK^: And how to do it without money? :P [18:15] binBASH: hey, kid, if you try to make business work, you need to invest. I'm just trying to give you simple advices, but it seems to me you know it all better than the rest of the world. keep on, kid, and you might be wanting to find a new job in a few months [18:17] RoyK^: Boss refuses to take new investors [18:17] tell your boss you can't do this without EUR 20k [18:17] that's not really a lot of money [18:18] tell him it'll cost several times as much even during the first year [18:18] or perhaps the first year will make it break even [18:20] binBASH: also, please understand that making large system work well usually means dividing services amongst servers, some for storage, som for computing [18:21] get a supermicro system for the first 60TB or so and add more disks later with SAS expanders - use it on opensolaris - share it with NFS - it can grow easily [18:22] then get small 1U boxes for doing the computing - start off with a quad intel or opteron with a bunch of cores, perhaps less, and you might see it's not really very heavily loaded [18:22] RoyK^: There is another problem as well. :-) We need entry points through geoip [18:22] if it is, add more [18:22] Means, getty has offices in asia, russia etc. 
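For reference, the kind of per-image job being discussed (ImageMagick resize plus watermark), with a crude way to keep several cores busy since the libjpeg work is single-threaded; file names, sizes and the core count are all examples:
    convert upload.jpg -resize 1024x1024 preview.jpg
    composite -gravity southeast watermark.png preview.jpg preview-marked.jpg
    # eight conversions at a time across a directory of uploads
    mkdir -p resized
    ls uploads/*.jpg | xargs -P 8 -I{} sh -c 'convert "$1" -resize 1024x1024 "resized/$(basename "$1")"' _ {}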
[18:22] so we don't want to send them to german servers [18:22] but also to servers in usa [18:23] binBASH: fuck this - you reall don't listen - you've decided to use this or that already - I'm done trying to advice anymore now [18:23] and we really don't want to fly to usa to build something up there in a datacenter [18:23] good ;) [18:23] seems to me what you want is to brag about a truckload of terabytes, and you're doing it the wrong way, wasting money and making things worse [18:23] keep on, kid, but don't blame the ones of us that tried to help [18:24] RoyK^: Really don't wanted advice [18:27] I can see that [18:27] binBASH: out of interest - what is your current system's load average? [18:27] RoyK^: It should have a reason that companies like google have a shared storage ;) [18:28] google uses its own storage [18:28] for good reasons [18:28] I'm still curious about this load average of yours [18:29] also - how do you plan to parallelize that across 300 machines? [18:31] * RoyK^ guesses binBASH was in a hurry and perhaps a little drunk when he made those plans [18:35] RoyK^: That is what gearman is for [18:37] binBASH: what is the load average on your current box? [18:38] binBASH, for very large deployments that require lots of storage that is similar you might consider a de-dupe filesystem such as lessfs [18:39] sdfs also comes to mind [18:39] erm [18:39] does lessfs do dedup? [18:39] RoyK, you been around long enough to know what google is for [18:39] bogeyd6: I've tried hinting on using zfs, but it seems binBASH has already decided and is just here to bra [18:39] bogeyd6: I've tried hinting on using zfs, but it seems binBASH has already decided and is just here to brag === GhostFreeman_ is now known as GhostFreeman [18:40] good im glad he is bragging about using Ubuntu Server in large environments and i hope he proudly announces it to his customers [18:40] bogeyd6: you've been around for long enough to know that to answer a yes or no might perhaps be a little more sophisticated and nice than just barking fgfi [18:41] !google | RoyK [18:41] RoyK: While Google is useful for helpers, many newer users don't have the google-fu yet. Please don't tell people to "google it" when they ask a question. [18:41] also, condescension is highly frowned upon, please refrain [18:42] bogeyd6: I know, SIR, but you spent more time on telling me to google it than a yes/no answer would take [18:42] * RoyK^ guesses binBASH was in a hurry and perhaps a little drunk when he made those plans << belong in another linux support channel [18:42] bogeyd6: not really [18:42] well i said my peace, i hope you consider signing the ubuntu code of conduct [18:43] RoyK^: I thought your discussion with binBASH was quite interesting and useful until you ruined things by being rude and a little obnoxious. [18:43] Let's all be civil. [18:43] !conduct | RoyK [18:43] RoyK: The Ubuntu Code of Conduct is a community etiquette document to which we ask all Ubuntu users to adhere, and can be found at http://www.ubuntu.com/community/conduct/ . For information on how to electronically sign the CoC, see https://help.ubuntu.com/community/SigningCodeofConduct . 
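Since gearman comes up below as the answer to parallelizing across machines, a rough sketch of the idea with the gearman command-line tools; the package names, the 'resize' function name and the worker command are assumptions, not anything stated in the channel:
    sudo apt-get install gearman-job-server gearman-tools   # job server on one box
    # worker, run on any node with spare CPU; receives a filename on stdin per job
    gearman -w -f resize -- xargs -I{} convert {} -resize 1024x1024 {}-small.jpg
    # client, run where the upload lands; queues a background job
    echo /srv/uploads/foo.jpg | gearman -b -f resize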
[18:43] well, people, listen [18:44] sherr, agreed and royk should also be congratulated on his level of participation in previous instances [18:44] would make a very valuable member of the server community [18:45] mr binBASH first tried to ask about how to do his storage, and talked about 1200 cores doing image resizing for uploads, at which I asked why, and why not central storage, to which he merely barked that he didn't need my input [18:45] this is something that can annoy the one (me) trying to help one (him) out [18:46] RoyK, zfs is in fact available in opensolaris [18:47] bogeyd6: yes, and did you know ext3 is available in linux? [18:47] which is pretty awesome [18:47] scroll up :) [18:47] I was trying to tell him that [18:48] wasnt available until 27a [18:48] but it seems like he wants a truckload of cpu nodes with 2TB each for some reason [18:48] 27a? what? [18:49] i thought if you could recommend ZFS you would know a bit of its history and usage [18:50] RoyK^: Looks like you're totally mistaken, I never asked for storage. [18:52] binBASH, did you have something you did need help with? [18:52] bogeyd6: originally I asked how I can start vms in ubuntu enterprise cloud with the -vnc parameter. [18:53] i personally think that zfs needs to much horsepower and that makes it a disadvantage [18:53] bogeyd6: I just started using osol at 2009.06 - the old solaris platforms were just something I plaied with [18:53] binBASH, that is a good question, i know it can be done on a VPS but in a desktop in a cloud? [18:53] bogeyd6: I know it needs a lot, but for dedicated storage, it's nice [18:55] RoyK^: If you want details what is the problem with our current setup you can come in query :) [18:55] binBASH, https://wiki.edubuntu.org/UEC/Images/Testing [18:55] my google-fu is 10th degree master [18:56] binBASH: I tried asking about the current load, since you insist on needing 1200 cores [18:57] i got a load that would blow your mind [18:58] how nice [18:58] 13:57:15 up 28 days, 5:48, 1 user, load average: 0.84, 1.38, 2.39 [18:58] well, seems like the system uses a core or two quite well [18:59] hah! [18:59] single processor [18:59] * RoyK^ had a server peaking at load avg 32 the other day [18:59] something went wrong in freeradius [19:11] <_ruben> only 32? ... i've reached 100+ on mailservers that were "spammed" :) [19:13] yeah, but this box was running a single radiusd that shouldn't really have been busy [19:14] guess it started a bunch of threads that went mad [19:14] RoyK^: You know not only cpu usage causes load [19:15] binBASH: yeah, but there weren't any funny processes in D or Z state or similar [19:16] just a truckload of threads that went spinning [19:16] back your question about our sys. Like I told you we're using imagick. Libjpeg doesn't use smp so we could calculate 8 images at once with one server. [19:17] one image takes around 3-6 seconds [19:18] the images with bigger filesizes take much longer [19:18] have you tried graphicsmagick? [19:18] it's said to be faster by far than imagemagick [19:19] yup, it lacks some features we need. [19:19] ok [19:19] but still - the time for one image to be resized is ok, but how about the system load over time? [19:19] that's what you should worry about when designing something new [19:19] if there is high processing the load is like 25 [19:20] can you distribute this load somehow? 
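Before sizing new hardware from a single uptime reading like the one above, it can help to record load over time; a sketch with the sysstat package, where the log path is the Ubuntu default and the day is an example:
    sudo apt-get install sysstat
    # set ENABLED="true" in /etc/default/sysstat so the collector runs, then later:
    sar -q                               # run-queue length and load averages for today
    sar -q -f /var/log/sysstat/sa10      # the same figures for the 10th of the month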
[19:20] it's already distributed :) [19:20] I mean, I guess there's a common web front [19:21] we have dedicated servers for web, for image processing, for sphinxsearch and for exports to partners/ftp ... [19:21] also database [19:21] ok [19:22] with glusterfs, what happens if you remove a node? are nodes mirrored as well as the drives on those nodes? [19:22] RoyK^: every node is backuped by another one [19:23] ok, so mirroring, somehow? [19:23] yup [19:23] I guess it still lacks the stuff zfs/btrfs has, though :P [19:24] Well, it's scalabe and a complete different technology. [19:24] I dunno if you know NetApp or Isilon. [19:24] I do [19:24] isilton, no, but netapp, yes [19:25] I personally would prefer Isilon over NetApp from what I've heard [19:27] bogeyd6: What does this link have to do with eucalyptus? It's just for testing kvm setup [19:27] * cloakable eyes eucalyptus [19:27] kvm works perfectly for me already ;) [19:28] Anyone here use eucalyptus? [19:28] cloakable: Yes ;) [19:28] cloakable: Ubuntu Enterprise Cloud is built on it. [19:29] binBASH: If I have two four-core nodes in the cloud, can I give, say, six to an instance? [19:29] cloakable: No [19:29] cloakable: you can't run a vm across multiple machines [19:29] Damn D: [19:29] get an amd 12-core :D [19:30] Which would suck up how many hundred watts? :P [19:30] cloakable: http://www.linuxvirtualserver.org/ [19:30] cloakable: not really a lot [19:30] RoyK^: More than 45W? [19:30] :P [19:30] cloakable: I think lvs can do it. [19:31] binBASH: awesome, will look at [19:31] erm - iirc lvs is a network thing, not a processing thing [19:32] RoyK^: lvs will let multiple nodes appear as one supernode afaik [19:33] What would be really awesome would be a network-aware hypervisor >.> [19:33] cloakable: Too slow [19:33] Possibly [19:33] has it been tried? ;) [19:33] don't think so. [19:33] binBASH: yes, on IP, but not sharing computing tasks [19:34] RoyK^: yeah, could be. [19:34] lvs is nice for web servers and so on, but not for VMs [19:35] Would like to deploy an LTSP image onto a group of say 3-4 4-core machines :) [19:35] Or just use it as a desktop :D [19:36] impossible afaik ;) [19:36] Which is a shame, because it would be awesome :) [19:36] hehe [19:37] An area Atom would shine in ;) [19:37] RoyK^: The worst thing about that many nodes. Administration overhead. So puppet to the rescue ;) [19:37] heh [19:37] or cluster-ssh ;) [19:38] cloakable: I have it already. [19:38] :) === vegar is now known as Guest79698 [20:07] Hi, I have 2 ubuntu desktop machines and 1 ubuntu server machine. Using tcpdump, both desktop machines receive a multicast audio stream on my network, while the server machine does not [20:07] is there something on the server edition which might block multicast traffic? [20:31] the only thing that is different should be the kernel as far as I can tell [20:36] Apr 10 15:29:24 server deliver(root): msgid=<20100407105335.E878661D8@$DOMAINt>: save failed to INBOX: Internal error occurred. Refer to server log for more information. [2010-04-10 15:29:24] [20:36] Apr 10 15:29:24 server deliver(root): stat(/root/Maildir/tmp) failed: Permission denied (euid=65534(nobody) egid=65534(nogroup) missing +x perm: /root) [20:36] I still haven't figured out how to fix that issue [20:36] (not that I've been looking very hard) [21:03] Had a quick question for you guys. I'd like to build a simple home server since it's something I've been without for way too long (I've been so lucky to never lose data ... yet). 
Anyways I'll probably be doing basic stuff... File server, FTP, simple webserver....But I would like a graphical environment (kde, gnome, etc) for VNC. What hardware would you recommend to run a raid 1 or raid 0+1 (4 drives). Trying to keep it affordab [21:03] Looking for mobo / raid controller / processor recommended. [21:06] for your data I'd definately say a good raid 5 [21:07] I'm fine with a raid 5 setup too [21:07] but personally never done really any raid setups [21:07] * animeloe[net] only uses hardware raid, so can't help with software raid [21:07] replaced drives in raid arrays and whatnot, but never purchased hardware [21:08] got lots of money to spare? [21:08] get a nice areca or equivelent [21:08] Hah I do, but cheaper the better obviously :) [21:08] well [21:08] more a hobby and for work experience then a necessity [21:09] you want raid, you'll be spending at least a thousand just on a card [21:09] jared_1: you may want to check this out https://wiki.ubuntu.com/ReliableRaid [21:09] jared_1: explains some of the current issues [21:09] jared_1, if you don't want to lose data, make backups. raid has nothing to do with keeping data safe [21:12] Guest79698: servers should receive multicast just as clients do - the kernel isn't that different [21:12] Guest79698: is the server running on hardware or is it a vm? [21:26] on hardware [21:27] i vaguely remember an incident with a port mapping to a server which didnt work because there was some form of hardening on the server [21:28] Guest79698: ufw status [21:29] if ufw is enabled, it might block multicast [21:30] there are rules for multicast in /etc/ufw/before.rules [21:31] they are allowed in the default install [21:34] yeah, it wasn't that.. this is pretty weird, it seems the RTP control traffic(length 220) comes through but the audio traffic itself(length 1292) does not [21:36] erm - RTCP gomes through but not RTP? [21:36] seems like it [21:36] it might be a router problem, i'm not sure.. the rtp traffic pops up on both the other machines regardless of having pulseaudio rtp receive on [21:36] RTCP is usually imbedded in RTP, though - perhaps RTSP? [21:37] i'll fix up a paste for you with some info, hold on [21:48] http://pastebin.com/kzVynkCB [21:49] note the different port in the traffic on the server machine, which got me thinking it might be RTCP traffic which because of a lot of actual traffic doesnt neccessarily show up on the others [21:50] both ports are owned by the pulseaudio process === vegar is now known as Guest62434 [22:17] i'm Guest79698 btw :) [22:17] with the multicast issue [22:18] Guest62434: why not get a proper nick? [22:18] good question === Guest62434 is now known as vegar_ [22:21] vegar uten d og greier [22:21] !no | RoyK^ [22:21] RoyK^: Hvis du vil diskutere på Norsk, vennligst gå til #ubuntu-no. Takk! [22:22] jada :) [22:22] wrong response :p [22:23] I don't discuss stuff in Norwegian in here, but a short comment or two should be accepted [22:24] RoyK^: it was no reprimand - I wanted to be helpful [22:25] we have feelings too you know guntbert :) [22:25] I know the rules, thanks :) [22:26] vegar_: why wouldn't you? 
:) [22:26] * RoyK^ hands guntbert a bunch of dried fish to snac on [22:26] snack, even [22:27] * guntbert nibbles [22:29] mér finns gott að eta harðfiskur í kvölð [22:29] now it's starting to get out of hand [22:29] :D [22:30] I'll quit it [22:30] it'd be nice if someone could fast-forward the btrfs progress so that I could use linux for storage and not having to use friggin' opensolaris [22:32] Any of you know how a partition UUID is calcualted? [22:32] RoyK^: put your own effort into it :D [22:32] MTecknology: only how you can find it: blkid :) [22:32] MTecknology: heh - I'm not a coder - it's easier to just use zfs [22:33] guntbert: :P - I use ls -l /dev/disk/by-uuid/ [22:34] MTecknology: right - but if I remember correctly that is not always correctly populated [22:35] guntbert: no, not after you change things before a reboot - I never knew blkid before today :P [22:35] MTecknology: ok [22:37] New bug: #560299 in samba (main) "package samba-common-bin 2:3.4.0-3ubuntu5.6 failed to install/upgrade: Unterprozess installiertes post-installation-Skript gab den Fehlerwert 2 zurück (dup-of: 514963)" [Undecided,Confirmed] https://launchpad.net/bugs/560299 [22:39] guntbert: how are ya? [23:48] How do i reset the password on a server using an install CD [23:49] you can use a liveCD [23:49] with the server CD use the rescue mode [23:51] Anyone heard Radiohead - Reckoner and thought.. is this RHCP?
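The rescue-mode password reset mentioned at the end amounts to roughly this from a live CD; the device name is an example and assumes a plain, unencrypted root filesystem, and "myuser" is a placeholder account name:
    sudo mount /dev/sda1 /mnt
    sudo chroot /mnt passwd myuser     # or 'passwd root', depending on which account is locked out
    sudo umount /mnt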