hggdh | kirkland: still there? | 01:35 |
---|---|---|
=== swift_ is now known as swift | ||
=== swift_ is now known as swift | ||
steven_t | heh! | 01:49 |
steven_t | i installed nginx with aptitude install nginx... and removed it with aptitude --purge remove nginx... but guess what? /etc/init.d/nginx is still there, as is /etc/nginx and all the manpages etc | 01:49 |
zul | steven_t: file a bug then | 01:58 |
steven_t | lol | 01:59 |
lamont | ScottK: sup? | 02:10 |
ScottK | lamont: I was thinking about the new postscreen tool in 2.7. | 02:11 |
ScottK | Upstream has clearly labled it "Experimental" in 2.7. | 02:11 |
ScottK | Should it be split out into a separate binary that's not installed by default? | 02:11 |
lamont | hrm... possibly | 02:12 |
ScottK | I was thinking that experiments shouldn't be part of the default mail server task for an LTS. | 02:12 |
lamont | I would not be averse to such a thing | 02:12 |
lamont | ah, come on.... where's your sense of ADVENTURE?? | 02:12 |
lamont | er, I mean, I agree | 02:12 |
ScottK | Heh. | 02:12 |
lamont | you wanna work up a diff? | 02:13 |
ScottK | I know that 'experiment' in the default install is now an Ubuntu Desktop tradition, but for Server, I think not so much a great idea. | 02:13 |
lamont | harsh dude | 02:13 |
ScottK | Accurate. | 02:13 |
ScottK | OK. Let me see what I can do. | 02:15 |
cloakable | hmmm | 02:17 |
cloakable | Oh, how is spam filtering in the mail task going? | 02:17 |
ScottK | I lost track of how far we got on that. | 02:19 |
ScottK | ivoks would know, but he's not around. | 02:19 |
ScottK | In any case, amavisd-new with spamassassin and clamav is the standard, documentation approach. | 02:20 |
cloakable | awesome | 02:22 |
cloakable | I've been trying with dovecot-antispam, because it seems it would give the best result, if I could get it working (: | 02:22 |
cloakable | (I.e monitors a spamfolder, and on movement out of the folder, calls the spamfilter automatically to mark as 'ham') | 02:25 |
cloakable | However, documentation on it is nonexistent :( | 02:26 |
ScottK | cloakable: Start with the Ubuntu Server Guide documentation on spam filtering. | 02:27 |
ScottK | dovecot-antispam would be an advanced part you might bolt onto it later. | 02:28 |
cloakable | ScottK: That's a little clunky to train for nonspam, though. | 02:28 |
cloakable | Needs ssh-ing into the server to call manually. | 02:28 |
cloakable | And while I can do that, I'd rather not have to, and there's users on my server that cannot :) | 02:29 |
ScottK | With a good set of RBLs + amavisd-new/spamassassin/clamav you get rid of an awful lot of it without having to mess with bayesian filter training. | 02:31 |
ScottK | I'd get that set up first and then see if you want to bother. | 02:31 |
cloakable | Mmmmm. | 02:34 |
cloakable | And when I get spam in my ham and ham in my spam? :P | 02:34 |
ScottK | First see how much of it is before you solve the problem. | 02:35 |
* cloakable finds out what was wrong with dovecot-antispam >.> | 02:47 | |
cloakable | It's been compiled with the wrong backend :) | 02:48 |
* cloakable gets the source, comments out 'mailtrain' and puts in 'dspam' | 02:49 | |
=== cyphermox_ is now known as cyphermox | ||
cloakable | There, added 'dovecot' to the dspam trusted list | 03:22 |
uvirtbot | New bug: #559745 in eucalyptus (main) "NC failed to start a session with a libvirt internal error" [Undecided,New] https://launchpad.net/bugs/559745 | 03:32 |
uvirtbot | New bug: #559752 in samba (main) "package samba-common 2:3.4.0-3ubuntu5.6 failed to install/upgrade: el subproceso script post-installation instalado devolvió el código de salida de error 1" [Undecided,New] https://launchpad.net/bugs/559752 | 03:41 |
ScottK | lamont: Never mind. Apparently it's so experimental Wietse didn't include it in the tarball. No wonder I couldn't find it. | 03:49 |
uvirtbot | New bug: #553853 in samba (main) "(Kubuntu) Samba shares (fstab) slow down system shutdown/reboot" [Undecided,New] https://launchpad.net/bugs/553853 | 04:04 |
lamont | ScottK: he tends to be more pedantic than me about experimental vs official - which lets me ignore that aspect without thinking much about it | 04:11 |
lamont | to the point that you made me go "wut, yeah kill that" thinking you'd actually seen it there already | 04:11 |
ScottK | Which is a good thing to have in an MTA author. | 04:11 |
lamont | very much so | 04:12 |
lamont | on that note, sleep time. | 04:12 |
ScottK | I thought I'd seen it referred to as being in the release onthe mail list. | 04:12 |
lamont | head->pillow | 04:12 |
ScottK | Good night. | 04:12 |
GhostFreeman | I just killed two birds | 05:50 |
GhostFreeman | good hello | 05:50 |
GhostFreeman | I wish I would use the right channel | 05:51 |
ScottK | It's more fun for us when you don't. | 06:01 |
=== swift_ is now known as swift | ||
aetaric | hey, i added a scsi drive to my live server and it isn't showing up, do i have to reboot it? | 07:00 |
=== swift_ is now known as swift | ||
histo | ZenMasta_: are you there? | 07:10 |
ZenMasta_ | need some help installing pdo and pdo_mysql i get a message sh: phpsize not found | 07:11 |
histo | looks like pdo is part of the php code now and you should just need to skip to pdo_mysql | 07:12 |
ZenMasta_ | i see, let me try and see what happends | 07:12 |
ZenMasta_ | histo same error | 07:13 |
histo | ZenMasta_: yeah what version of ubuntu are you using? | 07:13 |
ZenMasta_ | 9.10 | 07:13 |
ZenMasta_ | on a side note, when I try to install pdo_mysql it downloads pdo_mysql and then after it downloads pdo still | 07:14 |
histo | and you're using sudo pecl install pdo | 07:14 |
ZenMasta_ | yep | 07:14 |
histo | do you have php5-dev installed? | 07:15 |
ZenMasta_ | I think so how can i find out without trying to install it again | 07:16 |
histo | dpkg -l | grep php5 | 07:16 |
ZenMasta_ | just decided to install before you typed that | 07:17 |
histo | should show a php5-dev package but like I said i think pdo is obsolete | 07:17 |
ZenMasta_ | so we'll see what happends | 07:17 |
histo | yeah pdo has been moved into the php source | 07:18 |
ZenMasta_ | histo that did it | 07:18 |
ZenMasta_ | installing now so i'll try the web app when its done and hopefully it will progress | 07:18 |
histo | did you try the webapp prior to running pecl | 07:18 |
histo | !info php | 07:19 |
ubottu | Package php does not exist in karmic | 07:19 |
histo | !info php5 | 07:19 |
ubottu | php5 (source: php5): server-side, HTML-embedded scripting language (metapackage). In component main, is optional. Version 5.2.10.dfsg.1-2ubuntu6.4 (karmic), package size 1 kB, installed size 20 kB | 07:19 |
histo | ZenMasta_: http://pecl.php.net/package/PDO/php-src/pdo | 07:20 |
histo | see | 07:20 |
ZenMasta_ | thanks | 07:21 |
=== swift__ is now known as swift | ||
=== swift__ is now known as swift | ||
ZenMasta_ | how do I edit php.ini? when I try to open it with vi it's as if it doesn't exist so it pretends to make a new file | 07:51 |
binBASH | Hi | 11:59 |
KristianDK | Hello - i would like to test out the ubuntu enterprise cloud with more than 1 or 2 nodes, so i was considering if you can run UEC on Amazon Ec2 for testing? It seems this is the only way to "rent" a lot of computers for a short period of time | 13:10 |
RoyK^ | hi all. seems I'm doing something strange here. I try to mount an nfs filesystem on the host from a virtualbox VM, but I get 'mount.nfs: mount to NFS server 'rpcbind' failed: RPC Error: Program not registered' - 'mount.nfs: internal error' - /etc/exports looks right, services are started and ufw has 'allow from x.x.x.x' (the VMs address). The VM runs in bridge mode. Any ideas? | 13:32 |
pmatulis | RoyK^: is mountd running on the server? | 14:05 |
RoyK^ | hm. no. what starts that? thought that should be in nfs-kernel-server or something | 14:06 |
twb | Start by checking "rpcinfo -p" | 14:08 |
RoyK^ | http://pastebin.com/98hsiw0x | 14:14 |
binBASH | KristianDK: You could use public cloud from Eucalyptus | 14:14 |
KristianDK | binBASH, but i want to check out the configuration :-) Not how the instances works | 14:15 |
twb | Isn't the whole point of EUC that it's backwards-compatible with Amazon? | 14:15 |
twb | Er, UEC | 14:15 |
binBASH | KristianDK: It's possible to run everything on one node. | 14:16 |
binBASH | Not really a need for multiple nodes....... | 14:16 |
uvirtbot | New bug: #560011 in ntp (main) "Time cannot be fixed with ntpdate" [Undecided,New] https://launchpad.net/bugs/560011 | 14:16 |
KristianDK | binBASH, both controller and node in one box? | 14:16 |
binBASH | yeah | 14:16 |
RoyK^ | twb, pmatulis any ideas? | 14:16 |
binBASH | have the same here | 14:16 |
twb | RoyK^: pastebin "exportfs -vra" | 14:17 |
binBASH | KristianDK: I have 7 nondes, including the one with cluster and cloud controller | 14:17 |
RoyK^ | twb: exporting 213.236.233.67:/var/www | 14:18 |
binBASH | planning to have 150 nodes ;) | 14:18 |
RoyK^ | tried with * as well | 14:18 |
KristianDK | binBASH, cool :-) Well, i want to test things out before deploying it in a big scale | 14:18 |
binBASH | Like me then ;) | 14:19 |
KristianDK | binBASH, do all your nodes have the VT extension as its recommended? | 14:19 |
binBASH | yup | 14:19 |
binBASH | KristianDK: http://www.hetzner.de/en/hosting/produkte_rootserver/eq6/ | 14:19 |
binBASH | have those as nodes | 14:20 |
RoyK^ | 150 nodes??? how many racks? | 14:20 |
KristianDK | binBASH, i was actually considering http://www.hetzner.de/en/hosting/produkte_rootserver/eq8/ :-D | 14:20 |
KristianDK | im already a customer there for some other servers | 14:20 |
binBASH | KristianDK: It will be a problem with network configuration ;) | 14:21 |
binBASH | still stucked at this...... | 14:21 |
KristianDK | yeah, i guess because of the IP adresses being bound the MAC and the limitation of the 100mbit router, right? | 14:21 |
binBASH | Yeah, the ips are bound to server. | 14:22 |
binBASH | RoyK^: Dunno, this providers does tower hosting...... | 14:22 |
binBASH | KristianDK: With 150 nodes, there will be a lot of ram anyways, don't need eq8 really ;) | 14:23 |
RoyK^ | binBASH: if you need 150 nodes, I'd guess hosting it locally may be a lot cheaper | 14:23 |
binBASH | RoyK^: Don't think so. | 14:24 |
binBASH | don't have money to buy all those servers ;) | 14:24 |
KristianDK | binBASH, true :) I think i'll end up with an ESXi more on the EQ8 anyway, since everything else seems complicated with hetzner :( | 14:24 |
RoyK^ | binBASH: but .... 150 nodes? you can run like 10 VMs on a single node - perhaps more - what do you need this for? | 14:24 |
binBASH | KristianDK: Well, I'll try to start vms now with vnc option and configure networking inside there manually. | 14:25 |
binBASH | RoyK^: Need them not for the vms, but for storage | 14:25 |
RoyK^ | binBASH: how much storage do you need? | 14:25 |
KristianDK | binBASH, i've configured a router VM which the IPs are bound to, it forwards the IPs | 14:26 |
binBASH | RoyK^: 200 TB | 14:26 |
RoyK^ | binBASH: I just got an offer for such a box - NOK 250k | 14:26 |
binBASH | KristianDK: I don't want to NAT, because 2 TB Traffic Limit per node | 14:26 |
RoyK^ | binBASH: and storage should be done on zfs imho | 14:26 |
RoyK^ | NOT on a VM | 14:26 |
RoyK^ | but on hardware | 14:26 |
binBASH | RoyK^: I'll use GlusterFS | 14:27 |
RoyK^ | why not just a big supermicro box stuffed with 2TB drives and a SAS expander and some extra chassises for disks? | 14:27 |
binBASH | RoyK^: Like I said, I'm limited in Finances ;) | 14:27 |
RoyK^ | it'll be cheaper | 14:27 |
KristianDK | binBASH, i dont use NAT, its Ip forwarding, i use the router as gateway in the network config - but i don't think you can get around the 2tb limit anyway? Its bound to the IPs | 14:28 |
RoyK^ | you need 200TB and can't afford it? | 14:28 |
binBASH | RoyK^: It's a single point of failure. | 14:28 |
RoyK^ | binBASH: then get two of them and use zfs send/receive to keep them in sync | 14:28 |
binBASH | KristianDK: if you forward to one server it will count there | 14:28 |
binBASH | RoyK^: Atm I'm using 20 TB Raid 6 NFS Server. | 14:29 |
RoyK^ | binBASH: I REALLY doubt you can get something cheaper from somewhere else | 14:29 |
RoyK^ | binBASH: we bought this box some time back with 30TB net storage - it just cost like USD 10k | 14:29 |
binBASH | RoyK^: Everything I was looking for was more expensive. like NetApp or Isilon | 14:29 |
KristianDK | binBASH, as i understood from hetzner you need one router VM per physical server | 14:29 |
RoyK^ | binBASH: hah - use supermicro hardware, cheap drives, and opensolaris with zfs (with compression and dedup) | 14:30 |
RoyK^ | binBASH: where I work, we have rather high storage demands - windfield and other satellite data takes up space, and we're getting more and more all the time | 14:31 |
binBASH | KristianDK: If it would be like this, I wouldn't use the vm for routing. | 14:31 |
binBASH | Would just use the main box itself, because the ip is unusable anyways from within vms. | 14:32 |
KristianDK | binBASH, i was told this was the only option - what else would you do? | 14:32 |
KristianDK | true | 14:32 |
KristianDK | i was thinking ESXi again | 14:32 |
KristianDK | sorry :P | 14:32 |
binBASH | KristianDK: I already, started vms manually and they had a usable ip address. | 14:32 |
binBASH | but I dunno how to automate it within Eucalyptus, that's the only problem | 14:32 |
KristianDK | yeah | 14:32 |
RoyK^ | binBASH: really, using VMs for storage is a BAD idea | 14:33 |
binBASH | RoyK^: I don't wanna use vms for storage ;) | 14:33 |
binBASH | the storage will be on the real server itself, though I will embedd it from inside the vms | 14:34 |
RoyK^ | binBASH: but - please - give opensolaris+zfs a try - it's well worth it. no raid controller, just zfs doing it all | 14:34 |
RoyK^ | zfs rocks rather loudly | 14:34 |
binBASH | RoyK^: If I would use opensolaris on some boxes I wouldn't be able to use their processors. | 14:35 |
binBASH | I have rather high demand on cpu | 14:35 |
binBASH | storage speed is not that important | 14:35 |
RoyK^ | for your needs, I would say separate storage and computation | 14:36 |
RoyK^ | use storage computer cpu for compression and dedup | 14:36 |
binBASH | Don't need compression | 14:37 |
RoyK^ | depending on the data, both can give you quite a bit of gain without much cpu use | 14:37 |
binBASH | for jpgs it's useless | 14:37 |
RoyK^ | what sort of data is this? | 14:37 |
RoyK^ | indeed | 14:37 |
binBASH | RoyK^: We're hosting image agencies. | 14:37 |
binBASH | Things like www.gettyimageslatam.com | 14:37 |
RoyK^ | but stuff like zfs snapshotting is quite priceless | 14:37 |
twb | btrfs and LVM do snapshotting | 14:38 |
RoyK^ | btrfs is NOT stable | 14:38 |
twb | Granted | 14:38 |
RoyK^ | LVM snapshotting is crap | 14:38 |
twb | LVM snapshotting is adequate for my purposes | 14:38 |
RoyK^ | LVM snapshotting moves data out of the original place for each write instead of writing new data and moving pointers | 14:38 |
RoyK^ | meaning if you have lots of snapshots, everything will be very, very slow | 14:39 |
twb | Um, both LVM and ZFS snapshotting are block COW. | 14:39 |
twb | I grant you that LVM is probably a lot slower. | 14:39 |
RoyK^ | lvm moves data out before overwriting them - not like zfs, which writes new data | 14:39 |
twb | Shrug. | 14:40 |
RoyK^ | CoW is two different things - either write new data and move pointers, which is what ZFS and NetApp does, or move the old data prior to overwriting the old ones, which is what LVM does | 14:40 |
binBASH | maybe I'll take Strato HiDrive Pro for Storage ;P | 14:40 |
twb | At the end of the day, ZFS is not enough to make me adopt osol. | 14:40 |
binBASH | 5 TB mirrored = 149 Eur / Month | 14:40 |
RoyK^ | twb: heh - then you really haven't looked into it | 14:41 |
twb | I'm running a 2TB osol server for ZFS right now. | 14:41 |
binBASH | RoyK^: If you really can afford a big storage you should go to Isilon ;) | 14:42 |
twb | But as soon as btrfs is ready, it will die | 14:42 |
binBASH | It's a much better technology | 14:42 |
RoyK^ | btrfs is decent, but lacks a lot of what's in zfs atm. give it a year or two and it might catch up | 14:43 |
RoyK^ | but yes, I will also switch to btrfs once it's there | 14:43 |
twb | Exactly | 14:43 |
twb | Which will unfortunately not be until 2012 (for LTS) :-( | 14:44 |
binBASH | KristianDK: I really don't know how to master that network problem ;) | 14:45 |
RoyK^ | but then, I can't wait two years for a storage solution, and then opensolaris is the way togo | 14:45 |
KristianDK | binBASH, i think we need to talk to hetzner | 14:45 |
KristianDK | they are kind of blocking for allowing this | 14:45 |
binBASH | :p | 14:45 |
binBASH | KristianDK: Like they're blocking gigabit as well | 14:46 |
KristianDK | exactly, i asked them for gbit | 14:46 |
KristianDK | and you can actually get that | 14:46 |
binBASH | KristianDK: You can have it, just costs........ | 14:46 |
KristianDK | yep | 14:46 |
binBASH | you need to have flexipack and another nic | 14:46 |
binBASH | and additionally a switch | 14:47 |
binBASH | and additionally 69 Eur for moving all your servers so they are beside each other. | 14:47 |
RoyK^ | binBASH: what makes me wonder is why you (or your company) are hosting terabytes of data and can't afford a decent (and quite cheap) storage solution, like the osol-based one we have. It can be expanded quite easily with a SAS expander and won't cost a lot using WD Green drives or so | 14:49 |
binBASH | RoyK^: Because agencies don't pay that much ;) | 14:50 |
binBASH | RoyK^: And they pay monthly. Not a year in advance :p | 14:51 |
RoyK^ | well, loan some money and it'll pay back quite quickly | 14:51 |
RoyK^ | for EUR 10k, you get 30-35TB, which I guess will be sufficient for some time | 14:51 |
binBASH | RoyK^: Like I said we have already 20 TB. | 14:52 |
RoyK^ | no, wait, more - wait... | 14:52 |
KristianDK | lol | 14:52 |
binBASH | and we bought it for 9K 2 years ago | 14:52 |
binBASH | though it's a single point of failure and not mirrored | 14:52 |
binBASH | hi leonel | 14:53 |
leonel | ea binBASH | 14:54 |
RoyK^ | binBASH: just got this offer - supermicro box with 36x2TB disk and an ok motherboard, a bunch of memory, some cpus etc, meaning if you use three RAIDz2 groups of 12 drives each, it gives you 20x3=60TB -> price NOK 86k, around EUR 10k, and possibly cheaper outside Norway | 14:54 |
RoyK^ | 34 2TB drives, that was, but still (forgot about the root SSDs) | 14:55 |
binBASH | 10K for one box I assume ;) | 14:55 |
RoyK^ | yes, but it's still cheap | 14:55 |
binBASH | Here we pay 8500 Eur for a box with 24 x 2 TB | 14:55 |
RoyK^ | with 3xraidz2, you can loose six drives in total | 14:55 |
RoyK^ | wierd - that's _more_ expensive :) | 14:56 |
* RoyK^ thought Norway was meant to be the expensive place | 14:56 | |
binBASH | though if a box fails raid is useless ;) | 14:56 |
RoyK^ | binBASH: then get two and use zfs send/receive to mirror the two | 14:57 |
RoyK^ | and when storage fills up, get a SAS expander and an extra chassis and some drives | 14:57 |
twb | osol not supporting ext2 is a real pain in the arse. | 14:58 |
RoyK^ | over 3-5 years, I would guess you would save LOTS of doing this yourself instead of paying others to do the same | 14:58 |
RoyK^ | twb: why should it??? | 14:58 |
twb | So that I can seed the osol box by sneakernet instead of our shitty 100baseT and ADSL lines | 14:58 |
binBASH | RoyK^: Well I would lack then cpu power for video processing | 14:58 |
RoyK^ | binBASH: don't! | 14:59 |
RoyK^ | binBASH: use NFS | 14:59 |
RoyK^ | or iSCSI | 14:59 |
RoyK^ | or CIFS | 14:59 |
binBASH | RoyK^: We're having NFS already ;) | 14:59 |
RoyK^ | nfs performs well enough for that - for those storage needs, it would be silly to put all services in one place | 15:00 |
binBASH | huh? | 15:00 |
RoyK^ | get a storage server with sufficient memory and cpu for the storage alone and get compute nodes to do the ugly stuff | 15:00 |
twb | RoyK^: what's his use case? Just normal office documents and such? | 15:01 |
RoyK^ | twb: images and video | 15:01 |
binBASH | I think getting 150 Nodes which offer 1200 cpu cores + 200 TB Storage is better ;) | 15:01 |
RoyK^ | if planning for 3-6 months, sure, but if you are planning to be in business for a long time, buying hardware will save you a lot of money | 15:02 |
twb | Yeah, NFS over 1000baseT to a single NAS or SAN is probably the Right Thing. | 15:02 |
binBASH | RoyK^: You forget the fact you normally throw out servers every 2 years | 15:03 |
RoyK^ | binBASH: not really - storage servers can last a LONG time | 15:03 |
RoyK^ | especially with zfs - autogrow is nice | 15:03 |
RoyK^ | take a zfs mirror, replace one part with a larger drive, resilver, replace the other, resilver, and zfs says 'oops - I'm bigger' | 15:04 |
RoyK^ | same with raidz volumes | 15:04 |
binBASH | Raid rebuild will take ages | 15:04 |
RoyK^ | not really - for 30TB a scrub takes a couple of days | 15:05 |
binBASH | I already takes 4 hours to rebuild the current raid ;p | 15:05 |
RoyK^ | resilver about the same | 15:05 |
RoyK^ | and replacing drives isn't what's done daily | 15:05 |
RoyK^ | but hey, I've just been working with storage for 10+ years, do as you please | 15:05 |
binBASH | I think a distributed storage architecture is much better. | 15:06 |
binBASH | Companies like NetApp or Isilon doing it as well. | 15:06 |
RoyK^ | how much do they charge you per month for 200TB? | 15:06 |
KristianDK | binBASH, have you, btw, checked for hetzner alternatives? | 15:06 |
binBASH | KristianDK: Yup | 15:06 |
binBASH | KristianDK: But only in Germany | 15:07 |
KristianDK | binBASH, and they have the same sucky setup? :P | 15:07 |
KristianDK | binBASH, are you german? | 15:07 |
binBASH | They are even mor worse. | 15:07 |
RoyK^ | NetApp is doing quite well, yes, but they charge you EUR 100k for a few terabytes | 15:07 |
binBASH | KristianDK: I live in Switzerland :) | 15:07 |
KristianDK | binBASH, ok - cool :-) I'm from Denmark, so i speak a bit German, but sometimes i really don't get what they are trying to tell me at hetzner :P | 15:08 |
binBASH | RoyK^: Things like GlusterFS are working like this. | 15:08 |
binBASH | KristianDK: I moved from Germany to Switzerland in 2007 | 15:08 |
RoyK^ | does that support stuff like versioning or snapshotting? | 15:09 |
KristianDK | binBASH, well, the problem is i've been searching for alternatives to hetzner, but they seem remarkably cheap compared to everything else | 15:10 |
KristianDK | and im actually satisfied with everything but their network setup :P | 15:10 |
KristianDK | binBASH, however - they recently introduced the failover IP thing | 15:11 |
KristianDK | which redirects and IP to another server | 15:12 |
KristianDK | maybe we can work something out with this thing? | 15:12 |
RoyK^ | binBASH: but how much for 100T? | 15:12 |
RoyK^ | or 200 | 15:12 |
uvirtbot | New bug: #560047 in dovecot (main) "new upstream version available" [Undecided,New] https://launchpad.net/bugs/560047 | 15:17 |
binBASH | RoyK^: Like I said each node gives 2,7 TB | 15:17 |
binBASH | For mirroring you need 2 nodes. | 15:17 |
binBASH | A node costs 69 Eur / month | 15:18 |
binBASH | and provides 8 cpu cores which I can use for video rendering and image processing | 15:18 |
binBASH | because we're a swiss company we don't have to pay German VAT | 15:19 |
binBASH | so it's cheaper. | 15:19 |
binBASH | so it costs like 9500 Eur / Month. | 15:20 |
binBASH | KristianDK: For the failover ip you need flexipack, which is 15 Eur / Month | 15:23 |
binBASH | RoyK^: I would agree as pure storage it's too expensive. | 15:24 |
jondowd | good morning - I have a Dell Precision 650. I want to run SATA drives on it as boot devices. Can I install a 3rd party SATA PCI card and boot from a drive connected to it? thanks | 15:27 |
ScottK | You should be able to. | 15:28 |
ScottK | Absolute worst case scenario you unplug the installed drives from the built in controller, install, and then reconnect them. | 15:29 |
jondowd | ScottK: how do I get the BIOS to see the PCI card? | 15:29 |
ScottK | The one time I've had to worry about it, it just did. | 15:30 |
jondowd | (never booted from a add-in card) Cool - I'll give it a try - Thanks ! | 15:30 |
=== martin- is now known as zx | ||
=== zx is now known as martin- | ||
binBASH | RoyK^: http://gluster.com/community/documentation/index.php/Main_Page#Gluster_Filesystem | 15:33 |
koolhead17 | binBASH: :P | 16:06 |
binBASH | koolhead17: ? | 16:07 |
koolhead17 | gluster | 16:07 |
binBASH | koolhead17: It works here without problems so far. | 16:07 |
koolhead17 | binBASH: it rocks | 16:07 |
binBASH | koolhead17: are you using it? | 16:11 |
koolhead17 | binBASH: my friend owns the company behind this project :D | 16:12 |
binBASH | ohh :p | 16:13 |
koolhead17 | binBASH: he is the lead developer too :D | 16:14 |
binBASH | very cool | 16:14 |
binBASH | glusterfs is very good design I think. | 16:14 |
binBASH | Too bad I can use it with 100 Mbit only koolhead17 :p | 16:15 |
koolhead17 | binBASH: heh. poke them | 16:15 |
koolhead17 | i think #gluster | 16:15 |
binBASH | koolhead17: It's not a gluster issue, servers only have 100 mbit ;) | 16:15 |
twb | ScottK: absolute worst case is a wincontroller :-P | 16:16 |
ScottK | twb: True. | 16:16 |
twb | Or my boss's favourite trick -- buy a server with hotswap bays, but forget to buy the RAID5 chip for the hardware RAID controller | 16:17 |
binBASH | lol | 16:18 |
twb | In which case I could create up to two RAID0 arrays of one drive each, so I couldn't even make an md RAID5 | 16:18 |
binBASH | twb: Sounds more like epic fail than a trick ;) | 16:18 |
twb | binBASH: he has done it TWICE | 16:18 |
binBASH | twb: So he didn't learn? | 16:19 |
twb | And we're still running Pentium IIIs, so you can imagine how rarely we buy new gear | 16:19 |
binBASH | Pentium 3 omg | 16:19 |
binBASH | twb: How much people you are in company? | 16:20 |
twb | Probably about ten | 16:20 |
binBASH | ok, more than us then ;) | 16:21 |
twb | It's hard to tell because some spend months pimped out, and some ex-employees continue to lurk on the lists | 16:21 |
twb | We replaced the LaserJet 4 last month, and the sysadmin deploying it went "oh, cool, the NEW unit has only been EOLd by HP since 2008" | 16:21 |
binBASH | lol | 16:21 |
twb | Having said that, I loved that little LJ4 | 16:22 |
binBASH | sounds like a lack of money | 16:22 |
twb | There's a policy of handing most of the profits to the engineers instead of the company | 16:23 |
twb | But it's also a mindset thing. | 16:23 |
twb | We have a pair of Q9550 with 2TB of storage, but one got stolen to run rpppoe. | 16:24 |
binBASH | twb: We had such a policy as well twb | 16:25 |
binBASH | All money to personal | 16:25 |
binBASH | ;) | 16:25 |
binBASH | Get a project manager for 100K Eur / Year | 16:25 |
binBASH | kick him out 9 months later because he sucked | 16:25 |
binBASH | and have a second boss, which was also not very useful | 16:26 |
* cloakable eyes gluster storage platform >.> | 16:26 | |
pjp3rd | hi id like to set up monthly bandwidth quotas for my home network something like http://www.digirain.com/en/trafficquota-overview.html but ive been googling like crazy and i cant find anything like that for ubuntu, any suggestions? | 17:16 |
brianherman | pjp3rd: You can use iptables to set a quota | 17:28 |
brianherman | http://linuxgazette.net/108/odonovan.html | 17:28 |
brianherman | pjp3rd:http://linuxgazette.net/108/odonovan.html | 17:28 |
brianherman | pjp3rd: https://help.ubuntu.com/community/UFW | 17:29 |
pjp3rd | brianherman, thanks that looks like a good place to start | 17:33 |
binBASH | pjp3rd: why you want quote in home network? | 17:33 |
brianherman | pjp3rd: Use the ubuntu one it seems the simplest | 17:34 |
brianherman | Quota not quote | 17:34 |
binBASH | quota yeah ;) | 17:34 |
pjp3rd | brianherman, but id need to set up a seperate quota for each user, a way for user to check his quota and to automate it everymonth. so it would be nice if someone has already done that work.. | 17:35 |
pjp3rd | binBASH, cus the ISP gives me a quota and im sharing it with 16 people, seems like the most sensible way to avoid fights when we are running out after 2 weeks each month | 17:35 |
binBASH | pjp3rd: If you share the line with 16 people why not limit bandwidth? | 17:36 |
binBASH | pjp3rd: http://manpages.ubuntu.com/manpages/karmic/man8/tc.8.html | 17:37 |
pjp3rd | binBASH, im not sure what you mean? | 17:37 |
binBASH | pjp3rd: http://manpages.ubuntu.com/manpages/hardy/man8/tc-cbq-details.8.html | 17:37 |
pjp3rd | binBASH, im not looking to shape the bandwidth for speed im looking to make sure we dont exceed the monthly limit | 17:38 |
binBASH | ok | 17:38 |
sherr | pjp3rd: there's something called "bandwidthd" that might help in some way. | 17:42 |
brianherman | pjp3rd:http://bandwidthd.sourceforge.net/ | 17:43 |
pjp3rd | sherr, bandwidthd helps half the problem if it can moniter the bandwidth. id like to enforce limit as well | 17:44 |
binBASH | pjp3rd: At least then all would know who steal the traffic :p | 17:44 |
pjp3rd | binBASH, yip it would tell me but too late rather than fighting with people every month id just like to divide it equally and no more worries | 17:45 |
binBASH | pjp3rd: if you limit bandwidth for everyone it will be equal | 17:46 |
binBASH | pjp3rd: Think limiting bandwidth is better than having no internet for half of the month ;) | 17:48 |
pjp3rd | k, based on brianherman's tip ive found www.linuxquestions.org/questions/linux-networking-3/iptables-to-stop-bandwidth-completely-592827 which is what im basically looking for | 17:48 |
binBASH | but your decission ;) | 17:48 |
pjp3rd | problem is it doesnt seem like such a polished solution | 17:49 |
pjp3rd | binBASH, im not sure what you mean | 17:49 |
binBASH | pjp3rd: You said you have a quota, what if you exceed it? | 17:49 |
pjp3rd | my isp enforces a quota when we exceed it the connection is throttled to basically unusable speeds | 17:50 |
binBASH | pjp3rd: So why don't you distribute a max. bandwidth equally? | 17:51 |
pjp3rd | binBASH, what do you mean? | 17:51 |
binBASH | I mean, I wouldn't accept the fact if I'm amongst the 16 people, and one causing so much traffic, so I would have no internet then for half a month | 17:51 |
binBASH | pjp3rd: With the tc links I posted, you can assign each user equal bandwidth. | 17:52 |
binBASH | so it's not possible traffic limit will be exceeded | 17:53 |
pjp3rd | binBASH, correct me if im wrong but what i can understand from the link you posted tc/cbq can shape my connection meaning how much is been used by any given user/protocol at a given time, thats not going to help me to stop total monthly usage from exceeding the isp quota is it? | 17:55 |
binBASH | pjp3rd: With that you can limit the bandwidth for each user. So every user has equal line. | 17:56 |
binBASH | and you can setup a max. bandwidth rule as well, so with that you can't exceed your providers traffic limit. | 17:56 |
pjp3rd | binBASH, oh i didnt see details about that? how can i set up a monthly limit? | 17:57 |
binBASH | pjp3rd: You don't set a traffic limit. You set a bandwidth limit. | 17:58 |
binBASH | The Linespeed will be slower though | 17:58 |
pjp3rd | binBASH, can you give me more details? | 17:59 |
binBASH | pjp3rd: http://www.oamk.fi/~jukkao/lartc.pdf | 18:00 |
binBASH | read this ;) | 18:00 |
RoyK^ | binBASH: do you really need like 5k cores for this? I thought you were doing storage | 18:00 |
RoyK^ | or 800 cores, that is | 18:01 |
binBASH | RoyK^: Storage and Processing ;) | 18:01 |
binBASH | 1200 cpus actually | 18:02 |
RoyK^ | yeah, but you were talking about hosting images | 18:02 |
RoyK^ | yeah | 18:02 |
RoyK^ | 1200 cores | 18:02 |
RoyK^ | with 1200 cores you can do some rather fancy stuff, but then, what is it you're going to do with them? | 18:02 |
binBASH | RoyK^: Hosting images, resize them, watermark them, recalculate videos, etc...... | 18:02 |
RoyK^ | you might need 8 cores for that | 18:03 |
RoyK^ | not 1200 | 18:03 |
binBASH | RoyK^: If you don't wanna wait ages, you need more ;) | 18:03 |
RoyK^ | a resize normally can't be shared amongst cores | 18:04 |
RoyK^ | and I somehow doubt you have 1200 concurrent resizes | 18:04 |
pjp3rd | binBASH, i just skimmed through the whole book can you point me to which chapter should help me? | 18:04 |
binBASH | pjp3rd: Chapter 9 | 18:05 |
RoyK^ | binBASH: what are you using for this - imagemagick? can you upload some files for me to test? | 18:06 |
binBASH | RoyK^: ImageMagick for the images, yup | 18:07 |
pjp3rd | binBASH, im sorry i must be misunderstanding you but how is shaping traffic going to limit total usage per month per user? | 18:07 |
binBASH | RoyK^: One jpg is like 128 MB in worst case;) | 18:07 |
RoyK^ | binBASH: and how often are these uploaded/resized? | 18:07 |
binBASH | RoyK^: Very often ;) | 18:08 |
RoyK^ | seems to me 1200 cores for this job is like shooting sparrow with heavy artillery | 18:08 |
binBASH | RoyK^: Getty Images uses it for their editorial press content. | 18:08 |
RoyK^ | binBASH: define 'very often' | 18:08 |
binBASH | RoyK^: The problem is, the images will be transfered to news agencies. | 18:08 |
RoyK^ | well, it's only resized on upload, so how many images do need to convert per second? | 18:09 |
binBASH | RoyK^: The faster the better. | 18:09 |
RoyK^ | well, of course, but wasting a ton of money is useless | 18:10 |
binBASH | like I said it's editorial press content. Means, someone makes a photo of a football match. | 18:10 |
binBASH | it should be transfered immediately to news agencies when uploaded. | 18:10 |
binBASH | time counts...... | 18:10 |
RoyK^ | if you have 1200 concurrent file uploads in general, 1200 cores might be worth it, but 600 will probably do well, even 300 | 18:10 |
RoyK^ | then again, I somehow doubt you have 1200 _concurrent_ jobs in such a system | 18:11 |
RoyK^ | most likely 10+ | 18:11 |
binBASH | RoyK^: Yup, though we have some more customers, ;) | 18:11 |
RoyK^ | have you monitored your current system to see its load? | 18:11 |
binBASH | no | 18:11 |
RoyK^ | it should tell quite easily how much is needed | 18:12 |
RoyK^ | if load average at peak times is 4, give it 4 cores, etc | 18:12 |
RoyK^ | we have a 40 core compute farm at work for doing models, and that's eating some data. 1200 cores must be overkill for your use | 18:13 |
binBASH | RoyK^: There is another problem as well. The company develops some visual search engine atm. And noone knows what it will consume ;) | 18:13 |
RoyK^ | binBASH: then get a separate box for that as well | 18:14 |
RoyK^ | binBASH: it will probably need a truckload of RAM and fast disk access to its index, but not a lot of cpu | 18:14 |
binBASH | RoyK^: And how to do it without money? :P | 18:14 |
RoyK^ | binBASH: hey, kid, if you try to make business work, you need to invest. I'm just trying to give you simple advices, but it seems to me you know it all better than the rest of the world. keep on, kid, and you might be wanting to find a new job in a few months | 18:15 |
binBASH | RoyK^: Boss refuses to take new investors | 18:17 |
RoyK^ | tell your boss you can't do this without EUR 20k | 18:17 |
RoyK^ | that's not really a lot of money | 18:17 |
RoyK^ | tell him it'll cost several times as much even during the first year | 18:18 |
RoyK^ | or perhaps the first year will make it break even | 18:18 |
RoyK^ | binBASH: also, please understand that making large system work well usually means dividing services amongst servers, some for storage, som for computing | 18:20 |
RoyK^ | get a supermicro system for the first 60TB or so and add more disks later with SAS expanders - use it on opensolaris - share it with NFS - it can grow easily | 18:21 |
RoyK^ | then get small 1U boxes for doing the computing - start off with a quad intel or opteron with a bunch of cores, perhaps less, and you might see it's not really very heavily loaded | 18:22 |
binBASH | RoyK^: There is another problem as well. :-) We need entry points through geoip | 18:22 |
RoyK^ | if it is, add more | 18:22 |
binBASH | Means, getty has offices in asia, russia etc. | 18:22 |
binBASH | so we don't want to send them to german servers | 18:22 |
binBASH | but also to servers in usa | 18:22 |
RoyK^ | binBASH: fuck this - you reall don't listen - you've decided to use this or that already - I'm done trying to advice anymore now | 18:23 |
binBASH | and we really don't want to fly to usa to build something up there in a datacenter | 18:23 |
binBASH | good ;) | 18:23 |
RoyK^ | seems to me what you want is to brag about a truckload of terabytes, and you're doing it the wrong way, wasting money and making things worse | 18:23 |
RoyK^ | keep on, kid, but don't blame the ones of us that tried to help | 18:23 |
binBASH | RoyK^: Really don't wanted advice | 18:24 |
RoyK^ | I can see that | 18:27 |
RoyK^ | binBASH: out of interest - what is your current system's load average? | 18:27 |
binBASH | RoyK^: It should have a reason that companies like google have a shared storage ;) | 18:27 |
RoyK^ | google uses its own storage | 18:28 |
RoyK^ | for good reasons | 18:28 |
RoyK^ | I'm still curious about this load average of yours | 18:28 |
RoyK^ | also - how do you plan to parallelize that across 300 machines? | 18:29 |
* RoyK^ guesses binBASH was in a hurry and perhaps a little drunk when he made those plans | 18:31 | |
binBASH | RoyK^: That is what gearman is for | 18:35 |
RoyK^ | binBASH: what is the load average on your current box? | 18:37 |
bogeyd6 | binBASH, for very large deployments that require lots of storage that is similar you might consider a de-dupe filesystem such as lessfs | 18:38 |
bogeyd6 | sdfs also comes to mind | 18:39 |
RoyK^ | erm | 18:39 |
RoyK^ | does lessfs do dedup? | 18:39 |
bogeyd6 | RoyK, you been around long enough to know what google is for | 18:39 |
RoyK^ | bogeyd6: I've tried hinting on using zfs, but it seems binBASH has already decided and is just here to bra | 18:39 |
RoyK^ | bogeyd6: I've tried hinting on using zfs, but it seems binBASH has already decided and is just here to brag | 18:39 |
=== GhostFreeman_ is now known as GhostFreeman | ||
bogeyd6 | good im glad he is bragging about using Ubuntu Server in large environments and i hope he proudly announces it to his customers | 18:40 |
RoyK^ | bogeyd6: you've been around for long enough to know that to answer a yes or no might perhaps be a little more sophisticated and nice than just barking fgfi | 18:40 |
bogeyd6 | !google | RoyK | 18:41 |
ubottu | RoyK: While Google is useful for helpers, many newer users don't have the google-fu yet. Please don't tell people to "google it" when they ask a question. | 18:41 |
bogeyd6 | also, condescension is highly frowned upon, please refrain | 18:41 |
RoyK^ | bogeyd6: I know, SIR, but you spent more time on telling me to google it than a yes/no answer would take | 18:42 |
bogeyd6 | * RoyK^ guesses binBASH was in a hurry and perhaps a little drunk when he made those plans << belong in another linux support channel | 18:42 |
RoyK^ | bogeyd6: not really | 18:42 |
bogeyd6 | well i said my peace, i hope you consider signing the ubuntu code of conduct | 18:42 |
sherr | RoyK^: I thought your discussion with binBASH was quite interesting and useful until you ruined things by being rude and a little obnoxious. | 18:43 |
sherr | Let's all be civil. | 18:43 |
bogeyd6 | !conduct | RoyK | 18:43 |
ubottu | RoyK: The Ubuntu Code of Conduct is a community etiquette document to which we ask all Ubuntu users to adhere, and can be found at http://www.ubuntu.com/community/conduct/ . For information on how to electronically sign the CoC, see https://help.ubuntu.com/community/SigningCodeofConduct . | 18:43 |
RoyK^ | well, people, listen | 18:43 |
bogeyd6 | sherr, agreed and royk should also be congratulated on his level of participation in previous instances | 18:44 |
bogeyd6 | would make a very valuable member of the server community | 18:44 |
RoyK^ | mr binBASH first tried to ask about how to do his storage, and talked about 1200 cores doing image resizing for uploads, at which I asked why, and why not central storage, to which he merely barked that he didn't need my input | 18:45 |
RoyK^ | this is something that can annoy the one (me) trying to help one (him) out | 18:45 |
bogeyd6 | RoyK, zfs is in fact available in opensolaris | 18:46 |
RoyK^ | bogeyd6: yes, and did you know ext3 is available in linux? | 18:47 |
bogeyd6 | which is pretty awesome | 18:47 |
RoyK^ | scroll up :) | 18:47 |
RoyK^ | I was trying to tell him that | 18:47 |
bogeyd6 | wasnt available until 27a | 18:48 |
RoyK^ | but it seems like he wants a truckload of cpu nodes with 2TB each for some reason | 18:48 |
RoyK^ | 27a? what? | 18:48 |
bogeyd6 | i thought if you could recommend ZFS you would know a bit of its history and usage | 18:49 |
binBASH | RoyK^: Looks like you're totally mistaken, I never asked for storage. | 18:50 |
bogeyd6 | binBASH, did you have something you did need help with? | 18:52 |
binBASH | bogeyd6: originally I asked how I can start vms in ubuntu enterprise cloud with the -vnc parameter. | 18:52 |
bogeyd6 | i personally think that zfs needs to much horsepower and that makes it a disadvantage | 18:53 |
RoyK^ | bogeyd6: I just started using osol at 2009.06 - the old solaris platforms were just something I plaied with | 18:53 |
bogeyd6 | binBASH, that is a good question, i know it can be done on a VPS but in a desktop in a cloud? | 18:53 |
RoyK^ | bogeyd6: I know it needs a lot, but for dedicated storage, it's nice | 18:53 |
binBASH | RoyK^: If you want details what is the problem with our current setup you can come in query :) | 18:55 |
bogeyd6 | binBASH, https://wiki.edubuntu.org/UEC/Images/Testing | 18:55 |
bogeyd6 | my google-fu is 10th degree master | 18:55 |
RoyK^ | binBASH: I tried asking about the current load, since you insist on needing 1200 cores | 18:56 |
bogeyd6 | i got a load that would blow your mind | 18:57 |
RoyK^ | how nice | 18:58 |
bogeyd6 | 13:57:15 up 28 days, 5:48, 1 user, load average: 0.84, 1.38, 2.39 | 18:58 |
RoyK^ | well, seems like the system uses a core or two quite well | 18:58 |
bogeyd6 | hah! | 18:59 |
bogeyd6 | single processor | 18:59 |
* RoyK^ had a server peaking at load avg 32 the other day | 18:59 | |
RoyK^ | something went wrong in freeradius | 18:59 |
_ruben | only 32? ... i've reached 100+ on mailservers that were "spammed" :) | 19:11 |
RoyK^ | yeah, but this box was running a single radiusd that shouldn't really have been busy | 19:13 |
RoyK^ | guess it started a bunch of threads that went mad | 19:14 |
binBASH | RoyK^: You know not only cpu usage causes load | 19:14 |
RoyK^ | binBASH: yeah, but there weren't any funny processes in D or Z state or similar | 19:15 |
RoyK^ | just a truckload of threads that went spinning | 19:16 |
binBASH | back your question about our sys. Like I told you we're using imagick. Libjpeg doesn't use smp so we could calculate 8 images at once with one server. | 19:16 |
binBASH | one image takes around 3-6 seconds | 19:17 |
binBASH | the images with bigger filesizes take much longer | 19:18 |
RoyK^ | have you tried graphicsmagick? | 19:18 |
RoyK^ | it's said to be faster by far than imagemagick | 19:18 |
binBASH | yup, it lacks some features we need. | 19:19 |
RoyK^ | ok | 19:19 |
RoyK^ | but still - the time for one image to be resized is ok, but how about the system load over time? | 19:19 |
RoyK^ | that's what you should worry about when designing something new | 19:19 |
binBASH | if there is high processing the load is like 25 | 19:19 |
RoyK^ | can you distribute this load somehow? | 19:20 |
binBASH | it's already distributed :) | 19:20 |
RoyK^ | I mean, I guess there's a common web front | 19:20 |
binBASH | we have dedicated servers for web, for image processing, for sphinxsearch and for exports to partners/ftp ... | 19:21 |
binBASH | also database | 19:21 |
RoyK^ | ok | 19:21 |
RoyK^ | with glusterfs, what happens if you remove a node? are nodes mirrored as well as the drives on those nodes? | 19:22 |
binBASH | RoyK^: every node is backuped by another one | 19:22 |
RoyK^ | ok, so mirroring, somehow? | 19:23 |
binBASH | yup | 19:23 |
RoyK^ | I guess it still lacks the stuff zfs/btrfs has, though :P | 19:23 |
binBASH | Well, it's scalabe and a complete different technology. | 19:24 |
binBASH | I dunno if you know NetApp or Isilon. | 19:24 |
RoyK^ | I do | 19:24 |
RoyK^ | isilton, no, but netapp, yes | 19:24 |
binBASH | I personally would prefer Isilon over NetApp from what I've heard | 19:25 |
binBASH | bogeyd6: What does this link have to do with eucalyptus? It's just for testing kvm setup | 19:27 |
* cloakable eyes eucalyptus | 19:27 | |
binBASH | kvm works perfectly for me already ;) | 19:27 |
cloakable | Anyone here use eucalyptus? | 19:28 |
binBASH | cloakable: Yes ;) | 19:28 |
binBASH | cloakable: Ubuntu Enterprise Cloud is built on it. | 19:28 |
cloakable | binBASH: If I have two four-core nodes in the cloud, can I give, say, six to an instance? | 19:29 |
binBASH | cloakable: No | 19:29 |
RoyK^ | cloakable: you can't run a vm across multiple machines | 19:29 |
cloakable | Damn D: | 19:29 |
RoyK^ | get an amd 12-core :D | 19:29 |
cloakable | Which would suck up how many hundred watts? :P | 19:30 |
binBASH | cloakable: http://www.linuxvirtualserver.org/ | 19:30 |
RoyK^ | cloakable: not really a lot | 19:30 |
cloakable | RoyK^: More than 45W? | 19:30 |
cloakable | :P | 19:30 |
binBASH | cloakable: I think lvs can do it. | 19:30 |
cloakable | binBASH: awesome, will look at | 19:31 |
RoyK^ | erm - iirc lvs is a network thing, not a processing thing | 19:31 |
binBASH | RoyK^: lvs will let multiple nodes appear as one supernode afaik | 19:32 |
cloakable | What would be really awesome would be a network-aware hypervisor >.> | 19:33 |
binBASH | cloakable: Too slow | 19:33 |
cloakable | Possibly | 19:33 |
cloakable | has it been tried? ;) | 19:33 |
binBASH | don't think so. | 19:33 |
RoyK^ | binBASH: yes, on IP, but not sharing computing tasks | 19:33 |
binBASH | RoyK^: yeah, could be. | 19:34 |
RoyK^ | lvs is nice for web servers and so on, but not for VMs | 19:34 |
cloakable | Would like to deploy an LTSP image onto a group of say 3-4 4-core machines :) | 19:35 |
cloakable | Or just use it as a desktop :D | 19:35 |
binBASH | impossible afaik ;) | 19:36 |
cloakable | Which is a shame, because it would be awesome :) | 19:36 |
binBASH | hehe | 19:36 |
cloakable | An area Atom would shine in ;) | 19:37 |
binBASH | RoyK^: The worst thing about that many nodes. Administration overhead. So puppet to the rescue ;) | 19:37 |
cloakable | heh | 19:37 |
cloakable | or cluster-ssh ;) | 19:37 |
binBASH | cloakable: I have it already. | 19:38 |
cloakable | :) | 19:38 |
=== vegar is now known as Guest79698 | ||
Guest79698 | Hi, I have 2 ubuntu desktop machines and 1 ubuntu server machine. Using tcpdump, both desktop machines receive a multicast audio stream on my network, while the server machine does not | 20:07 |
Guest79698 | is there something on the server edition which might block multicast traffic? | 20:07 |
histo | the only thing that is different should be the kernel as far as I can tell | 20:31 |
animeloe[net] | Apr 10 15:29:24 server deliver(root): msgid=<20100407105335.E878661D8@$DOMAINt>: save failed to INBOX: Internal error occurred. Refer to server log for more information. [2010-04-10 15:29:24] | 20:36 |
animeloe[net] | Apr 10 15:29:24 server deliver(root): stat(/root/Maildir/tmp) failed: Permission denied (euid=65534(nobody) egid=65534(nogroup) missing +x perm: /root) | 20:36 |
animeloe[net] | I still haven't figured out how to fix that issue | 20:36 |
animeloe[net] | (not that I've been looking very hard) | 20:36 |
jared_1 | Had a quick question for you guys. I'd like to build a simple home server since it's something I've been without for way too long (I've been so lucky to never lose data ... yet). Anyways I'll probably be doing basic stuff... File server, FTP, simple webserver....But I would like a graphical environment (kde, gnome, etc) for VNC. What hardware would you recommend to run a raid 1 or raid 0+1 (4 drives). Trying to keep it affordab | 21:03 |
jared_1 | Looking for mobo / raid controller / processor recommended. | 21:03 |
animeloe[net] | for your data I'd definately say a good raid 5 | 21:06 |
jared_1 | I'm fine with a raid 5 setup too | 21:07 |
jared_1 | but personally never done really any raid setups | 21:07 |
* animeloe[net] only uses hardware raid, so can't help with software raid | 21:07 | |
jared_1 | replaced drives in raid arrays and whatnot, but never purchased hardware | 21:07 |
animeloe[net] | got lots of money to spare? | 21:08 |
animeloe[net] | get a nice areca or equivelent | 21:08 |
jared_1 | Hah I do, but cheaper the better obviously :) | 21:08 |
animeloe[net] | well | 21:08 |
jared_1 | more a hobby and for work experience then a necessity | 21:08 |
animeloe[net] | you want raid, you'll be spending at least a thousand just on a card | 21:09 |
histo | jared_1: you may want to check this out https://wiki.ubuntu.com/ReliableRaid | 21:09 |
histo | jared_1: explains some of the current issues | 21:09 |
blue-frog | jared_1, if you don't want to lose data, make backups. raid has nothing to do with keeping data safe | 21:09 |
RoyK^ | Guest79698: servers should receive multicast just as clients do - the kernel isn't that different | 21:12 |
RoyK^ | Guest79698: is the server running on hardware or is it a vm? | 21:12 |
Guest79698 | on hardware | 21:26 |
Guest79698 | i vaguely remember an incident with a port mapping to a server which didnt work because there was some form of hardening on the server | 21:27 |
RoyK^ | Guest79698: ufw status | 21:28 |
RoyK^ | if ufw is enabled, it might block multicast | 21:29 |
jdstrand | there are rules for multicast in /etc/ufw/before.rules | 21:30 |
jdstrand | they are allowed in the default install | 21:31 |
Guest79698 | yeah, it wasn't that.. this is pretty weird, it seems the RTP control traffic(length 220) comes through but the audio traffic itself(length 1292) does not | 21:34 |
RoyK^ | erm - RTCP gomes through but not RTP? | 21:36 |
Guest79698 | seems like it | 21:36 |
Guest79698 | it might be a router problem, i'm not sure.. the rtp traffic pops up on both the other machines regardless of having pulseaudio rtp receive on | 21:36 |
RoyK^ | RTCP is usually imbedded in RTP, though - perhaps RTSP? | 21:36 |
Guest79698 | i'll fix up a paste for you with some info, hold on | 21:37 |
Guest79698 | http://pastebin.com/kzVynkCB | 21:48 |
Guest79698 | note the different port in the traffic on the server machine, which got me thinking it might be RTCP traffic which because of a lot of actual traffic doesnt neccessarily show up on the others | 21:49 |
Guest79698 | both ports are owned by the pulseaudio process | 21:50 |
=== vegar is now known as Guest62434 | ||
Guest62434 | i'm Guest79698 btw :) | 22:17 |
Guest62434 | with the multicast issue | 22:17 |
RoyK^ | Guest62434: why not get a proper nick? | 22:18 |
Guest62434 | good question | 22:18 |
=== Guest62434 is now known as vegar_ | ||
RoyK^ | vegar uten d og greier | 22:21 |
guntbert | !no | RoyK^ | 22:21 |
ubottu | RoyK^: Hvis du vil diskutere på Norsk, vennligst gå til #ubuntu-no. Takk! | 22:21 |
RoyK^ | jada :) | 22:22 |
vegar_ | wrong response :p | 22:22 |
RoyK^ | I don't discuss stuff in Norwegian in here, but a short comment or two should be accepted | 22:23 |
guntbert | RoyK^: it was no reprimand - I wanted to be helpful | 22:24 |
vegar_ | we have feelings too you know guntbert :) | 22:25 |
RoyK^ | I know the rules, thanks :) | 22:25 |
guntbert | vegar_: why wouldn't you? :) | 22:26 |
* RoyK^ hands guntbert a bunch of dried fish to snac on | 22:26 | |
RoyK^ | snack, even | 22:26 |
* guntbert nibbles | 22:27 | |
RoyK^ | mér finns gott að eta harðfiskur í kvölð | 22:29 |
vegar_ | now it's starting to get out of hand | 22:29 |
RoyK^ | :D | 22:29 |
RoyK^ | I'll quit it | 22:30 |
RoyK^ | it'd be nice if someone could fast-forward the btrfs progress so that I could use linux for storage and not having to use friggin' opensolaris | 22:30 |
MTecknology | Any of you know how a partition UUID is calcualted? | 22:32 |
MTecknology | RoyK^: put your own effort into it :D | 22:32 |
guntbert | MTecknology: only how you can find it: blkid :) | 22:32 |
RoyK^ | MTecknology: heh - I'm not a coder - it's easier to just use zfs | 22:32 |
MTecknology | guntbert: :P - I use ls -l /dev/disk/by-uuid/ | 22:33 |
guntbert | MTecknology: right - but if I remember correctly that is not always correctly populated | 22:34 |
MTecknology | guntbert: no, not after you change things before a reboot - I never knew blkid before today :P | 22:35 |
guntbert | MTecknology: ok | 22:35 |
uvirtbot | New bug: #560299 in samba (main) "package samba-common-bin 2:3.4.0-3ubuntu5.6 failed to install/upgrade: Unterprozess installiertes post-installation-Skript gab den Fehlerwert 2 zurück (dup-of: 514963)" [Undecided,Confirmed] https://launchpad.net/bugs/560299 | 22:37 |
MTecknology | guntbert: how are ya? | 22:39 |
GhostFreeman_ | How do i reset the password on a server using an install CD | 23:48 |
animeloe[net] | you can use a liveCD | 23:49 |
animeloe[net] | with the server CD use the rescue mode | 23:49 |
vegar_ | Anyone heard Radiohead - Reckoner and thought.. is this RHCP? | 23:51 |
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!