[00:01] <andol> GammalSokk: Well, in that case I'd say you'd also have to modify your new /etc/init.d/samba to call smbd and nmbd using the -s flag. That's probably just one of many defaults you now have to be explicit about.
[00:02] <GammalSokk> ye, guess I'm gonna try getting it done tomorrow tho, getting late now, and I can't find anything useful about it when I search the forum or on google...
[00:03] <GammalSokk> oh and nmbd doesn't restart properly when I issue '/etc/init.d/samba restart' it seems, heh, I blame me being tired
[00:03] <andol> GammalSokk: That's a normal problem :)
[00:03] <GammalSokk> ah, ok
[00:04] <andol> GammalSokk: That is, things going wrong due to the system administrator being tired :)
[00:08] <GammalSokk> I guess I can just blame my boss for demanding this to be done in a too small time frame :P Buuut then again he's paying my overtime so...
[00:23] <andol> GammalSokk: Well, if nothing else the smb.conf man page is really good.
[00:23] <GammalSokk> gives me something to do at work tomorrow I guess :)
[00:23] <GammalSokk> ty for help so far, gotta try and sleep 4 hours before going back to work :P
[00:24] <andol> yeah, sleep is probably something I should look into myself :)
[00:27] <crohakon> How do I setup SSL?
[00:27] <crohakon> (Error code: sec_error_untrusted_issuer) <--- I am getting this error when trying to access a https website on my server
[00:28] <billybigrigger> don't have proper certs setup?
[00:28] <billybigrigger> check the server guide
[00:28] <crohakon> billybigrigger, good idea
[00:28] <crohakon> =)
[00:29] <billybigrigger> https://help.ubuntu.com/9.10/serverguide/C/certificates-and-security.html
[00:30] <crohakon> okat, the issue seems to be that the cert is self signed
[00:30] <crohakon> Okay*
[00:30] <crohakon> So... wtf? I am not going to pay to have it authorized.
[00:30] <crohakon> This is for a development server in my basement.
[00:32] <crohakon> oh, i'm an idiot
[00:32] <crohakon> never mind, I missed the "make an exception" part =)
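[For reference, the self-signed certificate the linked server guide walks through can be generated in one command; a minimal sketch assuming OpenSSL is installed — filenames and the CN are illustrative placeholders, not from the log:]

```shell
# Throwaway self-signed cert for a basement dev server; browsers will
# still warn (sec_error_untrusted_issuer) until you add an exception.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout server.key -out server.crt \
    -subj "/CN=dev.example.local"
```

[Apache's SSLCertificateFile / SSLCertificateKeyFile directives then point at the two files.]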
[00:33] <billybigrigger> :P
[00:41] <billybigrigger> anyone here familiar with ssh tunneling?
[00:41] <billybigrigger> i'm trying to setup a tunnel between my friends computer, and my server...
[00:41] <billybigrigger> so that we can both use my usenet account at the same time
[00:41] <billybigrigger> from the same IP address
[00:42] <billybigrigger> i've created an account on my server, and i can ssh into my box, from his...with this command ssh -p 2222 68.146.139.247 -L 2222:news.astraweb.com:119
[00:42] <billybigrigger> that connects fine, and then after i launch pan on his pc, via vnc, i try to connect to localhost:2222
[00:43] <billybigrigger> this should redirect him to news.astraweb.com:119 correct?
[00:43] <billybigrigger> or am i missing something here?
[00:43] <billybigrigger> 2222 is the port i have sshd running on my server
[00:43] <billybigrigger> or do i need to specify a different port to tunnel through? ie......
[00:44] <billybigrigger> ssh -p 2222 68.146.139.247 -L 3333:news.astraweb.com:119
[00:44] <billybigrigger> and have him connect through pan via localhost:3333
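[Spelled out, the second variant billybigrigger proposes looks like this; a sketch, with a username added as a placeholder (the commands in the log omit it):]

```shell
# Listen on local port 3333; anything sent there travels over the ssh
# connection (sshd on 2222) and out to news.astraweb.com:119.
# -N holds the tunnel open without running a remote command.
ssh -p 2222 -N -L 3333:news.astraweb.com:119 user@68.146.139.247
```

[The newsreader then connects to localhost:3333. Either port number works for the local side; 3333 just avoids visual confusion with the remote sshd port.]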
[00:45] <epinky> ?
[00:45] <billybigrigger> hmm
[00:45] <billybigrigger> i guess we're both downloading now at the same time...everything seems to be ok i guess
[00:49] <billybigrigger> this tunnel is pretty effin slow i might add haha, maybe this isn't the best way to go about this
[00:49] <billybigrigger> i guess this tunnel would be capped at my upstream wouldn't it?
[00:49] <jmarsden> billybigrigger: Yes.
[00:49] <billybigrigger> since i'm technically sending it to him
[00:50] <billybigrigger> hmmm
[00:50] <jmarsden> Might be better to have him use X forwarding, so he sshes into your server and then runs pan on that server, with its display forwarded over ssh back to his local workstation.  That assumes he has X on his local workstation...
[00:51] <billybigrigger> either way that data he downloads with still be capped via my upstream
[00:51] <billybigrigger> s/with/will
[00:52] <jmarsden> billybigrigger: No, using X forwarding the data between your server and him is just video and keystrokes/mouse movement.  The news stays on your server machine.
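[jmarsden's X-forwarding suggestion as a command; a sketch reusing the host and port from earlier in the log, with a placeholder username:]

```shell
# -X forwards the X11 display; -C compresses the protocol, which helps
# over a 120KB/s uplink. pan draws locally but runs (and downloads
# news) on the server.
ssh -p 2222 -X -C user@68.146.139.247 pan
```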
[00:52] <billybigrigger> my server is a VM :)
[00:52] <billybigrigger> my upstream is 120kb/s max :P
[00:52] <billybigrigger> maybe i should look into renting a host for this :)
[00:52] <jmarsden> Then why are you offering to share it with friends?? :)
[00:53] <billybigrigger> yeah, having my upstream being the bottleneck totally slipped my mind
[00:54] <jmarsden> 120kb/sec is slow... you have a connection using 2 56k dialup modems bonded together??
[00:54] <billybigrigger> no thats my cable modem
[00:55] <billybigrigger> 2.5MB/s down 120KB/s up :)
[00:55] <jmarsden> Ah, OK.
[00:55] <billybigrigger> he has the same ISP
[00:55] <jmarsden> I'm spoiled here -- Verizon FIOS, so 10Mbps down / 2Mbps up :)
[00:55] <billybigrigger> even using my server as a proxy would not help us out in this situation would it
[00:56] <billybigrigger> ya canadian ISP's suck for upstream, they all suck
[00:56] <jmarsden> billybigrigger: Not that much -- I'm not sure whether remote X over 120kbps would be better or worse than the news feed going over that 120kbps link...
[00:57] <billybigrigger> in either option, the ssh tunnel, or setting up the proxy server, he will still be capped at my upstream
[00:57] <billybigrigger> so either tell him to buy his own usenet account or split the cost of a co-located server....
[00:57] <billybigrigger> $11/month for the usenet account seems to be the best option :) haha mind you i wouldn't mind having a server setup with a decent connection
[00:58] <jmarsden> $20/mo for a small slice on Linode might work -- $10each if you share it... ?
[00:58] <billybigrigger> linode, never heard of it
[00:59] <jmarsden> http://www.linode.com  -- well reputed place for getting Linux virtual servers
[00:59] <billybigrigger> checking it out now
[01:03] <billybigrigger> doesn't say what kind of link the servers are on though...unless im missing something
[01:05] <zroysch1> how can I get the output of dmesg with timestamps so I know when these things happened
[01:05] <jmarsden> Several Mbits/sec per VM, I'm sure -- they are at huge data centers buying bandwidth in bulk... you can ask them if you want a clear answer
[01:06] <jmarsden> zroysch1: The number in [] on the left of dmesg output is the number of seconds since server startup... doesn't that tell you when things happened?
[01:07] <zroysch1> jmarsden: yea i'm not trying to sit here and calculate for every event.
[01:08] <jmarsden> zroysch1: You could write a trivial script to accept a time (the server boot time) as a parameter and dmesg output as input and display the times any way you want... probably a two or 3 line Perl script would do it.
[01:09] <zroysch1> yea i wouldnt know where to start
[01:10] <jmarsden> You are a server admin and have no scripting skills?  Time to learn, maybe ?
[01:11] <epinky> server admin, what is that?
[01:12] <zroysch1> uh yea i have a computer sitting next to me running ubuntu server
[01:12] <zroysch1> i guess that makes me a server admin
[01:15] <jmarsden> If you prefer, get the dmesg output into a spreadsheet and set that up to do the time conversions, maybe?  Use whatever tools you *do* know.
[01:32] <zroysch1> jmarsden: dmesg -h would be ideal.
[01:33] <billybigrigger> jmarsden, can i still use ssl through an ssh tunnel?
[01:33] <jmarsden> zroysch1: There is no -h option to dmesg.  You mean like du -h, where "h" means "human-readable format"?  Sure.
[01:33] <jmarsden> billybigrigger: Yes.
[01:33] <zroysch1> correct.
[01:33] <billybigrigger> linode is by far the best VPS option i can find
[01:42] <jmarsden> zroysch1: Try this Perl oneliner: while (<STDIN>) { /^\[\s*([0-9]+)(.*)$/ ; print "[" . localtime($ARGV[0] + $1) . $2 . "\n"; }
[01:43] <zroysch1> jmarsden: thanks, but how would i implement that
[01:44] <zroysch1> and why is my /var/log/messages filled with only -- MARK --
[01:44] <zroysch1> sorry i cannot google that
[01:44] <jmarsden> Stick it into a file that starts with #!/usr/bin/perl on one line and the perl I gave you on another line.  Let's say the file is called display-time.pl  Then do   dmesg |perl display-time.pl 1234567890
[01:45] <zroysch1> ok thanks will try
[01:45] <jmarsden> Where 1234567890 is the Unix epoch time when you booted your server
[01:45] <jmarsden> /var/log/messages is filled with only -- MARK -- if you have a server that is doing nothing at all and has the syslog mark option enabled.
[01:47] <jmarsden> zroysch1: Actually you can do the date conversions on the command line if you prefer, just type
[01:47] <jmarsden> dmesg |perl -e 'while (<STDIN>) { /^\[\s*([0-9]+)(.*)$/ ; print "[" . localtime($ARGV[0] + $1) . $2 . "\n"; }' 1234567890
[01:48] <jmarsden> And adjust the 1234567890 to the correct value for your machine :)
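[The same conversion can be done without Perl, deriving the boot epoch from /proc/uptime; a sketch assuming Linux and GNU date (note that dmesg pads the seconds field with spaces inside the brackets):]

```shell
# Boot time as a Unix epoch: now minus seconds of uptime.
boot_epoch=$(( $(date +%s) - $(cut -d. -f1 /proc/uptime) ))
# Rewrite each "[  secs.usecs] msg" prefix as a wall-clock time.
dmesg | while IFS= read -r line; do
    secs=$(echo "$line" | sed -n 's/^\[ *\([0-9]*\)\..*/\1/p')
    if [ -n "$secs" ]; then
        printf '[%s]%s\n' "$(date -d "@$((boot_epoch + secs))" '+%F %T')" \
               "${line#*]}"
    else
        printf '%s\n' "$line"
    fi
done
```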
[01:49] <zroysch1> appreciate it
[01:50] <zroysch1> it seems that an ssh connection from the internet is finally stable.
[02:01] <billybigrigger> jmarsden, do you have a linode account?
[02:02] <jmarsden> billybigrigger: No, I've just heard good things from several Ubuntu people who do.
[02:02] <billybigrigger> ahh ok
[02:02] <billybigrigger> just wondering what the setup time is
[02:02] <jmarsden> Minutes, they advertise.
[02:03] <billybigrigger> fair enough
[02:05] <jmarsden> The signup page says "Accounts are activated instantly when possible. " :)
[03:04] <billybigrigger> jmarsden, hmmm linode network link doesn't seem that great
[03:04] <billybigrigger> i've tunneled both me and my buddy to my linode server and we're both getting only 200kb/sec
[03:05] <jmarsden> billybigrigger: If you create a user for me on your server I can ssh in from here and test bandwidth to/from both my home and from other servers which have plenty of bandwidth...
[03:08] <billybigrigger>  1% [                                       ] 77,941,856  2.52M/s  eta 27m 50s
[03:09] <billybigrigger> thats from wget
[03:09] <billybigrigger> just don't have a decent place to scp a file to test this upstream
[03:09] <jmarsden> 2.52M/s == 2.52 Megabytes per second, so that's 20 mbits/sec which seems reasonably quick to me...
[03:10] <billybigrigger> not the 100mbit i thought i would have though :)
[03:10] <billybigrigger> that's the same downlink as my home connection
[03:10] <billybigrigger> just that my home connection has a crap uplink
[03:10] <billybigrigger> and by the looks of it, so does linode
[03:11] <jmarsden> get me an ssh login and I'll test both ways from a server at a major datacenter to and from your server...
[03:11] <billybigrigger> check pm
[03:11] <jmarsden> Got it... here we go...
[03:15] <billybigrigger> jmarsden, i don't see you logged in
[03:15] <jmarsden> 1.7Mbytes/sec from me to you, 1.4Mbytes/sec from you to me, over ssh.  Pretty decent for a small slice
[03:15] <jmarsden> I scped rather than sshing in for each connection, use last to see the two brief scp sessions
[03:16] <billybigrigger> hmm some claim in the linode irc chan 50mbps
[03:16] <billybigrigger> for uplink
[03:17] <jmarsden> Do they have a larger slice?  it may be allocating bandwidth based on the size of your slice??
[03:17] <billybigrigger> i asked for my 360 account; they said: 50mbps, upgradeable for free if you have legitimate/acceptable reason to be so.
[03:18] <jmarsden> Hmm.  Well, at the moment you're not seeing that, at least not to where I tested.  And I don't *think* the server I used would be the limiting factor...
[03:18] <billybigrigger> did you test from a datacenter?
[03:18] <billybigrigger> or just your home link
[03:19] <jmarsden> Yes, from a Verio datacenter where I admin a work server
[03:20] <billybigrigger> what's 50mbps, like 6Mbytes/sec roughly?
[03:21] <jmarsden> Yes.  But does it matter to you -- if you get anywhere close to 2Mbits/sec your cable will become the limiting factor anyway :)
[03:22] <billybigrigger> of course
[03:22] <billybigrigger> my connection SHOULD be the bottleneck
[03:22] <billybigrigger> but it's not by the looks of things
[03:23] <billybigrigger> not even seeing close to the 1.7/1.4 mbytes you saw though
[03:23] <billybigrigger> 200k/sec here and 250k/sec for him
[03:23] <jmarsden> So if you do    scp -pv -P 2222 bigfile user@ipaddress:      what do you see?  Then scp -pv -P 2222 user@ipaddress:bigfile bigfile2  to try it from the server to you.
[03:24] <billybigrigger> ssh -p 2222 74.207.252.123 -L 2222:news.astraweb.com:119
[03:24] <billybigrigger> does that look like a correct ssh tunnel?
[03:24] <jmarsden> Yes, looks fine to me.
[03:24] <billybigrigger> thought so
[03:25] <jmarsden> News may not be a good bandwidth test... lots of small articles...
[03:25] <billybigrigger> whats a quick way to spit out a 10MB test file on this server?
[03:26] <pmatulis> use dd
[03:26] <jmarsden> dd if=/dev/random of=testfile bs=1024 count=10240
[03:27] <billybigrigger> nevermind, found one on the net
[03:27] <billybigrigger> that was quick
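[A caveat on the dd command above: /dev/random can block once the kernel's entropy pool drains (certainly on kernels of this era), so a 10MB read may stall. /dev/urandom is the usual source for throwaway test files; /dev/zero is faster still if compressibility doesn't matter:]

```shell
# jmarsden's command with a non-blocking source: 10240 blocks of
# 1024 bytes = exactly 10 MiB of incompressible test data.
dd if=/dev/urandom of=testfile bs=1024 count=10240
```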
[03:28] <jmarsden> There is also one in ~jmarsden on your server (from my tests) :)
[03:28] <billybigrigger> ahh :)
[03:29] <billybigrigger> could it be the limitation of openssh or the tunnel?
[03:29] <jmarsden> You'd have to have a very slow CPU for the ssh crypto to slow down that far.
[03:30] <jmarsden> On a 486, sure, it might be a limitation :)
[03:32] <jmarsden> If you are really testing newsfeed speed, can you download news fast on the server itself using a shell-based newsreader?
[03:35] <billybigrigger> well i'm just going to have to setup apache and host this 10mb.bin somewhere
[03:35] <billybigrigger> this is odd
[03:39] <billybigrigger> http://74.207.252.123/10mb.bin
[03:40] <jmarsden> What's odd?  1.14Mbytes/sec download to here ~= 10Mbit/sec which is my download speed... seems fine to me :)
[03:42] <jmarsden> 1.6Mbit/sec to "my" server in a datacenter, but I think the file is too small to really be a good test at those speeds, it was still speeding up when the download ended.
[03:43] <jmarsden> *1.6Mbyte/sec
[03:44] <jmarsden> a little slow to get going at first (mind you, i'm coming at it from approx. 3000 miles away), but 3.11MB/sec -> 24.88Mb/sec, trending faster.  with a larger file, it'd fly
[03:44] <jmarsden> 22:39:55 (3.11 MB/s) - `/dev/null' saved [10485760/10485760]
[03:44] <billybigrigger> 3.11MB/s is nowhere near my 231K/s :)
[03:44] <jmarsden> from my house, 2009-11-22 22:41:45 (1.68 MB/s) - `/dev/null' saved [10485760/10485760]
[03:45] <billybigrigger> he's 3000 miles from my server, i'm only 1500 miles
[03:45] <billybigrigger> i'd be happy to see 1MB/s
[03:48] <twb> Is WUBI the same thing as goodbye-windows.com?
[03:56] <kshah> I somehow botched my postfix configuration, I set home_mailbox to Maildir/ but I still see mail going to /var/mail/user .. ideas?
[03:57] <billybigrigger> did you restart postfix?
[03:57] <kshah> yes
[03:58] <kshah> billybigrigger: yes I was following Ubuntu server guide on postfix, so I also have dovecot up.. I'm not great setting up email daemons
[03:59] <kshah> my ultimate goal here is to setup procmail
[03:59] <WALoeIII> use google apps
[03:59] <WALoeIII> mail SUCKS
[03:59] <WALoeIII> but you already know that.
[03:59] <kshah> but it seems like procmail needs the mail in the /home/user/Maildir format
[03:59] <jmarsden> twb: No, WUBI installs Linux within files inside the Windows filesystem, or used to... goodbye-windows.com looks like a way to boot a Debian installer from Windows, but you need to repartition etc etc as normal.
[04:02] <jmarsden> kshah: No, procmail will work on normal mailbox files too, or it did a few years ago for me...
[04:02] <twb> jmarsden: OK.  I was confused on that point, since goodbye-windows also appears to run as a Windows .exe
[04:02] <kshah> jmarsden: awesome, and I'll go that route if I can't figure this out, but I do also want to know why my setting isn't taking effect
[04:02] <billybigrigger> jmarsden, would a proxy server help out my speeds here at all?
[04:03] <kshah> cat /etc/postfix/main.cf | grep home_mailbox # => home_mailbox = Maildir/
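[One way to debug a main.cf setting that doesn't seem to take effect is postconf, which shows the value postfix actually parsed rather than what's in the file; a sketch (the Maildir/ value is from kshah's setup, the rest is standard postfix tooling):]

```shell
# Print the effective value (catches typos and duplicate entries
# later in main.cf that silently override earlier ones):
postconf home_mailbox

# Set it via postconf instead of hand-editing, then reload.
# Note: home_mailbox is ignored entirely if mailbox_command or
# mailbox_transport is set -- a common gotcha with dovecot/procmail.
postconf -e 'home_mailbox = Maildir/'
postfix reload
```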
[04:03] <jmarsden> billybigrigger: Well, for browsing static web pages it might, but that's not what you are trying to speed up...
[04:03] <billybigrigger> so pretty much, my connection to my server sucks, but it's great for everyone else :)
[04:04] <qman__> billybigrigger, a proxy server only increases speeds on files you have already downloaded before
[04:04] <jmarsden> billybigrigger: Looks like it :)  Which is pretty odd...
[04:04] <qman__> so it helps in multi user environments
[04:04] <qman__> but that's about it
[04:04] <billybigrigger> jmarsden, i should have looked into a canadian vps
[04:04] <twb> billybigrigger: a proxy for what?  HTTP?
[04:05] <jmarsden> billybigrigger: Well, you have 7 days to test it for free, if you find something better you can drop Linode within that time and get your money back.
[04:05] <jmarsden> At least, they used to offer that, I think they still do.
[04:06] <twb> Probably takes a week to get a VPS fully configured anyway
[04:06] <qman__> billybigrigger, what type of internet connection are you using?
[04:06] <twb> (Just like any other server.)
[04:06] <qman__> 1MB/s is more than a lot of home connections can do
[04:06] <billybigrigger> 25mbps advertised
[04:07] <billybigrigger> i can get around 2.0 - 2.5 MB/s downloads, with a 120K/s upload
[04:07] <qman__> ah
[04:07] <twb> Incidentally, an HTTP proxy like polipo uses some tricks to reduce latency even for URLs that aren't cached, such as upgrading the connection to HTTP 1.1 and using multiplexing.
[04:07] <twb> billybigrigger: that'll just be because you're a ways from the exchange, or have a lot of line noise
[04:08] <twb> Obviously another way to make browsing faster is to disable flash, images, js, css, etc.
[04:08] <billybigrigger> not trying to speed up browsing
[04:08] <jmarsden> twb: or use lynx :)
[04:08] <twb> I use w3m, actually.
[04:09] <billybigrigger> me and a buddy are sharing a usenet account, and we're both tunneling over ssh into this VPS i bought, so we can both use the news server at the same time
[04:09] <billybigrigger> but we're only seeing like 200k/sec each
[04:09] <billybigrigger> 200K/sec sorry
[04:09] <billybigrigger> http://74.207.252.123/10mb.bin
[04:09] <twb> billybigrigger: you could set up leafnode (an NNTP proxy)
[04:09] <billybigrigger> what do you guys get for download speeds from this server?
[04:09] <billybigrigger> twb, is it going to be any faster than this ssh tunnel?
[04:10] <twb> billybigrigger: latency is not the same as speed
[04:10] <billybigrigger> even when i ssh into this server it seems lagged to hell
[04:10] <twb> billybigrigger: if leafnode has already downloaded news to your local machine overnight, then you don't need to wait for it to come down while you're reading it -- so latency is reduced even though you're probably downloading more overall
[04:11] <billybigrigger> typing takes forever...
[04:11] <twb> billybigrigger: you should also investigate QoS
[04:11] <twb> billybigrigger: also, you should check the load on the remote host -- it might be that someone is running e.g. emacs or firefox there
[04:11] <twb> 15:09 <billybigrigger> http://74.207.252.123/10mb.bin
[04:11] <twb> 100 10.0M  100 10.0M    0     0   127k      0  0:01:20  0:01:20 --:--:--  129k
[04:11] <twb> That's 129kB/s, I think.
[04:11] <billybigrigger> hmm
[04:13] <jmarsden> twb: He'd need a fair amount of disk space and bandwidth to maintain a leaf node, though -- how big is a full Usenet feed these days?
[04:13] <twb> jmarsden: leafnode can proxy selective groups
[04:13] <jmarsden> billybigrigger: ssh to your VPS has no discernible lag from here in Southern California...
[04:13] <twb> jmarsden: actually its default behaviour is only to pre-fetch groups you have tried to read in the last N days
[04:14] <jmarsden> twb: OK, that sounds workable.
[04:14] <twb> So if you read all articles in a group, leafnode shouldn't be significantly more intensive than not using leafnode
[04:16] <billybigrigger> hmmm....i use nzb's mostly, i don't even subscribe to any groups
[04:16] <twb> nzb's?
[04:16] <twb> Is that a newsreader?
[04:17] <billybigrigger> no
[04:17] <billybigrigger> pan i use for the newsreader
[04:17] <billybigrigger> nzb is just for downloading binaries
[04:18] <twb> Oh, you are an alt.sex.binaries weenie
[04:18] <billybigrigger> bahaha
[04:18] <billybigrigger> not quite
[04:19] <twb> alt.sex.furries.binaries?
[04:19]  * jmarsden thinks alt.sex.* preferences are probably off topic in #ubuntu-server :)
[04:20] <twb> So, has anybody tried ext3's transparent compression functionality?  Is it reliable?
[04:20] <twb> I'm wondering if I can/should turn it on for stuff like ~/Mail and ~/News, which are guaranteed to be lots of small text files.
[04:28] <jmarsden> I've never tried it, but have wondered about it... is it still "an unofficial patch" ?  I'm not sure how much I trust an unofficially patched filesystem...
[04:30] <billybigrigger> you doing anything important on that vps jmarsden? :)
[04:30] <jmarsden> Nope :) I just left myself logged in after testing for keyboard lagginess that you reported :)
[04:31] <billybigrigger> do you see it?
[04:31] <jmarsden> No, it's lag-free for me.
[04:31] <jmarsden> billybigrigger: ssh to your VPS has no discernible lag from here in Southern California...
[04:31] <billybigrigger> that vps is in cali, i'd sure hope not :)
[04:34] <jmarsden> Looks like I'm ten hops and about 25ms away from it.
[04:37] <mylogic> o.o
[04:39] <jmarsden> billybigrigger: A 100MByte test file makes the bandwidth of your VPS look better: 4.2Mbytes/sec scp transfer.
[04:41] <billybigrigger> k i moved it to /var/www
[04:42] <billybigrigger> 4% [>                                      ] 4,233,872    178K/s  eta 5m 37s
[04:42] <billybigrigger> wget http://74.207.252.123/100MB.testing
[04:42] <billybigrigger> i think i just need to get a VPS host here in canada or something
[04:43] <jmarsden> Could be.
[04:43] <billybigrigger> everyone else seems to be able to pull over a MB/s from it, and i can barely break 300KB/sec
[04:43] <jmarsden> Are binaries from Usenet really worth all this effort? :)
[04:44] <billybigrigger> no i actually have a host, thefrozencanuck.ca that i have www/mail and a bunch of junk on here on a VM on my home connection
[04:44] <billybigrigger> i wouldn't mind having it hosted somewhere else
[04:44] <jmarsden> OK.
[04:45] <billybigrigger> but on a host that has a better connection than my home connection :)
[04:51] <uvirtbot`> New bug: #486950 in php5 (main) "php5-cgi should be compiled with the --enable-pcntl option." [Undecided,New] https://launchpad.net/bugs/486950
[05:56] <smackdaddy> whats a good webmail server for ubuntu 9.10 that lets users create their own accounts?
[05:57] <Sam-I-Am> generally users shouldnt be creating their own accounts
[05:59] <smackdaddy> well, yes , i mean that lets them change their passwords from within the webmail page
[06:00] <billybigrigger> check out roundcube
[06:00] <smackdaddy> i tried squirrelmail it didnt have it
[06:00] <billybigrigger> dunno if you can change user/pass though, as it just reads your systems users
[06:00] <smackdaddy> ah
[06:00] <Sam-I-Am> usually password management is not a function of the mail client
[06:00] <billybigrigger> i think you can setup roundcube to read users from a db though
[06:00] <billybigrigger> Sam-I-Am, yeah exactly
[06:00] <Sam-I-Am> what i've done in the past is made a web page for password changes
[06:01] <crohakon> billybigrigger, you drive semi trucks?
[06:01] <billybigrigger> nope
[06:01] <billybigrigger> work on oil rigs :)
[06:01] <smackdaddy> alright, thanks
[06:01]  * smackdaddy installs roundcube
[06:01] <crohakon> billybigrigger, damn... ever been to an asteroid? =)
[06:01] <billybigrigger> ever been to an asteroid?
[06:02] <billybigrigger> i don't understand your question
[06:02] <crohakon> billybigrigger, do you often sing "Leavin on a jet plane"?
[06:02] <billybigrigger> ahh...haha not in awhile
[06:02] <crohakon> =)
[06:02] <Sam-I-Am> billybigrigger: they have internet connections on those?
[06:02] <crohakon> Sam-I-Am, of course they do.
[06:02] <billybigrigger> yeah they do
[06:03] <crohakon> Sam-I-Am, they have to send and receive data all the time. Most likely satellite?
[06:03] <billybigrigger> yeah, the operator's office usually wants to watch the rig data, and some bigshots with all the $$$ in houston like to watch what you're doing as well :)
[06:04] <crohakon> billybigrigger, one last off topic question... Are you in the gulf?
[06:04] <billybigrigger> nope
[06:04] <billybigrigger> i live/work in canada
[06:04] <billybigrigger> eh
[06:04] <crohakon> oh, nice
[06:12] <pwnguin> (Error code: ssl_error_rx_record_too_long)
[06:12] <crohakon> pwnguin, ssl with zen-cart? =)
[06:12] <pwnguin> just followed the wiki
[06:12] <pwnguin> https://help.ubuntu.com/8.04/serverguide/C/certificates-and-security.html
[06:13] <pwnguin> crohakon: any idea?
[06:13] <maxagaz> hi
[06:13] <crohakon> pwnguin, was I right? Zen Cart?
[06:13] <pwnguin> no
[06:13] <crohakon> pwnguin, oh... nope, I can't help. I am getting the same issue with zencart and ssl
[06:14] <pwnguin> i have no idea what zencart is
[06:14] <pwnguin> im guessing a php app for ecommerce
[06:14] <crohakon> pwnguin, shopping cart e commerce stuff
[06:15] <pwnguin> crohakon: im pretty sure the problem is unrelated to your cart, except for the part where ecommerce requires SSL
[06:16] <pwnguin> crohakon: check your VirtualHost apache config
[06:16] <crohakon> pwnguin, figured as much as well... I just reinstalled it without ssl as I am just playing around with it.
[06:16] <crohakon> seeing if I like it
[06:17] <crohakon> *shrugs*
[06:17] <pwnguin> yea, i had <VirtualHost *:80>
[06:17] <pwnguin> SSL dont like that
[06:18] <Sam-I-Am> well you can run one ssl vhost... then the other ones wont work without other IPs heh
[06:27] <pwnguin> well, i just have the one domain
[06:28] <Sam-I-Am> time for zzz here...
[06:30] <maxagaz> i have put my id_dsa.key in the .ssh/authorized_keys of a server, but still when i try to ssh to the server, it returns: Permission denied (publickey). why?
[06:34] <pwnguin> because you did it backwards
[06:35] <pwnguin> you need to put the .pub in the authorized keys file
[06:35] <pwnguin> that way the server doesn't have your private key
[06:36] <pwnguin> the id_dsa.key is stored wherever you wish to ssh FROM, and the id_dsa.pub is needed wherever you wish to ssh INTO
[06:40] <pwnguin> maxagaz: there's a program that will actually deploy keys for you
[06:40] <pwnguin> ssh-copy-id
[06:42] <smackdaddy> how do i configure roundcube
[06:44] <pwnguin> judging by my server logs, poorly
[06:45] <pwnguin> seems like im always getting roundcube attack attempts =/
[06:47] <smackdaddy> it sucks?
[06:47] <smackdaddy> i cant even get it installed
[06:47] <smackdaddy> or working..
[06:47] <smackdaddy> its installed
[06:53] <maxagaz> pwnguin, i don't have password access to the server, so ssh-copy-id won't work
[06:54] <pwnguin> well, then you get to do it the hard way
[07:02] <maxagaz> pwnguin, what is the hard way ? I already put the content of my user's id_dsa.key at the end of the authorized_keys of the distant user on the remote server
[07:02] <maxagaz> pwnguin, is there something else to do ?
[07:08] <pwnguin> maxagaz: yes. delete that, becuase it's the wrong thing
[07:08] <pwnguin> maxagaz: do you know how public key encryption works?
[07:08] <maxagaz> pwnguin, partly
[07:08] <pwnguin> you want the user's public key on the server
[07:09] <pwnguin> however, you put the private key on the server
[07:09] <maxagaz> pwnguin, no, i did put the public key
[07:09] <maxagaz> pwnguin, id_dsa.pub
[07:09] <maxagaz> (pwnguin, sorry for saying id_dsa.key)
[07:10] <pwnguin> then you have a long night ahead of you
[07:11] <pwnguin> perhaps blow away the auth_keys file
[07:11] <pwnguin> and maybe make sure the keys are matched
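[The "hard way" amounts to appending the public key yourself and fixing permissions (sshd ignores authorized_keys in group/world-writable locations); a sketch with placeholder host and port, not taken from the log:]

```shell
# Manual equivalent of ssh-copy-id: send the *public* half only,
# append it on the server, and tighten permissions so sshd accepts it.
cat ~/.ssh/id_dsa.pub | ssh -p 2222 user@server \
  'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && \
   chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'
```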
[07:16] <maxagaz> pwnguin, actually i can access the server via another address and port, with password, so I've add the pub key from it using ssh-copy-id, now i can access the server from this way without password, but if i try to access the server from its other address and other port, it returns: Permission denied (publickey). Why?
[07:21] <pwnguin> not sure. im not quite the expert at configuring servers yet
[07:30] <crohakon> so, when I try to connect to my ftp server from outside my lan I get Response:	227 Entering Passive Mode (192,168,1,2,209,60) and Status:	Server sent passive reply with unroutable address. Using server address instead.
[07:30] <crohakon> How do I fix this?
[07:31] <jmarsden> crohakon: Tell your FTP server what your external address is and that it needs to use it in port commands.
[07:32] <crohakon> I use vsftpd... where do I start?
[07:32] <jmarsden> crohakon: the man page for vsftpd, I would think... :)  Let me look...
[07:33] <crohakon> jmarsden, nothing in the man page
[07:34] <jmarsden> Did you also read the man page it points to, man vsftpd.conf ?  I think not.
[07:35] <crohakon> =(
[07:35] <jmarsden> Hint: search for pasv_address
[07:38] <crohakon> okay, what if I have a dynamic IP?
[07:40] <jmarsden> I think you are somewhat stuck; you can use pasv_addr_resolve to resolve your dyndns hostname at vsftpd startup time, but if it changes underneath the vsftpd instance it will break until you restart vsftpd.
[07:41] <jmarsden> Does your ISP really sanction file servers on dynamic IP addresses, by the way?
[07:41] <crohakon> So I can use the pasv_addr_resolve=YES with pasv_address=whatever.dynhost.com
[07:41] <crohakon> ?
[07:41] <jmarsden> Right.
[07:41] <crohakon> And that should work?
[07:41] <crohakon> Great.
[07:41] <crohakon> Thanks man.
[07:42] <jmarsden> It will "work" until your dynamic address changes, I think.
[07:44] <crohakon> Well, it now resolves, but still fails to connect.
[07:44] <smackdaddy> i keep getting connection refused with vsftpd
[07:44] <qman__> FTP is a nightmare, suggest SFTP instead
[07:44] <smackdaddy> whats the command to open ftp
[07:45] <jmarsden> crohakon: do you have the relevant range of ports open for incoming PASV FTP connections?
[07:46] <crohakon> do they use something different than the normal port? I currently have the server listening on port 93
[07:46] <crohakon> and I have the router set to forward all connections on port 93 to the server
[07:46] <qman__> crohakon, you need both the FTP listening port and a range of high ports
[07:47] <crohakon> How do I get that range?
[07:47] <qman__> assigned to the FTP server, all forwarded
[07:47] <jmarsden> crohakon: Yes.  Very much so.  To run an FTP server that supports PASV mode FTP you need a range of ports too. ... read the vsftpd.conf man page again... :)
[07:47] <qman__> this is why I hate FTP, and suggest SFTP instead
[07:47] <qman__> on top of only needing one port, the default is not filtered by your ISP
[07:47]  * crohakon sighs
[07:48] <qman__> and you won't have any dyndns issues
[07:48] <jmarsden> crohakon: pasv_min_port and pasv_max_port are your friends .  As you are discovering, FTP was not designed to have FTP servers run behind home NAT/firewall boxes.
[07:48] <jmarsden> It can be made to work, as long as you understand it.
[07:50] <crohakon> those are not in the man page, but I guess I get how they work. pasv_min_port=5000 pasv_max_port=5100  and it will then use 5000 through 5100?
[07:50] <qman__> yes
[07:50] <crohakon> okay
[07:50] <qman__> and you need one port per connection
[07:50] <jmarsden> They are in my man page... but yes.
[07:50] <crohakon> Is the page alphabetical?
[07:50] <qman__> and it will choose randomly, so make sure you forward the entire range
[07:51] <jmarsden> crohakon: No idea, I searched for the word "range" to find them quickly.
[07:51] <crohakon> so if I only expect say, 4 connections at a time then I only have to have a 4 port range?
[07:52] <jmarsden> Yes.
[07:52] <qman__> technically yes, but you should have extras
[07:52] <qman__> and be aware that one person may make multiple connections
[07:52] <qman__> some clients transfer multiple files and browse at the same time
[07:52] <qman__> opening lots of connections
[07:53] <jmarsden> I've generally used 1000 ports for this on FTP servers behind NAT.  Just so there are plenty available :)
[07:53] <jmarsden> 100 should be fine in practice.  4 .. could be limiting.
[07:54] <qman__> yeah
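The vsftpd settings discussed above amount to a few lines in vsftpd.conf (a sketch using the example range from the conversation; the exact range is your choice, as long as the router forwards all of it):

```
# /etc/vsftpd.conf -- passive-mode port range (example values)
pasv_enable=YES
# each concurrent data connection takes one port from this range,
# so forward the whole range on the router too
pasv_min_port=5000
pasv_max_port=5100
```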
[07:54] <pwnguin> anyone know of a photo gallery webapp that's similar to the flickr API?
[07:54] <pwnguin> or otherwise popular enough to have linux apps supporting it?
[07:54] <crohakon> Response:	425 Security: Bad IP connecting. <---- getting this now =( damn
[07:55] <qman__> as was mentioned before, FTP was designed before firewalls and NAT
[07:55] <qman__> as such it's very difficult to make it work
[07:55] <crohakon> I am almost to the point that I want to connect the server directly to the modem and place the router and switches behind it...
[07:55] <crohakon> I have a spare nic card lol
[07:55] <qman__> still not sure why you want FTP, SFTP is better in every way
[07:56] <crohakon> Well, I already have vsftpd setup to work with my MySQL server for account names and such....
[07:56] <crohakon> So, I kind of want to push on and make it work.
[07:57] <qman__> ok
[07:57] <qman__> well, check the connection log and see what IP your client is giving to the server
[07:59] <crohakon> okay, so the log tells me that I am connecting from 192.168.1.3 (which is correct, it is the IP I have set for my laptop)
[08:00] <qman__> ok, let me put this into perspective
[08:00] <qman__> since FTP isn't designed to work with NAT, in order to allow external connections, you have to tell the FTP server it's using the external IP
[08:00] <qman__> but when you do that, connections from LAN cease to work
[08:00] <jmarsden> crohakon: Wait... I thought you were configuring this for connections from the outside...!
[08:00] <twb> You can run FTP over a NAT
[08:00] <qman__> so you can either go from the net, or you can go from local
[08:00] <twb> You need to use some conntrack magic on the router
[08:01] <qman__> but not both at the same time unless you configure the router specially
[08:01] <crohakon> jmarsden, I am configuring it to work from the out side... but I also want to connect from the lan as well. I have friends that need to connect from the out side.
[08:01] <qman__> and unless you have a router with dd-wrt or linux or something, you probably can't do that
[08:02] <jmarsden> crohakon: qman__ is correct -- you didn't specify you needed this to work from the LAN earlier.  Unless you can make your router sing and dance, pick one or the other.
[08:02] <crohakon> I honestly don't use the ftp access much as I mostly wget files to the server...
[08:02] <qman__> the FTP server can only accept connections to a certain IP, and it must either be your LAN IP or your internet IP, not both
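In vsftpd, telling the server to advertise the external IP (as qman__ describes) is a single directive; the address below is a placeholder, and as noted, LAN connections then stop working:

```
# /etc/vsftpd.conf -- IP to advertise in PASV replies (placeholder address)
# once this points at the external IP, clients on the LAN will break
pasv_address=203.0.113.10
```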
[08:02] <jmarsden> crohakon: Then test it from the Internet, not from a machine on your local LAN.
[08:02] <crohakon> How do I test it from the internet?
[08:03] <qman__> call one of your friends ;)
[08:03] <jmarsden> crohakon: ssh out to some other box, ftp in from there...
[08:03] <crohakon> ... *sigh*
[08:04] <billybigrigger> open your ftp connection to your IP address should route outside the lan, and back in
[08:04] <billybigrigger> ie 79.25.154.245 for example, not your LAN ip of 192.168.1.1 or whatever
[08:05] <qman__> it would, but only if the router can handle it
[08:05] <qman__> most routers can't by default
[08:05] <crohakon> and I doubt this router can
[08:05] <crohakon> So...
[08:05] <qman__> it requires some magic
[08:05] <crohakon> So, if I connect the server directly to the modem, and then route my other computer through it, would that resolve the issue?
[08:06] <qman__> yeah, but it would bring up a whole bunch more
[08:06] <billybigrigger> hehe not worth it
[08:06] <qman__> you'd be running ftp on your router
[08:06] <qman__> which is a bad idea
[08:06] <qman__> every day of the week
[08:06] <crohakon> modem <-- server <--- wireless/4port router <--- switches
[08:06]  * crohakon sighs once more
[08:07] <qman__> when you do that, your server becomes the router
[08:07] <qman__> you have to configure NAT and masquerading
[08:07] <qman__> and be very careful how you set up your firewall
[08:07] <crohakon> qman__, I figured that.
[08:07] <qman__> and running services on the router itself to the internet is a bad idea
[08:07] <billybigrigger> whats wrong wtih sftp or scp?
[08:08] <crohakon> okay, so, when it comes down to it I don't really care if I can ftp from inside my network. I mostly wget and edit files via ssh anyway.
[08:08] <crohakon> My friend that is attempting to connect to it, however, is still unable to connect.
[08:08] <qman__> then the configuration you have now is likely correct
[08:08] <jmarsden> crohakon: What exact error does your friend see?
[08:08] <crohakon> port forwarding is set correctly, conf looks correct as well
[08:09] <crohakon> connection was closed by remote host
[08:09] <qman__> what does the server log say
[08:10] <crohakon> CONNECT: Client "xxx.yyy.zzz.vvv"
[08:10] <crohakon> no other information
[08:12] <qman__> I just made a connection attempt
[08:12] <qman__> it asked me for a user/pass and gave me incorrect login
[08:12] <qman__> so it's probably a problem with your friend's client
[08:15] <crohakon> It seems he was using an SFTP client
[08:15] <crohakon> fugu or something for max
[08:15] <crohakon> mac
[08:16] <crohakon> He is going to download a new client and try again. =)
[08:16] <crohakon> thanks for everyones help thus far.
[08:19] <jmarsden> crohakon: Assuming his Mac runs OS X, can't he open a Terminal window and use the command line ftp client?
[08:19] <crohakon> jmarsden, I don't know.. never used a mac... and he is not exactly a power users...
[08:19] <crohakon> user*
[08:19] <jmarsden> OK.
[08:20] <billybigrigger> i never touched a mac or osx but isn't it based on a linux kernel?
[08:20] <qman__> BSD actually
[08:20] <twb> OS X runs a FreeBSD-derived userland and a Mach-derived microkernel
[08:21] <twb> Then they bolted on some GNU stuff
[08:21] <twb> It's basically the sort of messy clustercruft you'd expect from the Unix Wars of the 1980s
[08:22] <twb> (Fortunately, Debian runs perfectly well on any post-"old world" mac.)
[08:22] <crohakon> he is running MacOS 10.4.11
[08:23] <crohakon> I am trying to convert him to ubuntu, though not sure if it can install on his computer
[08:25] <twb> crohakon: is it PowerPC or x86-64?
[08:25] <twb> crohakon: Ubuntu will run on either, but I believe the former's support is unofficial
[08:26] <crohakon> powerPC
[08:26] <qman__> yeah, not every release has a ppc version, and they're generally unsupported
[08:27] <qman__> but they do exist
[08:27] <crohakon> btw, qman__ tested the ftp server and it works fine. Thanks for all the help.
[08:27] <Bo7> Hello! How can I limit the bandwidth that my apache2 web-server is using?
[08:27] <twb> Bo7: tx or rx?
[08:27] <Bo7> upsteam mostly
[08:27] <crohakon> Well, when I convince him to try ubuntu I will bother the people in #ubuntu =)
[08:28] <twb> Bo7: first of all, look at your httpd logs and realize that most of it is web crawlers like the google bot.
[08:28] <twb> Bo7: then, either write a robots.txt that simply tells them to bugger off, or instead actually fix your website so it is "cache friendly", e.g. using e-tag and expiry headers.
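A minimal robots.txt that tells compliant crawlers to stay away entirely looks like this (scope the Disallow to specific paths if you only want to protect the large files):

```
# robots.txt at the site root -- ask all well-behaved crawlers to skip it
User-agent: *
Disallow: /
```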
[08:32] <Bo7> twb, well, I host some big files and I want to limit the total bandwidth for all downloaders, so the other apps don't suffer. I don't think robots is a big problem for me really
[08:33] <twb> You could set up per-IP recency and rate limits in iptables.
[08:33] <twb> Probably this can be done in apache, too.
[08:34] <Bo7> aha, if I do that in iptables will it interfere with UFW which I use?
[08:34] <twb> IIRC the hentai.plan9.de webmaster has set up something pretty solid, you could email him and ask for details.
[08:36] <Bo7> but there's not like a simple config-setting in apache for limiting then?
[08:36] <twb> I don't know.  #httpd (apache's channel) would
[08:36] <twb> I tend to stick to extremely simple httpds like thttpd and busybox httpd.
[08:37] <Bo7> allright
[08:58] <martin-> Does the jeos edition of ubuntu 8.04 have lts?
[09:01] <twb> LTS is provided on a per-package basis, AFAIK
[09:02] <martin-> yeah, you're right
[09:02] <twb> Whether any given package receives five years of support depends on something obscure
[09:02] <twb> http://bazaar.launchpad.net/%7Enijaba/ubuntu-maintenance-check/trunk/
[09:02] <twb> I use that to find out whether a package will be supported.
[09:02] <martin-> but it doesn't matter anyway as there doesn't seem to be an amd64 version of jeos 8.04
[09:04] <twb> I have to say I take a rather jaundiced view of just slapping together some branding on top of some arbitrary subset of the main archive.
[09:05] <twb> Or does JeOS actually do something useful, like replace coreutils with busybox?
[09:05] <twb> martin-: wikipedia claims there is an x86-64 version
[09:06] <martin-> then where is it? :o
[09:06] <twb> Oh sorry, it says "AMD x86"
[09:06] <twb> I think they just mean "x86" and are writing for non-techs
[09:07] <martin-> ok
[09:08] <_ruben> jeos isnt even all that much smaller than a clean server install .. so disk footprint wouldnt be an issue .. it does come with a fair amount of packages removed, which mostly annoyed me, stuff like tab completion and the likes
[09:08] <twb> _ruben: it says 380MB -- I'm pretty sure a stock d-i install without tasksel tasks checked is more like 200MB
[09:10] <martin-> disk footprint doesn't really matter
[09:10] <martin-> more interested in the optimized kernel and the vmware-optimizations
[09:10] <twb> martin-: what are they?
[09:11] <twb> martin-: the jeos documentation conspicuously doesn't say
[09:11] <martin-> no idea, it just sounds good :P
[09:11] <twb> If Ubuntu wasn't partly FOSS, I'd be inclined to dismiss it as marketing vapourware
[09:12] <martin-> the VMs I'm setting up have a very specific purpose (one DB and one application server)
[09:12] <martin-> anything else doesn't matter
[09:12] <martin-> well, yeah
[09:13] <martin-> it's currently running some ancient red hat enterprise linux 4, which doesn't even have yum
[09:13] <twb> I suspect all that jeos is is a preseed that disabled ubuntu-standard (but leaves ubuntu-minimal in), and forcibly installs openvm-tools, the FOSS fork of the crap that VMware wants guest OSs to taint their kernels with.
[09:13] <_ruben> there's no vmware optimizations in jeos
[09:14] <_ruben> its just a stripped down -server kernel (less modules)
[09:14] <twb> And even that kernel tainting doesn't provide anything useful if you're using VMware Server, since hgfs isn't implemented there and you (presumably) aren't doing 3D graphics
[09:14] <twb> _ruben: so they're using kernel packages that aren't in the main archive?
[09:14] <_ruben> nor does it do open-vm-tools, as jeos isnt vmware specific
[09:14] <twb> Heh.
[09:15] <martin-> so nothing special about -virtual kernels?
[09:15] <_ruben> only that it provides the bare minimum of modules for a vm to work
[09:15] <twb> _ruben: depends on the VM, too, I expect :-)
[09:15] <twb> _ruben: for example, some VMs might want ipt_*
[09:15] <_ruben> and perhaps a few tweaked clock settings, which usualy dont need recompile anyway
[09:15]  * \sh uses always the standard -server flavour with vmware modules ... which gives me a bit better memory sharing between the vms...but I'm not using vmware-server but vmware ESX
[09:16] <martin-> esx here too
[09:16] <_ruben> esxi here
[09:16] <twb> As for me, I am eagerly awaiting LXC productization
[10:54] <maxagaz> how to ssh with a given private key ?
[10:55] <\sh> ssh -o IdentityFile=<path>/<filename of priv key> user@host
[10:55] <\sh> or use ~/.ssh/config
[10:58] <Gorlist> good day, does anyone here run fail2ban on 8.04, proftpd?
[10:59] <twb> In current openssh-client, you can even use %r, %h, etc. in your .ssh/config
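Both forms mentioned above look like this in practice (host name, user, and key file are made up for illustration):

```
# one-off: select a private key on the command line
ssh -i ~/.ssh/id_rsa_work user@host.example.com

# persistent: equivalent entry in ~/.ssh/config
Host work
    HostName host.example.com
    User user
    IdentityFile ~/.ssh/id_rsa_work
```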
[11:00] <twb> Gorlist: nope.  Have you considered migrating to SFTP (read-write access) + HTTP (read-only access)?
[11:00] <Gorlist> ive not, using plesk however
[11:00] <twb> And/or a simple iptables -m recent rule to limit repeated connection attempts from specific IPs?
[11:01] <twb> plesk doesn't really have anything to do with how you provide remote file access to your users...
[11:01] <Gorlist> ive considered that :) and may use it later on but trying to figure out this specific problem
[11:01] <Gorlist> still would like to have fail2ban working, just getting a fault with proftpd
[11:02] <twb> Depending on your use case, if -m recent was working you could get rid of fail2ban
[11:03] <Gorlist> well at the moment im using ufw, though was going to sit down at somepoint, hopefully learn iptable setups as well as applying the rate limit
[11:04] <twb> Hm, does fail2ban even use ipset when you're hooking it into iptables?  Or does it simply add ridiculous numbers of individual iptables rules to INPUT?
[11:05] <Gorlist> ipset I believe, might be wrong however
[11:05] <twb> Good, good.
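The `-m recent` approach twb suggests is typically a pair of rules like these (port and thresholds are example values; they need root, and with UFW you would add them via its before.rules rather than running iptables directly):

```
# record every new SSH connection, then drop any source that makes
# a 4th new connection within 60 seconds
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
    -m recent --name SSH --set
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
    -m recent --name SSH --update --seconds 60 --hitcount 4 -j DROP
```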
[11:09] <Zeboss> hello
[11:41] <acalvo> someone using ldap with replication?
[11:50] <twb> acalvo: what's your real question?
[11:53] <acalvo> I've been working with ldap and replication for a month or so, but the last days one of the servers does not respond to queries. However, I can retrieve all the objects of the tree, and I can browse it thru the apache directory studio
[11:54] <acalvo> and I was wondering why this behaviour, and if it's realted to the some cn=config attribute
[12:53] <uvirtbot`> New bug: #236719 in ntp (main) "ntp doesn't support proxy" [Undecided,Invalid] https://launchpad.net/bugs/236719
[13:05] <zul> morning
[13:16] <jbernard> zul: morning
[13:17] <jbernard> zul: made it back okay, no jetlag?
[13:17] <zul> jbernard: yep no delays and no jetlag
[13:18] <zul> jbernard: you?
[13:19] <jbernard> zul: no delays for me, im in good shape
[13:20] <zul> jbernard: coolio
[13:42] <uvirtbot`> New bug: #228442 in virt-manager (universe) "KVM eats 100% CPU, Host Hardy64, Guest XP with more than 1 VCPU" [High,Triaged] https://launchpad.net/bugs/228442
[13:42] <uvirtbot`> New bug: #239068 in tftp-hpa (main) "tftpd-hpa is not working on Edubuntu 8.04 upgraded system." [Low,Incomplete] https://launchpad.net/bugs/239068
[13:42] <uvirtbot`> New bug: #399993 in tftp-hpa (main) "package tftpd-hpa 0.48-2.3ubuntu1 failed to install/upgrade: subprocess post-installation script returned error exit status 71" [Low,Invalid] https://launchpad.net/bugs/399993
[13:42] <uvirtbot`> New bug: #415410 in squid-langpack (main) "MIR for squid-langpack" [Low,Incomplete] https://launchpad.net/bugs/415410
[13:46] <uvirtbot`> New bug: #487098 in quota (main) "package quota (not installed) failed to install/upgrade: subprocess post-installation script returned error exit status 2" [Undecided,New] https://launchpad.net/bugs/487098
[13:47] <uvirtbot`> New bug: #345712 in samba4 (universe) "package samba4-common 4.0.0~alpha4~20080727-1ubuntu1 failed to install/upgrade: subproces post-installation script gaf een foutwaarde 2 terug" [Undecided,Incomplete] https://launchpad.net/bugs/345712
[14:02] <Italian_Plumber> is there a contest for oldest machine running hardy?  I have mine on a Pentium III 450... I'm sure I'm not the oldest.
[14:05] <incorrect> i know someone running a PII
[14:05] <incorrect> with 256mb
[14:06] <Italian_Plumber> sounds fun
[14:06] <incorrect> i would imagine we could find someone out there running a k6
[14:06] <Italian_Plumber> thats an old AMD processor, right?
[14:06] <incorrect> yes
[14:06] <Italian_Plumber> equivalent to Intel....
[14:06] <incorrect> Pentium
[14:07] <incorrect> i think i might have a K6-233mhz
[14:07] <incorrect> maybe i could find my P166
[14:07] <incorrect> mm 16mb
[14:07] <incorrect> that was an awesome machine
[14:07] <Italian_Plumber> would it run on a 486 or 386?
[14:07] <incorrect> suck it and see
[14:09] <incorrect> depends if it is compiled for 686 or 386
[14:09] <incorrect> i would imagine its 686 minimum these days
[14:12] <Italian_Plumber> 686 is equivalent to PII?
[14:13] <incorrect> http://gcc.gnu.org/onlinedocs/gcc/i386-and-x86_002d64-Options.html#i386-and-x86_002d64-Options
[14:20] <_ruben> there's still a 386 kernel avail .. wouldnt surprise if me if that'd get dropped sometime
[14:21] <soren> stgraber: Heheh.... That thing I though was preventing LXC to work from libvirt.. That was in Jaunty. I'm getting old.
[14:21] <soren> stgraber: The only reason it doesn't work in Karmic is because of Apparmor.
[14:22] <soren> stgraber: If you switch libvirtd to complain mode, it works just fine.
[14:31] <jdstrand> stgraber: you can also adjust the profile. See bug #480478 for details
[14:31] <uvirtbot`> Launchpad bug 480478 in libvirt "libvirt's apparmor profile doesn't allow execution of /usr/lib/libvirt/libvirt_lxc" [Medium,Triaged] https://launchpad.net/bugs/480478
[14:31] <soren> jdstrand: I'm not entirely convinced that's sufficient.
[14:32] <soren> jdstrand: I will know in a minute. You're supposed to be on holiday, by the way :)
[14:32] <soren> jdstrand: Ok, so if I add that to the profile, what do I need to to do reload it?
[14:33] <jdstrand> soren: apparmor_parser -r -W -T /etc/apparmor.d/usr.sbin.libvirtd
[14:34] <jdstrand> soren: that will make it work with apparmor. as to how well lxc works with libvirt atm, I can't say-- I've heard 0.7.0 doesn't work too well
[14:34] <soren> jdstrand: Obviously
[14:34] <soren> Well, it seems to work for me.
[14:34] <soren> I wasn't entirely sure about some of the interactions there, but it seems to actually do what I want it to.
[14:34]  * jdstrand has no idea
[14:35] <soren> jdstrand: Do you see any reason not to SRU this into Karmic?
[14:35] <soren> It seems like very low hanging fruit.
[14:36] <jdstrand> soren: I plan to  SRU it and another bug. but the SRU will use a different rule to enable it
[14:36] <soren> jdstrand: Can I see it?
[14:37] <jdstrand> soren: bug #484562
[14:37] <uvirtbot`> Launchpad bug 484562 in libvirt "apparmor prevents libvirt-vnc certificate from being read" [Undecided,New] https://launchpad.net/bugs/484562
[14:38] <jdstrand> soren: I think for bug #480478 I would actually use:
[14:38] <uvirtbot`> Launchpad bug 480478 in libvirt "libvirt's apparmor profile doesn't allow execution of /usr/lib/libvirt/libvirt_lxc" [Medium,Triaged] https://launchpad.net/bugs/480478
[14:38] <jdstrand> /usr/lib/libvirt/* PUx,
[14:38] <soren> jdstrand: Sorry, not the other bug, but the different rule.
[14:38] <soren> What's P for?
[14:38] <jdstrand> soren: the P says to transition to another profile
[14:38] <jdstrand> soren: the U says to go unconfined if the profile doesn't exist
[14:39] <jdstrand> soren: I would do this because in 0.7.2 virt-aa-helper is moving to /usr/lilb/libvirt
[14:39] <jdstrand> s/lilb/lib/
[14:39] <soren> I'm not sure I understand that. I mean.. This is being defined /in a profile/. How can the profile not exist?
[14:39] <jdstrand> and therefore it would be more consistent and slightly easier on upgrades for people who modify the profile
[14:40] <jdstrand> soren: the rule is a globbing rule
[14:40] <jdstrand> soren: there are several helpers in /usr/lib/libvirt
[14:40] <jdstrand> soren: in the future, one will have a profile, and the other two won't
[14:41] <soren> Ok.
[14:41] <jdstrand> soren: we can either be very specific and list the helpers individually, or stick with the globbing rule and use PUx
[14:41] <jdstrand> I like the globbing rule so that it will work if libvirt adds more helpers
[14:42] <soren> Right, ok.
[14:42] <jdstrand> soren: actually, if you plan to be doing the SRU, perhaps use 'PUxr', I see 'r' is in the existing profile
[14:43] <jdstrand> soren: but I plan to do the SRU next week
[14:43] <soren> jdstrand: I'm in no hurry :)
[14:44] <jdstrand> heh
[14:44] <jdstrand> np
[14:44] <soren> Ok, so the P transitions to another profile. Which other profile? How is that defined?
[14:44] <soren> Oh, I see it at the bottom.
[14:45] <soren> Let me just take that for a quick spin.
[14:45] <jdstrand> soren: unless you name the profile explicitly using '->' in the rule, it will transition to a profile for the binary it matches
[14:46] <jdstrand> soren: in this case, it will go unconfined for anything in /usr/lib/libvirt, cause there are no profiles defined for binaries in that dir
[14:46] <jdstrand> soren: in 0.7.2, we will have /usr/lib/libvirt/virt-aa-helper
[14:46] <soren> Oh, so the P is a no-op in this case?
[14:47] <jdstrand> soren: yes. just there for consistency with the upgrade to 0.7.2 (for reducing the diff if people modified the profile on their own)
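Collected from the exchange above, the fix is one profile line plus a reload (profile path as given earlier by jdstrand):

```
# in /etc/apparmor.d/usr.sbin.libvirtd -- let libvirtd run its helpers:
# 'P' transitions to the helper's own profile when one exists,
# 'U' falls back to running unconfined when it doesn't
  /usr/lib/libvirt/* PUx,
```

Then reload with `apparmor_parser -r -W -T /etc/apparmor.d/usr.sbin.libvirtd`, the command jdstrand gave earlier.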
[14:52] <uvirtbot`> New bug: #485361 in samba (main) "CIFS mounted drives do not allow write access to program other than nautilus, gedit or the command line" [Low,Incomplete] https://launchpad.net/bugs/485361
[14:54] <stgraber> jdstrand: I'm pretty sure I'm the one who opened that bug ;)
[14:54] <jdstrand> stgraber: oh, heh, so you are :)
[14:55] <jdstrand> someone else hit it last week too, so I was thinking he reported it :)
[14:56]  * jdstrand wanders off
[14:59] <incorrect> how irritating sara.nl aren't giving the source to their dellomsa package
[15:05] <stgraber> soren: started to play with lxc ?
[15:19] <soren> stgraber: Yeah, just for giggles so far :)
[15:32] <uvirtbot`> New bug: #486178 in ntp (main) "package ntp (not installed) failed to install/upgrade: subprocess installed post-installation script returned error exit status 127" [Low,Incomplete] https://launchpad.net/bugs/486178
[15:49] <majuk> WOO HOO! Samba PDC makes me wanna UUUUHNNNN
[16:13] <uvirtbot`> New bug: #454302 in munin (universe) "Missing dependency - apache_process plugin" [Wishlist,Triaged] https://launchpad.net/bugs/454302
[16:40] <kshah> jeeeeez.. I'm really struggling here
[16:40] <kshah> I've been trying to setup postfix to use /home/%u/Maildir to store mail
[16:40] <kshah> and I've told dovecot to do the same
[16:40] <kshah> now i see mail still coming in and using mbox
[16:41] <kshah> except instead of /var/mail/user it's /home/user/mbox
[16:41] <kshah> there is some key config setting i'm clearly missing
[16:43] <essial> Hey guys, I have a mail server set up, and I can email anyone, BUT emails hosted at secureserver.net reject (as in, they can't receive them). I am not on a blacklist, and reverse DNS APPEARS to be correct
[16:43] <essial> I even opted out of that in-by-default blacklist
[16:44] <essial>  (host mailstore1.secureserver.net[72.167.238.201] refused to talk to me: 554-p3pismtp01-003.prod.phx3.secureserver.net 554 Your access to this mail system has been rejected due to the sending MTA's poor reputation.
[16:44] <essial> host is metro1ems.com and every website that tests domains says it's clean and good
[16:45] <ScottK> essial: Reputation services are all proprietary and everyone uses a different one, so you'll have to ask the people that run the server that's rejecting you.
[16:45] <kshah> solved... mailbox_command... as my sys admin told me to do *sigh* listening
[16:46] <essial> ok so basically I have to call godaddy then, right?
[16:47] <ScottK> Yep
[16:47] <ScottK> Good luck.
[16:49] <essial> Yeah I had to do this once before
[16:49] <essial> I really dislike godaddy
[16:49] <majuk> 1and1 ftw?
[16:49] <majuk> :D
[16:50] <essial> I was thinking that maybe my reverse dns was not correct or something but I guess not
[16:51] <billybigrigger> anyone here know a good vps host? preferably in canada?
[16:53] <kshah> just use slicehost like everyone else ;)
[16:56] <uvirtbot`> New bug: #288052 in dhcp3 (main) "/etc/resolv.conf inserts commas between Search Domains" [Medium,Confirmed] https://launchpad.net/bugs/288052
[17:22] <kshah> that bot is making me wonder if their is a zero day policy for ubuntu
[17:33] <ivoks> hi all
[17:43] <ivoks> aj
[17:56] <zul> hey ivoks
[17:56] <zul> nijaba: done
[17:56] <ivoks> hey guys
[17:56] <nijaba> zul: that was QUICK :)
[17:56] <nijaba> zul: thanks a lot
[17:56] <zul> nijaba: well i just got it
[17:57] <nijaba> zul: I know, I just wrote the request !
[17:57] <nijaba> ivoks: hello Ante.  had a good trip back?  got your luggage too?
[17:58] <ivoks> yes, got my luggage, but i'm very tired
[17:58] <ivoks> i've spent 20 hours on planes and airports
[17:58] <zul> only?
[17:58] <ivoks> tomorrow i'm back in packaging business :)
[17:59] <nijaba> ivoks: I bet you are more in the ubpacking business at the moment ;)
[17:59] <ivoks> hehe
[17:59] <ivoks> usually, i just leave my bags packaged and don't touch them for couple of days :D
[18:16] <nijaba> Daviey: heya.  Safe trip back home?
[19:12] <incentifit> Using ubuntu 9.10, I've set /var/www permissions to 0775 and group to root:publisher.  My user incentifit is a member of incentifit:publisher.  That user still cannot create new files and folders in /var/www.  What have I over looked?  (I've got notes from previous setup of 9.04 that work on 9.04 using same setup so I suspect something new or a bug)
[19:20] <ivoks> incentifit: ls -dl /var/www
[19:23] <incentifit> ivoks: I'm confused now.  I skimmed the -dl flags in man...  I sudo mkdir /var/www/hello then ran ls -dl /var/www and it returns nothing.  A plain ls shows the new folder.
[19:24] <bogeyd6> uhm
[19:24] <bogeyd6> that is impossible
[19:24] <ivoks> ls -dl shows only the folder you are asking it
[19:24] <ivoks> so ls -dl /var/www will not return /var/www/hello
[19:24] <ivoks> just /var/www
[19:24] <incentifit> right
[19:24] <incentifit> ls /var/www shows the new hello
[19:24] <ivoks> that's right
[19:25] <orudie_> can i run xen on ubuntu server? if yes, what is the process of installing xen ?
[19:25] <ivoks> so, what's confusing?
[19:25] <ivoks> orudie_: xen?
[19:25] <ivoks> orudie_: return to 21. century :)
[19:25] <incentifit> I guess I expected the same... I need to reread ls -dl in the man.  So, what is it that you wanted me to return, which leads to an answer to my first question?
[19:26] <ivoks> incentifit: -d doesn't do recursive
[19:26] <incentifit> I don't see how ls -dl /var/www resolves the apparent permission issue
[19:26] <ivoks> i do, that's why i asked
[19:26] <orudie_> ivoks, what are you suggesting ?
[19:26] <ivoks> you claim that /var/www has some permissions
[19:26] <ivoks> i'd like to check them
[19:26] <incentifit> ok...
[19:27] <ivoks> so, please, paste the output of 'ls -dl /var/www'
[19:28] <incentifit> ivoks: sorry, just sec...
[19:29] <ivoks> or don't
[19:29] <incentifit> drwxrwsr -x 3 root publisher 4096 2009-11-23 12:55 /var/www
[19:29] <ivoks> ok
[19:29] <incentifit> patience! :P  couldn't copy and paste
[19:29] <ivoks> so, group publisher should be able to write there
[19:29] <incentifit> yup
[19:29] <ivoks> you do know you have setgid on that dir?
[19:30] <incentifit> yes
[19:30] <ivoks> and your user is a member of the publisher group?
[19:30] <incentifit> yes
[19:31] <imlad> Hello, what would I need to install on a client machine already running Karmic to run the 9.10 Server?
[19:31] <ivoks> touch /var/www/testing_123 doesn't work?
[19:31] <ivoks> orudie_: kvm
[19:31] <incentifit> no, permission denied
[19:32] <incentifit> confirmed cat /etc/groups shows my user in that group
[19:32] <ivoks> did you log out and log in after adding that user into group?
[19:32] <incentifit> yes, rebooted too
[19:33] <bogeyd6> imlad depends, what services are you wanting to offer?
[19:34] <imlad> bogeydo, I want to look at UEC on the same machine I am running my client on.
[19:34] <incentifit> I've a very detailed setup of steps I created when building such a machine on 9.04.  I built many using those steps.  So, something is different about 9.10.  I suspect stronger protection, just dunno.
[19:34] <ivoks> these are basic permissions
[19:34] <incentifit> right
[19:34] <ivoks> ls -dl /tmp/TEST/
[19:34] <ivoks> drwxrwsr-x 2 root ivoks 4096 2009-11-23 20:33 /tmp/TEST/
[19:34] <ivoks> touch /tmp/TEST/test
[19:34] <ivoks> works
[19:34] <ivoks> 9.10
[19:35] <incentifit> chmod -R 0777 /var/www allows incentifit user to rw of course...
[19:35] <incentifit> chmod -R 0775 /var/www and incentifit can no longer create files or directories
[19:35] <bogeyd6> imlad i dont know much about the cloud, but here is something, http://www.ubuntu.com/cloud/private
[19:35] <incentifit> cat /etc/groups shows user in group
[19:36] <imlad> thanks, bogeydo.
[19:36] <ivoks> incentifit: hm, it works here
[19:36] <incentifit> and of course ls -l shows the user and group
[19:37] <ivoks> ok
[19:37] <ivoks> just to be sure:
[19:37] <ivoks> adduser incentifit publisher
[19:37] <kshah> I'm using postfix, and I have .forward file that I want to trigger a script, but I want to mail itself as well
[19:37] <kshah> I can't seem to do this.. i"ve been trying for far far far too long
[19:38] <incentifit> The user 'incentifit' is already a member of 'publisher'
[19:38] <kshah> my .forward file looks like: | "echo 'awesome' >> /home/stream/foo.txt"
[19:38] <lamont> kshah: \user, "|script"
[19:38] <ivoks> incentifit: ok, chmod 777 /var/www
[19:38] <kshah> lamont: is 'user' a variable there?
[19:38] <ivoks> incentifit: then as user, touch /var/www/testing_123
[19:39] <ivoks> incentifit: ls -dl /var/www/testing_123
[19:39] <lamont> kshah: yeah
[19:39] <lamont> the \ says "don't do forward file processing here, just use the user, dammit"
[19:39] <bogeyd6> !permission
[19:39] <kshah> lamont: and thank you, #postfix.. was having too much trying holding their knowledge above my head
[19:39] <kshah> friendlier crowd here
[19:40] <bogeyd6> !help @ kshah
[19:40] <bogeyd6> !help | kshah
[19:41] <bogeyd6> kshah i meant !ohmy not help
[19:41] <kshah> did i just get !help'ed after complimenting the channel :) ?
[19:41] <kshah> heh all good
[19:41] <lamont> kshah: actually, could you file a bug against postfix that the "manpage for aliases(5) does not document leading backslash"
[19:41] <lamont> and I'll forward that upstream
[19:41] <bogeyd6> !ohmy | kshah
[19:41] <kshah> amen!
[19:42] <lamont> kshah: (postfix is my package in debian, you see...)
[19:42] <lamont> kshah: and I'd rather be forwarding a user's original report than one of my own crafting
[19:43] <kshah> lamont: and I thank you for it, I'll file that request. My only gripe was that the channel was less than kind to me
[19:43] <kshah> (theirs not this one)
[19:44] <lamont> fwiw, the procmail manpages document it, as does the sendmail aliases(5) manpage, as does......
[19:44] <lamont> (I believe - haven't actually bothered to go verify any of that pile of festering assertion)
[19:44] <kshah> i typically need to see examples / usage to be able to learn anything
[19:45] <kshah> which I also fully recognize is unreasonable to expect all the time
[19:45] <incentifit> ivoks: -rw-r--r-- 1 incentifit publisher 0 ............. /var/www/testing_123
[19:45] <lamont> kshah: OTOH, the postfix aliases(5) manpage documents everything else about forward files --> iz bug
[19:46] <lamont> kshah: if it's any help, I got told to go to #ubuntu last night.  meh.
[19:46] <kshah> irc *sigh*
[19:46] <lamont> mind you, I probably should have been there, I suppose.
[19:46] <ivoks> hm
[19:47] <ivoks> incentifit: same thing doesn't work if /var/www is 0775?
[19:47] <kshah> I got told to use procmail which and got into an argument since I said I knew it could be done without.. and then the merits of add a component or not, etc, etc >> /dev/null
[19:47] <ivoks> incentifit: just change permissions and try touch again
[19:48] <incentifit> ivoks:  look at the permissions when doing 0777 see how publisher doesn't have write, is that right?
[19:48] <ivoks> incentifit: /var/www isn't mounted share or something?
[19:48] <incentifit> ivoks: no
[19:48] <ivoks> incentifit: that's ok, umask controls that
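What ivoks is pointing at: the umask masks bits out of the default creation mode (666 for plain files), so it, not the directory's mode, decides whether newly created files come out group-writable. A quick demonstration:

```shell
# bits set in the umask are removed from the default file mode of 666
cd "$(mktemp -d)"
umask 002; touch grp_writable    # 666 & ~002 = 664 (rw-rw-r--)
umask 022; touch grp_readonly    # 666 & ~022 = 644 (rw-r--r--)
stat -c '%a %n' grp_writable grp_readonly
```

This prints `664 grp_writable` and `644 grp_readonly` (GNU stat syntax).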
[19:49] <incentifit> ivoks: thanks for your help... I just got called into a meeting, be back later, thanks again
[19:49] <ivoks> ok
[20:16] <billybigrigger> jmarsden, ping
[20:17] <bogeyd6> lamont this is server support channel and desktop support is frowned upon but not unheard of
[20:18] <lamont> bogeyd6: and?
[20:19] <lamont> the postfix question was definitely in-scope for this channel.  my grumpiness last night was actually in the devel channel, not here.
[20:21] <billybigrigger> where can i find what the default MTU is set at for a 9.04 server install
[20:21] <ivoks> 1500
[20:22] <billybigrigger> hmm
[20:22] <ivoks> ifconfig would give you that
[20:22] <billybigrigger> well i just purchased a VPS host...
[20:22] <billybigrigger> but it's not set in interfaces, just wondering where it gets the default value
[20:22] <billybigrigger> anyway...
[20:23] <billybigrigger> newark1.linode.com i get 100%[
[20:23] <ivoks> 1500 is default value
[20:23] <ivoks> that's the one you should use for ethernet
[20:23] <ivoks> pppoe should be smaller 1492
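To make the defaults ivoks quotes concrete (the interface name eth0 is an assumption):

```shell
# The kernel's ethernet default is 1500, so it won't appear in
# /etc/network/interfaces unless you override it. Read it with either:
#   cat /sys/class/net/eth0/mtu
#   ip link show eth0 | grep -o 'mtu [0-9]*'
# PPPoE spends 8 bytes per frame on its own header, hence 1492:
echo $((1500 - 8))
```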
[20:24] <billybigrigger> while newark129.linode.com (my node) i only get anywhere from 400K/s to 800K/s
[20:24] <billybigrigger> from the same server to my home connection
[20:24] <ivoks> so, you know it's a mtu problem or you are guessing?
[20:24] <billybigrigger> guessing
[20:24] <bogeyd6> sounds like a guess
[20:24] <bogeyd6> more likely oversold hosting
[20:24] <billybigrigger> just wondering where i can start tweaking, if needed
[20:25] <billybigrigger> yeah they claim 50Mbps PER NODE
[20:25] <billybigrigger> my ass
[20:25] <ivoks> it's vps
 poor tuning?
 could be many reasons
 too many variables
 MTU, window scaling, server load, node load, standard TCP sawtooth behavior, etc
 also, urmom might be sitting on the tube limiting your bandwidth
[20:25] <billybigrigger> yeah
[20:25] <bogeyd6> !pastebin | billybigrigger
[20:25] <bogeyd6> billybigrigger were you upping or downloading
[20:26] <bogeyd6> Cuz on a single 1gbs connect with two raid 5 scsi servers, can only get like 34.* mbs transfer
[20:26] <billybigrigger> downloading from their servers to my house
[20:27] <bogeyd6> for instance i just transfered a virtual machine -_-_-_-> 3,794,279,374 59.7M/s   in 97s
[20:27] <ivoks> billybigrigger: problems with mtu would be 'i can see this site, but i can't see that site'
[20:27] <ivoks> for example, you'd be able to see all web sites from your ISP, but not any other
[20:28] <billybigrigger> well im not asking for the 50M/s they claim (6.25M/s) as my home connection maxes at 3M/s
[20:28] <billybigrigger> but 400k-800K/s? come on
[20:28] <bogeyd6> my guess is most likely is oversold VPS
[20:28] <ivoks> mtu should be 1500 on ethernet
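One way to test the MTU hypothesis directly is a don't-fragment ping probe (hostname taken from the conversation; the probe itself needs a live network, so only the header arithmetic runs here):

```shell
# A 1472-byte ICMP payload plus 8 bytes of ICMP header plus 20 bytes
# of IP header fills one 1500-byte ethernet frame exactly:
echo $((1472 + 8 + 20))
# With -M do the kernel refuses to fragment, so an oversized probe
# fails loudly if anything on the path has a smaller MTU:
#   ping -c 3 -M do -s 1472 newark129.linode.com
# If that fails but a smaller -s value succeeds, the path MTU is below 1500.
```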
[20:28] <bogeyd6> linode is famous for that
[20:29] <billybigrigger> http://pastebin.ca/1684173
[20:30] <billybigrigger> so apparently they blame the config on my node, ie fresh as can be 9.04 install
[20:31] <ivoks> they have no clue
 billybigrigger: Short answer is that your node probably isn't tweaked the way your home connection wants
[20:32] <ivoks> i'm getting 5MB/s peak and 3,78MB/s average
[20:32] <ivoks> http://69.164.211.53/Tailing-Aaron.mov
[20:32] <bogeyd6> billybigrigger im checking that download speed right now
[20:32] <bogeyd6> base ubuntu install
[20:32] <billybigrigger> yeah
[20:32] <bogeyd6> wget ftw
[20:33] <billybigrigger> installed nano and wget
[20:33] <bogeyd6> 15:32:57 (2.38 MB/s) - `Tailing-Aaron.mov.1' saved [95545644/95545644]
[20:33] <billybigrigger> oh apache, and created my user
[20:33] <billybigrigger> so why the hell do i get 400k from it?
[20:33] <bogeyd6> cable modem?
[20:33] <billybigrigger> yeah
[20:33] <ivoks> maybe your MTU at home isn't right :)
[20:33] <bogeyd6> wireless?
[20:33] <billybigrigger> but from the same server, i can max out my connection
[20:33] <billybigrigger> 100%[
[20:33] <ivoks> 21:33:51 (3.77 MB/s) - `Tailing-Aaron.mov' saved [95545644/95545644]
[20:33] <billybigrigger> ^^ newark1.linode.com
[20:33] <uvirtbot`> billybigrigger: Error: "^" is not a valid command.
[20:34] <billybigrigger> 100%[
[20:34] <billybigrigger> ^^ same file, same wget command from my linode newark129.linode.com
[20:34] <uvirtbot`> billybigrigger: Error: "^" is not a valid command.
[20:34] <billybigrigger> both have same hops and same ping
[20:34] <billybigrigger> it's not my home connection
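As a sanity check on the numbers being thrown around, expected transfer times for the 95,545,644-byte test file at the two quoted rates (wget's K and M prefixes are binary):

```shell
# Seconds to fetch the file at ivoks' 3.77 MB/s:
awk 'BEGIN { printf "%.0f\n", 95545644 / (3.77 * 1048576) }'
# Seconds at the 400 KB/s billybigrigger sees from his node:
awk 'BEGIN { printf "%.0f\n", 95545644 / (400 * 1024) }'
```

Roughly 24 seconds versus nearly four minutes for the same file, which is why the gap stands out.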
[20:34] <ivoks> try from another location
[20:34] <ivoks> try from that second server
[20:35] <billybigrigger> what second server?
[20:35] <ivoks> newark129.linode.com or whatever the name is
[20:35] <billybigrigger>  thats my linode
[20:35] <billybigrigger> the one you're all downloading from
[20:35] <ivoks> so, on newark1.linode.com wget from newark129.linode.com
[20:36] <billybigrigger> i can't wget on newark1
[20:36] <ivoks> then wget somewhere else
[20:36] <ivoks> as you've seen
[20:36] <ivoks> both bogeyd6 and i have normal speeds
[20:36] <ivoks> and others on IRC had normal speeds
[20:37] <billybigrigger> ok, but what i don't understand...
[20:37] <billybigrigger> is that from the same datacenter....newark1 and newark129 are on the same connection
[20:37] <billybigrigger> everyone else can get normal speeds, but from my node i can only get 400-800k
[20:37] <ivoks> and only you
[20:37] <ivoks> at home
[20:38] <ivoks> everybody else gets a lot more
[20:38] <ivoks> from that same server
[20:38] <billybigrigger> but...
[20:38] <ivoks> yet, you still think it's a server issue
[20:38] <billybigrigger> from linode1 i can max out my connection at 3.0M/s
[20:38] <ivoks> true
[20:38] <ivoks> but if everybody else gets normal speed from newark129
[20:38] <billybigrigger> i know it's not me
[20:39] <ivoks> then problem isn't in that server
[20:39] <billybigrigger> node configuration?
[20:39] <ivoks> i give up
[20:41] <uvirtbot`> New bug: #487280 in eucalyptus "move the database away from hsql" [Wishlist,Confirmed] https://launchpad.net/bugs/487280
[20:52] <linuxamoeba> hello. i am trying to make a largish (11TB) ext4 partition with mkfs, and it keeps showing up in df as 2 tb. any ideas?
[20:52] <embrik> when I sshfs to my server I get write-protected on every document I open on the client. Is there an option to the sshfs command to give my self direct write permissions?
[20:55] <embrik> anybody knows about sshfs?
[20:59] <linuxamoeba> embrik, when i've used sshfs as user x, i've always gotten user x's permissions
[21:00] <linuxamoeba> i thought that was a major advantage
[21:08] <linuxamoeba> you know anything about large ext4 partitions?
[21:09] <kane_> embrik: sshfs takes uid & guid options, which are meant to solve the permission problems
[21:10] <kane_> this is what i use in my scripts: sshfs TARGET MOUNTPOINT -o uid=`id -u` -o gid=`id -g`
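Spelled out with concrete placeholders (user@host and ~/mnt are illustrative; -o idmap=user is an alternative to explicit uid/gid mapping):

```shell
# Mount with the local user's uid/gid so opened files aren't
# write-protected on the client:
#   sshfs user@host:/remote/dir ~/mnt -o uid=$(id -u) -o gid=$(id -g)
# or let sshfs map the remote login's uid to yours:
#   sshfs user@host:/remote/dir ~/mnt -o idmap=user
# Unmount with:
#   fusermount -u ~/mnt
# The command substitutions kane_ uses just expand to numeric ids:
id -u
id -g
```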
[21:16] <SyL> linuxamoeba: You have an 11TB drive?
[21:18] <linuxamoeba> SyL, hardware raid5
[21:19] <SyL> linuxamoeba: have you checked how big the partitions are?
[21:19] <majuk> Hey guys, I had to change the IP address of my PDC, now Samba is complaining that my domain already has a PDC at the old address. Restarted the server entirely, no change. Any ideas?
[21:20] <linuxamoeba> syl, can i do that with something other than fdisk?
[21:23] <majuk> Got it, wins.dat ftl
[21:24] <linuxamoeba> syl, on closer inspection, fdisk won't let me create a partition bigger than 2tb
[21:26] <majuk> linuxamoeba! This isn't a great solution, but you could bust it up into smaller chunks with LVM
[21:27] <majuk> I dunno, nevermind, my idea sucks, gg thinking things through
[21:28] <linuxamoeba> according to some internets (sic), i need GPT support in the kernel, which is probably not on by default
[21:35] <pmatulis> linuxamoeba: what do you intend to do with this 11TB?
[21:37] <linuxamoeba> back up another one:)
[21:38] <crohakon> linuxamoeba, what on earth are you storing that is taking up 11TB? hehe
[21:39] <linuxamoeba> lots of physics data
[21:39] <majuk> crohakon! He's making a copy of the MIT cat brain.
[21:39] <linuxamoeba> i have a sunfire x4500 (20tb) that hosts data + my users homes
[21:39] <linuxamoeba> which makes backing things up sort of a pain!
[21:42] <linuxamoeba> i tried again in parted rather than fdisk
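The 2 TB ceiling hit earlier is a limit of the msdos (MBR) partition table, not of fdisk as such; parted with a GPT label is the standard way past it (the commands shown are destructive and assume the device name from the log):

```shell
# An MBR label addresses at most 2^32 sectors of 512 bytes:
echo $(( (1 << 32) * 512 ))   # 2199023255552 bytes = 2 TiB
# GPT has no such limit; on /dev/sdb that would look like:
#   parted /dev/sdb mklabel gpt
#   parted /dev/sdb mkpart primary ext4 0% 100%
#   mkfs.ext4 /dev/sdb1
```

Contrary to the "probably not on by default" worry above, stock Ubuntu kernels ship with GPT (EFI partition) support enabled, so no rebuild should be needed.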
[21:43] <pmatulis> linuxamoeba: have you considered xfs?
[21:44] <ahe> i just setup my first UEC but when i try to start a instance with euca-run-instances as described in the documentation i get this error message:
[21:45] <ahe>    FinishedVerify: Not enough resources: vm instances.
[21:45] <crohakon> majuk, I want a copy of the MIT cat brain. I bet it does not bite and claw me like my real cat does....
[21:45] <linuxamoeba> i hadn't though of xfs
[21:45] <linuxamoeba> i'll check it out
[21:45] <ahe> my nc has vt extensions since i get matches for svm in /proc/cpuinfo
[21:45] <linuxamoeba> (considered opensolaris + zfs!)
[21:46] <pmatulis> linuxamoeba: it's made for large filesystems and/or large files
[21:47] <bogeyd6> xfs makes data recovery nearly impossible, but in a properly admin'ed system you have backups
[21:47] <bogeyd6> i use XFS, but all my servers include a /boot in ext3
[21:48] <bogeyd6> !xfs | linuxamoeba
[21:48] <SyL> linuxamoeba: what OS is your 20TB running?
[21:48] <linuxamoeba> solaris 10
[21:49] <linuxamoeba> w/ zfs
[21:49] <SyL> ahe: when you do a "euca-describe-availibility verbose" do you get anything?
[21:49] <linuxamoeba> zfs+nfs serving to linux == hella slow!
[21:50] <ahe> SyL: is this command in euca2ools?
[21:50] <ahe> i get "command not found"
[21:51] <ahe> did you mean "euca-describe-availibility-zones" ?
[21:52] <ahe> with that i get the same list of preconfigured VM sizes that i can also see in the web interface
[21:54] <pmatulis> linuxamoeba: are you running a 32-bit system?
[21:54] <linuxamoeba> pmatulis, 64
[21:55] <SyL> linuxamoeba: http://spiralbound.net/2008/01/11/how-to-make-gnarly-big-linux-filesystems
[21:56] <SyL> ahe: yes, it's a euca-tools command. I might not be spelling it correctly.
[21:56] <SyL> linuxamoeba: I love me some ZFS
[21:56] <linuxamoeba> syl, thanks -- i found parted and gave it a try, it mkfs *seems* to be making a big one
[21:57] <linuxamoeba> (fingers crossed)
[21:57] <linuxamoeba> i love my zfs but don't love administratifying solaris
[21:58] <ahe> SyL: euca-describe-availability-zones verbose returns the same list as shown on https://help.ubuntu.com/community/UEC/CDInstall
[21:59] <SyL> ahe: right, but do you see anything under "free" ?
[22:00] <ahe> SyL: got me
[22:00] <ahe> everything 0000
[22:01] <ahe> i installed both machines from a ISO/usb key
[22:01] <ahe> and selected UEC in the installation menu
[22:02] <linuxamoeba> allllmooosssttt theeeeereee...
[22:04] <ahe> how can i find out which nodes are actually registered?
[22:05] <SyL> ahe: if you hit tab a few times when you type "euca" it should show you all the euca-tools commands.
[22:05] <SyL> I think euca-describe-regions is the command you are looking for
[22:09] <ahe> SyL: i get something back that looks like an json error message coming from a webservice: http://pastebin.com/m70a13b0c
[22:10] <SyL> ahe: that is a new error to me. have you looked on the server side logs to see if there is anything more useful?
[22:10] <ahe> not yet but i'm about to do that
[22:12] <SyL> yeah, check that next
[22:13] <oneseventeen> is there a reason not to use the LAMP server collection of software?
[22:13] <oneseventeen> (I normally shy away from automagic stuff, hence the Ubuntu Server install.)
[22:15] <linuxamoeba> lamp == <3
[22:19] <linuxamoeba> /dev/sdb1             9.4T  167M  9.0T   1% /mnt/tank1
[22:19] <linuxamoeba> close enough!
[22:20] <kane_> linuxamoeba: there's usually a space reserved for root; you might want to shrink that a bit on 11TB
[22:21] <linuxamoeba> hmm
[22:21] <linuxamoeba> is there a way to check how much is reserved?
[22:22] <ahe> SyL: thanks for the help so far there is nothing interesting on the nc but on the cc there are some java exceptions but i will check that tomorrow
[22:23] <kane_> linuxamoeba: hdparm should be able to tell you
[22:24] <linuxamoeba> hdparm doesn't tell me anything, probs due to raid controller in between :(
[22:24] <SyL> linuxamoeba: you can remove the reserved space with tune2fs
[22:24] <kane_> *nods*
[22:27] <SyL> linuxamoeba: I think the standard is 10% of the total drive is saved for root
[22:27] <linuxamoeba> that makes sense
[22:27] <linuxamoeba> parted shows 10.5TB and i get 9.4
[22:28] <linuxamoeba> i think 1% will do
[22:28] <linuxamoeba> if that
[22:31] <linuxamoeba> i did tune2fs -m 0.5 /dev/sdb1 and it claimed to work, but df still shows 9.4 TB.. do i have to do other things?
[22:35] <SyL> linuxamoeba: are you doing df -h or just df?
[22:36] <ScottK> Make sure you are comparing the same kind of TB.  Some are made of 1,000 Byte KB and some of 1,024 KB.
[22:36] <linuxamoeba> that was df -h, good point
[22:36] <linuxamoeba> but still, i wouldn't expect the difference to be a whole TB
[22:36] <linuxamoeba> also it didn't change when i changed to reserved %
[22:36] <SyL> you might need to remount it?
[22:37] <linuxamoeba> i did, will again
[22:37] <linuxamoeba> nope
[22:38] <SyL> hrm... interesting.
[22:38] <SyL> maybe some of it for journaling? =)
[22:39] <SyL> ahe: you should do "tail -f /var/log/eucalyptus/cc.log|grep cores" and you should see something like this
[22:39] <SyL> [Mon Nov 23 16:37:44 2009][020340][EUCAINFO  ]  node=192.168.1.103 mem=3804/1756 disk=247525/246461 cores=2/0
[22:42] <linuxamoeba> that would be pretty sad for ext4 haha
[22:45] <linuxamoeba> i could start over and tell it not to reserve so much in the first place
[22:45] <ahe> SyL: oh thanks i'll try that
[22:46] <linuxamoeba> sigh... any other thoughts before i re-reformat 10.5tb?
[22:51] <Schmidt> If I want to host multiple mail domains on one server (with separate IP for every domain) should I select the Smarthost option when I do dpkg-reconfigure postfix or just Internet Site and enter all the domains I want ?
[22:51] <SyL> linuxamoeba: which File system is it?
[22:51] <linuxamoeba> ext4
[22:53] <SyL> linuxamoeba: not off the top of my head. I would run fsck on it first though
[22:54] <SyL> and check e2fsprogs helps any
[22:56] <SyL> linuxamoeba: and also check esize2fs
[22:56] <SyL> err... resize2fs
[22:59] <linuxamoeba> resize2fs 1.41.9 (22-Aug-2009)  The filesystem is already 2563476558 blocks long.  Nothing to do!
[23:00] <linuxamoeba> fsck = happy
[23:02] <SyL> hrm... intersting
[23:03] <SyL> ok, my brain just turned off...
[23:03] <SyL> linuxamoeba: I would see how much the filesystem takes for journaling. I can't think anymore today.
[23:03] <linuxamoeba> ok
[23:04] <linuxamoeba> is there a non-hdparm way to do that?
[23:04] <SyL> I don't think so... I would look up some documents on ext4 by searching on google
[23:05] <linuxamoeba> will do
[23:05] <linuxamoeba> thanks for all the help
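For what it's worth, the numbers in this thread reconcile without invoking the journal; a quick sketch, assuming ext4's default 4 KiB block size and the block count from the resize2fs output above:

```shell
# 1) tune2fs -m changes the reserve that df reports in the Avail
#    column, not the Size column, so Size holding at 9.4T after
#    "tune2fs -m 0.5 /dev/sdb1" is expected behavior (and mke2fs
#    reserves 5% by default, incidentally, not 10%).
# 2) parted counts decimal TB while df -h counts binary TiB (ScottK's
#    point): 2563476558 blocks * 4096 bytes is parted's "10.5TB",
#    but expressed in TiB it is only:
awk 'BEGIN { printf "%.2f\n", 2563476558 * 4096 / 1024^4 }'
# The remaining gap down to df's 9.4T is ext4 metadata: inode
# tables, group descriptors, and the journal.
```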